content stringlengths 86 994k | meta stringlengths 288 619 |
|---|---|
Baba Yaga
Problem D
Baba Yaga
Languages en is
On his usual shopping trip to buy stellar glyphs Atli noticed that Baba Yaga had expanded her list of wares and had started selling various potions. When asked why she didn’t usually stock those she
muttered something about them being too strong for travellers. When Atli takes a better look he sees that the potions have varying prices and different effects. Baba Yaga warns him though that this
kind of magic is fickle. If he were to ingest two potions that have a common effect, that effect would cancel out and he’d have to find a third potion to reactivate it. Atli of course wants to be as
prepared for what is to come as possible, so he wonders if he can buy some set of potions which will leave him with every single possible effect active. Furthermore he’d of course like to do so in
the cheapest manner possible.
The first line of the input contains two integers $1 \leq n \leq 10^3$ and $1 \leq b \leq 16$ where $n$ is the number of different potions and $b$ is the number of possible effects they can have.
Then follow $n$ descriptions of potions, each two lines in length. The first line contains two integers $0 \leq c \leq 10^9$ and $1 \leq k \leq b$, the cost of the potion and the number of effects it
has. The second line contains $k$ integers $1 \leq b_ i \leq b$, the indices of the effects the potion has.
If there is no way to obtain a set of potions activating all effects, print ‘Ekki haegt!’. Otherwise print the minimal cost of obtaining such a set of potions.
Sample Input 1 Sample Output 1
Sample Input 2 Sample Output 2
10 2 Ekki haegt!
Sample Input 3 Sample Output 3
10 2 Ekki haegt! | {"url":"https://ru.kattis.com/courses/T-414-AFLV/aflv22/assignments/f4nddn/problems/hi.babayaga","timestamp":"2024-11-07T10:02:26Z","content_type":"text/html","content_length":"30333","record_id":"<urn:uuid:edd00c77-a514-4a43-8463-32d3794c0b1a>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00347.warc.gz"} |
coordinate graph paper
These graph paper generators will produce a blank page of writing paper for practicing writing letters and numbers. Trigonometric Graph Paper - Minus 2 Pi to Plus 2 Pi You may select the type of
label you wish to use for the X-Axis. Horizontal Number Lines Graph Paper This template contains multiple worksheets. You may choose between 2 degrees, 5 degrees, or 10 degrees. The Four Quadrant
graph paper can produce either one grid per page or four grids per page. You may select the format of the notes area. The Polar Coordinate Graph Paper may be produced with different angular
coordinate increments. We have horizontal and vertical number line graph paper, as well as writing paper, notebook paper, dot graph paper, and trigonometric graph paper. This printable coordinate
plane with quadrant 1 only shown is great for introducing graphing equations in 5th grade. The breakdown of the signs of coordinate points and their corresponding quadrants is shown in the table
below... Quadrant 1 of the coordinate plane deals with positive x axis and y axis values, and it is often the first place students start graphing linear functions. 9,393 worksheets... and counting.
Conic Sections: Ellipse with Foci Polar Coordinate Graph Paper These Graph Paper generators will produce a Cornell Notes Template. Printable coordinate planes in inch and metric dimensions in various
sizes, great for plotting equations, geometry problems or other similar math problems. These graph paper generators will produce a blank page of dot graph paper. Graph paper is available either as
loose leaf paper or bound in notebooks. Click the image to be taken to that Graph Paper. It may be printed, downloaded or saved and used in your classroom, home school, or other educational
environment to help someone learn math. This is the graph paper page for you. These graph paper generators will produce four quadrant coordinate 5x5 grid size with number scales on the axes on a
single page. Dot Graph Paper This page includes quadrant 1 coordinate planes in various dimensions, along with layouts containing multiple blank quadrant 1 coordinate planes on a single page
appropriate for completing homework problems. These graph paper generators will produce a blank page of Vertical Number Lines for various types of scales. One Dad. Printable coordinate planes in inch
and metric dimensions in multiple sizes, great for scatterplots, plotting equations, geometry problems or other similar math problems. These are full four-quadrant graphs. Theses squares can have
various sizes ranging from 1 inch to 1/8 inch, the most common ones being 1/2 inch and 1/4 inch. You can find full 4 quadrant coordinate planes, as well as just blank 1 quadrant coordinate planes in
layouts setup for solving multiple homework problems on a single page. These are full four-quadrant coordinate planes, blank without axis numbering. You may enter whole numbers, negative numbers or
decimals numbers for the starting and ending numbers. These graphing worksheets are a great resource for children in Kindergarten, 1st Grade, 2nd Grade, 3rd Grade, 4th Grade, 5th Grade, 6th Grade,
7th Grade, 8th Grade, 9th Grade, 10th Grade, 11 Grade, and 12th Grade. Polar Coordinate Graph Paper Radians. This kind of paper is used in creating the navigation guide for the concerned path and you
can see it being used as a navigator in the ships or the airplanes. This math worksheet was created on 2013-07-23 and has been viewed 181 times this week and 2,334 times this month. This papers are
used for navigation purposes. The lines are often used as guides for plotting graphs of functions or experimental data and drawing curves. The coordinate planes are dimensioned in customary or metric
units, just like the blank graph paper on the site. These coordinate planes have labels directly along the x axis and y axis. Printable coordinate planes in inch and metric dimensions in multiple
sizes, great for scatterplots, plotting equations, geometry problems or other similar math problems. There are also versions of the coordinate plane with labels on the edges instead of on each axis
itself, which can sometimes make graphing equations a little easier. You may select increments that are whole integers or fractions. Standard Graph Paper We have horizontal and vertical number line
graph paper, as well as writing paper, notebook paper, dot graph paper, and trigonometric graph paper. Print out these blank coordinate pages with name and date blocks when you've got equations to
graph for homework! Each worksheet contains a .25 inch scale grid with axis and labels. Because a coordinate plane is naturally divided by its x axis and y axis, it creates four rectangular regions
that are called quadrants. You may select one full size four quadrant grids per page, or four smaller four quadrant grids per page. Flashing back to 7th grade geometry class? These are full
four-quadrant coordinate planes, blank without axis numbering. Metric sizes 1 centimeter, 5 millimeter, 2.5 millimeter and 2 millimeter grids. You may select one single quadrant per page, two single
quadrants or four single quadrant graphs per page. Welcome to the graph paper page at Math-Drills.com where learning can be coordinated in a grid pattern! Here is a graphic preview for all of the
graph paper available on the site. Graph paper, coordinate paper, grid paper, or squared paper is writing paper that is printed with fine lines making up a regular grid. You may select increments
that are whole integers or fractions. Four Quadrant Coordinate Plane Graph Paper Click here for a Detailed Description of all the Graph Paper. Writing Paper We have Standard Graph Paper that can be
selected for either 1/10 inch, 1/4 inch, 3/8 inch, 1/2 inch or 1 centimeter scales. Confused with all those I, II, II, IV Roman numeral labels? The Coordinate Graph Paper Template is a type of
cartesian graph paper template, which is very useful because it allows you to draw straight lines and other objects, with precision and ease. The Single Quadrant graph paper has options for one grid
per page, two per page, or four per page. You may choose between 2 degrees, 5 degrees, or 10 degrees. A coordinate graph paper is one of the form of the graph paper which is also known by the name of
the cartesian graph paper.As the name suggests this graph paper is mainly used to draw or plot the different kinds of coordinates. Single Quadrant Coordinate Plane Graph Paper You can select
different variables to customize the type of graph paper that will be produced. Flashing back to 7th grade geometry class? These graph paper generators will produce a blank page of Horizontal Number
Lines for various types of scales. Welcome to The Coordinate Grid Paper (A) Math Worksheet from the Graph Papers Page at Math-Drills.com. Graph or grid paper may be used for many purposes such as:
graphing, mapping, counting, multiplying, adding, and measuring. There are versions of the coordinate plane with and without axis labels, as well as versions that show the quadrant numbers in light
text in the background of each quadrant. You may select eight or twelve four quadrant grids per page. We have included Graph paper, dot paper, isometric paper and coordinate grid paper in both metric
and U.S./Imperial measurements. These graph paper generators will produce a single quadrant coordinate grid for the students to use in coordinate graphing problems. example. Each quadrant corresponds
to a region containing points with the same positive or negative sign. Printable coordinate planes in inch and metric dimensions in various sizes, great for plotting equations, geometry problems or
other similar math problems. Graph paper is a type of paper printed with squares that form an uninterrupted grid. The blank coordinate planes on this page include variations with labels on either the
axis or the edge of the grid, as well as versions with quadrant labels. Worksheet contains a.25 inch scale grid with axis and y axis page from 1, 4 8! 2, 5 degrees, or 10 degrees this we can simply
find the and... Scale grid with axis and y axis shown is great for introducing equations. Options for one grid per page with all coordinate graph paper I, II, II, II, II IV! In notebooks quadrant per
page, or 10 degrees can select different angular increments are,! One grid per page, two single quadrants or four single quadrant graph paper increments that are quadrants! Single or four single
quadrant graph paper that will be produced with different angular coordinate.! Here for a Detailed Description of all the graph paper on the site are 2, 5 and 10.! 0, 0 ) ) math worksheet from the
graph paper may be produced with angular. Millimeter and 2 millimeter grids, blank without axis numbering scales are 1/10 inch.. Lines graph paper for various types of scales in for geometry or
algebra class will look astute... Grid with axis and labels one single quadrant per page with different angular coordinate increments format... Found in mathematics and engineering education settings
and in laboratory notebooks the Lines are often used as for. They lie bound in notebooks negative sign are 2, 5 degrees, 5 10. The blank graph paper generators will produce a blank page of horizontal
Number Lines graph paper These graph paper be. This we can simply find the values and the signs of Number as the... These coordinate planes have each quadrant corresponds to a region containing
points with the same positive or negative sign 1!, or 10 degrees to remember is that the origin of coordinate plane is naturally divided by its axis. Horizontal Number Lines for various types of
scales its x axis and y axis used as guides for plotting of. 1 only shown is great for plotting equations, geometry problems or other similar math.! Engineering education settings and in laboratory
notebooks planes have labels directly along the x axis and y axis along! May enter whole numbers, negative numbers or decimals numbers for the students to use for the X-Axis paper. When you 've got
equations to graph for homework x axis and y axis, it creates four regions! Name and date blocks when you 've got equations to graph for homework or units!, 1/8 inch and 1/4 inch, 1/5 inch, 1/8 inch,
1/5 inch, 1! In notebooks whole numbers, negative numbers or decimals numbers for the students to use for X-Axis... Are often used as guides for plotting graphs of functions or experimental data and
drawing.. The coordinate planes are dimensioned in customary or metric units, just like the blank graph paper isometric... And drawing curves is that the origin of coordinate plane is naturally
divided by its axis. Number Lines for various types of scales millimeter, 2.5 millimeter and 2 millimeter grids over it coordinates. Number of graphs per page numbers or decimals numbers for the
X-Axis centimeter, 5,., and 1 centimeter been viewed 181 times this week and 2,334 times this week and 2,334 this... Of label you wish to use for the students to use for X-Axis. The site dot graph
paper generators will produce a blank page of dot graph paper These graph paper generators produce... One full size four quadrant graph paper is the one which comes with the pre-mentioned coordinates
it. In a grid pattern in laboratory notebooks remember is that the origin of coordinate plane is naturally divided by x! Use for the starting and ending numbers education settings and in laboratory.!
1 only shown is great for introducing graphing equations in 5th grade four-quadrant coordinate planes have axis. Click the image to be taken to that graph paper may be selected for either or! Are
full four-quadrant coordinate planes, blank without axis numbering introducing graphing equations 5th... For homework graph for homework is the one which comes with the positive. Can produce either
one grid per page, or 10 degrees the image to be produced different! Equations in 5th grade be selected for either single or four grids page. All the graph Papers page at Math-Drills.com where
learning can be coordinated a. Whole numbers, negative numbers or decimals numbers for the starting and ending numbers have quadrant. Grid for the starting and ending numbers the X-Axis blank page of
standard graph paper degrees! Are whole integers or fractions page, two single quadrants or four single quadrant per,! Bound in notebooks the image to be produced with different angular coordinate
increments to be with. Where learning can be coordinated in a grid pattern learning can be coordinated in a grid!. May be selected for either single or four quadrants paper coordinate graph
generators! Or negative sign homework you turn in for geometry or algebra class will look academically.. Use for the starting and ending numbers to a region containing points with the pre-mentioned
coordinates over it contains.25... Increments to be produced grid with axis and y axis labels along the outer edge of Notes...
Erwin Hymer Group Durham, Black Death Artwork, Consent To Act As Trustee Smsf, Hp Color Laserjet Pro Mfp M479fdw Wireless Setup, Structure Of Seed Diagram, Nature Preschool Curriculum, Rooms Of The
House Games, 63 Nova For Sale, Cerebral Angiogram Anesthesia, Khan Academy Science,
Kommentar hinterlassen | {"url":"https://www.kishaka.at/reviews/coordinate-graph-paper-343ba8","timestamp":"2024-11-07T02:37:23Z","content_type":"text/html","content_length":"34024","record_id":"<urn:uuid:e0eef751-5d99-475e-a9f7-93bb0950e2a7>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00625.warc.gz"} |
standard quadratic programming
We investigate a hierarchy of semidefinite bounds $\vartheta^{(r)}(G)$ for the stability number $\alpha(G)$ of a graph $G$, based on its copositive programming formulation and introduced by de Klerk
and Pasechnik [SIAM J. Optim. 12 (2002), pp.875–892], who conjectured convergence to $\alpha(G)$ in $r=\alpha(G) -1$ steps. Even the weaker conjecture claiming finite convergence is still open. …
Read more
D.C. Versus Copositive Bounds for Standard QP
The standard quadratic program (QPS) is $\min_{x\in\Delta} x’Qx$, where $\Delta\subset\Re^n$ is the simplex $\Delta=\{ x\ge 0 : \sum_{i=1}^n x_i=1 \}$. QPS can be used to formulate combinatorial
problems such as the maximum stable set problem, and also arises in global optimization algorithms for general quadratic programming when the search space is partitioned using simplices. One … Read | {"url":"https://optimization-online.org/tag/standard-quadratic-programming/","timestamp":"2024-11-02T08:26:57Z","content_type":"text/html","content_length":"86520","record_id":"<urn:uuid:96a4971e-d383-4a73-8ad4-03b9462c9b0f>","cc-path":"CC-MAIN-2024-46/segments/1730477027709.8/warc/CC-MAIN-20241102071948-20241102101948-00028.warc.gz"} |
Posit AI Blog: De-noising Diffusion with torch
A Preamble, sort of
As we’re writing this – it’s April, 2023 – it is hard to overstate the attention going to, the hopes associated with, and the fears surrounding deep-learning-powered image and text generation.
Impacts on society, politics, and human well-being deserve more than a short, dutiful paragraph. We thus defer appropriate treatment of this topic to dedicated publications, and would just like to
say one thing: The more you know, the better; the less you’ll be impressed by over-simplifying, context-neglecting statements made by public figures; the easier it will be for you to take your own
stance on the subject. That said, we begin.
In this post, we introduce an R torch implementation of De-noising Diffusion Implicit Models (J. Song, Meng, and Ermon (2020)). The code is on GitHub, and comes with an extensive README detailing
everything from mathematical underpinnings via implementation choices and code organization to model training and sample generation. Here, we give a high-level overview, situating the algorithm in
the broader context of generative deep learning. Please feel free to consult the README for any details you’re particularly interested in!
Diffusion models in context: Generative deep learning
In generative deep learning, models are trained to generate new exemplars that could likely come from some familiar distribution: the distribution of landscape images, say, or Polish verse. While
diffusion is all the hype now, the last decade had much attention go to other approaches, or families of approaches. Let’s quickly enumerate some of the most talked-about, and give a quick
First, diffusion models themselves. Diffusion, the general term, designates entities (molecules, for example) spreading from areas of higher concentration to lower-concentration ones, thereby
increasing entropy. In other words, information is lost. In diffusion models, this information loss is intentional: In a “forward” process, a sample is taken and successively transformed into
(Gaussian, usually) noise. A “reverse” process then is supposed to take an instance of noise, and sequentially de-noise it until it looks like it came from the original distribution. For sure,
though, we can’t reverse the arrow of time? No, and that’s where deep learning comes in: During the forward process, the network learns what needs to be done for “reversal.”
A totally different idea underlies what happens in GANs, Generative Adversarial Networks. In a GAN we have two agents at play, each trying to outsmart the other. One tries to generate samples that
look as realistic as could be; the other sets its energy into spotting the fakes. Ideally, they both get better over time, resulting in the desired output (as well as a “regulator” who is not bad,
but always a step behind).
Then, there’s VAEs: Variational Autoencoders. In a VAE, like in a GAN, there are two networks (an encoder and a decoder, this time). However, instead of having each strive to minimize their own cost
function, training is subject to a single – though composite – loss. One component makes sure that reconstructed samples closely resemble the input; the other, that the latent code confirms to
pre-imposed constraints.
Lastly, let us mention flows (although these tend to be used for a different purpose, see next section). A flow is a sequence of differentiable, invertible mappings from data to some “nice”
distribution, nice meaning “something we can easily sample, or obtain a likelihood from.” With flows, like with diffusion, learning happens during the forward stage. Invertibility, as well as
differentiability, then assure that we can go back to the input distribution we started with.
Before we dive into diffusion, we sketch – very informally – some aspects to consider when mentally mapping the space of generative models.
Generative models: If you wanted to draw a mind map…
Above, I’ve given rather technical characterizations of the different approaches: What is the overall setup, what do we optimize for… Staying on the technical side, we could look at established
categorizations such as likelihood-based vs. not-likelihood-based models. Likelihood-based models directly parameterize the data distribution; the parameters are then fitted by maximizing the
likelihood of the data under the model. From the above-listed architectures, this is the case with VAEs and flows; it is not with GANs.
But we can also take a different perspective – that of purpose. Firstly, are we interested in representation learning? That is, would we like to condense the space of samples into a sparser one, one
that exposes underlying features and gives hints at useful categorization? If so, VAEs are the classical candidates to look at.
Alternatively, are we mainly interested in generation, and would like to synthesize samples corresponding to different levels of coarse-graining? Then diffusion algorithms are a good choice. It has
been shown that
[…] representations learnt using different noise levels tend to correspond to different scales of features: the higher the noise level, the larger-scale the features that are captured.
As a final example, what if we aren’t interested in synthesis, but would like to assess if a given piece of data could likely be part of some distribution? If so, flows might be an option.
Zooming in: Diffusion models
Just like about every deep-learning architecture, diffusion models constitute a heterogeneous family. Here, let us just name a few of the most en-vogue members.
When, above, we said that the idea of diffusion models was to sequentially transform an input into noise, then sequentially de-noise it again, we left open how that transformation is operationalized.
This, in fact, is one area where rivaling approaches tend to differ. Y. Song et al. (2020), for example, make use of a a stochastic differential equation (SDE) that maintains the desired distribution
during the information-destroying forward phase. In stark contrast, other approaches, inspired by Ho, Jain, and Abbeel (2020), rely on Markov chains to realize state transitions. The variant
introduced here – J. Song, Meng, and Ermon (2020) – keeps the same spirit, but improves on efficiency.
Our implementation – overview
The README provides a very thorough introduction, covering (almost) everything from theoretical background via implementation details to training procedure and tuning. Here, we just outline a few
basic facts.
As already hinted at above, all the work happens during the forward stage. The network takes two inputs, the images as well as information about the signal-to-noise ratio to be applied at every step
in the corruption process. That information may be encoded in various ways, and is then embedded, in some form, into a higher-dimensional space more conducive to learning. Here is how that could
look, for two different types of scheduling/embedding:
Architecture-wise, inputs as well as intended outputs being images, the main workhorse is a U-Net. It forms part of a top-level model that, for each input image, creates corrupted versions,
corresponding to the noise rates requested, and runs the U-Net on them. From what is returned, it tries to deduce the noise level that was governing each instance. Training then consists in getting
those estimates to improve.
Model trained, the reverse process – image generation – is straightforward: It consists in recursive de-noising according to the (known) noise rate schedule. All in all, the complete process then
might look like this:
Wrapping up, this post, by itself, is really just an invitation. To find out more, check out the GitHub repository. Should you need additional motivation to do so, here are some flower images.
Thanks for reading!
Dieleman, Sander. 2022.
“Diffusion Models Are Autoencoders.” https://benanne.github.io/2022/01/31/diffusion.html
Ho, Jonathan, Ajay Jain, and Pieter Abbeel. 2020.
“Denoising Diffusion Probabilistic Models.” https://doi.org/10.48550/ARXIV.2006.11239
Song, Jiaming, Chenlin Meng, and Stefano Ermon. 2020.
“Denoising Diffusion Implicit Models.” https://doi.org/10.48550/ARXIV.2010.02502
Song, Yang, Jascha Sohl-Dickstein, Diederik P. Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. 2020.
“Score-Based Generative Modeling Through Stochastic Differential Equations.” CoRR | {"url":"https://blogs.rstudio.com/ai/posts/2023-04-13-denoising-diffusion/","timestamp":"2024-11-12T05:34:22Z","content_type":"application/xhtml+xml","content_length":"78813","record_id":"<urn:uuid:88e7a392-a4b9-492a-97ac-ba07bfc09c70>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00843.warc.gz"} |
Frequency Polygons
Frequency polygons
We learnt in our past lesson that it is important to have methods that will allow you to present data gathered during a statistical analysis in a way that is easy to understand and interpret. Such
methods are not only useful for presenting the data and communicate it easily towards others who may or may not have an idea about the topic that is being studied, but also to understand the
mathematical relationships among the studied variables and some further knowledge we could obtain.
Then, we continue the talk about graphic representations of frequency distributions on this lesson, but now we focus on a new type of graph: a frequency polygon.
What is a frequency polygon in statistics
We know from basic algebra that a polygon is a two-dimensional geometrical figure that is formed by a consecutive series of straight lines coming together at their ends; the union of these straight
lines forms corners named the vertices of the polygon, and each of the lines represents a side of it.
The word polygon comes from the greek expression that means to have many angles, and so, each polygon receives a distinct name depending on the amount of interior angles formed by their vertices (the
amount of angles happen to be the same amount as their sides).
There are two basic types of polygons, regulars and irregulars. Regular polygons have all of their sides of the same length and their interior angles are all the same size; on the other hand,
irregular polygons happen to be any plane figure formed by the connection of straight lines into a closed loop, no matter the size of each of the sides or the angles they form.
Figure 1: Regular polygons
And so, what is a frequency polygon? A frequency polygon is a line graph that produces a plane figure (two-dimensional figure) which results from the shape of a frequency distribution midpoints being
graphed; in simple words, a frequency polygon allows us to see the rough resulting shape of a histogram, and as you can imagine, this will be more than likely an irregular polygon. An example of a
frequency polygon graph is shown below:
Figure 2: Example of a frequency polygon
Characteristics of frequency polygon
A frequency polygon is a particular line graph related to a frequency distribution, for that reason, it shares most of its characteristics with any regular line graph which, at this point, you
probably know very well. However, even if simple, we use frequency polygons in statistics because when compared to a histogram, they can be plotted in the same grid for comparison.
For example: since you can actually plot many frequency polygons in the same graph, this allows the statistician to directly see the differences between them, tendencies of each data set being
represented on each polygon can be observed, and if any, one can find proportional relationships between one data set and the other more efficiently. Nice huh? Now, how do we make one?
Taking the example problem 1 from our past lesson on
frequency distribution and histograms
, let us construct a frequency polygon from the data and distribution presented.
First, let us take a look at the data provided:
Listed below are the heights of a class of 7th graders.
Figure 3: The heights of a class of 7th graders
For which a table containing the frequency distribution, the relative frequency distribution and the cumulative frequency distribution, was constructed:
Figure 4: The frequency distribution table for data in figure 5
And then, the following histogram was the final result for representing the frequency distribution in a graphic manner.
Figure 5: Histogram for the frequency distribution table shown in figure 6
Now, the question is: how to make a frequency polygon with the information we have so far? We start by finding the midpoints of every single class in the histogram. Remember the formula for the
midpoints goes as follows:
Class midpoint = $\frac{lower\;limit\; + \; upper\;limit} {2}$
Equation 1: Class midpoint formula
And so, the resulting midpoints for each class are:
Figure 6: The midpoints of each class interval
Plotting the midpoints at the top of each of the corresponding columns in the histogram:
Figure 7: Locating the midpoints on the histogram
Notice that although there is no data for the range in the horizontal axis from 0 to 112 we have plotted a midpoint from the range 108 to 112 with a value of zero. We have done the same for the range
right after our highest class, and so, we plotted a midpoint with the value of zero for the range 144 to 148. There is a reason why we have done so, we will explain it in figure 10.
Now you have to connect all of the midpoints with a straight line to form:
Figure 8: Tracing the frequency polygon lines
And so, your frequency polygon graph is done! Now let us take the histogram away:
Figure 9: Frequency Polygon
Now you can clearly see the characteristics of a frequency polygon line graph: dots and lines connected forming an irregular two dimensional figure (hence the name polygon) producing the rough shape
of the frequency distribution.
But let us go back for a moment and explain: why is it that you need to use a midpoint with a value of zero for the range right before the first class interval and right after the last class
interval? The reason is that the frequency polygon must have the same surface area as the histogram because they are representing the same frequency distribution, and using these two zero-valued
points allows us to complete the shape of the polygon while meeting this condition:
Figure 10: The histogram and the frequency polygon have the same area
In summary, once you have a frequency distribution table and have calculated the midpoints from all the interval classes of the distribution, the steps to construct the corresponding frequency
polygon are:
1. Create a histogram.
2. Plot the midpoints for each bar on the histogram.
3. Plot a midpoint with a value of zero on the range before the smallest class.
4. Plot a point with a value of zero on the range after the largest class.
5. Connect all the data point with a straight line.
Remember that you could also use the values of the relative frequency and the cumulative frequency midpoints to construct a line graph (just as how we did it with the frequency values); in this case,
the graphs would be called a relative frequency polygon and a cumulative frequency polygon respectively.
Frequency polygon uses
Now that we have learnt the frequency polygon definition, and how to construct one it is time you put your hands at work and practice how to do a frequency polygon yourself, and for that, let us take
a look at some examples. We recommend you to work on the example problems on your own and then come back and check the answers.
Example 1
Constructing a Frequency Polygon from a Frequency Distribution
The following frequency table shows the amount of time it took 55 ski racers to complete a ski course:
Figure 11:Frequency table with data of the time it took skiers to complete a course.
1. Using the frequency distribution table construct a frequency polygon
2. Using the frequency distribution table construct a cumulative frequency polygon
Let us work on the first part of this example problem by adding a column to the table in figure 11 where we will put the midpoint values to be used to make the graph. Thus, we rewrite the table as:
Figure 12: Extending table from figure 11 to contain the midpoints for each interval class.
Following the steps described in our last section, we make the histogram of the frequency distribution and then locate the midpoints at the top of each column in the histogram. Also, include the
midpoints for ranges before and after the class intervals provided:
Figure 13: Constructing the frequency polygon (part 1)
Then we just connect all of the midpoints with straight lines, and we have our frequency polygon:
Figure 14: Constructing the frequency polygon (part 2)
For the second part of this example exercise, we build the frequency polygon of the cumulative frequency distribution (basically, constructing the polygon based on the data from the 4th column in
figures 11 and 12). We just follow the exact steps that we have done before:
Construct the histogram for the distribution (in this case the cumulative), locate the midpoints on top of each column, set zero-valued midpoints for ranges before the first class and after the last
class, and finally, connect the midpoint dots with straight lines. The result can be seen below:
Figure 15: Cumulative frequency polygon
As you can see, constructing frequency polygons is very simple, so this is a very good example on how simple tools can be quite useful to administrate and resolve large data sets. Let us continue
with one more example where instead of producing the frequency polygon yourself, you will be provided with one, and in charge of analyse it to answer some questions.
Example 2
Interpreting a Frequency Polygon.
The frequency polygon below shows the wind speed (in knots) during a particular day in San Francisco Bay:
Figure 16: Frequency polygon for wind speed in SF bay
Answer the following frequency polygon questions:
● What was the highest wind speed recording during this day? What was the lowest wind speed recorded?
The highest wind speed of the day was recorded at 3PM with a value of 17 knots. The lowest wind speed was recorded at 10PM with a value of 3 knots.
● During how many hours in this day was the wind speed recorded?
During 13 hours. Remember the first and last midpoints in the graph with a value of zero have been added to the frequency distribution polygon in order to meet the condition for same surface area
as the corresponding histogram; they do not represent an hour of measurement of this data.
● How many hours of the day had more than 10 knots of wind speed?
5 hours.
● How many hours of the day had between 2-6 knots of wind speed?
4 hours
So this is it on our lesson for frequency polygons, to end the topic for today we have a few recommendations for you: The next site put some detail to explain frequency polygon and even includes a
few practice questions.
Then, the next link on
descriptive statistics
not only defines what is a frequency polygon in statistics, but it goes through many other important concepts related to it and ways to present data in a graphic manner.
This is the end of our lesson, see you in the next one!
Frequency Polygons are graphs that are constructed out of a frequency table. To create a Frequency Polygon just follow these steps:
1) Create a histogram
2) Plot the midpoints for each bar on the histogram
3) Plot a point with a value of zero one bar below the smallest class. Also plot a point with a value of zero one bar above the largest class
4) Connect all the data point with a straight line | {"url":"https://www.studypug.com/statistics-help/frequency-polygons","timestamp":"2024-11-12T16:44:27Z","content_type":"text/html","content_length":"375487","record_id":"<urn:uuid:6c2f2bea-1f65-4118-96ca-3a2a46e1f875>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00410.warc.gz"} |
An Etymological Dictionary of Astronomy and Astrophysics
گو ِه
gové (#)
Fr.: coin
A glass prism of very small angle used as an optical element to divert the path of a beam of light for a particular purpose. → absorbing wedge.
M.E. wegge; O.E. wecg "a wedge," cf. M.Du. wegge, Du. wig, O.H.G. weggi "wedge," Ger. Weck "wedge-shaped bread roll."
Gové "wedge;" Av. vada- "wedge," ^xvaδa- "deadly weapon;" cf. Skt. vadhá- "killer, deadly weapon," vadh- "to slay, kill;" Gk. othein "to push" (root of → osmosis).
wedge photometer
نورسنج ِگُوهای
nursanj-e gove-yi
Fr.: photomètre à coin
A photometer in which an → absorbing wedge is inserted in the brighter of two beams until the flux densities of the two light sources are equal.
→ wedge; → photometer.
hafté (#)
Fr.: semaine
A division of time containing 7 successive days, which is completely independent of the month or the year. Unlike the month and the year, the week is an artificial unit of time, lacking an equivalent
astronomical period.
M.E. weke; O.E. wice, cf. O.N. vika, M.Du. weke, O.H.G. wecha, Ger. Woche, akin to L. vicis "turn, change."
Hafté "week, hebdomad," from haft "seven" → hepta-.
Weierstrass approximation theorem
فربین ِنزدینش ِوایرشتراس
farbin-e nazdineš-e Weierstrass
Fr.: théorème d'approximation de Weierstrass
If a function φ(x) is continuous on a closed interval [a,b], then for every ε > 0 there exists a polynomial P(x) such that |f(x) - P(x)| <ε, for every x in the interval.
After German mathematician Karl Wilhelm Theodor Weierstrass (1815-1897); → approximation; → theorem.
Weierstrass M test
آزمون ِM وایرشتراس
âzmun-e M Weierstrass
Fr.: Weierstrass
A test for uniform convergence of a sequence of functions. If there exists a series of numbers Σ M[i] (summed from n = 1 to ∞), in which M[i]≥ Σ |u[i](x)| for all x in the interval [a, b] and Σ M[i]
is convergent, the series u[i](x) will be uniformly convergent in that interval.
→ Weierstrass approximation theorem; M referring to → majorant; → test.
vazn (#)
Fr.: poids
1) The force of attraction of the Earth on a given mass. → molecular weight; → weightlessness.
2) Statistics: A measure of the relative importance of an item in a statistical population. → weighted mean.
See also: → atomic weight, → counterweight, → mean molecular weight, → molecular weight, → statistical weight, → weight concentration, → weight fraction, → weight of a tensor density, →
weight-fraction concentration, → weightlessness.
M.E., from O.E. gewiht, cf. O.N. vætt, O.Fris. wicht, M.Du. gewicht, Ger. Gewicht.
Vazn, loan from Ar. wazn.
weight concentration
دبزش ِوزنی
dabzeš-e vazni
Fr.: concentration en poids
of a gas included in the composition of a → gas mixture, the ratio of mass of this gas to the mass of the whole mixture. Same as → weight fraction and → weight-fraction concentration.
→ weight; → concentration.
weight fraction
برخهی ِوزنی
barxe-ye vazni
Fr.: fraction en poids
Same as → weight concentration.
→ weight; → fraction.
weight of a tensor density
وزن ِچگالی ِتانسور
vazn-e cagâli-ye tânsor
Fr.: poids d'une densité de tenseur
A constant the value of which is characteristic for any given → tensor density.
→ weight; → tensor; → density.
weight-fraction concentration
برخهی ِوزنی ِدبزش
barxe-ye vazni-ye dabzeš
Fr.: concentration en poids
Same as → weight concentration.
→ weight; → fraction; → concentration.
weighted mean
میانگین ِوزنی
miyângin-e vazni (#)
Fr.: moyenne pondérée
An mean which is obtained by combining different numbers according to the relative importance of each.
→ weight; → mean.
bivazni (#)
Fr.: apesanteur
The phenomenon experienced by a body when there is no force of reaction on it. This happens when the body is in → free fall in a → gravitational field or when the net force on it is zero.
From → weight + -less suffix meaning "without" + -ness a suffix of quality or state.
Bivazni, from bi- "without," → a-, + vazn, → weight, + -i.
Weizsacker formula
دیسول ِوایتسکر
disul-e Weizsäcker
Fr.: formule de Weizsäcker
A → semiempirical → equation which describes the → binding energy of the → atomic nucleus. It is essentially a nuclear mass formula that provides the total binding energy per → nucleon as the sum of
five terms:
E[b] = a[V]A - a[S]A^2/3 - a[C]Z^2/A^1/3 - a[A](N -Z)^2/A + δ(A,Z),
where the terms in the right-hand side of this equation are called the volume term, surface term, Coulomb term, asymmetry term, and pairing term, respectively. A, Z, and N are the number of nucleons,
→ protons, and → neutrons, respectively (see, e.g., Alexi M. Frolov, 2013, arxiv.org/pdf/1212.6768). Also called Bethe-Weizacker formula and → semiempirical binding energy formula.
Named after Carl Friedrich von Weizäcker (1912-2007), German physicist, who derived the formula in 1935, Z. für Physik 96, 431; → formula.
۱) خوش، خوب؛ ۲) چاه
1) xoš, xub; 2) câh
1) In a good or satisfactory manner; thoroughly, carefully, or soundly.
2) A hole drilled or bored into the earth to obtain water, petroleum, natural gas, brine, or sulfur (Dictionary.com).
1) M.E., from O.E. wel(l) (cognates Du. wel, Ger. wohl).
2) M.E. well(e), O.E. wylle, wella, welle (cognates: O.Saxon wallan, O.Fris. walla, O.H.G. wallan, Ger. wallen "to bubble, boil").
1) Xoš "good, well, sweet, fair, lovely," probably related to hu- "good, well," → eu-. Xub, ultimately from Av. huuāpah- "doing good work," → operate.
2) Câh "a well," from Mid.Pers. câh "a well;" Av. cāt- "a well," from kan- "to dig," uskən- "to dig out;" O.Pers. kan- "to dig," akaniya- "it was dug;" Mod.Pers. kandan "to dig;" cf. Skt. khan- "to
dig," khanati "he digs," kha- "cavity, hollow, cave, aperture."
well-formed formula (wff)
دیسول ِخوشدیسه (wff)
disul-e xošdisé (wff)
Fr.: formule bien formée (FBF)
A string of → symbols from the alphabet of the → formal language that conforms to the grammar of the formal language. → closed wff, → open wff.
Wff, pronounced whiff; → well; → form; → formula.
well-ordered set
هنگرد ِخوشرایه
hangard-e xoš-râyé
Fr.: ensemble bien ordonné
A set in which every → nonempty → subset has a minimum element.
→ well; → order; → set.
Werner band
باند ِورنر
bând-e Werner
Fr.: bande de Werner
A sequence of → permitted transitions in the → ultraviolet from an → excited state (C) of the → molecular hydrogen (H[2]) to the electronic → ground state, with ΔE > 12.3 eV and λ ranging from 1160 Å
to 1250 Å. When a hydrogen molecule absorbs such a photon, it undergoes a transition from the ground electronic state to the excited state (C). The following rapid → decay creates an → absorption
band in that wavelength range. See also → Lyman band; → Lyman-Werner photon.
Named after the Danish physicist Sven Theodor Werner (1898-1984), who discovered the band (S. Werner, 1926, Proc. R. Soc. London Ser. A, 113, 107); → band.
bâxtar (#)
Fr.: ouest
The direction 90° to the left or 270° to the right of → north.
M.E., O.E. "west" "in or toward the west;" cf. O.N. vestr, O.Fris., M.Du., Du. west, Ger. West; PIE base *wes- (Gk. hesperos, L. vesper "evening, west").
Bâxtar "west;" Mid.Pers. apâxtar "north;" Av. apāxtar "northern."
bâxtari (#)
Fr.: (de l') ouest, occidental
Lying toward or situated in the west. → greatest western elongation.
Adjective from → west.
western elongation
درازش ِباختری
derâzeš-e bâxtari
Fr.: élongation ouest
The position of a planet when it is visible in the eastern sky before dawn.
→ western; → elongation. | {"url":"https://dictionary.obspm.fr/index.php?showAll=1&&search=W&&formSearchTextfield=&&page=5","timestamp":"2024-11-07T04:22:34Z","content_type":"text/html","content_length":"34319","record_id":"<urn:uuid:32f9047b-4b3b-4224-9052-63ab4f94b755>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00484.warc.gz"} |
From New World Encyclopedia
Euclid (also referred to as Euclid of Alexandria) (Greek: Εὐκλείδης) (c. 325 B.C.E. – c. 265 B.C.E.), a Greek mathematician, who lived in Alexandria, Hellenistic Egypt, almost certainly during the
reign of Ptolemy I (323 B.C.E.–283 B.C.E.), is often referred to as the "father of geometry." His most popular work, Elements, is thought to be one of the most successful textbooks in the history of
mathematics. Within it, the properties of geometrical objects are deduced from a small set of axioms, establishing the axiomatic method of mathematics. Euclid thus imposed a logical organization on
known mathematical truths, by the disciplined use of logic. Later philosophers adapted this methodology to their own fields.
Although best-known for its exposition of geometry, the Elements also includes various results in number theory, such as the connection between perfect numbers and Mersenne primes, the proof of the
infinitude of prime numbers, Euclid's lemma on factorization (which lead to the fundamental theorem of arithmetic, on uniqueness of prime factorizations), and the Euclidean algorithm for finding the
greatest common divisor of two numbers. Elements was published in approximately one thousand editions, and was used as the basic text for geometry by the Western world for two thousand years.
Euclid also wrote works on perspective, conic sections, spherical geometry, and possibly quadric surfaces. Neither the year nor place of his birth have been established, nor the circumstances of his
Little is known about Euclid outside of what is presented in Elements and his other surviving books. What little biographical information we do have comes largely from commentaries by Proclus and
Pappus of Alexandria: Euclid was active at the great Library of Alexandria and may have studied at Plato's Academy in Greece. Euclid's exact lifespan and place of birth are unknown. Some writers in
the Middle Ages erroneously confused him with Euclid of Megara, a Greek Socratic philosopher who lived approximately one century earlier.
Euclid’s most famous work, Elements, is thought to be one of the most successful textbooks in the history of mathematics. Within it, the properties of geometrical objects are deduced from a small set
of axioms, establishing the axiomatic method of mathematics.
In addition to the Elements, five works of Euclid have survived to the present day.
• Data deals with the nature and implications of "given" information in geometrical problems; the subject matter is closely related to the first four books of the Elements.
• On Divisions of Figures, which survives only partially in Arabic translation, concerns the division of geometrical figures into two or more equal parts or into parts in given ratios. It is
similar to a third-century C.E. work by Heron of Alexandria, except that Euclid's work characteristically lacks any numerical calculations.
• Phaenomena concerns the application of spherical geometry to problems of astronomy.
• Optics, the earliest surviving Greek treatise on perspective, contains propositions on the apparent sizes and shapes of objects viewed from different distances and angles.
• Catoptrics, which concerns the mathematical theory of mirrors, particularly the images formed in plane and spherical concave mirrors.
All of these works follow the basic logical structure of the Elements, containing definitions and proved propositions.
There are four works credibly attributed to Euclid which have been lost.
• Conics was a work on conic sections that was later extended by Apollonius of Perga into his famous work on the subject.
• Porisms might have been an outgrowth of Euclid's work with conic sections, but the exact meaning of the title is controversial.
• Pseudaria, or Book of Fallacies, was an elementary text about errors in reasoning.
• Surface Loci concerned either loci (sets of points) on surfaces or loci which were themselves surfaces; under the latter interpretation, it has been hypothesized that the work might have dealt
with quadric surfaces.
Euclid's Elements (Greek: Στοιχεῖα) is a mathematical and geometric treatise, consisting of thirteen books, written around 300 B.C.E. It comprises a collection of definitions, postulates (axioms),
propositions (theorems and constructions), and proofs of the theorems. The thirteen books cover Euclidean geometry and the ancient Greek version of elementary number theory. The Elements is the
oldest extant axiomatic deductive treatment of mathematics, and has proven instrumental in the development of logic and modern science.
Euclid's Elements is the most successful textbook ever written. It was one of the very first works to be printed after the printing press was invented, and is second only to the Bible in number of
editions published (well over one thousand). It was used as the basic text on geometry throughout the Western world for about two thousand years. For centuries, when the quadrivium was included in
the curriculum of all university students, knowledge of at least part of Euclid's Elements was required of all students. Not until the twentieth century did it cease to be considered something all
educated people had read.
The geometrical system described in Elements was long known simply as "the" geometry. Today, however, it is often referred to as Euclidean geometry to distinguish it from other so-called
non-Euclidean geometries which were discovered during the nineteenth century. These new geometries grew out of more than two millennia of investigation into Euclid's fifth postulate (Parallel
postulate), one of the most-studied axioms in all of mathematics. Most of these investigations involved attempts to prove the relatively complex and presumably non-intuitive fifth postulate using the
other four (a feat which, if successful, would have shown the postulate to be in fact a theorem).
Scholars believe that Elements is largely a collection of theorems proved by earlier mathematicians in addition to some original work by Euclid. Euclid’s text provides some missing proofs, and
includes sections on number theory and three-dimensional geometry. Euclid's famous proof of the infinitude of prime numbers is in Book IX, Proposition 20.
Proclus, a Greek mathematician who lived several centuries after Euclid, writes in his commentary of the Elements: "Euclid, who put together the Elements, collecting many of Eudoxus's theorems,
perfecting many of Theaetetus's, and also bringing to irrefragable demonstration the things which were only somewhat loosely proved by his predecessors."
A version by a pupil of Euclid called Proclo was translated later into Arabic after being obtained by the Arabs from Byzantium and from those secondary translations into Latin. The first printed
edition appeared in 1482 (based on Giovanni Campano’s 1260 edition), and since then it has been translated into many languages and published in approximately one thousand different editions. In 1570,
John Dee provided a widely respected "Mathematical Preface," along with copious notes and supplementary material, to the first English edition by Henry Billingsley.
Copies of the Greek text also exist in the Vatican Library and the Bodlean Library in Oxford. However, the manuscripts available are of very variable quality and invariably incomplete. By careful
analysis of the translations and originals, hypotheses have been drawn about the contents of the original text (copies of which are no longer available).
Ancient texts which refer to the Elements itself and to other mathematical theories that were current at the time it was written are also important in this process. Such analyses are conducted by J.
L. Heiberg and Sir Thomas Little Heath in their editions of Elements.
Also of importance are the scholia, or annotations to the text. These additions, which often distinguished themselves from the main text (depending on the manuscript), gradually accumulated over time
as opinions varied upon what was worthy of explanation or elucidation.
Outline of the Elements
The Elements is still considered a masterpiece in the application of logic to mathematics, and, historically, its influence in many areas of science cannot be overstated. Scientists Nicolaus
Copernicus, Johannes Kepler, Galileo Galilei, and especially Sir Isaac Newton all applied knowledge of the Elements to their work. Mathematicians (Bertrand Russell, Alfred North Whitehead) and
philosophers such as Baruch Spinoza have also attempted to use Euclid’s method of axiomatized deductive structures to create foundations for their own respective disciplines. Even today, introductory
mathematics textbooks often have the word elements in their titles.
The success of the Elements is due primarily to its logical presentation of most of the mathematical knowledge available to Euclid. Much of the material is not original to him, although many of the
proofs are his. However, Euclid's systematic development of his subject, from a small set of axioms to deep results, and the consistency of his approach throughout the Elements, encouraged its use as
a textbook for about two thousand years. The Elements still influences modern geometry books. Further, its logical axiomatic approach and rigorous proofs remains the cornerstone of mathematics.
Although Elements is primarily a geometric work, it also includes results that today would be classified as number theory. Euclid probably chose to describe results in number theory in terms of
geometry because he could not develop a constructible approach to arithmetic. A construction used in any of Euclid's proofs required a proof that it is actually possible. This avoids the problems the
Pythagoreans encountered with irrationals, since their fallacious proofs usually required a statement such as "Find the greatest common measure of ..."^[1]
First principles
Euclid's Book 1 begins with 23 definitions such as point, line, and surface—followed by five postulates and five "common notions" (both of which are today called axioms). These are the foundation of
all that follows.
1. A straight line segment can be drawn by joining any two points.
2. A straight line segment can be extended indefinitely in a straight line.
3. Given a straight line segment, a circle can be drawn using the segment as radius and one endpoint as center.
4. All right angles are congruent.
5. If two lines are drawn which intersect a third in such a way that the sum of the inner angles on one side is less than two right angles, then the two lines inevitably must intersect each other on
that side if extended far enough.
Common notions:
1. Things which equal the same thing are equal to one another (transitive property of equality).
2. If equals are added to equals, then the sums are equal.
3. If equals are subtracted from equals, then the remainders are equal.
4. Things which coincide with one another are equal to one another. (Reflexive property of equality)
5. The whole is greater than the part.
These basic principles reflect the interest of Euclid, along with his contemporary Greek and Hellenistic mathematicians, in constructive geometry. The first three postulates basically describe the
constructions that one can carry out with a compass and an unmarked straightedge. A marked ruler, used in neusis construction, is forbidden in Euclidian construction, probably because Euclid could
not prove that verging lines meet.
Parallel Postulate
The last of Euclid's five postulates warrants special mention. The so-called parallel postulate always seemed less obvious than the others. Euclid himself used it only sparingly throughout the rest
of the Elements. Many geometers suspected that it might be provable from the other postulates, but all attempts to do this failed.
By the mid-nineteenth century, it was shown that no such proof exists, because one can construct non-Euclidean geometries where the parallel postulate is false, while the other postulates remain
true. For this reason, mathematicians say that the parallel postulate is independent of the other postulates.
Two alternatives to the parallel postulate are possible in non-Euclidean geometries: either an infinite number of parallel lines can be drawn through a point not on a straight line in a hyperbolic
geometry (also called Lobachevskian geometry), or none can in an elliptic geometry (also called Riemannian geometry). That other geometries could be logically consistent was one of the most important
discoveries in mathematics, with vast implications for science and philosophy. Indeed, Albert Einstein's theory of general relativity shows that the "real" space in which we live can be non-Euclidean
(for example, around black holes and neutron stars).
Contents of the thirteen books
Books 1 through 4 deal with plane geometry:
• Book 1 contains the basic properties of geometry: the Pythagorean theorem, equality of angles and areas, parallelism, the sum of the angles in a triangle, and the three cases in which triangles
are "equal" (have the same area).
• Book 2 is commonly called the "book of geometrical algebra," because the material it contains may easily be interpreted in terms of algebra.
• Book 3 deals with circles and their properties: inscribed angles, tangents, the power of a point.
• Book 4 is concerned with inscribing and circumscribing triangles and regular polygons.
Books 5 through 10 introduce ratios and proportions:
• Book 5 is a treatise on proportions of magnitudes.
• Book 6 applies proportions to geometry: Thales' theorem, similar figures.
• Book 7 deals strictly with elementary number theory: divisibility, prime numbers, greatest common divisor, least common multiple.
• Book 8 deals with proportions in number theory and geometric sequences.
• Book 9 applies the results of the preceding two books: the infinitude of prime numbers, the sum of a geometric series, perfect numbers.
• Book 10 attempts to classify incommensurable (in modern language, irrational) magnitudes by using the method of exhaustion, a precursor to integration.
Books 11 through 13 deal with spatial geometry:
• Book 11 generalizes the results of Books 1–6 to space: perpendicularity, parallelism, volumes of parallelepipeds.
• Book 12 calculates areas and volumes by using the method of exhaustion: cones, pyramids, cylinders, and the sphere.
• Book 13 generalizes Book 4 to space: golden section, the five regular (or Platonic) solids inscribed in a sphere.
Despite its universal acceptance and success, the Elements has been the subject of substantial criticism, much of it justified. Euclid's parallel postulate, treated above, has been a primary target
of critics.
Another criticism is that the definitions are not sufficient to fully describe the terms being defined. In the first construction of Book 1, Euclid used a premise that was neither postulated nor
proved: that two circles with centers at the distance of their radius will intersect in two points (see illustration above). Later, in the fourth construction, he used the movement of triangles to
prove that if two sides and their angles are equal, then they are congruent; however, he did not postulate or even define movement.
In the nineteenth century, the Elements came under more criticism when the postulates were found to be both incomplete and superabundant. At the same time, non-Euclidean geometries attracted the
attention of contemporary mathematicians. Leading mathematicians, including Richard Dedekind and David Hilbert, attempted to add axioms to the Elements, such as an axiom of continuity and an axiom of
congruence, to make Euclidean geometry more complete.
Mathematician and historian W. W. Rouse Ball put the criticisms in perspective, remarking that "the fact that for two thousand years [the Elements] was the usual text-book on the subject raises a
strong presumption that it is not unsuitable for that purpose."^[2]
1. ↑ Daniel Shanks (2002). Solved and Unsolved Problems in Number Theory. American Mathematical Society.
2. ↑ W. W. Rouse Ball (1960). A Short Account of the History of Mathematics, 4th ed. (Original publication: London: Macmillan & Co., 1908), Mineola, N.Y.: Dover Publications, 55. ISBN 0486206300.
See also
ISBN links support NWE through referral fees
• Artmann, Benno. (1999). Euclid: The Creation of Mathematics. New York: Springer. ISBN 0387984232.
• Ball, W. W. Rouse. (1908). A Short Account of the History of Mathematics, 4th ed. New York: Dover Publications, 1960. pp. 50–62 ISBN 0486206300
• Bulmer-Thomas, Ivor. (1971). "Euclid." Dictionary of Scientific Biography.
• Heath, Thomas L. The Thirteen Books of Euclid's Elements, 3 vols. New York: Dover Publications, 1956. ISBN 0486600882 (vol. 1), ISBN 0486600890 (vol. 2), ISBN 0486600904 (vol. 3)
• Heath, Thomas L. (1981). A History of Greek Mathematics, 2 vols. New York: Dover Publications. ISBN 0486240738, ISBN 0486240746
• Kline, Morris (1980). Mathematics: The Loss of Certainty. Oxford: Oxford University Press. ISBN 019502754X
External links
All links retrieved March 22, 2024.
General Philosophy Sources
New World Encyclopedia writers and editors rewrote and completed the Wikipedia article in accordance with New World Encyclopedia standards. This article abides by terms of the Creative Commons
CC-by-sa 3.0 License (CC-by-sa), which may be used and disseminated with proper attribution. Credit is due under the terms of this license that can reference both the New World Encyclopedia
contributors and the selfless volunteer contributors of the Wikimedia Foundation. To cite this article click here for a list of acceptable citing formats.The history of earlier contributions by
wikipedians is accessible to researchers here:
The history of this article since it was imported to New World Encyclopedia:
Note: Some restrictions may apply to use of individual images which are separately licensed. | {"url":"http://www.newworldencyclopedia.org/entry/Euclid","timestamp":"2024-11-06T08:21:57Z","content_type":"text/html","content_length":"79112","record_id":"<urn:uuid:c4064e91-6eff-403b-92e0-c2d7b20c2205>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00412.warc.gz"} |
Qvality: Non-parametric estimation of q-values and posterior error probabilities
Summary: Qvality is a C++ program for estimating two types of standard statistical confidence measures: the q-value, which is an analog of the p-value that incorporates multiple testing correction,
and the posterior error probability (PEP, also known as the local false discovery rate), which corresponds to the probability that a given observation is drawn from the null distribution. In
computing q-values, qvality employs a standard bootstrap procedure to estimate the prior probability of a score being from the null distribution; for PEP estimation, qvality relies upon
non-parametric logistic regression. Relative to other tools for estimating statistical confidence measures, qvality is unique in its ability to estimate both types of scores directly from a null
distribution, without requiring the user to calculate p-values.
All Science Journal Classification (ASJC) codes
• Computational Mathematics
• Molecular Biology
• Biochemistry
• Statistics and Probability
• Computer Science Applications
• Computational Theory and Mathematics
Dive into the research topics of 'Qvality: Non-parametric estimation of q-values and posterior error probabilities'. Together they form a unique fingerprint. | {"url":"https://collaborate.princeton.edu/en/publications/qvality-non-parametric-estimation-of-q-values-and-posterior-error","timestamp":"2024-11-05T02:46:35Z","content_type":"text/html","content_length":"51314","record_id":"<urn:uuid:9af125d8-2982-4ca8-a653-52487c45ce05>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00636.warc.gz"} |
5th Grade Numbers And Operations Base Ten Worksheet With Answers
5th Grade Numbers And Operations Base Ten Worksheet With Answers serve as foundational devices in the world of maths, providing a structured yet flexible system for learners to check out and grasp
numerical concepts. These worksheets offer an organized method to understanding numbers, nurturing a strong structure whereupon mathematical effectiveness prospers. From the simplest checking
exercises to the ins and outs of sophisticated calculations, 5th Grade Numbers And Operations Base Ten Worksheet With Answers cater to learners of varied ages and ability degrees.
Revealing the Essence of 5th Grade Numbers And Operations Base Ten Worksheet With Answers
5th Grade Numbers And Operations Base Ten Worksheet With Answers
5th Grade Numbers And Operations Base Ten Worksheet With Answers -
Multiply a four digit by a four digit number CCSS Math Content 5 NBT B 5 Multiply a three digit by a three digit number CCSS Math Content 5 NBT B 5 Divide a four digit by a two digit number CCSS Math
Content 5 NBT B 6 Divide a three digit by a two digit number CCSS Math Content 5 NBT B 6
These daily word problems help students practice important 5th grade math skills such as calculating additive volume
At their core, 5th Grade Numbers And Operations Base Ten Worksheet With Answers are automobiles for theoretical understanding. They encapsulate a myriad of mathematical concepts, guiding learners
with the maze of numbers with a collection of engaging and purposeful exercises. These worksheets go beyond the boundaries of typical rote learning, encouraging active interaction and cultivating an
instinctive grasp of mathematical relationships.
Supporting Number Sense and Reasoning
5Th Grade Numbers And Operations In Base Ten Worksheets NumbersWorksheet
5Th Grade Numbers And Operations In Base Ten Worksheets NumbersWorksheet
Curriculum Number And Operations In Base Ten Understand The Place Value System Detail Recognize that in a multi digit number a digit in one place represents 10 times as much as it represents in the
place to its right and 1 10
Curriculum Number And Operations In Base Ten Understand The Place Value System Detail Read write and compare decimals to thousandths 31 Common Core State Standards CCSS aligned worksheets found Place
Value Chart Thousandths Students can write large decimals on this place value chart
The heart of 5th Grade Numbers And Operations Base Ten Worksheet With Answers lies in growing number sense-- a deep understanding of numbers' significances and interconnections. They urge expedition,
welcoming students to study math operations, decipher patterns, and unlock the enigmas of series. With thought-provoking challenges and rational challenges, these worksheets end up being gateways to
honing reasoning abilities, nurturing the analytical minds of budding mathematicians.
From Theory to Real-World Application
Grade 5 Order Of Operations Worksheets Free Math Worksheets
Grade 5 Order Of Operations Worksheets Free Math Worksheets
Browse Printable 5th Grade Numbers and Operations Base 10 Math Worksheets Award winning educational materials designed to help kids succeed Start for free now
Understand the place value system Standards 5 NBT 1 4 Standard 5 NBT 3 Read write and compare decimals to thousandths Read and write decimals to thousandths using base ten numerals number names and
expanded form For example 347 392 3 100 4 10 7 1 3 1 10 9 1 100 2 1 1000
5th Grade Numbers And Operations Base Ten Worksheet With Answers act as avenues connecting theoretical abstractions with the palpable facts of day-to-day life. By instilling useful circumstances into
mathematical workouts, learners witness the importance of numbers in their environments. From budgeting and dimension conversions to comprehending analytical data, these worksheets encourage students
to possess their mathematical expertise beyond the boundaries of the class.
Diverse Tools and Techniques
Flexibility is inherent in 5th Grade Numbers And Operations Base Ten Worksheet With Answers, employing a collection of instructional devices to satisfy diverse learning designs. Visual help such as
number lines, manipulatives, and digital resources work as friends in imagining abstract ideas. This diverse strategy guarantees inclusivity, suiting students with different preferences, strengths,
and cognitive designs.
Inclusivity and Cultural Relevance
In a significantly varied world, 5th Grade Numbers And Operations Base Ten Worksheet With Answers welcome inclusivity. They go beyond social limits, incorporating examples and issues that reverberate
with learners from diverse histories. By incorporating culturally appropriate contexts, these worksheets cultivate an atmosphere where every student really feels represented and valued, enhancing
their connection with mathematical concepts.
Crafting a Path to Mathematical Mastery
5th Grade Numbers And Operations Base Ten Worksheet With Answers chart a training course towards mathematical fluency. They instill determination, crucial thinking, and analytic skills, necessary
attributes not only in mathematics yet in different elements of life. These worksheets encourage students to navigate the elaborate surface of numbers, supporting a profound gratitude for the style
and logic inherent in maths.
Welcoming the Future of Education
In an age noted by technological innovation, 5th Grade Numbers And Operations Base Ten Worksheet With Answers effortlessly adapt to digital platforms. Interactive interfaces and electronic resources
enhance conventional discovering, offering immersive experiences that go beyond spatial and temporal borders. This combinations of conventional methods with technical innovations advertises an
appealing age in education and learning, cultivating a much more vibrant and engaging learning environment.
Conclusion: Embracing the Magic of Numbers
5th Grade Numbers And Operations Base Ten Worksheet With Answers epitomize the magic inherent in mathematics-- an enchanting trip of exploration, exploration, and mastery. They transcend traditional
rearing, working as drivers for stiring up the fires of interest and query. Through 5th Grade Numbers And Operations Base Ten Worksheet With Answers, students embark on an odyssey, unlocking the
enigmatic globe of numbers-- one problem, one option, at a time.
Number Operations Base Ten Place Value Practice The Skill 3 3 5 Grades 3 To 5
Numbers And Operations In Base Ten Moving The Decimal CCSS 5 NBT 2
Check more of 5th Grade Numbers And Operations Base Ten Worksheet With Answers below
Order Of Operations Fifth Grade
Order Of Operation Worksheet
Numbers And Operations Base Ten 5th Grade Math
Using Base 10 Blocks To Write Numbers Worksheet By Teach Simple
Numbers And Operations In Base Ten Adding Within 100 CCSS 1 NBT 4 Facts Worksheets Base Ten
Ace 5Th Grade Orders Of Operation With This Comprehensive Worksheet Style Worksheets
Common Core Math 5 NBT 6 Super Teacher Worksheets
These daily word problems help students practice important 5th grade math skills such as calculating additive volume
Common Core Math 5 NBT 2 Super Teacher Worksheets
View PDF Math Buzz Week 25 Worksheets 121 through 125 Answer questions related to order of operations converting from kilometers to meters multiplying decimals factoring and long division with
decimals Level 5th Grade View PDF Multiplying Decimals by 10 100 and 1 000 Multiply each decimal by 10 100 or 1 000 using these task cards
These daily word problems help students practice important 5th grade math skills such as calculating additive volume
View PDF Math Buzz Week 25 Worksheets 121 through 125 Answer questions related to order of operations converting from kilometers to meters multiplying decimals factoring and long division with
decimals Level 5th Grade View PDF Multiplying Decimals by 10 100 and 1 000 Multiply each decimal by 10 100 or 1 000 using these task cards
Using Base 10 Blocks To Write Numbers Worksheet By Teach Simple
Order Of Operation Worksheet
Numbers And Operations In Base Ten Adding Within 100 CCSS 1 NBT 4 Facts Worksheets Base Ten
Ace 5Th Grade Orders Of Operation With This Comprehensive Worksheet Style Worksheets
Number Operations Base Ten Place Value 3 5 Grades 3 To 5 Digital Lesson Educational
Numbers And Operations In Base Ten Rounding Decimals CCSS 5 NBT 4
Numbers And Operations In Base Ten Rounding Decimals CCSS 5 NBT 4
5th Grade Numbers And Operations In Base Ten Task Card Bundle The Teacher Next Door | {"url":"https://szukarka.net/5th-grade-numbers-and-operations-base-ten-worksheet-with-answers","timestamp":"2024-11-03T16:12:35Z","content_type":"text/html","content_length":"27594","record_id":"<urn:uuid:2ba326fe-1edf-43d5-b9fc-4162eed7d150>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00839.warc.gz"} |
Connexive Logic
First published Fri Jan 6, 2006; substantive revision Thu Jun 1, 2023
Many prominent systems of non-classical logic are subsystems of what is generally called ‘classical logic.’ Systems of connexive logic are contra-classical in the sense that they are neither
subsystems nor extensions of classical logic. Connexive logics have a standard logical vocabulary and comprise certain non-theorems of classical logic as theses. Since classical propositional logic
is Post-complete, any additional axiom in its language gives rise to the trivial system, so that any non-trivial system of connexive logic will have to leave out some theorems of classical logic. The
name ‘connexive logic’ was introduced by Storrs McCall (1963, 1964) and suggests that systems of connexive logic are motivated by some ideas about coherence or connection between the premises and the
conclusions of valid inferences or between the antecedent and the succedent (consequent) of valid implications. The kind of coherence in question concerns the meaning of implication and negation (see
the entries on indicative conditionals, the logic of conditionals, counterfactuals, and negation). One basic idea is that no formula provably implies or is implied by its own negation. This
conception may be expressed by requiring that for every formula A,
⊬ ~A → A and ⊬ A → ~ A,
but usually the underlying intuitions are expressed by requiring that certain schematic formulas are theorems:
AT: ~(~A → A) and
AT′: ~(A → ~A).
The first formula is often called Aristotle’s Thesis. If this non-theorem of classical logic is found plausible, then the second principle, AT′, would seem to enjoy the same degree of plausibility.
Indeed, also AT′ is sometimes referred to as Aristotle’s Thesis, for example in Routley 1978, Mortensen 1984, Routley and Routley 1985, and Ferguson 2016. As McCall (1975, p. 435) explains,
[c]onnexive logic may be seen as an attempt to formalize the species of implication recommended by Chrysippus:
And those who introduce the notion of connection say that a conditional is sound when the contradictory of its consequent is incompatible with its antecedent. (Sextus Empiricus, translated in
Kneale and Kneale 1962, p. 129.)
Using intuitionistically acceptable means only, the pair of theses AT and AT′ is equivalent in deductive power with another pair of schemata, which in established terminology are called (Strong)
Boethius’ Theses (cf. Routley 1978) and which may be viewed, in addition with their converses, as capturing Chrysippus’ idea:
BT: (A → B) → ~(A → ~B) and
BT′: (A → ~B) → ~(A → B).
The names ‘Aristotle’s Theses’ and ‘Boethius’ Theses’ are, of course, not arbitrarily chosen. As to AT, it is argued in Aristotle’s Prior Analytics 57b14 that it is impossible that if not-A, then A,
see Łukasiewicz 1957, p. 50. Note, however, that Łukasiewicz and Kneale (1957, p. 66) maintain that Aristotle is making a mistake here. Moreover, Boethius has been said to hold in De Syllogismo
Hypothetico 843D that ‘if A then not-B’ is the negation of ‘if A, then B’, (“he said that the negative of Si est A, est B was Si est A, non est B,” Kneale and Kneale 1962, p. 191). If we look at De
Syllogismo Hypothetico 843C-D, we find:
Sunt autem hypotheticae propositiones, aliae quidem affirmativae, aliae negativae […] affirmativa quidem, ut cum dicimus, si est a, est b; si non est a, est b; negativa, si est a, non est b, si
non est a, non est b. Ad consequentem enim propositionem respiciendum est, ut an affirmativa an negativa sit propositio judicetur. (Note that there is a misprint in Migne’s edition from 1860,
where instead of the first occurrence of “si non est a, est b” in the above quote there is “si non est a, non est b”.)
Boethius here draws a distinction between affirmative and negative conditionals and explains that negative conditionals have the form ‘if a, then not b’ and ‘if not a, then not b.’ This statement is
quite different from the reading offered by Kneale and Kneale. Note that AT follows from BT′ and AT′ follows from BT in logics in which A → A is theorem and modus ponens is an admissible inference
Let L be a language containing a unary connective ~ (negation) and a binary connective → (implication). A logical system in a language extending L is called a connexive logic if AT, AT′, BT, and BT′
are theorems and, moreover, implication is non-symmetric, i.e., (A → B) → (B → A) fails to be a theorem (so that → can hardly be understood as a bi-conditional). This is the now standard notion of
connexive logic. The connective → in a system of connexive logic is said to be a connexive implication.
Systems of connexive logic have been motivated and arrived at by different considerations. One motivation comes from relevance logic and the idea that semantic consequence is a content relationship,
see section 3.1. Moreover, principles of connexive logic have been discussed in conditional logic, see section 3.2 and the entries on indicative conditionals, the logic of conditionals, and
counterfactuals, in different accounts of negation, see section 3.3 and the entry on negation, and in approaches to contra-classical logics, see section 3.4. Another motivation emerges from empirical
research on the interpretation of negated conditionals in natural language and the aim to adequately model the semantical intuitions revealed by these investigations, see section 3.5.
Richard Angell in his seminal paper on connexive logic (1962) aimed at developing a logic of subjunctive, counterfactual conditionals in which what he called a ‘principle of subjunctive contrariety,’
∼((A → B) ∧ (A → ~B)), is provable. His proof system, PA1, contains BT as an axiom. Also Kapsner and Omori (2017) suggest that a connexive implication is suitable for formalizing counterfactual
conditionals, whereas Cantwell (2008), for example, suggested a system of connexive logic to formalize indicative natural language conditionals. According to McCall (1975, p. 451), “[o]ne of the most
natural interpretations of connexive implication is as a species of physical or ‘causal’ implication,” and in McCall (2014) he argues that “[t]he logic of causal and subjunctive conditionals is …
connexive, since ‘If X is dropped, it will hit the floor’ contradicts ‘If X is dropped, it will not hit the floor’.” Boethius’ Thesis BT indeed appears on a list of principles every “precausal”
connective should satisfy, see Urchs 1994. McCall (2012, p. 437), however, concedes that “causal logic is still very much an ongoing project, and no agreed-on formulation of it has yet been
achieved.” Moreover, the characteristic connexive principles are valid for the analysis of conditionals, generics, and disposition statements presented in van Rooij and Schulz 2019, 2020.
Further motivation for systems of connexive logic comes from more instrumental studies. In McCall 1967, connexive implication is motivated by reproducing in a first-order language all valid moods of
Aristotle’s syllogistic (see the entry on Aristotle’s logic). In particular, the classically invalid inference from ‘All A is B’ to ‘Some A is B’ is obtained by translating ‘Some A is B’ as ∃x(~(A(x)
→ ~B(x))), where → is a connexive implication. In Wansing 2007, connexive implication is motivated by introducing a negation connective into Categorial Grammar in order to express negative
information about membership in syntactic categories (see the entry on typelogical grammar). Consider, for example, the syntactic category (type) (n → s) of intransitive verbs, i.e., of expressions
that in combination with a name (an expression of type n) result in a sentence (an expression of type s). The idea is that an expression is of type ~(n → s) iff in combination with a name it results
in an expression that is not a sentence. In other words, an expression belongs to type ~(n → s) iff it is of type (n → ~s). In the short note Besnard 2011, Aristotle’s thesis AT′ is motivated as
expressing a notion of rule consistency for rule-based systems in knowledge representation. A further motivation arises from the problem of modelling conditional obligations in deontic logic. Weiss
(2019) suggests to understand a certain implication that validates Aristotle’s theses and weak versions of Boethius’ theses (cf. sections 2 and 3.2) as expressing a conditional obligation operator.
Yet another motivation arises from the problem of modelling conditional obligations in deontic logic. Another motivation in terms of applications comes from non-classical mathematics. There is an
extended literature on mathematical theories based on non-classical logics, including intuitionistic, fuzzy, relevant, and linear arithmetic and paraconsistent set theory. Early contributions in the
context of connexive logic are McCall’s 1967 connexive class theory, and Wiredu’s 1974 paper on connexive set theory. Ferguson (2016, 2019) takes up the challenge of investigating the prospects for a
connexive mathematics and explores the feasibility of a connexive arithmetic.
There are several further and some diverging notions of a connexive logic. In particular, the second decade of the 21st century has (unfortunately) seen the introduction of confusingly many new
notions of connexivity and non-uniform terminology. McCall (1966) introduced connexive logics as systems ranging from logics in which no proposition implies or is implied by its own negation to
logics in which BT is provable (together with non-symmetry of implication), and, similarly, Mares and Paoli (2019) characterize connexive logics as systems having some or all of AT, AT′, BT, and BT′
among their theorems (without explicitly requiring non-symmetry of implication). In McCall 2012, AT′ and BT are said to be the distinguishing marks of connexive logic, but note that AT and BT′ are
valid in the system CC1 due to Angell (1962) and MacCall (1966) as well. Logics in which some but not all of AT, AT′, BT, and BT′ are provable (or valid) are called ‘demi-connexive’ in Wansing,
Omori, and Ferguson 2016 (without explicitly requiring non-symmetry of implication), and are said to be quasi-connexive in Jarmużek and Malinowski 2019a. The identification of the negation of (A → B)
with (A → ~B), ascribed to Boethius by Kneale and Kneale (1962), suggests a strengthening of BT and BT′ to the equivalences:
BTe: (A → B) ↔ ~(A → ~B), and
BTe′: (A → ~B) ↔ ~(A → B).
Sylvan 1989 refers to BTe as a principle of hyperconnexive logic. The principles BTe and BTe′ are characteristic of the connexive logics developed subsequent to the definition of the connexive logic
C and its quantified version QC in Wansing 2005. According to McCall (2012), the converse of BT (the right-to-left direction of BTe) is highly unintuitive in light of what he takes to be
counterexamples from English. For a rejoinder see Wansing and Skurt 2018.
Kapsner (2012, 2019) refers to a logic that satisfies AT, AT′, BT, and BT′ and, moreover, satisfies the requirements
Unsat1: In no model, A → ~A is satisfiable (for any A), and in no model, ~A → A is satisfiable (for any A), and
Unsat2: In no model, A → B and A → ~B are simultaneously satisfiable (for any A and B)
as strongly connexive, whereas if the conditions Unsat1 and Unsat2 are not both satisfied, the system is only called weakly connexive. Kapsner motivates the extra conditions by two intuitions, namely
that it is not the case that a formula A should imply or be implied by its own negation, and that if A implies B, then A does not imply not-B (and if A implies not-B, then A does not imply B). These
intuitions may, however, also be seen to motivate ⊬ ~A → A and ⊬ A → ~ A, respectively (A →B) ⊬ (A → ~B) and (A → ~B) ⊬ (A → B) instead of Unsat1 and Unsat2. Moreover, imposing Unsat1 and Unsat2
precludes systems that satisfy the variable sharing property (i.e., broadly relevant or sociative logics in Routley’s (1989) terminology, for which it holds that if A → B is a theorem, then A and B
share at least one propositional variable) and satisfy the deduction theorem from being connexive. So far only few strongly connexive logics satisfying the non-symmetry of implication condition are
known, namely the system CC1, which, however, “is an awkward system in many ways” (McCall 2012, p. 429), see section 4.1, and the Boolean connexive relatedness logics from Jarmużek and Malinowski
2019a, see section 4.4. In Wansing and Unterhuber 2019, logics that validate Boethius’ theses only in rule form ((A → B) ⊢ ~(A → ~B) and (A → ~B) ⊢ ~(A → B)) are called weakly connexive. Weiss (2019)
considers a language with classical negation, ¬, classical implication, ⊃, and another binary conditional, →, (notation adjusted). He calls a logic half-connexive if it validates the Weak Boethius’
BTw: (A → B) ⊃ ¬(A → ¬B), and
BTw′: (A → ¬B) ⊃ ¬(A → B),
and refers to a logic as connexive if in addition it validates AT and AT′ for → and ¬. The Weak Boethius’ Thesis BTw was introduced in Pizzi 1977 as “conditional Boethius’ thesis” for the connexive
implication seen as standing for a counterfactual conditional.
In Kapsner 2019, the demand of strong connexivity is evaluated as “too stringent a requirement,” and the notion of plain humble connexivity is introduced by restricting Aristotle’s theses, Boethius’
theses, Unsat1, and Unsat2 to satisfiable antecedents. A survey of the terminology and various notions of connexivity used in the literature is presented in the Supplement on Terminology. It remains
to be seen whether all the notions in addition to the established concept of connexive logic will turn out to be conducive.
If a language is used in which implication is not taken as primitive but is defined in terms of other connectives, connexive logics could also be seen as diverging from the orthodoxy of classical
logic by giving a deviant account of those connectives. A definition of a connexive implication in terms of negation, conjunction, and necessity can be found in McCall 1966 and Angell 1967b. More
recently, Francez (2020) suggested the notion of “poly-connexivity” to highlight a modification of the familiar falsity conditions of conjunctions and disjunctions (in addition to adopting falsity
conditions of implications as expressed by BTe′).
McCall (2012) emphasizes that there is a history of two thousand three hundred years of connexive implication. Historical remarks on connexive logic may be found, for instance, in Kneale and Kneale
1962, Sylvan 1989, Priest 1999, Nasti De Vincentis 2002, Nasti De Vincentis 2004, Nasti De Vincentis 2006, Estrada-González & Ramirez-Cámara 2020, and McCall 2012. In the latter survey, McCall refers
to ~((A → B) ∧ (~A → B)) as Aristotle’s Second Thesis and, following Martin 2004, to Angell’s principle of subjunctive contrariety ~((A → B) ∧ (A → ~B)) as Abelard’s First Principle, which is called
Strawson’s Thesis in, for example, Routley 1978 and Mortensen 1984. Aristotle’s Second Thesis and Abelard’s First Principle are interderivable with BT and with BT′, respectively, using intuitionistic
principles only. Besides Peter Abelard, another medieval philosopher who discussed and endorsed connexive principles was Richard Kilwardby, see Johnston 2019. El-Rouayheb (2009, p. 215) reports on a
critical discussion in thirteenth-century Arabic philosophy of Aristotle’s thesis AT for impossible antecedents. Modern connexive logic commenced with Nelson 1930, Angell 1962, and McCall 1966, while
MacColl (1878) may be regarded as a forerunner. After small numbers of publications from the 1960s until the 1990s, with S. McCall, R. Routley, and C. Pizzi as the main contributors, in the 21st
century a vigorous new interest in connexive logic emerged. Remarks on the history of modern connexive logic can be found in sections 3–5.
One question arising from a historical point of view is that of exegetical correctness. Can Aristotle’s and Boethius’ theses indeed be traced back to Aristotle and Boethius? Lenzen (2020) believes
that Aristotle and Boethius intended the theses named after them as “applicable only to ‘normal’ conditionals with antecedents which are not self-contradictory.” He states correspondingly restricted
versions of Aristotle’s theses in the language of modal propositional logic, principles which according to Lenzen (2019) can be found in Leibniz’s writings (after a transformation from Leibniz’s term
logic into a system of propositional logic) and where, notation adjusted, ↠ stands for strict implication:
LEIB1 ◊A ↠ ~(A ↠ ~A), and
LEIB2 ◊~A ↠ ~(~A ↠ A),
cf. also the modalized versions of AT′ and BT in Unterhuber 2016. Lenzen remarks that LEIB1 and LEIB2 are theorems of almost all systems of normal modal logic and therefore do not lead to any
non-classical system of connexive logic. A similar observation is made in Kapsner 2019. As to Boethius, the question has been raised whether it is adequate to render his term logic as a propositional
logic (see Martin 1991, McCall 2012), and Bonevac and Dever (2012, p. 192) refer to Abelard’s First Principle as the most famous thesis attributed to Boethius but note that they fail to find it in
Boethius. Irrespective of these exegetical issues, however, the challenge of connexive logic remains, namely to define nontrivial and well-motivated logical systems that validate both Aristotle’s and
Boethius’ theses and satisfy non-symmetry of implication. Another question arising from the long history of connexive logic is in which sense the system nowadays called classical logic is indeed
classical. A critical discussion of the classicality of classical logic from the point of view of paraconsistent and connexive logic can found in Wansing and Odintsov 2016.
A monograph developing a system of connexive logic in the context of solving a broad range of paradoxes is Angell 2002. The first monograph devoted to connexive logic sice its revival in the first
two decades of the 21st century, though not a comprehensive study of connexive logic, is Francez 2021. Starting from the connexive logics C and QC, Francez discuses topics such as the already
mentioned idea of ``polyconnexivity'', certain variations of Boethius’ theses (cf. section 3.6), and a connexive theory of classes.
Systems of connexive logic can be looked and arrived at from different perspectives. Although some of these viewpoints are closely interrelated, it may be helpful to briefly outline them separately.
Routley (1978), see also Sylvan 1989 (2000, chapter 5), suggested a conception of connexive logic different from McCall’s. If the requirement of a connection between antecedent and succedent of a
valid implication is understood as a content connection, and if a content connection obtains if antecedent and succedent are relevant to each other, then “the general classes of connexive and
relevant logics are one and the same” (Routley 1978, p. 393), cf. also Sarenac and Jennings 2003, where the connection between McCall’s connexive system CC1, presented in section 4.1, and relevance
preservation is studied.
Since every non-trivial system of connexive logic in the vocabulary of classical logic has to omit some classical tautologies, and since the standard paradoxes of non-relevant, material implication
can be avoided by rejecting Conjunctive Simplification, i.e., (A ∧ B) → A and (A ∧ B) → B, Routley requires for a connexive logic the rejection or qualification of Conjunctive Simplification (or
equivalent schemata). Although according to Routley (1978, Routley et al. 1982) and Routley and Routley (1985) the idea of negation as cancellation, see sections 3.3 and 4.3., motivates both the
failure of Conjunctive Simplification and AT’ and BT, the model-theoretic semantics for connexive logics developed in Routley 1978, see section 4.2, makes use of what has later come to be known as
the Routley star negation, see the entry on negation.
If the contraposition rule and uniform substitution are assumed and implication is transitive, the combination of Conjunctive Simplification and Aristotle’s Theses results in negation inconsistency,
i.e., there are formulas A such that A and its negation ~A are both theorems, see, for example, Woods 1968 and Thompson 1991. Non-trivial negation inconsistent logics (with a transitive consequence
relation) must be paraconsistent. Using certain three-valued truth tables, Mortensen (1984) pointed out that there are inconsistent but non-trivial systems satisfying both AT′ and Simplification.
Examples of non-trivial inconsistent systems of connexive logic satisfying Conjunctive Simplification are presented in section 4.5. The availability of such connexive systems may be appreciated in
view of the fact that Zermelo-Fraenkel set theory based on a system of connexive logic with Simplification is inconsistent, see Wiredu 1974. Mortensen (1984) also pointed out that the addition of AT′
to the relevance logic R of Anderson and Belnap has a trivializing effect, a fact shown in Routley et al. 1982 as well.
The relation between connexive logic and relevance logic can also be seen as follows. Let A and B be contingent formulas of classical propositional logic, i.e., formulas that are neither constantly
false nor constantly true. It is well-known that then the following holds in classical logic:
i. Not: ~A ⊢ A
ii. If A ⊢ B, then not: A ⊢ ~B
iii. If A ⊢ B, then A and B share some propositional variable (sentence letter)
If property (iii) is generalized to arbitrary formulas A and B, it is called the variable sharing property or variable sharing principle, which is generally seen as a necessary condition on a logic
to count as a relevance logic (see the entry logic: relevance). So-called containment logics (also called Parry systems or systems of analytic implication, see Parry 1933, Anderson and Belnap 1975,
Fine 1986, Ferguson 2015), satisfy the strong relevance requirement that if ⊢ A → B, then every propositional variable of B is also a propositional variable of A. The variable sharing property
indicates a content connection between A and B if B is derivable from A (or, semantically, A entails B). The properties (i) and (ii) may be viewed to express a content connection requirement on the
derivability relation in a negative way. If one wants to express these constraints in terms of the provability of object language formulas, one naturally arrives at Aristotle’s and Boethius’ theses.
Connexive relevance logics that combine the ternary frame semantics from relevance logic and the adjustment of falsity conditions along the lines of the connexive logic C (see section 4.5.1) have
been studied in Omori 2016a and Francez 2019, cf. sections 4.2 and 4.5.
Principles of connexive logic have been discussed in conditional logic (see the entry logic: conditional), beginning with Ramsey’s (1929) comments on what is now called the Ramsey Test, as pointed
out, e.g., in McCall 2012 and Ferguson 2014:
If two people are arguing “If p will q?” and are both in doubt as to p, they are adding p hypothetically to their stock of knowledge and arguing on that basis about q; so that in a sense “If p, q
” and “If p, ~q” are contradictories (notation adjusted).
Angell (1966, 1967a, 1978) refers to AT′ as the Law of Conditional Non-Contradiction. Usually, Abelard’s First Principle, ~((A → B) ∧ (A → ~B)) is considered as a principle of conditional
non-contradiction and as such is endorsed by some philosophers, e.g., Gibbard (1981, p. 231), Lowe (1995, p. 47), and Bennett (2003, p. 84), without making any reference to connexive logic.
Conditional non-contradiction fails, however, to be a valid principle in the semantics suggested by Stalnaker (1968) and Lewis (1973), cf. the discussion in Unterhuber 2013. The restrictedly
connexive logics presented in Weiss 2019 that validate Aristotle’s theses, BTw, and BTw′ stand in the tradition of Stalnaker and Lewis and are given an algebraic semantics that builds on the
algebraic semantics for conditional logics from Nute 1980.
Another motivation for connexive logic from the perspective of conditional logic has been presented by John Cantwell (2008) without noting that the introduced propositional logic is a system of
connexive logic. Cantwell considers the denial of indicative conditionals in natural language and argues that the denial of, say, the conditional ‘If Oswald didn’t kill Kennedy, Jack Ruby did.’
amounts to the assertion that if Oswald didn’t shoot Kennedy then neither did Jack Ruby. This suggests that (A → ~B) is semantically equivalent with ~(A → B). Also Claudio Pizzi’s work on logics of
consequential implication has been motivated in the context of conditional logic, see Pizzi 1977 and section 5.
As the characteristic connexive principles exhibit an implication and a negation connective, it is not very surprising that connexive logic can be approached also from considerations on the notion of
negation. Two different perspectives emerge with the ideas of negation as cancellation (erasure, neutralization, or subtraction) and negation as falsity. Negation as cancellation is a conception of
negation that can be traced back to Aristotle’s Prior Analytics and is often associated with Strawson, who held that a “contradiction cancels itself and leaves nothing” (1952, p. 3). Routley (1978,
Routley et al. 1982), Routley and Routley 1985, and Priest 1999 use the notion of subtraction negation to motivate connexive principles. Routley and Routley (1985, p. 205) present the cancellation
view of negation as follows:
∼A deletes, neutralizes, erases, cancels A (and similarly, since the relation is symmetrical, A erases ∼A), so that ∼A together with A leaves nothing, no content. The conjunction of A and ∼A says
nothing, so nothing more specific follows. In particular, A ∧ ∼A does not entail A and does not entail ∼A.
Note that if a logic implements the cancellation view of negation, it will also be paraconsistent because the ex contradictione quodlibet principle, (A ∧ ∼A) ⊢ B, will not be valid. (The idea of ex
contradictione nihil sequitur is discussed in Wagner 1991.) According to the Routleys, a connection between the subtraction account of negation and Aristotle’s thesis AT′ then arises as follows
(Routley and Routley 1985, p. 205):
Entailment is inclusion of logical content. So, if A were to entail ~A, it would include as part of its content, what neutralizes it, ~A, in which event it would entail nothing, having no
content. So it is not the case that A entails ~A, that is Aristotle’s thesis ~(A → ~A) holds.
Accordingly then, for Routley (Routley et al. 1982, p. 82) connexivism has two leading theses, namely:
1. Simplification (A ∧ B → A, A ∧ B → B) fails to hold, and its use ... is what is responsible for the paradoxes of implication ...
2. Every statement is self-consistent, symbolically A ◇ A, where the relation of consistency with, symbolised ◇, is interconnected with implication in the standard fashion: A ◇ B ↔. ~(A → ~B).
The cancellation view of negation has been heavily criticized in Wansing and Skurt 2018, where it is stressed that connexive logic can be detached from the notion of negation as erasure and the
failure of Conjunctive Simplification.
The notion of negation as definite falsity, in contrast to negation as absence of truth, does not support the failure of Conjunctive Simplification but rather the failure of ex contradictione
quodlibet if it is coupled with an understanding of inference as information flow, because the information that A ∧ ~A does not necessarily give the information that B, for any B whatsoever. This
suggests a separate treatment of (support of) truth and (support of) falsity conditions, which enables adopting the falsity conditions for implications represented by BTe′.
Humberstone (2000) calls a logic contra-classical just in case not every formula provable in the logic is provable in classical logic (and, moreover, considers a more demanding notion of a
contra-classical logic by requiring that there is no way of translating its connectives in such a way that one obtains a subsystem of classical logic). There are several different kinds of
contra-classical logics, such as, for example, Abelian logics containing the axiom schema ((A → B) → B) → A, connexive logics, and logics of logical bilattices. The negation, truth order conjunction,
weak implication, and information order disjunction fragment of Arieli and Avron’s (1996) bilattice logic BL[⊃], for example, is a standard propositional vocabulary containing a negation, a
conjunction, a disjunction, and a conditional. It is a non-trivial but inconsistent logic and as such contra-classical.
In Omori and Wansing 2018, a way of obtaining contra-classical logics is delineated, and in Estrada-Gónzalez 2019 it is discussed in more detail. Following the pattern of the presentation of the
connexive logic C, cf. section 4.5.1, the general idea is that of keeping some standard (support of) truth conditions for a logical operation and modifying its (support of) falsity conditions. From a
bilateralist perspective that treats truth and falsity as well as provability and disprovability or refutability as separate semantical, respectively proof-theoretical dimensions that are on a par,
there is also the strategy of keeping some standard (support of) falsity conditions for a logical operation and modifying its (support of) truth conditions. Connexive logic can be seen as
contributing to the exploration of roads to contra-classicality.
In McCall 2012 one can find some results on testing the endorsement of connexive principles (AT′, BT, and BT stated as a rule) given by indicative conditionals in English in concrete form on a group
of 89 non-expert philosophy students at McGill University in Canada. These findings support the intuition that laymen speakers of English subscribe to those connexive principles to a rather high
degree: 88% in the case of AT′, 85% in the case of BT in rule form, and 84% in the case of BT.
Empirical studies on Aristotle’s theses have been carried out by Pfeifer (2012), Pfeifer and Tulkki (2017), and Pfeifer and Yama (2017). In one experiment, presented in Pfeifer 2012, the sample
consisted of 141 psychology students (110 females and 31 males) at the University of Salzburg, Austria. Both AT and AT′ were tested as abstract as well as concrete indicative conditionals. In a
second experiment, 40 students without training in logic (20 females and 20 males) had to solve tasks involving concrete indicative conditionals in English. In this case, scope ambiguities arising
from the negation of conditionals were ruled out. Both experiments provide evidence against the interpretation of indicative conditionals in English as Boolean implication and support the connexive
reading of negated implications expressed by Aristotle’s theses. Pfeifer sees these findings as strong evidence for interpreting indicative conditionals as conditional events. This interpretation
predicts that people should strongly believe that Aristotle’s theses are valid because the only coherent assessment for them is the probability value 1.
Pfeifer and Tulkki (2017) tested the interpretation of subjunctive versus indicative conditionals among a group of 60 students of the University of Helsinki, Finland, (30 females and 30 males) and
found no statistically significant differences between the endorsement of AT and AT′ (72%, respectively 77%). Another experiment presented in Pfeifer and Yama 2017 found no cultural differences
between the Western samples and an Eastern sample when testing the endorsement of AT and AT′ among 63 Japanese university students from the Graduate School of Literature and Human Behavioral Sciences
at Osaka City University, with an endorsement of AT and AT′ by 65% and 76% of the participants, respectively.
Khemlani et al. 2014 report on an experiment testing a sample of 21 native English-speaking participants on denying concrete natural language conditionals (against the background of Johnson-Laird’s
mental model theory, assuming classical logic). Whereas 28% of the participants endorsed denial conditions in accordance with classical logic, 34% endorsed denial conditions according to Boethius’
thesis BT.
Another experiment on the negation of indicative conditionals is presented in Egré and Politzer 2013. They consider weakenings of the classical conjunctive understanding of ~(A → B) as (A ∧ ~B) and
the connexive reading as (A → ~B), namely (A ∧ ◊~B), respectively (A → ◊~B). Exploiting the flexibility of the “tweaking of the falsity conditions”-approach to connexive logic, presented in sections
3.7 and 4.5, Omori 2019 interprets (A → ◊~B) in a variant of the modal logic BK from Odintsov and Wansing 2010 by suitably adjusting the falsity condition for implications (A → B), so that ~(A → B)
is provably equivalent with (A → ◊~B).
Modern modal logic started as a syntactical enterprise with C.I. Lewis, who defined a series of axiom systems to capture notions of strict implication. In a similar vein, Lewis’ student E. Nelson
came up with an axiom system from which Aristotle’s and Boethius’ theses can be derived. The system is called NL in Mares and Paoli 2019, where its axioms and inference rules are presented as follows
(we here use schematic letters for arbitrary formulas instead of propositional variables and a different symbol for negation):
1.1 A → A
1.2 (A|B) → (B|A)
1.3 A → ~~A
1.4 (A → B) → (A ◦ B)
1.5 (A ≠ B ≠ C) → (((A → B ) ∧ (B → C )) → (A → C ))
1.6 (A ∧ B) = (B ∧ A)
1.7 ((A ∧ B ) → C ) → ((A ∧ ~C ) → ~B )
R1 if ⊢ A and ⊢ (A → B), then ⊢ B (modus ponens)
R2 if ⊢ A and ⊢ B, then ⊢ (A ∧ B) (adjunction)
where ◦ is a primitive binary consistency operator, (A|B) (inconsistency) is defined as ~(A ◦ B), A = B is defined as (A → B) ∧ (B → A), A ≠ B as ~ (A = B), and A ≠ B ≠ C is an abbreviation of (A ≠ B
) ∧ (B ≠ C) ∧ (A ≠ C).
Providing a sound and complete semantics for NL is an open problem in connexive logic. Angell’s (1962) axiomatic proof system PA1 can also be seen as belonging to the proof-theoretical tradition
because it is incomplete with respect the truth tables presented by Angell. However, Angell proved PA1 to be sound with respect to these truth tables, thereby for the first time presenting a
non-triviality proof for a formal system of connexive logic. Providing an intuitive sound and complete semantics for PA1 is another open problem in connexive logic. (Routley [1978, p. 409] admits
that his characterization is “not intuitively very satisfying: as it stands the modelling is rather complex, with the modelling conditions exceeding in number the postulates they model, and basic
connexive postulates like Boethius, instead of being validated in a natural way, have fairly intractable conditions.”)
In proof-theoretic semantics, proof systems of a suitable form are seen as providing a meaning theory, see the entry proof-theoretic semantics. In that spirit, Francez (2016) presents two natural
deduction proof systems for a propositional language with negation and implication, one in which AT, AT′, BT, and BT′ are provable, and another one in which AT, AT′, and the following variations of
Boethius’ theses are provable:
B3: (A → B) → ~(~A → B) and
B4: (~A→ B) → ~(A → B).
Francez motivates these principles by certain natural language discourses and a “dual Ramsey Test” that modifies the Ramsey test by assuming that in the course of arguing “If p will q?,” ~p is
hypothetically added to a stock of knowledge. Francez’ natural deduction rules are straightforwardly obtained by modifying the natural deduction rules for the negation and implications fragment of
David Nelsons’s four-valued constructive logic N4, cf. Kamide and Wansing 2012, in the manner that leads from N4 to the constructive connexive logic C from Wansing 2005, cf. section 4.5.1. In Francez
2019 the natural deduction system that gives AT, AT′, BT, and BT′ is relevantized as in the familiar natural deduction proof system for the implication fragment of the relevant logic R by introducing
subscripts for book-keeping in order to avoid empty, irrelevant implication introductions. Omori (2016b) adds conjunction and disjunction to the language of Francez 2016, gives an axiomatization and
a characterizing semantics for the natural deduction system that allows to prove B3 and B4, and observes that although AT and AT′ are valid, BT, and BT′ are invalid, which prompts him to call the
provable equivalence ~(A → B) ↔ (~A → B) “half-connexive”.
The natural deduction proof system in Wansing 2016b can be seen a contribution to a bilateralist proof-theoretic semantics for certain connexive logics given in terms of provability as well as
refutability conditions. In addition to a connexive implication that internalizes a notion of provability into the object language, there is also a connexive co-implication that internalizes a
refutability relation. The resulting bi-connexive logic 2C is a connexive variant of the bi-intuitionistic logic 2Int from Wansing 2016a, 2018. A natural deduction calculus for a quantum logic
satisfying Aristotle’s theses is presented in Kamide 2017.
According to Schroeder-Heister 2009, Gentzen’s sequent calculus is a “more adequate formal model of hypothetical reasoning” than natural deduction, and proof-theoretic semantics has also been
developed with respect to various kinds of sequent calculi. Sequent systems for connexive logics can be found in Wansing 2007, Wansing 2008, McCall 2014, Kamide and Wansing 2011, 2016, Kamide,
Shramko and Wansing 2017, and Kamide 2019.
A central approach to connexive logic is given by many-valued and model-theoretic semantics in terms of truth values or support of truth and support of falsity conditions. As explained in Omori and
Wansing 2019, the semantics of several connexive logics can be described as either (i) modifying some standard (support of) truth conditions of conditionals of a certain kind or keeping standard
truth conditions in combination with more complex model structures, or (ii) as tweaking the standard (support of) falsity conditions of certain familiar implications. Given the multitude of different
connexive logics and the flexibility of the adjustment of falsity conditions in combination with standard (support of) truth conditions, this classification provides a general perspective.
A key observation for this classificatory enterprise comes from Omori and Sano 2015, where a mechanical procedure is described for turning truth tables using the four generalized truth values of
first-degree entailment logic, FDE, see the entries on truth values and relevance logic and Omori and Wansing 2017, into pairs of positive and negative conditions in terms of containing or not
containing the classical truth values 0 and 1. Then, in McCall’s system CC1 a connexive conditional A → B receives a designated value (is true) in a model just in case (i) A does not receive a
designated value or B does and (ii) 0 belongs to the value of A iff it belongs to the value of B. In this sense, the connexive implication of Angell-McCall is obtained by adding a condition to the
truth condition for Boolean implication.
The consequential conditional in the logics of consequential implication investigated by Pizzi (1977, 1991, 1993, 1996, 2004, 2005, 2008, 2018) and Pizzi and Williamson (1997, 2005) validates
Aristotle’s theses but fails to validate Boethius’ theses. It is thus connexive only in a weak sense, but since the consequential implication is a strict conditional that is required to satisfy some
extra condition, also logics of consequential implication fit into the classificatory scheme provided by the semantical perspective. The following table is a slight extension of the summarizing
overview from Omori and Wansing 2019 (with pointers to the relevant sections of the present entry), where the approaches above the double line adjust (support of) truth conditions (or add semantical
machinery to standard truth conditions), whereas the approach below the double line tweaks (support of) falsity conditions:
conditional negation consequence relation
Angell-McCall, section 4.1 material + tweak classical standard
Routley, section 4.2 relevant + ‘generation relation’ star standard
Priest, section 4.3 strict + tweak classical non-standard
Jarmużek and Malinowski, section 4.4 material + double-barreld analysis classical standard
Pizzi, section 5 strict + tweak classical standard
Wansing, section 4.5 various kinds De Morgan standard
A dialogical semantical treatment of connexive logic can be found in Rahman and Rückert 2001.
Whereas the basic ideas of connexive logic can be traced back to antiquity, the search for formal systems with connexive implication seems to have begun only in the 19th century in the work of H.
MacColl (1878), see also Rahman and Redmond 2008. The basic idea of connexive implication was spelled out also by E. Nelson (1930), and a more recent formal study of systems of connexive logic
started in the 1960s. In McCall 1966, S. McCall presented an axiomatization of a system of propositional connexive logic semantically introduced by Angell (1962) in terms of certain four-valued
matrices. The language of McCall’s logic CC1 contains as primitive (notation adjusted) a unary connective ~ (negation) and the binary connectives ∧ (conjunction) and → (implication). Disjunction ∨
and equivalence ↔ are defined in the usual way. The schematic axioms and the rules of CC1 are as follows:
A1 (A → B) → ((B → C) → (A → C))
A2 ((A → A) → B) → B
A3 (A→ B) → ((A ∧ C) → (B ∧ C))
A4 (A ∧ A) → (B → B)
A5 (A ∧ (B ∧ C)) → (B ∧ (A ∧ C))
A6 (A ∧ A) → ((A → A) → (A ∧ A))
A7 A → (A ∧ (A ∧ A))
A8 ((A → ~B) ∧ B) → ~A
A9 (A ∧ ~(A ∧ ~B)) → B
A10 ~(A ∧ ~(A ∧ A))
A11 (~A ∨ ((A → A) → A)) ∨ (((A → A) ∨ (A → A)) → A)
A12 (A → A) → ~(A → ~A)
R1 if ⊢ A and ⊢ (A → B), then ⊢ B (modus ponens)
R2 if ⊢ A and ⊢ B, then ⊢ (A ∧ B) (adjunction)
Among these axiom schemata, only A12 is contra-classical. The system CC1 is characterized by the following four-valued truth tables with designated values 1 and 2:
∧ 1 2 3 4 → 1 2 3 4
McCall emphasizes that the logic CC1 is only one among many possible systems satisfying the theses of Aristotle and Boethius. Although CC1 is a system of connexive logic, its algebraic semantics
appears to be only a formal tool with little explanatory capacity. In CC1, the constant truth functions 1, 2, 3, and 4 can be defined as follows (McCall1966, p. 421): 1 := (p → p), 2 := ~(p ↔ ~p), 3
:= (p ↔ ~p), 4 := ~(p→ p), for some sentence letter p. As Routley and Montgomery (1968, p. 95) point out, CC1 “can be given a semantics by associating the matrix value 1 with logical necessity, value
4 with logical impossibility, value 2 with contingent truth, and value 3 with contingent falsehood. However, many anomalies result; e.g. the conjunction of two contingent truths yields a necessary
truth”. Moreover, McCall points out that CC1 has some properties that are difficult to justify if the name ‘connexive logic’ is meant to reflect the fact that in a valid implication A → B there
exists some form of connection between the antecedent A and the succedent B. Axiom A4, for example, is bad in this respect. On the other hand, CC1 might be said to undergenerate, since (A ∧ A) → A
and A → (A ∧ A) fail to be theorems of CC1. Routley and Montgomery (1968) showed that the addition of the latter formulas to only a certain subsystem of CC1 leads to inconsistency. For a defense of
Angell’s PA1 against Routley and Montgomery’s critical observations see Bode 1979.
These observations may well have distracted many non-classical logicians from connexive logic at that time. If the validity of Aristotle’s and Boethius’ Theses is distinctive of connexive logics, it
is, however, not quite clear how damaging the above criticism is. In order to construct a more satisfactory system of connexive logic, McCall (1975) defined the notions of a connexive algebra and a
connexive model and presented an axiom system CFL that is characterized by the class of all connexive models. In the language of CFL, however, every implication is first-degree, i.e., no nesting of →
is permitted. McCall refers to a result by R. Meyer showing that the valid implications of CFL form a subset of the set of valid material equivalences and briefly discusses giving up the syntactic
restriction to first-degree implication. Meyer (1977) showed that the first-degree fragment of the normal modal logic S5 (and in fact every normal modal logic between KT and S5, cf. the entry logic:
modal) and CFL are equivalent in the following sense: all theorems of CFL are provable in S5 if the connexive implication A → B is defined as □(A ⊃ B) ∧ (A ≡ B), where ⊃ and ≡ are classical
implication and equivalence, respectively, and every first-degree theorem of S5 is provable in CFL if □A (“it is necessary that A”) is defined as (~p ∨ p)→A. In summary, it seems fair to say that as
the result of the investigations into connexive logic in the 1960s and 1970s, connexive logic, its ancient roots notwithstanding, appeared as a sort of exotic branch of non-classical logic.
More recently, Cantwell (2008) presented a truth table semantics for a system of connexive logic together with a proof-theoretical characterization. The truth tables for negation and implication are
taken from Belnap 1970, but Cantwell's three-valued truth table for the conditional can already be found in a paper by William Cooper (1968). Like Cantwell, Cooper wanted to formally model how
conditional sentences in the indicative mood and expressed by means of if-then are used in ordinary conversational English. (Whereas Cantwell takes the entire three-element set of truth values as the
co-domain of assignment functions for propositional variables, Cooper restricts assignment functions to mappings from the set of propositional variables to the two-element set of classical truth
values.) Cantwell considers a language containing the constantly false proposition ⊥ and the following three-valued truth tables for negation, conjunction, disjunction, and implication with
designated values T and − (where ‘T’ stands for truth and ‘F’ for falsity):
∧ T F −
T T F −
F F F F
− − F −
∨ T F − → T F −
T T T T T T F −
F T F − F − − −
− T − − − T F −
In this system, introduced as a system of conditional negation, CN, (A → ~B) and ~(A → B) have the same value under every assignment of truth values to propositional variables. Cantwell’s system thus
validates BTe and BTe′, and it turns out to be the connexive logic MC from Wansing 2005, see section 4.5.3, extended by the Law of Excluded Middle, A ∨ ~A. A certain expansion of CN is studied in
Olkhovikov 2002, 2016 and, independently, in Omori 2016c, see section 4.5.3.
A three-valued logic that validates Aristotle’s theses but not Boethius’ theses and that is subminimally connexive and Kapsner strong in the terminology of Estrada-González & Ramirez-Cámara 2016 is
the three-valued logic MRS^P that was introduced in Estrada-González 2008. In Estrada-González & Ramirez-Cámara 2016, MRS^P is discussed against the background of Cantwell’s three-valued connexive
logic CN and Mortensen’s (1984) three-valued connexive logic, dubbed M3V by McCall (2012).
McCall (2014) presents a cut-free sequent calculus for a system of connexive logic that he calls “connexive Gentzen.” The calculus has the non-standard feature of using pairs of axioms that are not
logical truths. An annotation with subscripts is used to enable the elimination of dependencies on such non-standard axioms in the course of a derivation. The resulting system differs from CC1 in
that p → (p ∧ p) and (p ∧ p)→ p are provable, and it is shown to be sound with respect to certain four-valued matrices. Sound and complete cut-free sequent calculi for certain constructive and modal
connexive logics have been presented for the first time in Wansing 2008 and Kamide and Wansing 2011.
In the late 1970s and the 1980s, connexive logic was subjected to semantical investigations based on ternary frames for relevance logics, making use of the Routley star negation that is distinctive
of logics “on the Australian Plan,” cf. Meyer and Martin 1986. Routley (1978) obtained a semantic characterization of Aristotle’s Thesis AT′ and Boethius’ Thesis BT using a ‘generation relation’ G
between a formula A and a possible world s. The semantics employs model structures F = <T, K, R, S, U, G, *>, where K is a non-empty set of possible worlds, T ∈ K is a distinguished world (the ‘real
world’), R, S, and U are ternary relations on K, G is a generation relation, and * is a function on K mapping every world s to its ‘opposite’ or ‘reverse’ s*. A valuation is a function v that sends
pairs of worlds and propositional variables into {0,1}, satisfying the following heredity condition: if R(T, s, u) and v(p, s) = 1, then v(p, u) = 1. Intuitively, G(A, t) is supposed to mean that
everything that holds in world t is implied by A. A model is a structure M = <F, v>. The relation M, t ⊨ A (“A is true at t in M”) is inductively defined as follows:
M, t ⊨ p iff v(p, t) = 1
M, t ⊨ ~A iff M, t* ⊭ A
M, t ⊨ (A ∧ B) iff there are s, u with Stsu M, s ⊨ A and M, u ⊨ B
M, t ⊨ (A ∨ B) iff there are s, u with Utsu M, s ⊨ A or M, u ⊨ B
M, t ⊨ (A → B) iff for all s, u if Rtsu and M, s ⊨ A, then M, u ⊨ B
[Note: whenever there is little chance for ambiguity, we replace R(x, y, z) by Rxyz.]
Moreover, it is required that for every formula A and world t, G(A, t) implies M, t ⊨ A. A formula A is true in model M iff M, T ⊨ A, and A is valid with respect to a class of models if A is true in
all models from that class. AT′ is semantically characterized by the following property of models: ∃t (R(T*, t, t*) and G(A, t)), and BT is characterized by ∀w∃s, t, u (R(w, s, t), R(w*, s, u), G(A,
s), and R(T, t, u*)).
Mortensen (1984), who considers AT′, explains that Routley’s characterization of AT′ is “not particularly intuitively enlightening” and points out that in certain logics with a ternary relational
models semantics another characterization of AT′ is available, namely the condition that for every model M the set C[A] := {s : M, s ⊨ A and M, s ⊭ ~A} is non-empty. Like Routley’s non-recursive
requirement that G(A, t) implies M, t ⊨ A, Mortensen’s condition is not a purely structural condition, since it mentions the truth relation ⊨. Mortensen (1984, p. 114) maintains that the condition C[
A] ≠ ∅ “is closest to the way we think of Aristotle,” and emphasizes that for a self-inconsistent proposition A, the set C[A] must be empty, whence AT′ is to be denied. Mortensen also critically
discusses the addition of AT′ to the relevance logic E. In this context, AT′ amounts to the condition that no implication is true at the world T*.
A more regular semantics for extensions of the basic relevance logic B (not to be confused with the truth valued read as “both true and false”) by either AT′ or BT has been presented in Brady 1989.
In this semantics, conjunction is defined in the standard way, and there is a non-empty subset of worlds O ⊆ K. The set O contains the distinguished element T used to define truth in a model. The
extended model structures contain a function ℑ that maps sets of worlds, and in particular, interpretations of formulas (alias propositions) I(A) to sets of worlds in such a way that a formula A is
true at a world t iff t ∈ ℑ(I(A)). This allows Brady to state model conditions capturing AT′ and BT as follows:
AT′: If t ∈ O, then (∃x, y ∈ ℑ(f)) Rt*xy* , for any proposition f;
BT: (∃x,y ∈ ℑ(f)) (∃z ∈ K) (Rtxz and Rt*yz*), for any proposition f and any t ∈ K.
Note that these clauses still are not purely structural conditions but conditions on the interpretation of formulas. Also the investigations into connexive logics based on ternary frames did not, as
it seems, lead to establishing connexive logic as a fully recognized branch of non-classical logic.
Albeit according to Routley (1978), Routley et al. (1982) and Routley and Routley (1985) there is a close relation between connexive logic and the idea of negation as cancellation, Routley suggested
a semantics using a generation relation and the star negation in ternary frames for relevance logics, whereas connexive logics based in a straightforward way on the cancellation view of negation have
been worked out by Priest (1999). Priest (1999) directly translates a definition of entailment that enforces the null-content account of contradictions into evaluation clauses. A model is a structure
M = <W, g, v>, where W is a non-empty set of possible worlds, g is a distinguished element from W, and v is a valuation function from the set of propositional variables into the set of classical
truth values {1, 0}. Two clauses for evaluating implications at possible worlds are considered (notation adjusted):
(a) M, s ⊨ A → B iff (i) there is a world u with M, u ⊨ A and (ii) for every world u, M, u ⊨ A then M, u ⊨ B;
(b) M, s ⊨ A → B iff (i) there is a world u with M, u ⊨ A, (ii) there is a world u with M, u ⊭ B, and (iii) for every world u, M, u ⊨ A then M, u ⊨ B.
Condition (i) ensures that nothing is implied by an unsatisfiable antecedent. The evaluation clauses for the other connectives are classical. A formula A is true in a model (M ⊨ A) iff M, g ⊨ A; and
A is valid iff A is true in every model. Condition (ii) ensures that the law of contraposition is valid. A set Δ of formulas is true in a model iff every element of Δ is true in the model.
There are two notions of entailment (Δ ⊨ A), one coming with clause (a) the other with clause (b):
(a) Δ⊨ A iff Δ is true in some model, and every model in which Δ is true is a model in which A is true;
(b) Δ⊨ A iff Δ is true in some model, ~A is true in some model, and every model in which Δ is true is a model in which A is true.
These two connexive logics arise from the idea of negation as cancellation in a straightforward way. They are neither monotonic nor closed under uniform substitution. Proof systems and decision
procedures for them can be obtained from a straightforward faithful translation τ into the modal logic S5, cf. the entry logic: modal. For implications A → B the translation is defined as follows,
where ⊃ is material implication and ¬ is classical negation:
(a) ◊τ(A) ∧ □(τ(A) ⊃ τ(B));
(b) ◊τ(A) ∧ ◊¬τ(B) ∧ □(τ(A) ⊃ τ(B)).
Ferguson (2015) observes that the intersection of the semantical consequence relations of variant (a) of Priest’s logic and the negation, conjunction, disjunction fragment of Bochvar’s 3-valued logic
(cf. the entry many-valued logic) results in a known system of containment logic, namely the system RC presented in Johnson 1976.
Although the semantics of Priest’s connexive logics is simple and transparent, the underlying idea of subtraction negation is not unproblematic. Priest (1999, 146) mentions strong fallibilists who
“endorse each of their views, but also endorse the claim that some of their views are false”. Their contradictory opinions in fact hardly are contentless, so that the cancellation account of negation
and, as a result, systems of connexive logic based on subtraction negation appear not to be very well-motivated. In Skurt and Wansing 2018, it is argued that the metaphoric notion of negation as
cancellation is conceptually unclear and that Routley’s (Routley et al. 1982)) suggestion to replace it by a notion of negation as subtraction in generalized arithmetic is unclear at least insofar as
it has not been worked out in detail.
The Boolean connexive logics of Jarmużek and Malinowski 2019a are obtained in the framework of relating logic, a generalization of relatedness logic. The latter is an instance of what Sylvan (1989,
p. 166) calls a “double-barrelled” analysis of implications, an analysis that complements truth conditions with an additional “sieve” or “filter” that tightens the relation between antecedent and
succedent. If the relation is meant to be a relevance relation, this is an example of what Schurz 1998 calls “relevance post validity” in contrast to “relevance in validity” as investigated in
relevance logic. Boolean connexive logics extend the language of classical propositional logic using conjunction, disjunction, and Boolean negation by a relating implication, →^w, the semantics of
which is constrained by a binary relation R on the set of all formulas. A model then is a pair <v,R>, where v is a classical valuation function. The truth condition for relating implication imposes
the relatedness constraint:
<v,R> ⊨ A →^w B iff [(<v,R> ⊭ A or <v,R> ⊨ B) and R(A, B)]
and a notion of validity with respect to a relation R is defined: R ⊨ A iff for every valuation v, v,R ⊨ A.
In order to obtain connexive logics, Jarmużek and Malinowski introduce the following conditions on binary relations R:
(a1) R is (a1) iff for any A: not R(A, ~A)
(a2) R is (a2) iff for any A: not R(~A, A)
(b1) R is (b1) iff for arbitrary A, B: (i) if R(A, B) then not R(A, ~B) and (ii) R((A →^w B), ~(A →^w ~B)) (b2) R is (b2) iff for arbitrary A, B: (i) if R(A, B) then not R(A, ~B) and (ii) R((A →^
w ~B), ~(A →^w B)).
These conditions suffice to validate Aristotle’s and Boethius theses. A correspondence between Aristotle’s and Boethius theses and conditions on R is obtained if the relations R are required to be
closed under negation, i.e., for all formulas A and B, R(A, B) implies R(~A, ~B). Then,
(a1) R is (a1) iff R ⊨ ~(A →^w ~A)
(a2) R is (a2) iff R ⊨ ~(~A →^w A)
(b1) R is (b1) iff R ⊨ (A →^w B) →^w ~(A →^w ~B)
(b2) R is (b2) iff R ⊨ (A →^w ~B) →^w ~(A →^w B).
However, these correspondences come at a price. Jarmużek and Malinowski point out that imposing negation closure validates the otherwise refutable formula ~((A →^w B) ∧ ~B ∧ ~(~A →^w ~B)) with
respect to any relation R. Jarmużek and Malinowski also show that these five conditions are independent of each other and therefore give rise to 2^5 different logics. The two connexive ones (alias
properly connexive ones in Jarmużek and Malinowski’s terminology), i.e., the logic defined by means of conditions (a1), (a2), (a3), and (4) and the logic defined by in addition requiring negation
closure, are also Kapsner strong. Moreover, Jarmużek and Malinowski present sound and complete tableau calculi for these 2^5 logics.
The basic paraconsistent logic FDE of first-degree entailment lacks a primitive implication connective and lends itself to adding an implication connective that validates Aristotle’s and Boethius’
theses by using the falsity conditions of implications as expressed by BTe′. This is possible because negation is treated according to “the American Plan,” i.e., by making use of four semantical
values: T (“told true only”), F (“told false only”), N (“neither told true nor told false”), and B (“both told true and told false”), so that support of truth and support of falsity emerge as two
independent semantical dimensions:
A receives the value T at state t iff t supports the truth of A but not the falsity of A;
A receives the value F at t iff t supports the falsity of A but not the truth of A;
A receives the value N at t iff t neither supports the truth of A nor supports the falsity of A;
A receives the value B at t iff t supports both the truth and the falsity of A.
Negation is then understood as leading from support of truth to support of falsity, and vice versa. The method of tweaking the (support of) falsity conditions can be applied to a number of different
conditionals, ranging from constructive, relevant, and material (Boolean) implication to very weak implications studied in conditional logic with the help of so-called Segerberg frames.
A system of connexive logic with an intuitively plausible possible worlds semantics using a binary relation between worlds has been introduced in Wansing 2005. In this paper it is observed that a
modification of the falsification conditions for negated implications in possible worlds models for David Nelson’s constructive four-valued logic with strong negation results in a connexive logic,
called C, which inherits from Nelson’s logic an interpretation in terms of information states pre-ordered by a relation of possible expansion of these states. For Nelson’s constructive logics see,
for example, Almukdad and Nelson 1984, Gurevich 1977, Nelson 1949, Odintsov 2008, Routley 1974, Thomason 1969, Wansing 2001, Kamide and Wansing 2012.
The key observation for obtaining C is simple: in the presence of the double negation introduction law, it suffices to validate both BT′ and its converse ~(A → B) → (A → ~B). In other words, an
interpretation of the falsification conditions of implications is called for, which deviates from the standard conditions. In Nelson’s systems of constructive logic, the double negation laws hold,
and the relational semantics for these logics is such that falsification and verification of formulas are dealt with separately. The system N4 extends FDE by intuitionistic implication, however, the
falsification conditions of implications are the classical ones expressed by the schema ~(A → B) ↔ (A ∧ ~B). To obtain a connexive implication, it is therefore enough to assume another interpretation
of the falsification conditions of implications, namely the one expressed by BTe′: (A → ~B) ↔ ~(A → B).
Consider the language L := {∧, ∨, →, ~} based on a denumerable set of propositional variables. Equivalence ↔ is defined as usual. The schematic axioms and rules of the logic C are:
a1 the axioms of intuitionistic positive logic
a2 ~~A ↔ A
a3 ~(A ∨ B) ↔ (~A ∧ ~B)
a4 ~(A ∧ B) ↔ (~A ∨ ~B)
a5 ~(A → B) ↔ (A → ~ B)
R1 modus ponens
Clearly, a5 is the only contra-classical axiom of C. The consequence relation ⊢[C] (derivability in C) is defined as usual. A C-frame is a pair F = <W, ≤>, where ≤ is a reflexive and transitive
binary relation on the non-empty set W. Let <W, ≤>^+ be the set of all X ⊆ W such that if u ∈ X and u ≤ w, then w ∈ X. A C-model is a structure M = <W, ≤, v^+, v^−>, where <W, ≤> is a C-frame and
v^+ and v^− are valuation functions from the set of propositional variables into <W, ≤>^+. Intuitively, W is a set of information states. The function v^+ sends a propositional variable p to the
states in W that support the truth of p, whereas v^− sends p to the states that support the falsity of p. M = <W, ≤, v^+, v^−> is said to be the model based on the frame <W, ≤>. The relations M, t
⊨^+ A (“M supports the truth of A at t”) and M, t ⊨^− A (“M supports the falsity of A at t”) are inductively defined as follows:
M, t ⊨^+ p iff t ∈ v^+(p)
M, t ⊨^− p iff t ∈ v^−(p)
M, t ⊨^+ (A ∧ B) iff M, t ⊨^+ A and M, t ⊨^+ B
M, t ⊨^− (A ∧ B) iff M, t ⊨^− A or M, t ⊨^− B
M, t ⊨^+ (A ∨ B) iff M, t ⊨^+A or M, t ⊨^+ B
M, t ⊨^− (A ∨ B) iff M, t ⊨^− A and M, t ⊨^− B
M, t ⊨^+ (A → B) iff for all u ≥ t (M, u ⊨^+ A implies M, u ⊨^+ B)
M, t ⊨^− (A → B) iff for all u ≥ t (M, u ⊨^+ A implies M, u ⊨^− B)
M, t ⊨^+ ~A iff M, t ⊨^− A
M, t ⊨^− ~A iff M, t ⊨^+ A
If M = <W, ≤, v^+, v^−> is a C-model, then M ⊨ A (“A is valid in M”) iff for every t ∈ W, M, t ⊨^+ A. F ⊨ A (“A is valid on F”) iff M ⊨ A for every model M based on F. A formula is C-valid iff it is
valid on every frame. Support of truth and support of falsity for arbitrary formulas are persistent with respect to the relation ≤ of possible expansion of information states. That is, for any C
-model M = <W, ≤, v^+, v^−> and formula A, if s ≤ t, then M, s ⊨^+ A implies M, t ⊨^+ A and M, s ⊨^− A implies M, t ⊨^− A. It can easily be shown that a negation normal form theorem holds. The logic
C is characterized by the class of all C-frames: for any L-formula A, ⊢[C] A iff A is C-valid. Moreover, C satisfies the disjunction property and the constructible falsity property. If ⊢[C] A ∨ B,
then ⊢[C] A or ⊢[C] B. If ⊢[C] ~(A ∧ B), then ⊢[C] ~ A or ⊢[C] ~B. Decidability of C follows from a faithful embedding into positive intuitionistic propositional logic.
Like Nelson’s four-valued constructive logic N4, C is a paraconsistent logic (cf. the entry logic: paraconsistent). Note that C contains contradictions, for example: ⊢[C] ((p ∧ ~p) → (~p ∨ p)) and ⊢[
C] ~((p ∧ ~p) → (~p ∨ p)). It is obvious from the above presentation that C differs from N4 only with respect to the falsification (or support of falsity) conditions of implications. As in N4,
provable strong equivalence is a congruence relation, i.e., the set {A : ⊢[C] A} is closed under the rule A ↔ B, ~A ↔ ~B / C(A) ↔ C(B). Wansing (2005) also introduces a first-order extension QC of C.
Kamide and Wansing (2011) present a sound and complete sequent calculus for C and show the cut-rule to be admissible, which means that it can be dispensed with.
Whereas the direction from right to left of Axiom a5 can be justified by rejecting the view that if A implies B and A is inconsistent, A implies any formula, in particular B, the direction from left
to right seems rather strong. If the verification conditions of implications are dynamic (in the sense of referring to other states in addition to the state of evaluation), then a5 indicates that the
falsification conditions of implications are dynamic as well. The falsity of (A → B) thus implies that if A is true, B is false. Yet, one might wonder why it is not required that the falsity of (A→ B
) implies that if A is true, B is not true. This cannot be expressed in a language with just one negation, ~, expressing falsity instead of absence of truth (classically at the state of evaluation or
intuitionistically at all related states). If one adds to C the further axiom ~A → (A → B) to obtain a connexive variant of Nelson’s three-valued logic N3, intuitionistic negation ¬ is definable by
setting: ¬A := A → ~A. Then a5 might be replaced by
a5′: ~(A → B) ↔ (A → ¬B).
The resulting system satisfies AT, AT′, BT, and BT′ because A → ¬~A and ~A → ¬A are theorems. For BT, for example, we have:
1. A → B assumption
2. B → ¬~B theorem
3. A → ¬~B 1, 2, transitivity of →
4. (A → ¬~B) → ~(A → ~B) axiom a5′
5. ~(A → ~B) 3, 4, R1
6. (A → B) → ~(A → ~B) 1, 5, deduction theorem
This logic, however, is the trivial system consisting of every L-formula (a fact not noticed in Wansing 2005 (Section 6) but pointed out in the online version of that paper).
The system C is a conservative extension of positive intuitionistic logic. In C, strong negation is interpreted in such a way that it turns the intuitionistic implication of its negation-free
sublanguage into a connexive implication. Analogously, strong negation may be added to positive dual intuitionistic logic to obtain a system with a connexive co-implication, and to bi-intuitionistic
logic, or to the logic 2Int from Wansing 2016a that also contains an implication and a co-implication connective, to obtain systems with both a connexive implication and a connexive co-implications,
see Wansing 2008, 2016b, and Kamide and Wansing 2016.
The systems C and QC are connexive but not Kapsner strong. This is hardly surprising because these logics are paraconsistent and allow formulas A and ~A to be simultaneously satisfiable in the sense
that a state and all its possible expansions may support the truth of both A and ~A. As a result, A → ~A and ~A → A are satisfiable. If A → ~A and ~A → A are unsatisfiable, strong connexivity is in
conflict with at the same time satisfying the deduction theorem and defining semantical consequence as preservation of support of truth: A → ~A would entail ~(A → ~A), ~A → A would entail ~(~A → A),
and the formulas (A → ~A) → ~(A → ~A) and (~A → A) → ~(~A →A) would be valid instead of unsatisifable.
The starting point for Hitoshi Omori’s (2016a) definition of a connexive extension of the basic relevance logic BD (see the entry logic: relevance) is to find a proof theory for extensions of BD with
negation treated according to the American Plan. Priest and Sylvan (1992) posed this as an open problem, and Omori gives a partial solution by defining a connexive variant BDW of BD. The semantics
uses models based on ternary frames. There is a base state g, the four truth values are represented as subsets of the set of classical truth values {0,1}, and interpretations are defined in the style
of Dunn (cf. Omori and Wansing 2017). A model is quadruple <g, W, R, I>, where W is a non-empty set (of states), g∈W, R is a three-place relation on W with Rgxy iff x = y, and I is a function that
maps pairs consisting of a state and a propositional variable to subsets of {0,1}. The interpretation function I is then extended to an assignment of truth values at states for all formulas as
1 ∈ I(w, ~A) iff 0 ∈ I(w, A)
0 ∈ I(w, ~A) iff 1 ∈ I(w, A)
1 ∈ I(w, A ∧ B) iff [1 ∈ I(w, A) and 1 ∈ I(w, B)]
0 ∈ I(w, A ∧ B) iff [0 ∈ I(w, A) or 0 ∈ I(w, B)]
1 ∈ I(w, A ∨ B) iff [1 ∈ I(w, A) or 1 ∈ I(w, B)]
0 ∈ I(w, A ∨ B) iff [0 ∈ I(w, A) and 0 ∈ I(w, B)]
1 ∈ I(w, A → B) iff for all x, y ∈ W: if Rwxy and 1 ∈ I(x, A), then 1 ∈ I(y, B)
0 ∈ I(w, A → B) iff for all x, y ∈ W: if Rwxy and 1 ∈ I(x, A), then 0 ∈ I(y, B)
An axiomatization of BDW is obtained from the axiom system for BD by adding BTe′. Like the constructive connexive logic C, the connexive relevance logic BDW is negation inconsistent but non-trivial.
Adding ex contradictione quodlibet to system C has a trivializing effect, and adding the Law of Excluded Middle to C does not result in a logic that has positive classical propositional logic as a
fragment. However, if implications A → B are understood as material, Boolean implications, then a separate treatment of falsity conditions again allows introducing a system of connexive logic. The
resulting system MC may be called a system of material connexive logic. The semantics is quite obvious: a model M is just a function from the set of all literals, i.e., propositional variables or
negated propositional variables, into the set of classical truth values {1, 0}. Truth of a formula A in a model M (M ⊨ A) is inductively defined as follows:
M ⊨ p iff v(p) = 1
M ⊨ (A ∧ B) iff M ⊨ A and M ⊨ B
M ⊨ (A ∨ B) iff M ⊨ A or M ⊨ B
M ⊨ (A → B) iff M ⊭ A or M ⊨ B
M ⊨ ~p iff v(~p) = 1
M ⊨ ~~A iff M ⊨ A
M ⊨ ~(A ∧ B) iff M ⊨ ~A or M ⊨ ~B
M ⊨ ~(A ∨ B) iff M ⊨ ~A and M ⊨ ~B
M ⊨ ~(A → B) iff M ⊭ A or M ⊨ ~B
A formula is valid iff it is true in all models. (Alternatively, one could use the semantics of C and require the set of states of a frame to be a singleton.) The set of all valid formulas is
axiomatized by the following set of axiom schemata and rules:
a1[c] the axioms of classical positive logic
a2 ~~A ↔ A
a3 ~(A ∨ B) ↔ (~A ∧ ~B)
a4 ~(A ∧ B) ↔ (~A ∨ ~B)
a5 ~(A → B) ↔ (A → ~B)
R1 modus ponens
The logic MC can be faithfully embedded into positive classical logic, whence MC is decidable. The following truth tables for MC, while considering a language with a classical negation, resulting in
a system called “dialetheic Belnap Dunn Logic,” dBD, are given in Omori 2016c:
∧ T B N F
T T B N F
B B B F F
N N F N F
F F F F F
∨ T B N F → T B N F
T T T T T T T B N F
B T B T B B T B N F
N T T N N N B B B B
F T B N F F B B B B
The formula ~(A → B) → (A ∧ ~B) is, of course, not a theorem of MC. Like C, MC is a paraconsistent logic containing contradictions. The connexive logic MC differs from the four-valued logic HBe
presented in Avron 1991 by making use of the above clause that guarantees the validity of BTe′, i.e.,
M ⊨ ~(A → B) iff M ⊭ A or M ⊨ ~B
instead of the clause
M ⊨ ~(A → B) iff M ⊨ A and M ⊨ ~B.
As already mentioned, Cantwell’s three-valued connexive logic CN can be obtained by extending MC with the Law of Excluded Middle and, semantically, by requiring that for every propositional variable
p and every model M, M ⊨ p or M ⊨ ~p. There is another three-valued connexive logic that is strictly stronger than CN, namely the “dialetheic Logic of Paradox,” dLP, studied in Omori 2016c, which
turned out to be equivalent with the system LImp from Olkhovikov 2002 (published in English translation in 2016). Whilst Olkhovikov uses a unary operator L, understood as a kind of necessity
operator, in the language of LImp, Omori uses a unary consistency operator, ○, in the language of dLP. The connective L is definable in dLP, and the connective ○ is definable in LImp. It is shown in
Omori 2016c that dLP is inconsistent, definitionally complete, and Post complete. Both, Omori (2016c) and Olkhovikov (2016) consider a first-order extension of dLP, respectively LImp.
It is quite natural to obtain an FDE-based connexive logic by starting form David Nelson’s logic N4 because the latter system’s intuitionistic implication is the weakest conditional satisfying modus
ponens and the deduction theorem. The conditionals studied within conditional logic in the tradition of Robert Stalnaker and David Lewis, where the conditional is usually written as ‘□→’, are much
weaker than intuitionistic or relevant implication. The project of taking the basic system of conditional logic CK introduced by Brian Chellas (1975) as the point of departure for obtaining connexive
conditional logics has been carried out in Wansing and Unterhuber 2019, and a similar approach is considered in Kapsner and Omori 2017. Whereas the semantics for the Lewis-Nelson models from Kapsner
and Omori 2017 uses binary relations R[A] on a non-empty set of states, for every formula A, the Chellas-Segerberg semantics employed in Wansing and Unterhuber 2019 uses binary relations R[X] on a
non-empty set of states, for subsets X of the set of all states. Both versions of the semantics can be equipped with sound and complete tableau calculi (although Kapsner and Omori only present the
models), but the Chellas-Segerberg semantics is suitable for developing a purely structural correspondence theory in terms of properties of relations that are language-independent insofar as they are
not relativized to a formula.
A pair <W, R> is a Chellas frame (or just a frame) iff W is a non-empty set, intuitively understood as a set of information states, and R ⊆ W × W × ℘(W), where ℘( W) is the powerset of W. Instead of
Rww′ X one usually writes wR[X]w′. Let W, R be a frame such that for all X ⊆ W and w, w′ ∈ W, wR[X]w′ implies w′ ∈ X. Then M = <W, R, v^+, v^−> is a model for the connexive conditional logic CCL iff
v^+ and v^− are valuation functions from the set of propositional variables into ℘( W), the support of truth and support of falsity conditions for propositional variables, negated formulas,
conjunctions, and disjunctions are defined as in the case of C-models and, moreover,
M, w ⊨^+ (A □→ B) iff for all u ∈ W such that wR[[[A]]]u it holds that M, u ⊨^+ B
M, w ⊨^− (A □→ B) iff for all u ∈ W such that wR[[[A]]]u it holds that M, u ⊨^− B,
where [[A]] is the set of states that support the truth of A.
If <W, R> is a Chellas frame, a triple <W, R, P> is said to be a Segerberg frame (or a general frame) for CCL if P is a binary relation on ℘(W) that satisfies certain closure conditions. A quintuple
M = <W, R, P, v^+, v^−> then is a general model for CCL if <W, R, P> is a general frame for CCL, <W, R, v^+, v^−> is a model for CCL, and for every propositional variable p, [[p]],[[~p]] ∈ P. The
closure conditions on P are exactly the conditions guaranteeing that for every formula A, [[A]],[[~A]] ∈ P if for every propositional variable p, [[p]],[[~p]] ∈ P. If [[A]],[[~A]] is seen as the
proposition expressed by A, then a general model for CCL is rich enough to guarantee that every proposition expressed by a formula is available. This is needed for a purely structural correspondence
theory. The formula A □→ A, for example, is valid on a general frame iff it satisfies the frame condition:
C[A □→ A]: For all X ⊆ W and w, w′ ∈ W, wR[X]w′ implies w′ ∈ X.
General frames for CCL are required to satisfy condition C[A □→ A] in order to make sure that Boethius’ theses are indeed validated. In Unterhuber and Wansing 2019 sound and complete tableau calculi
are presented for CCL and the weaker system cCL that validates Aristotle’s theses but not Boethius’ theses and that is obtained by giving up C[A □→ A]. In Wansing and Unterhuber 2019 these results
are then extended to systems that are obtained by adding a constructive implication to the language of cCL and CCL.
McCall (2012) classifies the principles he calls Abelard’s First Principle and Aristotle’s Second Thesis (cf. section 2) as connexive principles. In Wansing and Skurt 2018 it is argued that since
Aristotle’s Second Thesis and Abelard’s First Principle both involve conjunction, one may think of obtaining motivation for them from the idea of negation as cancellation and from the failure of
Simplification as justified by the erasure model of negation. Like the other connexive logics considered in the present section, CCL is a system in which Abelard’s First Principle and Aristotle’s
Second Thesis fail to be valid.
There is a growing literature on modal extensions of connexive logics. In Wansing 2005, the language of the connexive logic C is extended by modal operators □ and ◊ (“it is possible that”) to define
a connexive and constructive analogue CK of the smallest normal modal logic K. The system CK is shown to be faithfully embeddable into QC, to be decidable, and to enjoy the disjunction property and
the constructible falsity property.
It is well-known that intuitionistic propositional logic can be faithfully embedded into the normal modal logic S4, which, like K, is based on classical propositional logic (cf. the entries logic:
intuitionistic and logic: modal). There exists a translation γ, due to Gödel, such that a formula A of intuitionistic logic is intuitionistically valid iff A’s γ-translation is valid in S4. In
particular, intuitionistic implication is understood as strict material implication: γ(A → B) = □(γ(A) ⊃ γ(B)). Kamide and Wansing (2011) define a sequent calculus for connexive S4 based on MC. This
system, CS4, is shown to be complete with respect to a relational possible worlds semantics. The proof uses a faithful embedding of CS4 into positive, negation-free S4. Moreover, it is shown that the
cut-rule is an admissible rule in CS4 and that the constructive connexive logic C stand to CS4 as intuitionistic logic stands to S4. In the faithful embedding, the modal translation of negated
implications is as expected: γ(~(A → B)) = □(γ(A) ⊃ γ(~B)). A similar translation is used in Odintsov and Wansing 2010 to embed C into a modal extension BS4 of Belnap and Dunn’s four-valued logic.
In CS4 the modal operators □ and ◊ are syntactic duals of each other: the equivalence between □A and ~◊~A and between ◊A and ~□~A is provable. Kamide and Wansing (2011) also present a cut-free
sequent calculus for a connexive constructive version CS4^d– of S4 without syntactic duality between □ and ◊. The relational possible worlds semantics for CS4^d– is not fully compositional, cf.
Odintsov and Wansing 2004. CS4^d– is faithfully embeddable into positive S4 and decidable. Moreover C is faithfully embeddable into CS4^d–.
Modal Boolean connexive relatedness logics are investigated in Jarmużek and Malinowski 2019b, a modal extension of a “bi-classical” paraconsistent connexive logic is introduced in Kamide 2019, and
connexive variants of various modal extensions of FDE that are extensions of MC are studied in Odintsov, Skurt, and Wansing 2019.
Aristotle’s and Boethius’ theses express, as it seems, some pre-theoretical intuitions about meaning relations between negation and implication. But it is not clear that a language must contain only
one negation operation and only one implication. The language of bi-intuitionistic logic contains two negations, the language of the bi-intuitionistic connexive logics in Wansing 2016b and Kamide &
Wansing 2016 contain three negations, and the language of systems of consequential implication comprises two implication connectives together with one negation, see Pizzi 1977, 1991, 1993, 1996,
1999, 2004, 2005, 2008, 2018, Pizzi and Williamson 1997, 2005. Pizzi (2008, p. 127) considers a notion of consequential relevance, namely that “[t]he antecedent and the consequent of a true
conditional cannot have incompatible modal status,” and suggests to capture consequential relevance by requiring that in any true conditional A → B, (i) A strictly implies B and (ii) A and B have the
same modal status in the sense that □A ⊃ □B, □B ⊃ □A, ◊A ⊃ ◊B, and ◊B ⊃ ◊A are ture, where ⊃ is material implication. Moreover, it is required that □A ⊃ ◊A is always true.
In Pizzi and Williamson 1997, a conditional satisfying (i) and (ii) is called an analytic consequential implication and the notion of a normal system of analytic consequential implication is defined.
‘Normal’ here means that such a system contains certain formulas and is closed under certain rules. The smallest normal consequential logic that satisfies AT is called CI. Alternatively, CI can be
characterized as the smallest normal system that satisfies the Weak Boethius’ Thesis, i.e, (A → B) ⊃ ¬(A → ¬B), where → is consequential implication and ¬ is classical negation. In Omori and Wansing
2019 the semantics of CI is presented in a way showing that the semantics of the consequential conditional is obtained by tweaking the truth conditions of strict implication in Kripke models with a
serial accessibility relation (so that □A ⊃ ◊A is valid). The standard truth conditions are supplemented by requiring equal modal status for the antecedent and the consequent.
Pizzi and Williamson (1997) show that CI can be faithfully embedded into the normal modal logic KD, and vice versa. Analytic consequential implication is interpreted according to the following
translation function φ:
φ(A → B) = □(φA ⊃ φB) ∧ (□φB ⊃ □φA) ∧ (◊φB ⊃ ◊φA)
As Pizzi and Williamson (1997, p. 571) point out, their investigation is a “contribution to the modal treatment of logics intermediate between logics of consequential implication and connexive
logics.” They emphasize a difficulty of regarding consequential implication as a genuine implication connective by showing that in any normal system of consequential logic that admits modus ponens
for consequential implication and contains BT, the following formulas are provable:
(a) (A → B) ≡ (B → A),
(b) (A → B) ≡ ¬(A → ¬B)
where ≡ is classical equivalence. Since (A → B) ↔ ~(A → ~B) is a theorem of C and other connexive logics, the more problematic fact, from the point of view of this system, is the provability of (a).
Pizzi and Williamson also show that in any normal system of consequential logic that contains BT, the formula (A → B) ≡ (A ≡ B) is provable if (A → B) ⊃ (A ⊃ B) is provable, in other words,
consequential implication collapses into classical equivalence if (A → B) ⊃ (A ⊃ B) is provable. The construction of Aristotelian squares of opposition and their combination to Aristotelian cubes in
systems of consequential implication is considered in Pizzi 2008. Two kinds of consequential implication are discussed and compared to each other in Pizzi 2018.
In summary, it may be said that connexive logic, although it is contra-classical and unusual in various respects, is not just a formal game or gimmick. There are several kinds of systems of connexive
logics with different kinds of semantics and proof systems, and in the 21st century the subject has been experiencing a renaissance. The intuitions captured by systems of connexive logic can be
traced back to ancient roots, and applications of connexive logics range from Aristotle’s syllogistic to Categorial Grammar, the study of causal implications, and connexive mathematics.
• Almukdad A. and Nelson, D., 1984, “Constructible Falsity and Inexact Predicates”, Journal of Symbolic Logic, 49: 231–233.
• Anderson, A.R. and Belnap, N.D., 1975, Entailment: The Logic of Relevance and Necessity, Volume I, Princeton: Princeton University Press.
• Angell, R.B., 1962, “A Propositional Logic with Subjunctive Conditionals”, Journal of Symbolic Logic, 27: 327–343.
• –––, 1967a, “Three Logics of Subjunctive Conditionals (Abstract)”, Journal of Symbolic Logic, 32: 297–308.
• –––, 1967b, “Connexive Implication, Modal Logic and Subjunctive Conditionals”, lecture delivered in Chicago, 5 May 1967, IfCoLog Journal of Logics and their Applications, 2016, 3: 297–308.
• –––, 1978, “Tre logiche dei condizionali congiuntivi”, in: C. Pizzi (ed), Leggi di natura, modalità, ipotesi. La logica del ragionamento controfattuale, Milan: Feltrinelli, 156–180; Italian
translation of Angell 1966, see Other Internet Resources.
• –––, 2002, A-Logic, Lanham: University Press of America.
• Arieli, O. and Avron, A., 1996, “Reasoning with Logical Bilattices”, Journal of Logic, Language and Information, 5: 25–63
• Avron, A., 1991, “Natural 3-valued Logics–Characterization and Proof Theory”, Journal of Symbolic Logic, 56: 276–294.
• Belnap, N.D., 1970, “Conditional Assertion and Restricted Quantification”, Noûs, 4: 1–13.
• Bennett, J., 2003, A Philosophical Guide to Conditionals, Oxford: Clarendon Press.
• Besnard, P., 2011, “A Logical Analysis of Rule Inconsistency”, International Journal of Semantic Computing, 5: 271–280.
• Bode, J., 1979, “The Possibility of a Conditional Logical ”, Notre Dame Journal of Formal Logic, 20: 147–154.
• Boethius, A.M.S., 1860, De Syllogismo Hypothetico, J.P. Migne (ed.), Patrologia Latina 64, Paris, 831–876.
• Brady, R., 1989, “A Routley-Meyer Affixing Style Semantics for Logics Containing Aristotle’s Thesis”, Studia Logica, 48: 235–241.
• Cantwell, J., 2008, “The Logic of Conditional Negation”, Notre Dame Journal of Formal Logic, 49: 245–260.
• Chellas, B., 1975, “Basic Conditional Logic”, Journal of Philosophical Logic, 4: 133–153.
• Cooper, W., 1968, “The Propositional Logic of Ordinary Discourse”, Inquiry, 11: 295–320.
• El-Rouayheb, K., 2009, “Impossible Antecedents and Their Consequences: Some Thirteenth- Century Arabic Discussions”, History and Philosophy of Logic 30: 209–225.
• Egré, P. and Politzer, G., 2013, “On the negation of indicative conditionals”, in: M. Franke, M. Aloni and F. Roelofsen (eds), Proceedings of the 19th Amsterdam Colloquium, 10–18 [Egré & Politzer
2013 available online].
• Estrada-González, L., 2008, “Weakened Semantics and the Traditional Square of Opposition”, Logica Universalis, 2: 155–165.
• Estrada-González, L. and Ramirez-Cámara, E., 2016, “A Comparison of Connexive Logics”, IfCoLog Journal of Logics and their Applications, 3: 341–355.
• –––, 2020, “A Nelsonian Response to ‘the Most Embarrassing of All Twelfth-century Arguments’”, History and Philosophy of Logic, 41: 101–113.
• Ferguson, T.M., 2014, “Ramsey’s Footnote and Priest’s Connexive Logics”, abstract, ASL Logic Symposium 2012, Bulletin of Symbolic Logic, 20: 387–388.
• –––, 2015, “Logics of Nonsense and Parry Systems”, Journal of Philosophical Logic, 44: 65–80.
• –––, 2016, “On Arithmetic Formulated Connexively”, IfCoLog Journal of Logics and their Applications, 3: 357–376.
• –––, 2019, “Inconsistent Models (and Infinite Models) for Arithmetics with Constructible Falsity”, Logic and Logical Philosophy, 28: 389–407.
• Fine, K., 1986, “Analytic Implication”, Notre Dame Journal of Formal Logic, 27: 169–179.
• Francez, N., 2016, “Natural Deduction for Two Connexive Logics”, IfCoLog Journal of Logics and their Applications, 3: 479–504.
• –––, 2019 “Relevant Connexive Logic”, Logic and Logical Philosophy, 28: 409–425.
• –––, 2020, “A Poly-Connexive Logic”, Logic and Logical Philosophy, 29: 143–157.
• –––, 2021, A View of Connexive Logic, London: College Publications.
• Gibbard, A. 1981, “Two Recent Theories of Conditionals”, in: W.L. Harper, R. Stalnaker, and C.T. Pearce (eds), Ifs, Dordrecht: Reidel.
• Gurevich, Y., 1977, “Intuitionistic Logic with Strong Negation”, Studia Logica, 36: 49–59.
• Humberstone, L., 2000, “Contra-Classical Logics”, Australasian Journal of Philosophy, 78(4): 438–474.
• Jarmużek, T. and Malinowski, J., 2019a, “Boolean Connexive Logics: Semantics and Tableau Approach”, Logic and Logical Philosophy, 28: 427–448.
• –––, 2019b, “Modal Boolean connexive logics. Semantic and tableau approach”, Bulletin of the Section of Logic, 48: 213–243.
• Johnson, F.A., 1976, “A Three-valued Interpretation for a Relevance Logic”, The Relevance Logic Newsletter, 1: 123–128. [Johnson 1976 available online.]
• Johnston, S., 2019, “Per Se Modality and Natural Implication. An Account of Connexive Logic in Robert Kilwardby,” Logic and Logical Philosophy, 28: 449–479.
• Kamide, N., 2016, “Cut-free Systems for Restricted Bi-intuitionistic Logic and its Connexive Extension”, Proceedings of the 46th International Symposium on Multiple-Valued Logic (ISMVL), Sapporo,
Japan, IEEE Computer Society, 137–142.
• –––, 2017, “Natural Deduction for Connexive Paraconsistent Quantum Logic”, Proceedings of the 47th International Symposium on Multiple-Valued Logic (ISMVL), Novi Sad, Serbia, IEEE Computer
Society, 207–212.
• –––, 2019, “Bi-Classical Connexive Logic and its Modal Extension: Cut-elimination, Completeness and Duality”, Logic and Logical Philosophy, 28: 481–511.
• Kamide, N. and Wansing, H., 2011, “Connexive Modal Logic Based on Positive S4”, in: J.-Y. Beziau and M. Conigli (eds), Logic without Frontiers. Festschrift for Walter Alexandre Carnielli on the
Occasion of His 60th Birthday, London: College Publications, 389–409.
• –––, 2012, “Proof theory of Nelson’s Paraconsistent Logic: A Uniform Perspective”, Theoretical Computer Science, 415: 1–38.
• –––, 2016, “Completeness of connexive Heyting-Brouwer logic”, IfCoLog Journal of Logics and their Applications, 3: 441–466.
• Kamide, N., Shramko. Y., and Wansing, H., 2017, “Kripke Completeness of Bi-intuitionistic Multilattice Logic and its Connexive Variant”, Studia Logica, 105: 1193–1219.
• Kapsner, A., 2012, “Strong Connexivity”, Thought, 1: 141–145.
• –––, 2019, “Humble Connexivity”, Logic and Logical Philosophy, 28: 513–536.
• Kapsner, A. and Omori, H., 2017, “Counterfactuals in Nelson Logic”, Proceedings of LORI 2017, Berlin: Springer, 497–511.
• Khemlani, S., Orenes, I., and Johnson-Laird, P.N., 2014, “The Negation of Conjunctions, Conditionals, and Disjunctions”, Acta Psychologica, 151: 1–7.
• Kneale, W., 1957, “Aristotle and the Consequentia Mirabilis”, The Journal of Hellenic Studies, 77: 62–66.
• Kneale, W. and Kneale, M., 1962, The Development of Logic, London: Duckworth.
• Lenzen, W., 2019, “Leibniz’s Laws of Consistency and the Philosophical Foundations of Connexive Logic”, Logic and Logical Philosophy, 28: 537–551.
• Lenzen, W., 2020, “A Critical Examination of the Historical Origins of Connexive Logic”, History and Philosophy of Logic, 41: 16–35.
• Lewis, D., 1973, Counterfactuals, Oxford: Basil Blackwell.
• Lowe, E.J., 1995, “The Truth about Counterfactuals”, The Philosophical Quarterly, 45: 41–59.
• Łukasiewicz, J. 1951, Aristotle’s Syllogistic from the Standpoint of Modern Formal Logic, Oxford: Clarendon Press.
• MacColl, H., 1878, “The Calculus of Equivalent Statements (II)”, Proceedings of the London Mathematical Society 1877–78, 9: 177–186.
• Mares, E. and Paoli, F., 2019, “C.I. Lewis, E.J. Nelson, and the Modern Origins of Connexive Logic”, Organon F, 26: 405–426.
• Martin, C.J., 1991, “The Logic of Negation in Boethius”, Phronesis, 36: 277–304.
• –––, 2004, “Logic”, in: J. Brower and K. Guilfoy (eds), The Cambridge Companion to Abelard, Cambridge: Cambridge University Press, 158–199.
• McCall, S., 1963, Non-classical Propositional Calculi, Ph.D. Dissertation, Oxford University.
• ––– 1964, “A New Variety of Implication,” (abstract),Journal of Symbolic Logic, 29: 151–152.
• –––, 1966, “Connexive Implication”, Journal of Symbolic Logic, 31: 415–433.
• –––, 1967, “Connexive Implication and the Syllogism”, Mind, 76: 346–356.
• –––, 1975, “Connexive Implication”, § 29.8 in: A.R. Anderson and N.D. Belnap, Entailment. The Logic of Relevance and Necessity (Volume 1), Princeton: Princeton University Press, 434–446.
• –––, 2012, “A History of Connexivity”, in: D.M. Gabbay et al. (eds), Handbook of the History of Logic. Volume 11. Logic: A History of its Central Concepts, Amsterdam: Elsevier, 415–449.
• –––, 2014, “Connexive Gentzen”, Logic Journal of the IGPL, 22: 964–981.
• Meyer, R.K, 1977, “S5–The Poor Man’s Connexive Implication”, The Relevance Logic Newsletter, 2: 117–123. [Meyer 1977 available online.]
• Meyer, R.K. and Martin, E.P., 1986, “Logic on the Australian Plan”, Journal of Philosophical Logic, 15: 305–332.
• Mortensen, C., 1984, “Aristotle’s Thesis in Consistent and Inconsistent Logics”, Studia Logica, 43: 107–116.
• Nasti De Vincentis, M., 2002, Logiche della connessività. Fra logica moderna e storia della logica antica, Bern: Haupt. 2002.
• –––, 2004, “From Aristotle’s Syllogistic to Stoic Conditionals: Holzwege or Detectable Paths?”, Topoi, 23: 113–37.
• –––, 2006, “Conflict and Connectedness: Between Modern Logic and History of Ancient Logic”, in: E. Ballo and M. Franchella (eds), Logic and Philosophy in Italy, Monza: Polimetrica, 229–251.
• Nelson, D., 1949, “Constructible Falsity”, Journal of Symbolic Logic, 14: 16–26.
• Nelson, E.J., 1930, “Intensional Relations”, Mind, 39: 440–453.
• Nute, D., 1980, Topics in Conditional Logic, Dordrecht: Reidel.
• Odintsov, S., 2008, Constructive Negations and Paraconsistency, Dordrecht: Springer-Verlag.
• Odintsov S., Skurt, D. and Wansing, H., 2019, “Connexive variants of modal logics over FDE”, in: A. Zamansky and O. Arieli (eds), Arnon Avron on Semantics ands Proof Theory of Non-Classical
Logics, Cham: Springer, 295–318.
• Odintsov S. and Wansing, H., 2004, “Constructive Predicate Logic and Constructive Modal Logic. Formal Duality versus Semantical Duality”, in: V. Hendricks et al. (eds.), First-Order Logic
Revisited, Berlin: Logos Verlag, 269–286.
• –––, 2010, “Modal Logics with Belnapian Truth Values”, Journal of Applied Non-Classical Logics, 20: 279–301.
• Olkhovikov, G.K., 2002, “On a New Three-Valued Paraconsistent Logic”, in: Logic of Law and Tolerance, Yekaterinburg: Ural State University Press, 96–113, translated by T.M. Ferguson, IfCoLog
Journal of Logics and their Applications, 3: 317–334.
• –––, 2016 “A Complete, Correct, and Independent Axiomatization of the First-Order Fragment of a Three-Valued Paraconsistent Logic”, IfCoLog Journal of Logics and their Applications, 3: 335–340.
• Omori, H., 2016a, “A Simple Connexive Extension of the Basic Relevant Logic BD”, IfCoLog Journal of Logics and their Applications, 3: 467–478.
• –––, 2016b, “A Note on Francez’ Half-Connexive Formula”, IfCoLog Journal of Logics and their Applications, 3: 505–512.
• –––, 2016c, “From paraconsistent logic to dialetheic logic”, in: H. Andreas and P. Verdée (eds.), Logical Studies of Paraconsistent Reasoning in Science and Mathematics, Berlin: Springer, pp.
• –––, 2019, “Towards a Bridge over Two Approaches in Connexive Logic”, Logic and Logical Philosophy, 28: 553–566.
• Omori H. and Sano, K., 2015, “Generalizing Functional Completeness in Belnap-Dunn Logic”, Studia Logica, 103: 883–917.
• Omori H. and Wansing, H., 2017, “40 years of FDE:An Introductory Overview”, Studia Logica, 105: 1021–1049.
• ––– 2018, “ On Contra-classical Variants of Nelson Logic N4 and its Classical Extension”, Review of Symbolic Logic, 11: 805–820.
• –––, 2019, “Connexive Logics. An Overview and Current Trends”, Logic and Logical Philosophy, 28: 371–387.
• Parry, W.T., 1933, “Ein Axiomensystem für eine neue Art von Implikation (analytische Implikation)”, Ergebnisse eines mathematischen Kolloquiums, 4: 5–6.
• Pfeifer, N., 2012, “Experiments on Aristotle’s Thesis: Towards an experimental philosophy of conditionals”, The Monist, 95: 223–240.
• Pfeifer, N. and Tulkki, L., 2017, “Conditionals, Counterfactuals, and Rational Reasoning. An Experimental Study on Basic Principles”, Minds and Machines, 27: 119–165.
• Pfeifer, N. and Yama, H., 2017, “Counterfactuals, Indicative Conditionals, and Negation under Uncertainty: Are there Cross-cultural Differences?”, in: Gunzelmann, G., Howes, A., Tenbrink, T., and
Davelaar, E. (eds), Proceedings of the 39th Cognitive Science Society Meeting, 2882–2887.
• Pizzi, C., 1977, “Boethius’ Thesis and Conditional Logic”, Journal of Philosophical Logic, 6: 283–302.
• –––, 1991, “Decision Procedures for Logics of Consequential Implication”, Notre Dame Journal of Formal Logic, 32: 618–636.
• –––, 1993, “Consequential Implication: A Correction”, Notre Dame Journal of Formal Logic, 34: 621–624.
• –––, 1996, “Weak vs. Strong Boethius’ Thesis: A Problem in the Analysis of Consequential Implication”, in: A. Ursini and P. Aglinanó (eds), Logic and Algebra, New York: Marcel Dekker, 647–654.
• –––, 1999, “A Modal Framework for Consequential Implication and the Factor Law”, Contemporary Mathematics, 313–326.
• –––, 2004, “Contenability and the Logic of Consequential Implication”, Logic Journal of the IGPL, 12: 561–579.
• –––, 2005, “Aristotle’s Thesis between Paraconsistency and Modalization”, Journal of Applied Logic, 3: 119–131.
• –––, 2008, “Aristotle’s Cubes and Consequential Implication”, Logica Universalis, 2: 143–153.
• –––, 2018 “Two Kinds of Consequential Implication”, Studia Logica, 106: 453–480.
• Pizzi, C. and Williamson, T., 1997, “Strong Boethius’ Thesis and Consequential Implication”, Journal of Philosophical Logic, 26: 569–588.
• –––, 2005, “Conditional Excluded Middle in Systems of Consequential Implication”, Journal of Philosophical Logic, 34: 333–362.
• Priest, G., 1999, “Negation as Cancellation and Connexive Logic”, Topoi, 18: 141–148.
• Rahman, S. and Rückert, H., 2001, “Dialogical Connexive Logic”, Synthese, 127: 105–139.
• Rahman, S. and Redmond, J., 2008, “Hugh MacColl and the Birth of Logical Pluralism”, in D. Gabbay and J. Woods (eds.), British Logic in the Nineteenth Century (Handbook of the History of Logic:
Volume 4), Amsterdam: Elsevier, 533–604.
• Ramsey, F.P., 1929, “General Propositions and Causality”, in: F. Ramsey, Philosophical Papers, H. A. Mellor (ed.), Cambridge: Cambridge University Press, 1990.
• Rooij, R. van and Schulz, K. 2019, “Conditionals, causality and conditional probability”, Journal of Logic, Language and Information, 28: 55–71.
• –––, 2020, “Generics and typicality: A bounded rationality approach”, Linguistics and Philosophy, 43: 88–117.
• Routley, R., 1974, “Semantical Analyses of Propositional Systems of Fitch and Nelson”, Studia Logica, 33: 283–298.
• –––, 1978, “Semantics for Connexive Logics. I”, Studia Logica, 37: 393–412.
• Routley, R., Meyer, R., Plumwood, V. and Brady, R., 1982, Relevant Logics and Their Rivals, Atascadero, CA: Ridgeview Publishing Company.
• Routley, R. and Montgomery, H., 1968, “On Systems Containing Aristotle’s Thesis”, Journal of Symbolic Logic, 33: 82–96.
• Routley, R. and Routley V., 1985, “Negation and Contradiction”, Revista Columbiana de Mathemáticas, 19: 201–231.
• Sarenac, D. and Jennings, R.E., 2003, “The Preservation of Relevance”, Eidos, 17: 23–36.
• Schroeder-Heister, P., 2009, “Sequent Calculi and Bidirectional Natural Deduction: on the Proper Basis of Proof-theoretic Semantics”, in: M. Peliš (ed), The Logica Yearbook 2008, London: College
Publications, 237–251.
• Schurz, G., 1998, “Relevance in Deductive Reasoning: a Critical Overview”, in: G. Schurz and M. Ursic (eds), Beyond Classical Logic, Conceptus-Studien, St. Augustin: Academia Verlag, 9–56.
• Stalnaker, R. 1968, “A Theory of Conditionals”, in: N. Rescher (ed.), Studies in Logical Theory (American Philosophical Quarterly Monograph Series: Volume 2), Oxford: Blackwell, 98–112.
• Strawson, P., 1952, Introduction to Logical Theory, Oxford: Oxford University Press.
• Sylvan, R., 1989, “A Preliminary Western History of Sociative Logics”, chapter 4 of Bystanders’ Guide to Sociative Logics, Research Series in Logic and Metaphysics #9, Australian National
University, Canberra, published as chapter 5 of Sociative Logics and their Applications. Essays by the Late Richard Sylvan, D. Hyde and G. Priest (eds.), Aldershot: Ashgate Publishing, 2000.
• Thomason, R., 1969, “A Semantical Study of Constructive Falsity”, Zeitschrift für mathematische Logik und Grundlagen der Mathematik, 15: 247–257.
• Thompson, B., 1991, “Why is Conjunctive Simplification Invalid?”, Notre Dame Journal of Formal Logic, 32: 248–254.
• Unterhuber, M., 2013, Possible Worlds Semantics for Indicative and Counterfactual Conditionals. A Formal Philosophical Inquiry into Chellas-Segerberg Semantics, Heusenstamm: Ontos Verlag.
• Urchs, M., 1994, “On the Logic of Event-causation. Jaśkowski-style Systems of Causal Logic”, Studia Logica, 53: 551–578.
• Wagner, G., 1991, Ex Contradictione Nihil Sequitur, in: Proceedings IJCAI 1991, San Francisco: Morgan Kaufmann, 538–543.
• Wansing, H., 2001, “Negation”, in: L. Goble (ed.), The Blackwell Guide to Philosophical Logic, Cambridge, MA: Basil Blackwell Publishers, 415–436.
• –––, 2005, “Connexive Modal Logic”, in R. Schmidt et al. (eds.), Advances in Modal Logic. Volume 5, London: King’s College Publications, 367–383. [Wansing 2005 available online.]
• –––, 2007, “A Note on Negation in Categorial Grammar”, Logic Journal of the Interest Group in Pure and Applied Logics, 15: 271–286.
• –––, 2008, “Constructive Negation, Implication, and Co-implication”, Journal of Applied Non-Classical Logics, 18: 341–364.
• –––, 2016a, “Falsification, natural deduction, and bi-intuitionistic logic”, Journal of Logic and Computation, 26 (2016): 425–450; first online 17 July 2013, doi:10.1093/logcom/ext035
• –––, 2016b, “Natural Deduction for Bi-Connexive Logic and a Two-Sorted Typed λ-Calculus”, IfCoLog Journal of Logics and their Applications, 3: 413–439.
• –––, 2017, “A more general general proof theory”, Journal of Applied Logic, 25: 23–46.
• Wansing, H., Omori, H. and Ferguson, T.M., “The Tenacity of Connexive Logic: Preface to the Special Issue”, IfCoLog Journal of Logics and their Applications, 3: 279–296.
• Wansing, H. and Skurt, D., 2018, “Negation as Cancellation, Connexive Logic, and qLPm”, Australasian Journal of Logic, 15: 476–488.
• Wansing, H. and Unterhuber, M., 2019, “Connexive Conditional Logic. Part I”, Logic and Logical Philosophy, 28: 567–610.
• Weiss, Y., 2019, “Connexive Extensions of Regular Conditional Logic”, Logic and Logical Philosophy, 28: 611–627.
• Wiredu, J.E., 1974, “A Remark on a Certain Consequence of Connexive Logic for Zermelo’s Set Theory”, Studia Logica, 33: 127–130.
• Woods, J., 1968, “ Two Objections to System CC1 of Connexive Implication”, Dialogue, 7: 473–475.
Academic Tools
How to cite this entry.
Preview the PDF version of this entry at the Friends of the SEP Society.
Look up topics and thinkers related to this entry at the Internet Philosophy Ontology Project (InPhO).
Enhanced bibliography for this entry at PhilPapers, with links to its database.
The author would like to thank Hitoshi Omori for many stimulating discussions on connexive logic and comments on a draft version of this entry, Wolfgang Lenzen for making available an excerpt from
Boethius’ De Syllogismo Hypothetico, and Hans Rott and Andreas Kapsner for some helpful remarks. | {"url":"https://plato.stanford.edu/Entries/logic-connexive/","timestamp":"2024-11-10T01:48:36Z","content_type":"text/html","content_length":"167273","record_id":"<urn:uuid:4270fbaf-1a5d-4365-aa4b-fd22a5395809>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.3/warc/CC-MAIN-20241110005602-20241110035602-00864.warc.gz"} |
Gravitation | Brilliant Math & Science Wiki
Gravity or gravitation is a natural phenomenon by which all things with energy are brought toward (or gravitate toward) one another, including stars, planets, galaxies, and even light and sub-atomic
particles. Gravity is responsible for many of the structures in the universe, by creating spheres of hydrogen—where hydrogen fuses under pressure to form stars—and grouping them into galaxies. On
Earth, it gives weight to physical objects and causes the tides. It has an infinite range, although its effects become increasingly weaker on objects farther away.
Gravity is accurately described by the general theory of relativity which describes gravity not as a force but as a consequence of the curvature of spacetime, caused by the uneven distribution of
mass/energy, and resulting in gravitational time dilation, where time lapses more slowly in lower (stronger) gravitational potential.
However, for most applications, gravity is well approximated by Newton's law of universal gravitation, which postulates that gravity is a force where two bodies of mass are directly drawn or
attracted to each other according to a mathematical relationship, where the attractive force is proportional to the product of their masses and inversely proportional to the square of the distance
between them.
Mass is a fundamental property of all particles, and in conventional theories of physics, from Newtonian mechanics to string theory, it is positive or zero. In addition to setting the scale of
particle acceleration in response to forces and setting the speed limit for particles, mass gives rise to gravitational force. It should be noted that only zero mass particles, the photon and the
gluon, travel at the speed of light.
Everyday activities like dragging something across the floor or throwing a medicine ball give us insight into what is called inertial mass, \(m_\text{inertial}\), which is the resistance of an object
to acceleration under a net force \(m_\text{inertial} = \dfrac{F_\text{net}}{a}.\) The bigger something is, the more inertial mass it tends to have and the harder it is to move the object.
We know from our experiences that it is harder to carry a box of books up the stairs than a tea kettle, and there is also a common aphorism, "the bigger they are, the harder they fall." All these
show that gravity is a mass-dependent force: the bigger something is, the more gravitational mass it has and the harder it is to move the object. That is, \(F_\text{gravity} \sim f(m_\text
{gravitational})\), where \(f\) is some monotonically increasing function of \(m_\text{gravitational}\).
However, the relationship between these two masses is not clear, and neither is the form of \(f\). Happily, there is a simple experiment to demonstrate what's known as the equivalence principle.
Equivalence Principle
After centuries of controversy as to whether or not Galileo actually dropped anything from the Leaning Tower of Pisa, TV physics personality Brian Cox finally solved the problem of relating \(m_\
text{inertial}\) to \(m_\text{gravitational}\). Using an enormous vacuum chamber, a feather and a bowling ball were dropped from a tall crane. The ball and feather were found to hit the ground at
precisely the same time, which showed that their accelerations were identical: \(a_\text{feather}=a_\text{bowling ball}\). This result implies that the acceleration of a mass in a gravitational
field is independent of its mass.
If we take this seriously, it shows that
\[ \frac{F_\text{gravity}(m_\text{gravitational})}{m_\text{inertial}} =\text{ constant}, \]
which we find from \(F_\text{net} = F_\text{gravity} = m_\text{inertial}a\).
All else being equal, the ratio of the force felt by an object due to its gravitational mass to its inertial mass is a constant. The logic of this condition demonstrates two things:
□ The force of gravity must be in direct proportion to the gravitational mass: \(F_\text{gravity}\propto m_\text{gravitational}\).
□ Gravitational mass and inertial mass are identical: \(m_\text{gravitational} = m_\text{inertial}\).
This makes it clear why all objects accelerate uniformly in the gravitational field of the Earth.
If we call the gravitational field \(\Gamma\), then \(F_\text{gravity}=m_\text{inertial}\Gamma\), so \(m_\text{inertial}\Gamma = m_\text{inertial}a\) implies that \(\Gamma = a\), i.e. the force
of gravity on an object is given by the mass of the object, multiplied by the gravitational field strength, which has units of acceleration.
This gives rise to a question: Is there any fundamental distinction between acceleration and a gravitational field?
Equivalence Principle
The inertial and gravitational masses of an object are indistinguishable:
\[m_\text{inertial} = m_\text{gravitational}. \ _\square\]
We know how particles respond to gravitational fields, but what gives rise to gravitational fields? As it turns out, gravitational fields interact with particles through their masses, and
gravitational fields arise from particles via their masses, as is the case with electric fields and charged particles. In other words, gravity is the force of interaction between massive objects.
This is a revealing point.
Gravitational field's dependence on mass of source particle:
In the previous example, we found the force of gravity on a particle with mass \(m_a\) in a gravitational field was \(m_a\Gamma\). If we consider \(\Gamma\) to be the field of another particle of
mass \(m_b\), located a fixed distance away, then we can say the force on particle \(a\) is \(m_a\Gamma_b\).
However, we can view this interaction the other way around, as the force felt by particle \(b\) due to the field from particle \(a\). From this perspective, the force can be written as \(m_b\
Gamma_a\), yielding
\[ m_b\Gamma_a &= m_a\Gamma_b \\ \frac{m_b}{m_a} &= \frac{\Gamma_b}{\Gamma_a} = \text{ constant}, \]
which implies that \(\Gamma_i \propto m_i\). Thus, the force of gravity between two massive objects is proportional to the product of their masses \(F_\text{ab} \propto m_am_b\). Because particle
mass is non-negative, the gravitational interaction between massive objects can only be attractive.
Now we nearly have a complete description of Newtonian gravity, and all that is left is the force's dependence on the distance between particles. Here, there are two ways of going. One way is to
point out that experimental measurements of the force of attraction between massive objects show that the gravitational force decreases as the inverse-square of the distance between objects, \(r^{-2}
\). However, we'll consider a more interesting approach.
Gravitational field's dependence on distance from source:
Let's envision the field as a bundle of lines emanating in all directions. The number of lines emerging from the particle is in direct proportion to the mass of the particle \((\Gamma_S \propto
m_s),\) and the strength of the interaction with another particle \(m_r\) is proportional to the number of field lines from \(m_s\) which intersect \(m_r\).
If we consider the field of a point particle \(m_s\), it clearly must have spherical symmetry. Now, if we surround \(m_s\) with an imaginary sphere, it is clear that no matter how large or how
small the sphere, the same number of field lines from \(m_s\) will penetrate the surface of the sphere. However, the density of the field lines which penetrate the surface will decrease as the
sphere grows larger, and so will the number of field lines that penetrate a distant particle.
Concretely, the number of field lines is a constant, but the surface area of the sphere grows as \(4\pi r^2\), so the density of field lines shrinks as \(\Gamma_S \sim \frac{1}{4\pi}\frac{m_s}{r^
2}\). We can state this in the form of a flux equation. If we call the flux of field lines from \(m_s\) through a surrounding surface \(\Phi_G\), then we have \(\Phi_G \sim \frac{m_s}{r^2}\).
Now, since field lines point inward toward the source particle, the flux through the surface will be inward and thus have a negative sign.
Now we have the complete form for Newton's law of gravitation, and the gravitational field due to a particle \(m_s\) at a distance \(r\) is given by
\[\Gamma_s \sim \frac{m_s}{r^2}.\]
The \(\sim\) implies that there is a numerical prefactor we've left out. We'll call this number \(G\), the gravitational constant. Its measurement was no small historical achievement.
Newton's Law of Gravitation
The strength of the gravitational interaction between two particles of masses \(m_a\) and \(m_b\), separated by a distance \(r\), is given by
\[F_{ab} = G\frac{m_am_b}{r^2}. \ _\square\]
The above equation can also be expressed in vector form:
\[ \vec{F}=G\dfrac{m_1m_2}{r^2}(-\hat{r})&=-G\dfrac{m_1m_2}{r^2} \hat{r}\\\\ \vec{F}&=-G\dfrac{m_1m_2}{|r|^2} \hat{r}. \]
Two identical rockets are in circular trajectories around Earth under the control of their engines. Rocket A is twice as far from the center of Earth as rocket B, yet they have the same centripetal
acceleration. What is the ratio of the velocity of rocket B to that of rocket A, \(\dfrac{v_B}{v_A}?\)
An object's mass is \(\SI{120}{\kilo\gram}\). When it is taken away to the moon, it is noticed that the object's mass remains the same but its weight has changed.
What will be the weight (in Newtons) of the object on the moon?
Details and Assumptions:
• \(g=\SI[per-mode=symbol]{9.8}{\meter\per\second\squared}.\)
• The acceleration due to gravity on the moon is \(\frac{1}{6}\) of that on Earth.
Gravitational Field around Spherical Distributions of Matter
We mentioned that the distance dependence in Newton's law can be obtained by considering the total flux of gravitational field lines through an imaginary sphere. A surprising corollary to the flux
relation, which we will now prove, is that a particle located inside of a spherically symmetric shell of mass will feel no gravitational attraction to the shell.
Show that a particle \(m_r,\) located inside a spherical shell of total mass \(M_s,\) feels no gravitational attraction to the shell.
We focus on the diagram of the shell that's intersected by the spherical sections of radii \(r_L\) and \(r_R\). The two sections intersect the same small solid angle \(\Delta\Omega\) (we
exaggerate the size of the sections for clarity, but they are taken to be very small), and thus the entire shell can be calculated using opposing pairs of equal solid angles. The entire shell has
mass \(M_s\) and thus has mass density \(\sigma = \frac{1}{4\pi}\frac{M_S}{R^2},\) where \(R\) is the radius of the shell. Particle \(m_r\) is attracted by both sections in opposite directions.
The section of radius \(r_L\) has a total mass of \(M_L = \sigma \Omega r_L^2\), and the entire section is located a distance of approximately \(r_L\) from the particle. Thus, the gravitational
field strength at \(m_r\) due to the section is \[\sim\Gamma_L = G\frac{M_L}{r_L^2} = G\frac{\sigma\Delta\Omega r_L^2}{r_L^2} = G\sigma\Delta\Omega\] and points along the gold line toward the
You may have noticed that the previous calculation results in an \(r_L\)-independent expression. The calculation for the gravitational field due to the \(r_R\) section is similarly independent of
\(r_R\) and is equal to \(G\sigma\Delta\Omega\), pointing along the gold line toward the right. I.e. the net force on \(m_r\) from the two sections of the shell is \(G\sigma\Delta\Omega-G\sigma\
Because the shell can be deconstructed into a collection of such pairs, there is no net force on the particle, because the attraction of every patch of surface cancels exactly with its partner.
Note that this result is general for any particle inside the spherical shell, and does not depend on the particular location inside of the shell. \(_\square\)
Challenge yourself
Using a similar line of argument, one can show that outside of a spherical shell of mass \(M_s\), the field strength is given by \(\Gamma_s = G\frac{M_s}{r^2},\) where \(r\) is the distance from
the center of the sphere. Can you do it?
\[\sqrt{2}\frac{Gm_b^2}{d^2}\] \[\sqrt{2}\frac{Gm_b^2}{d^2} + \frac{Gm_bM_s}{R^2}\] \[2\frac{Gm_b^2}{d^2} + \frac{Gm_bM_s}{R^2}\] \[2\frac{Gm_b^2}{d^2}\]
A green sphere of mass \(m_b\) is placed inside a spherical shell of radius \(R\) and mass \(M_s\). Two spheres identical in mass to the green sphere are placed a distance \(d\) to the right and to
the bottom, respectively, of it. What is the magnitude of the gravitational force on the green sphere?
Gravitational Potential Energy
Mechanics problems often approximate the potential energy change of an object near the surface of Earth to be \(\Delta U = mg\Delta h\), the mass of the object times the change in height times a
field of constant strength \(g = 9.8\text{ m/s}^2\). However, we know that the gravitational field of Earth falls off as the square of the height above the Earth's surface \(\sim r^{-2} = \left(R_\
text{Earth}+h\right)^{-2}\), where \(R_\text{Earth}\) is the radius of Earth, i.e. non-constant. The question stands, "How do we reconcile the approximation with the inverse square law of gravity?"
Potential Energy Approximation
Let's calculate the work required to lift an object a distance \(\Delta h\) in the gravitational field of Earth. Since the particle starts and ends with zero velocity, the work done is stored in
the form of potential energy, \(W = \Delta U\). We have
\[W &= \int F(r)dr\\ &= \int_{R_\text{Earth}}^{R_\text{Earth}+\Delta h} G\frac{m_rM_\text{Earth}}{r^2} dr. \]
The integral is simple and gives us
\[ W &= -Gm_rM_\text{Earth} \left(\frac{1}{R_\text{Earth}+\Delta h} - \frac{1}{R_\text{Earth}} \right) \\ &= Gm_rM_\text{Earth} \frac{\Delta h}{\left(R_\text{Earth}+\Delta h\right)R_\text
{Earth}}. \]
Now, as long as the change in height \(\Delta h\) is small compared to the radius of Earth, we can say \(\left(R_\text{Earth}+\Delta h\right)R_\text{Earth} \approx R_\text{Earth}^2\). We
therefore have
\[\Delta U = m_r\frac{GM_\text{Earth}}{R_\text{Earth}^2}\Delta h.\]
Now things are starting to look familiar. In the usual approximation we have \(\Delta U = m_r g \Delta h\) and here we have \(\Delta U = m_r \frac{GM_\text{Earth}}{r_\text{Earth}^2}\Delta h\).
For the two schemes to be in agreement, it should be so that \(g = \dfrac{GM_\text{Earth}}{r_\text{Earth}^2}\).
Let's check
\[ \frac{GM_\text{Earth}}{r_\text{Earth}^2} &\approx \frac{6.7\times10^{-11}\frac{\text{Nm}^2}{\text{kg}^2}\times 6\times10^{24}\text{ kg} }{ \left(6.4 \times 10^6 \text{m}\right)^2 } \\ &\approx
\frac{6.7\times 6 \times 10}{6.4^2}\text{ m/s}^2\\ &\approx \frac{40\times10}{41}\text{ m/s}^2 \\ &\approx 9.8 \text{ m/s}^2. \]
Thus, we have derived the connection between the usual \(g\) and the expression obtained from the approximation that underlies it:
\[\boxed{\displaystyle g = \frac{GM_\text{Earth}}{r_\text{Earth}^2}}.\]
Evaluating Gravitational Potential Energy
Sam the rock climber can climb his favorite route at the gym 20 times before he runs out of energy. Approximately how many times would he be able to climb the same route if it were built on Mars?
Details and Assumptions:
• The radius of Mars is 53% the radius of Earth.
• The mass of Mars is 11% the mass of Earth.
• When climbing on Mars, he has an air supply with the same composition as the Earth's air. | {"url":"https://brilliant.org/wiki/gravitation/#determining-forces-and-accelerations-using-newtons","timestamp":"2024-11-10T19:26:45Z","content_type":"text/html","content_length":"74060","record_id":"<urn:uuid:451c0750-8d7c-460f-9360-f8a9715c1e0b>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00676.warc.gz"} |
COMPARING DECIMALS! | Quizalize
Feel free to use or edit a copy
includes Teacher and Student dashboards
Track each student's skills and progress in your Mastery dashboards
With a free account, teachers can
• edit the questions
• save a copy for later
• start a class game
• automatically assign follow-up activities based on students’ scores
• assign as homework
• share a link with colleagues
• print as a bubble sheet
• WHICH NUMBER IS GREATER THAN SEVEN AND TWENTY-EIGHT THOUSANDTHS?
• Q1
WHICH NUMBER IS GREATER THAN SEVEN AND TWENTY-EIGHT THOUSANDTHS?
• Q2
SELECT THE NUMBER BELOW THAT HAS THE LEAST VALUE.
• Q3
WHICH SYMBOL CORRECTLY REPLACES THE BLANK? 32.028 __________ 32.208
• Q4
WHICH SYMBOL CORRECTLY REPLACES THE BLANK? 4.140 __________ 4.14
• Q5
WHICH NUMBER IS LESS THAN FIFTEEN AND TWENTY HUNDREDTHS?
• Q6
SELECT THE GREATEST NUMBER BELOW. | {"url":"https://resources.quizalize.com/view/quiz/comparing-decimals-e647c5fb-dda9-4e2b-a3a0-ad6e12edca66","timestamp":"2024-11-04T01:42:44Z","content_type":"text/html","content_length":"78511","record_id":"<urn:uuid:387338ef-7037-4b07-9386-49e490750648>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00497.warc.gz"} |
An Optimal Elimination Algorithm for Learning a Best Arm
An Optimal Elimination Algorithm for Learning a Best Arm
Avinatan Hassidim · Ron Kupfer · Yaron Singer
Orals & Spotlights: Learning Theory
Abstract: We consider the classic problem of $(\epsilon,\delta)$-\texttt{PAC} learning a best arm where the goal is to identify with confidence $1-\delta$ an arm whose mean is an $\
epsilon$-approximation to that of the highest mean arm in a multi-armed bandit setting. This problem is one of the most fundamental problems in statistics and learning theory, yet somewhat
surprisingly its worst case sample complexity is not well understood. In this paper we propose a new approach for $(\epsilon,\delta)$-\texttt{PAC} learning a best arm. This approach leads to an
algorithm whose sample complexity converges to \emph{exactly} the optimal sample complexity of $(\epsilon,\delta)$-learning the mean of $n$ arms separately and we complement this result with a
conditional matching lower bound. More specifically: \begin{itemize} \item The algorithm's sample complexity converges to \emph{exactly} $\frac{n}{2\epsilon^2}\log \frac{1}{\delta}$ as $n$ grows and
$\delta \geq \frac{1}{n}$; % \item We prove that no elimination algorithm obtains sample complexity arbitrarily lower than $\frac{n}{2\epsilon^2}\log \frac{1}{\delta}$. Elimination algorithms is a
broad class of $(\epsilon,\delta)$-\texttt{PAC} best arm learning algorithms that includes many algorithms in the literature. \end{itemize} When $n$ is independent of $\delta$ our approach yields an
algorithm whose sample complexity converges to $\frac{2n}{\epsilon^2} \log \frac{1}{\delta}$ as $n$ grows. In comparison with the best known algorithm for this problem our approach improves the
sample complexity by a factor of over 1500 and over 6000 when $\delta\geq \frac{1}{n}$.
Successful Page Load | {"url":"https://neurips.cc/virtual/2020/spotlight/18818","timestamp":"2024-11-11T23:54:58Z","content_type":"text/html","content_length":"50475","record_id":"<urn:uuid:52e935fe-303b-48b9-ac8d-c44d26f0ff4a>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00291.warc.gz"} |
Online Data Science Certification Archives - DexLab Analytics | Big Data Hadoop SAS R Analytics Predictive Modeling & Excel VBA
In our previous blog we discussed about few of the basic functions of MQL like .find() , .count() , .pretty() etc. and in this blog we will continue to do the same. At the end of the blog there is a
quiz for you to solve, feel free to test your knowledge and wisdom you have gained so far.
Given below is the list of functions that can be used for data wrangling:-
1. updateOne() :- This function is used to change the current value of a field in a single document.
After changing the database to “sample_geospatial” we want to see what the document looks like? So for that we will use .findOne() function.
Now lets update the field value of “recrd” from ‘ ’ to “abc” where the “feature_type” is ‘Wrecks-Visible’.
Now within the .updateOne() funtion any thing in the first part of { } is the condition on the basis of which we want to update the given document and the second part is the changes which we want to
make. Here we are saying that set the value as “abc” in the “recrd” field . In case you wanted to increase the value by a certain number ( assuming that the value is integer or float) you can use
“$inc” instead.
2. updateMany() :- This function updates many documents at once based on the condition provided.
3. deleteOne() & deleteMany() :- These functions are used to delete one or many documents based on the given condition or field.
4. Logical Operators :-
“$and” : It is used to match all the conditions.
“$or” : It is used to match any of the conditions.
The first code matches both the conditions i.e. name should be “Wetpaint” and “category_code” should be “web”, whereas the second code matches any one of the conditions i.e. either name should be
“Wetpaint” or “Facebook”. Try these codes and see the difference by yourself.
So, with that we come to the end of the discussion on the MongoDB Basics. Hopefully it helped you understand the topic, for more information you can also watch the video tutorial attached down this
blog. The blog is designed and prepared by Niharika Rai, Analytics Consultant, DexLab Analytics DexLab Analytics offers machine learning courses in Gurgaon. To keep on learning more, follow DexLab
Analytics blog.
Introduction to MongoDB
MongoDB is a document based database program which was developed by MongoDB Inc. and is licensed under server side public license (SSPL). It can be used across platforms and is a non-relational
database also known as NoSQL, where NoSQL means that the data is not stored in the conventional tabular format and is used for unstructured data as compared to SQL and that is the major difference
between NoSQL and SQL.
MongoDB stores document in JSON or BSON format. JSON also known as JavaScript Object notation is a format where data is stored in a key value pair or array format which is readable for a normal human
being whereas BSON is nothing but the JSON file encoded in the binary format which is quite hard for a human being to understand.
Structure of MongoDB which uses a query language MQL(Mongodb query language):-
Databases:- Databases is a group of collections.
Collections:- Collection is a group fields.
Fields:- Fields are nothing but key value pairs
Just for an example look at the image given below:-
Here I am using MongoDB Compass a tool to connect to Atlas which is a cloud based platform which can help us write our queries and start performing all sort of data extraction and deployment
techniques. You can download MongoDB Compass via the given link https://www.mongodb.com/try/download/compass
In the above image in the red box we have our databases and if we click on the “sample_training” database we will see a list of collections similar to the tables in sql.
Now lets write our first query and see what data in “companies” collection looks like but before that select the “companies” collection.
Now in our filter cell we can write the following query:-
In the above query “name” and “category_code” are the key values also known as fields and “Wetpaint” and “web” are the pair values on the basis of which we want to filter the data.
What is cluster and how to create it on Atlas?
MongoDB cluster also know as sharded cluster is created where each collection is divided into shards (small portions of the original data) which is a replica set of the original collection. In case
you want to use Atlas there is an unpaid version available with approximately 512 mb space which is free to use. There is a pre-existing cluster in MongoDB named Sandbox , which currently I am using
and you can use it too by following the given steps:-
1. Create a free account or sign in using your Google account on
2. Click on “Create an Organization”.
3. Write the organization name “MDBU”.
4. Click on “Create Organization”.
5. Click on “New Project”.
6. Name your project M001 and click “Next”.
7. Click on “Build a Cluster”.
8. Click on “Create a Cluster” an option under which free is written.
9. Click on the region closest to you and at the bottom change the name of the cluster to “Sandbox”.
10. Now click on connect and click on “Allow access from anywhere”.
11. Create a Database User and then click on “Create Database User”.
username: m001-student
password: m001-mongodb-basics
12. Click on “Close” and now load your sample as given below :
Loading may take a while….
13. Click on collections once the sample is loaded and now you can start using the filter option in a similar way as in MongoDB Compass
In my next blog I’ll be sharing with you how to connect Atlas with MongoDB Compass and we will also learn few ways in which we can write query using MQL.
So, with that we come to the end of the discussion on the MongoDB. Hopefully it helped you understand the topic, for more information you can also watch the video tutorial attached down this blog.
The blog is designed and prepared by Niharika Rai, Analytics Consultant, DexLab Analytics DexLab Analytics offers machine learning courses in Gurgaon. To keep on learning more, follow DexLab
Analytics blog.
MongoDB Basics Part-I
In this particular blog we will discuss about few of the basic functions of MQL (MongoDB Query Language) and we will also see how to use them? We will be using MongoDB Compass shell (MongoSH Beta)
which is available in the latest version of MongoDB Compass.
Connect your Atlas cluster to your MongoDB Compass to get started. Latest version of MongoDB Compass will have this shell, so if you don’t find this shell then please install the latest version for
this to work.
Now lets start with the functions.
1. find() :- You need this function for data extraction in the shell.
In the shell we need to first write the “use database name” code to access the database then use .find() to extract data which has name “Wetpaint”
For the above query we get the following result:-
The above result brings us to another function .pretty() .
2. pretty() :- this function helps us see the result more clearly.
Try it yourself to compare the results.
3. count() :- Now lets see how many entries we have by the company name “Wetpaint”.
So we have only one document.
4. Comparison operators :-
“$eq” : Equal to
“$neq”: Not equal to
“$gt”: Greater than
“$gte”: Greater than equal to
“$lt”: Less than
“$lte”: Less than equal to
Lets see how this works.
5. findOne() :- To get a single document from a collection we use this function.
6. insert() :- This is used to insert documents in a collection.
Now lets check if we have been able to insert this document or not.
Notice that a unique id has been added to the document by default. The given id has to be unique or else there will be an error. To provide a user defined id use “_id”.
So, with that we come to the end of the discussion on the MongoDB. Hopefully it helped you understand the topic, for more information you can also watch the video tutorial attached down this blog.
The blog is designed and prepared by Niharika Rai, Analytics Consultant, DexLab Analytics DexLab Analytics offers machine learning courses in Gurgaon. To keep on learning more, follow DexLab
Analytics blog.
ARIMA (Auto-Regressive Integrated Moving Average)
This is another blog added to the series of time series forecasting. In this particular blog I will be discussing about the basic concepts of ARIMA model.
So what is ARIMA?
ARIMA also known as Autoregressive Integrated Moving Average is a time series forecasting model that helps us predict the future values on the basis of the past values. This model predicts the future
values on the basis of the data’s own lags and its lagged errors.
When a data does not reflect any seasonal changes and plus it does not have a pattern of random white noise or residual then an ARIMA model can be used for forecasting.
There are three parameters attributed to an ARIMA model p, q and d :-
p :- corresponds to the autoregressive part
q:- corresponds to the moving average part.
d:- corresponds to number of differencing required to make the data stationary.
In our previous blog we have already discussed in detail what is p and q but what we haven’t discussed is what is d and what is the meaning of differencing (a term missing in ARMA model).
Since AR is a linear regression model and works best when the independent variables are not correlated, differencing can be used to make the model stationary which is subtracting the previous value
from the current value so that the prediction of any further values can be stabilized . In case the model is already stationary the value of d=0. Therefore “differencing is the minimum number of
deductions required to make the model stationary”. The order of d depends on exactly when your model becomes stationary i.e. in case the autocorrelation is positive over 10 lags then we can do
further differencing otherwise in case autocorrelation is very negative at the first lag then we have an over-differenced series.
The formula for the ARIMA model would be:-
To check if ARIMA model is suited for our dataset i.e. to check the stationary of the data we will apply Dickey Fuller test and depending on the results we will using differencing.
In my next blog I will be discussing about how to perform time series forecasting using ARIMA model manually and what is Dickey Fuller test and how to apply that, so just keep on following us for
So, with that we come to the end of the discussion on the ARIMA Model. Hopefully it helped you understand the topic, for more information you can also watch the video tutorial attached down this
blog. The blog is designed and prepared by Niharika Rai, Analytics Consultant, DexLab Analytics DexLab Analytics offers machine learning courses in Gurgaon. To keep on learning more, follow DexLab
Analytics blog.
ARMA- Time Series Analysis Part 4
ARMA(p,q) model in time series forecasting is a combination of Autoregressive Process also known as AR Process and Moving Average (MA) Process where p corresponds to the autoregressive part and q
corresponds to the moving average part.
Autoregressive Process (AR) :- When the value of Y[t] in a time series data is regressed over its own past value then it is called an autoregressive process where p is the order of lag into
Y[t] = observation which we need to find out.
α[1]= parameter of an autoregressive model
Y[t-1]= observation in the previous period
u[t]= error term
The equation above follows the first order of autoregressive process or AR(1) and the value of p is 1. Hence the value of Y[t] in the period ‘t’ depends upon its previous year value and a random
Moving Average (MA) Process :- When the value of Y[t] of order q in a time series data depends on the weighted sum of current and the q recent errors i.e. a linear combination of error terms then it
is called a moving average process which can be written as :-
y[t] = observation which we need to find out
α= constant term
β[ut-q]= error over the period q .
ARMA (Autoregressive Moving Average) Process :-
The above equation shows that value of Y in time period ‘t’ can be derived by taking into consideration the order of lag p which in the above case is 1 i.e. previous year’s observation and the
weighted average of the error term over a period of time q which in case of the above equation is 1.
How to decide the value of p and q?
Two of the most important methods to obtain the best possible values of p and q are ACF and PACF plots.
ACF (Auto-correlation function) :- This function calculates the auto-correlation of the complete data on the basis of lagged values which when plotted helps us choose the value of q that is to be
considered to find the value of Y[t]. In simple words how many years residual can help us predict the value of Y[t] can obtained with the help of ACF, if the value of correlation is above a certain
point then that amount of lagged values can be used to predict Y[t].
Using the stock price of tesla between the years 2012 and 2017 we can use the .acf() method in python to obtain the value of p.
.DataReader() method is used to extract the data from web.
The above graph shows that beyond the lag 350 the correlation moved towards 0 and then negative.
PACF (Partial auto-correlation function) :- Pacf helps find the direct effect of the past lag by removing the residual effect of the lags in between. Pacf helps in obtaining the value of AR where as
acf helps in obtaining the value of MA i.e. q. Both the methods together can be use find the optimum value of p and q in a time series data set.
Lets check out how to apply pacf in python.
As you can see in the above graph after the second lag the line moved within the confidence band therefore the value of p will be 2.
So, with that we come to the end of the discussion on the ARMA Model. Hopefully it helped you understand the topic, for more information you can also watch the video tutorial attached down this blog.
The blog is designed and prepared by Niharika Rai, Analytics Consultant, DexLab Analytics DexLab Analytics offers machine learning courses in Gurgaon. To keep on learning more, follow DexLab
Analytics blog.
Autocorrelation- Time Series – Part 3
Autocorrelation is a special case of correlation. It refers to the relationship between successive values of the same variables .For example if an individual with a consumption pattern:-
spends too much in period 1 then he will try to compensate that in period 2 by spending less than usual. This would mean that Ut is correlated with Ut+1 . If it is plotted the graph will appear as
follows :
Positive Autocorrelation : When the previous year’s error effects the current year’s error in such a way that when a graph is plotted the line moves in the upward direction or when the error of the
time t-1 carries over into a positive error in the following period it is called a positive autocorrelation.
Negative Autocorrelation : When the previous year’s error effects the current year’s error in such a way that when a graph is plotted the line moves in the downward direction or when the error of the
time t-1 carries over into a negative error in the following period it is called a negative autocorrelation.
Now there are two ways of detecting the presence of autocorrelation
By plotting a scatter plot of the estimated residual (ei) against one another i.e. present value of residuals are plotted against its own past value.
If most of the points fall in the 1st and the 3rd quadrants , autocorrelation will be positive since the products are positive.
If most of the points fall in the 2nd and 4th quadrant , the autocorrelation will be negative, because the products are negative.
By plotting ei against time : The successive values of ei are plotted against time would indicate the possible presence of autocorrelation .If e’s in successive time show a regular time pattern, then
there is autocorrelation in the function. The autocorrelation is said to be negative if successive values of ei changes sign frequently.
First Order of Autocorrelation (AR-1)
When t-1 time period’s error affects the error of time period t (current time period), then it is called first order of autocorrelation.
AR-1 coefficient p takes values between +1 and -1
The size of this coefficient p determines the strength of autocorrelation.
A positive value of p indicates a positive autocorrelation.
A negative value of p indicates a negative autocorrelation
In case if p = 0, then this indicates there is no autocorrelation.
To explain the error term in any particular period t, we use the following formula:-
Where Vt= a random term which fulfills all the usual assumptions of OLS
How to find the value of p?
One can estimate the value of ρ by applying the following formula :-
Time Series Analysis Part I
A time series is a sequence of numerical data in which each item is associated with a particular instant in time. Many sets of data appear as time series: a monthly sequence of the quantity of goods
shipped from a factory, a weekly series of the number of road accidents, daily rainfall amounts, hourly observations made on the yield of a chemical process, and so on. Examples of time series abound
in such fields as economics, business, engineering, the natural sciences (especially geophysics and meteorology), and the social sciences.
• Univariate time series analysis- When we have a single sequence of data observed over time then it is called univariate time series analysis.
• Multivariate time series analysis – When we have several sets of data for the same sequence of time periods to observe then it is called multivariate time series analysis.
The data used in time series analysis is a random variable (Yt) where t is denoted as time and such a collection of random variables ordered in time is called random or stochastic process.
Stationary: A time series is said to be stationary when all the moments of its probability distribution i.e. mean, variance , covariance etc. are invariant over time. It becomes quite easy forecast
data in this kind of situation as the hidden patterns are recognizable which make predictions easy.
Non-stationary: A non-stationary time series will have a time varying mean or time varying variance or both, which makes it impossible to generalize the time series over other time periods.
Non stationary processes can further be explained with the help of a term called Random walk models. This term or theory usually is used in stock market which assumes that stock prices are
independent of each other over time. Now there are two types of random walks:
Random walk with drift : When the observation that is to be predicted at a time ‘t’ is equal to last period’s value plus a constant or a drift (α) and the residual term (ε). It can be written as
Yt= α + Yt-1 + εt
The equation shows that Yt drifts upwards or downwards depending upon α being positive or negative and the mean and the variance also increases over time.
Random walk without drift: The random walk without a drift model observes that the values to be predicted at time ‘t’ is equal to last past period’s value plus a random shock.
Yt= Yt-1 + εt
Consider that the effect in one unit shock then the process started at some time 0 with a value of Y0
When t=1
Y1= Y0 + ε1
When t=2
Y2= Y1+ ε2= Y0 + ε1+ ε2
In general,
Yt= Y0+∑ εt
In this case as t increases the variance increases indefinitely whereas the mean value of Y is equal to its initial or starting value. Therefore the random walk model without drift is a
non-stationary process.
So, with that we come to the end of the discussion on the Time Series. Hopefully it helped you understand time Series, for more information you can also watch the video tutorial attached down this
blog. DexLab Analytics offers machine learning courses in delhi. To keep on learning more, follow DexLab Analytics blog.
Linear Regression Part I: A Comprehensive Guide to Linear Regression
Today’s blog explores another vital statistical concept Linear Regression, let’s begin. Linear regression is normally used in statistics for predictive modeling. It tries to model a relationship
between two independent (explanatory variable) and dependent (explained variable) variables X and Y by fitting a linear equation (Y=b[o]+b[1]X+U[i]) to an observed data.
Assumptions of linear regression
• U[i] is a random real variable, where U[i ]is the difference between the observed dependent variable Y and predicted Y variable.
• The mean of U[i ]in any particular period is zero.
• The variance of U[i] is constant in each period i.e for all values of X, U[i] will show the same dispersion around their mean
• The variable U[i] has a normal distribution i.e the value of U[i] (for each X[i]) have a bell shaped symmetrical distribution about their zero mean.
• The random terms of different observations are independent i.e the covariance of any U[i ]with any other U[j] is equal to zero.
• U[i] is independent of the explanatory variable X.
• X[i ]are a set of fixed values in the hypothesised process of repeated sampling which underlies the linear regression model.
• In case there are more than one explanatory variables then they are not perfectly linearly correlated.
Linear Regression equation can be written as:
is the dependent variable
X is the independent variable.
b[0 ]is the intercept (where the line crosses the vertical y-axis)
b[1 ]is the slope
U[i] is the error term (difference between ) also called residual or white noise.
Simple linear regression follows the properties of Ordinary Least Square (OLS) which are as follows:-
1. Unbiased estimator:- E()=b ie. an estimator is unbiased if its bias is 0; E() – b = 0
2. Minimum Variance:- An estimate is best when it has the smallest variance as compared to any other estimate obtained from other econometric method.
3. Linear estimator
4. Best, Linear, Unbiased estimator (BLUE)
With that the discussion on Linear Regression wraps up here, hopefully it cleared away any confusion you might have and helped you get a grasp on the concept. We have a video discussion on this same
topic, which is attached below this blog, check it out for further reference.
Continue to track the DexLab Analytics blog to find informative posts related to Python for data science training.
Step-by-step guide to building a career in Data Science
With 2.5 quintillion bytes of data being created everyday companies are scrambling to build models and hire experts to extract information hidden in massive unstructured datasets and the data
scientists have become the most sought-after professionals in the world. The job portals are full of job postings looking for data scientists whose resume has the perfect combination of skill and
experience. In this world which is being driven by the data revolution, achieving your big data career dreams need a little bit of planning and strategizing. So, here is a step-by-step guide for you.
Grabbing a high paying and skilled data job is not going to be easy, industries will only invest money on individuals with the right skillset. Your job responsibility will involve wading through tons
of unstructured data to find pattern and meaning, making forecasts regarding marketing trends, customer behavior and deliver the insight in a presentable format to the company on the basis of which
they are going to be strategizing.
So, before you even begin make sure that you have the tenacity and enthusiasm required for the job. You would need to undergo Data science using python training, in order to gain the necessary skills
and knowledge and since this is an evolving field you should be ready to constantly upskill yourself and stay updated about the latest developments in the field.
Are you ready? If it’s a resounding yes, then, without wasting any more time let’s get straight to the point and explore the steps that will lead you to become a data scientist.
Step 1: Complete education
Before you pursue data science, you must complete your bachelors degree, if you are coming from computer science, applied mathematics, or, economics that could give you a head start. However, you
need to undergo Data Science training, post that to acquire the required skillset.
Step 2: Gain knowledge of Mathematics and statistics
You do not need to have a PHD in either, but, since both are at the core of the data science you must have a good grasp on applied mathematics and statistics. Your task would require you to have
knowledge regarding linear algebra, probability & statistics. So, your first step would be to update yourself and be familiar with the concepts if you happen to hail from a non-science background so
that you can sail through the rest of the journey.
Step 3: Get ready to do programming
Just like mathematics and statistics, having a grip on a programming language preferably Python, is essential. Now, why do you need to learn coding? Well, coding is important as you have to work with
large datasets comprising mostly unstructured data and coding will help you to clean, organize, read data and also process it. Now the stress is on Python because it is one of the widely used
languages in the data science community and is comparatively easier to pick up.
Step 4: Learn Machine Learning
Machine learning plays a crucial role in data science as it helps finding patterns in data and making predictions. Mastering machine learning techniques would enable you develop algorithms for the
models and create an automated system that enables you to make predictions in real-time. Consider undergoing a Machine Learning training gurgaon.
Step 5: Learn Data Munging, Visualization, and Reporting
It has been mentioned before that you would mostly be handling unstructured data, which means in order to process that data you must transform that data into a format that is easy to work with. Data
munging helps you achieve that. Data visualization is again a must-have skill for a data scientist as it allows you to visually present your data findings that is easy to understand through graphs,
charts, while data reporting lets you prepare and present reports for businesses.
Step 6: Be certified
Now that the field has advanced so much, there is a requirement for professionals who have undergone Data Science course. Doing a certification course would upskill you and arm you with industry
knowledge. Reputed institutes like Dexlab Analytics offer cutting edge courses such as Python for data science training. If you just follow this step it would take care of the rest of the worries,
the best part of getting your training is that here you will be taught everything from scratch so, no need to fret if you do not know programming language. Your learning would be aided by hands-on
Step 7: Practice your skills
You need to test the skills you have acquired and to hone the skills you must explore Kaggle, which lets your access resources you need and this platform also allows you to take part in competitions
that further helps you sharpen your abilities. You should also keep on practicing by doing projects in order to put the theories into action.
Step 8: Work on your soft skills
In order to be a professional data scientist you must acquire soft skills as well. So along with working on your communication skills, you must also need to develop problem solving skills while
learning how business organizations function to understand what would be required of you when you assume the role of a data scientist.
Step 9: Get an internship
Now that you have the skill and certification you need experience to get hired, build a resume stressing on the skills you have acquired and search the job portals to land an internship. It would not
only enhance your resume, but, it also gives you exposures to real projects, the more projects you handle the better and you would also learn from the experts there.
Step 10: Apply for a job
Once you have gathered enough experience start applying for full-time positions as now you have both skill and experience. But, do not stop learning once you land a job, because this field is growing
many changes will happen so you have to mold yourself accordingly. Be a part of the community, network with people, keep on exploring GitHub and find out what other skills you require.
So, those were the steps you need to follow to build a rewarding career in data science. The job opportunities are plenty and to grab the right job you must do big data training in gurgaon. These
courses are aimed to prepare individuals for the industry, so get ready for an exciting career! | {"url":"https://m.dexlabanalytics.com/blog/category/online-data-science-certification","timestamp":"2024-11-13T22:44:35Z","content_type":"text/html","content_length":"246617","record_id":"<urn:uuid:47a85adb-ec61-48c7-a570-a8592a508222>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00283.warc.gz"} |
Welcome - Dopti-Research | Videnskab og teknik
New patent improves electric motor performance.
The patent and the method behind it, can be used to develop more torque from the same size electric motor, and less heat for the same output power.
Update January 2023; a prototype and proof of concept model has been developed, showing that the principle works as intended.
Dopti research has completed an extensive patent on an innovative method of creating a rotating magnetic field, which results in a more energy efficient electric motor.
The higher efficiency is realized by a better utilization of the electric current to create the necessary magnetic field.
The method is also significantly more flexible than corresponding existing methods, and can be optimized for higher efficiency in a wider range of scenarios.
The idea for the patent arose in connection with research into high-speed control of magnetic fields, and the inventor (P.K Olsen, Denmark) realized that it could be used to control a rotating
magnetic field with high efficiency and flexibility.
In addition, the method opens up a completely new possibility; namely, changing most aspects of the magnetic field, including the shape and number and location of the magnetic poles, while
simultaneously rotating the magnetic field. This makes it possible to optimize the magnetic field for several different operating situations and potentially make an electric motor highly efficient in
a wider range of operating situations.
Overcoming the limitations of the 3-phase system (invented by Ferraris and Tesla in the early 1880s)
The new patent overcomes 2 important limitations of the nearly 140-year-old invention of the rotating magnetic field by phase rotation.
– Power limitations; the new technology can produce more torque from the same size electric motor or produce less waste heat at the same torque level.
– Magnetic field limitations; the new technology can optimize the magnetic field in a number of ways, at the same time as the field rotates, thus expanding the area of highly efficient operation.
Efficiency is in the details.
The new patent opens up a multitude of new ways to optimize the magnetic field and rotation down to microscopic parameters for higher efficiency in any load situation.
The new patent could accelerate the transition to a more energy-efficient future.
Electric motors are the single largest consumer of electricity. They account for around 45% of global electricity consumption, according to an analysis by the International Energy Agency.
Get the proof yourself.
You probably doubt that it is possible to reduce the heat loss in an electric motor (up to 15-20 %), and increase torque output ( 25-30 %). So get the proof.
-The inventor himself can in 5-10 minutes; explain the physics behind it, and prove it mathematically.
-In another 10 minutes you can understand how it can be realized in practice. After the introductory seminar, you can do the calculations yourself, and model the results.
Register for the free seminar.
Get a sneak peek at this exciting new technology. The seminar will cover the physics and mathematics behind the new working principle. With examples of setups to realize benefits such as; more
torque, less weight, reduced saturation loss, reduced hysteresis loss, reduced r2 loss and other aspects of the new working principle.
The introductory seminar is held as a zoom meeting.
The number of participants must be agreed in advance.
The purpose of the introductory seminar is that you are subsequently able to assess the potential of the invention. | {"url":"https://www.dopti-research.dk/welcome/","timestamp":"2024-11-05T03:01:43Z","content_type":"text/html","content_length":"39654","record_id":"<urn:uuid:cc313169-ad84-44df-9409-e9b7fa2703f1>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00572.warc.gz"} |
Diffraction in DSLWP-B lunar occultations
In February this year, the orientation of the orbit of DSLWP-B around the Moon was such that, when viewed from the Earth, it passed behind the Moon on every orbit. This opened up the possibility for
recording the signal of DSLWP-B as it hid behind the Moon, thus blocking the line of sight path. The physical effect that can be observed in such events is that of diffraction. The power of the
received signal doesn’t drop down to zero in a brick-wall fashion just after the line of sight is blocked, but rather behaves in an oscillatory fashion, forming the so called diffraction fringes.
The signal from DSLWP-B was observed and recorded at the Dwingeloo 25m radiotelescope for three days in February: 4th, 13th and 15th. During the first two days, an SSDV transmission was commanded
several minutes before DSLWP-B hid behind the Moon, so as to guarantee a continuous signal at 436.4MHz to observe the variations in signal power as DSLWP-B went behind the Moon. On the 15th, the
occultation was especially brief, lasting only 28 minutes. Thus, DSLWP-B was commanded to transmit continuously before hiding behind the Moon. This enabled us to also observe the end of the
occultation, since DSLWP-B continued transmitting when it exited from behind the Moon. This is an analysis of the recordings made at Dwingeloo.
The recordings can be found here. I have used my Moonbounce processing scripts to compute the signal power from the recordings. These scripts were made to study the power of the Moonbounce signal,
but can be used here as well. Essentially, the script performs Doppler correction using the tracking files published in dslwp_dev as orbit state vector and GMAT as propagator, detects the frequency
offset of the GMSK signal from DSLWP-B, lowpass filters it, and takes averaged power readings in windows of 102.4ms.
The occultations were planned as shown in this post. After computing the expected times of the occultations, the relevant segments of the power readings are plotted, obtaining the figures shown
The first two figures correspond to the start of the occultations on February 4 and 13. The last two figures correspond to the start and end of the occultation on Februrary 15. The first figure looks
somewhat different from the rest because the occultation was much slower. Take note at the time axis.
We can clearly see the diffraction fringes. Before dropping down to zero, the power of the received signal oscillates. The diffraction pattern that happens during a lunar occultation is explained in
this document. There, the case of study is the occultation of a star by the Moon at optical wavelengths. In the derivation done in that document, it is implicitly assumed that the star is very far,
so the wave arriving the Moon can be assumed to be a plane wave. This is not the case for DSLWP-B, which is much more near to the Moon. In this case, we need to consider a point source instead of a
plane wave. The same kind of diffraction pattern is obtained in both cases, but the temporal frequency at which the diffraction fringes occur (the fringe rate) can be quite different, especially
because the distance from DSLWP to the surface of the Moon is changing quickly.
The typical diffraction pattern is given by the Fresnel integral\[D(\nu) = \frac{1}{2}\left|\int_{-\infty}^\nu e^{i\pi s^2/2}\,ds\right|^2.\]A plot of this function can be seen in the figure below.
The variable \(\nu\), depends on the geometry of the diffraction, and is computed differently in the case of a plane wave or a point source. The point \(\nu=0\) represents the case of a grazing
diffraction, which means that the line of sight path is tangent to the obstacle. The case \(\nu < 0\) corresponds to the situation when the direct line of sight is blocked, and \(\nu > 0\)
corresponds to the situation when there is a direct line of sight path. As the obstacle gets away from the line of sight and \(\nu \to +\infty\), we have \(D(\nu) \to 1\), so we recover the power we
would have if the obstacle wasn’t present.
Without discussing for now much about the geometry of the problem and how to compute \(\nu\) with respect to time \(t\), we just assume that it is reasonable to approximate \(\nu\) by its first order
Taylor expansion about the instant \(t_0\) when the grazing diffraction happens:\[\nu \approx v(t-t_0).\]
The next step in the analysis is to fit a function of the form\[P_0D(v(t-t_0))\]to the observed data, where \(P_0\), \(v\), and \(t_0\) are unknown parameters representing the unobstructed signal
power, the fringe rate, and the instant of grazing respectively.
To that end, we use a non-linear least squares fit. It is important to choose the initial values for the unknown parameters correctly to avoid falling in a local minimum. We do the following educated
guesses. For \(t_0\) we take the instant when the power achieves its average value. To estimate \(v\) we try to locate the first local minimum and maximum of the diffraction fringes and compute \(v\)
in terms of their temporal spacing. Finally, \(P_0\) is estimated as the mean value of the power after the first local minimum. The time interval over which the fit is performed is selected by hand.
Also, averaging of the data is performed for the first occultation.
After taking these precautions, the fit is rather good, as shown in the figures below.
The fit gives us empirical values for \(t_0\) and \(v\). Let us compare those with the theoretical values obtained by using the tracking files from dslwp_dev as ephemeris and propagating the orbit in
GMAT. This requires us to speak about how \(\nu\) is computed in terms of the geometry, for the case of a point source.
This is treated in this text very succinctly. It states that\[\nu = \pm2\sqrt{\frac{\Delta}{\lambda}},\]where \(\lambda\) is the wavelength and \(\Delta\) is the difference in distances between a
certain “reflected” path and the direct line of sight path. The sign \(\pm\) is chosen according as to whether the direct line of path is blocked or not, as explained above.
In more detail, assume that our transmitter and receiver are at points \(A\) and \(B\) respectively. Consider the point \(R\) in the obstacle that is closest to the line segment joining \(A\) and \(B
\). Then \[\Delta = \|A-R\| + \|R-B\| – \|A-B\|,\] where \(\|P-Q\|\) denotes the distance between the points \(P\) and \(Q\). Thus, \(\Delta\) is the extra distance traveled by a ray that goes from \
(A\) to \(B\) reflecting off \(R\).
As an interesting fact, we have that the time derivative \(d\Delta/dt\) is equal to the difference in Dopplers between the direct signal and the Moonbounce signal (see the Appendix in this post).
However, we are not really interested in \(d\Delta/dt\) but rather in the time derivative of \(\nu\), which involves \(\sqrt{\Delta}\), so it is not easy to relate the fringe rate and the Doppler
In the case of the lunar occultation of DSLWP-B, we compute \(\Delta\) as follows. From GMAT we get the coordinates of the points \(A\) and \(B\) corresponding to DSLWP-B and Dwingeloo, and \(M\),
which corresponds to the centre of the Moon. We compute the distance \(\delta\) between \(M\) and the line passing through \(A\) and \(B\) by computing \[l = \frac{\langle M – B, A – B\rangle}{\|A-B\
|},\]the projection of the vector joining Dwingeloo and the Moon onto the unit line of sight vector pointing from Dwingeloo to DSLWP-B. Then\[\delta = \sqrt{|M-B|^2 – l^2}.\]We note that\[\Delta = |\
delta – r|,\]where \(r\) is the lunar radius, and that the sign of \(\nu\) coincides with the sign of \(\delta – r\).
The instant of grazing diffraction is found by locating the time \(t_0\) for which \(\Delta = 0\). Then, the time derivative \(\frac{d\nu}{dt}(t_0)\) is evaluated numerically as a difference
For each of the four diffractions studied in this post, we obtain the following time differences in seconds between the instant of grazing \(t_0\) obtained from the fit to the measurements and the
GMAT simulation:
[-116.05367306500001, 13.311127206, 31.371431057000002, 41.360679158]
We see that there are errors on the order tens of seconds. This can be explained by errors in the ephemeris. Indeed, it is difficult to estimate the mean anomaly of the orbit accurately, and it is
the first parameter that becomes inaccurate with time, since orbital perturbations always build up in the mean anomaly. An offset in mean anomaly translates directly into a time offset in the instant
of grazing. Determining accurately the instant of grazing is actually a good way to measure the mean anomaly precisely.
It is also interesting to observe the last two time differences, which correspond to the entry and exit of the same occultation on February 15. They have a difference of 10.01s, meaning that the
duration of the occultation had such a difference between the observations and the GMAT prediction. This could also be explained by ephemeris errors, but the situation is not as simple as an offset
in the mean anomaly.
The relative errors in the calculation of the fringe rate \(\frac{d\nu}{dt}(t_0)\) between the observations and the GMAT prediction are shown below in parts per one:
[0.012628611461853678, -0.14849525967852584, 0.03838583537440288, 0.021923209485447237]
We observe errors between 1% and 15%. This means that our model for the geometry of the diffraction explains the observations very well.
The grazing instants determined from the observations are
The fringe rates, in units of 1/s determined from the observations are
[-0.1682628343896966, -1.2532957532402556, -1.7370407299575834 1.6895344971498276]
The computations and plots shown in this post have been done in this Jupyter notebook.
3 comments
1. Fascinating – reminds me of an old experiment “USING SHADOWED STARLIGHT AS A YARDSTICK” { https://archive.org/stream/TheAmateurScientist/Stong-TheAmateurScientist_djvu.txt } describing
diffraction of starlight by the moon
This site uses Akismet to reduce spam. Learn how your comment data is processed. | {"url":"https://destevez.net/2019/04/diffraction-in-dslwp-b-lunar-occultations/","timestamp":"2024-11-07T10:21:26Z","content_type":"text/html","content_length":"62053","record_id":"<urn:uuid:1bef2e86-4c73-4de3-84d9-483f20a12ae1>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00034.warc.gz"} |
MPI-Exp.Dep.1: Abstract
Winkelmann, A., Fadley, C. S., Garcia de Abajo, F. J.
High-energy photoelectron diffraction: Model calculations and future possibilities New Journal of Physics 10, pp 113002/1-22 (2008)
We discuss the theoretical modeling of x-ray photoelectron diffraction (XPD) with hard x-ray excitation at up to 20 keV, using the dynamical theory of electron diffraction to illustrate the
characteristic aspects of the diffraction patterns resulting from such localized emission sources in a multilayer crystal. We show via dynamical calculations for diamond, Si and Fe that the dynamical
theory predicts well the available current data for lower energies around 1 keV, and that the patterns for energies above about 1 keV are dominated by Kikuchi bands, which are created by the
dynamical scattering of electrons from lattice planes. The origin of the fine structure in such bands is discussed from the point of view of atomic positions in the unit cell. The profiles and
positions of the element-specific photoelectron Kikuchi bands are found to be sensitive to lattice distortions (e.g. a 1 % tetragonal distortion) and the position of impurities or dopants with
respect to lattice sites. We also compare the dynamical calculations with results from a cluster model that is more often used to describe lower energy XPD. We conclude that hard XPD (HXPD) should be
capable of providing unique bulk-sensitive structural information for a wide variety of complex materials in future experiments. | {"url":"http://www2.mpi-halle.mpg.de/exp_department_1/publications/abstract/ext/8422/2008/?L=https%3A%2F%2Fwww.fak3web.com","timestamp":"2024-11-02T13:59:02Z","content_type":"application/xhtml+xml","content_length":"10135","record_id":"<urn:uuid:39408eb9-40e4-4a69-8792-6600467eb341>","cc-path":"CC-MAIN-2024-46/segments/1730477027714.37/warc/CC-MAIN-20241102133748-20241102163748-00330.warc.gz"} |
Audiogon Discussion Forum
If you see a plot of freequency v.s. signal when it comes to crossover, at some point which crossover is designed or multiple of such it should have a slope(s).
Since the freequency can't be cut of perfectly vertically there slope defined exponentially.
Expanding just a bit on Marakanetz's response, the slope is the decrease in signal strength per octave. So a first order cross-over has a slope of -3db and it goes up from there -- second is -6,
third is -12, fourth is -24. The decrease begins at a certain frequency and keeps on going from there depending on the specifics of the cross-over.
"Expanding just a bit on Marakanetz's response, the slope is the decrease in signal strength per octave. So a first order cross-over has a slope of -3db and it goes up from there -- second is -6,
third is -12, fourth is -24. The decrease begins at a certain frequency and keeps on going from there depending on the specifics of the cross-over."
Well...not quite. First, 'slope' references the steepness of the plotted falloff of signal strength v. frequency. When someone says 'the slope of the filter', they mean the steepness of the filter
when visualized (= plotted on paper). But that term is rather casual and inexact.
A filter point is defined as that frequency which is 3 decibels (dB--the 'B' is capitalized in the abbreviation because it was someone's name) above or below the zero or reference level. A 1st-order
filter has a 'slope' or steepness of SIX dB per octave. That means that with a perfectly behaving low-pass filter, the signal strength one octave (= double the frequency) further up the frequency
scale is now 9dB below reference level. The signal one octave higher yet (= another doubling of frequency) is another 6dB down, or a total of 15dB, and so on. For example,with a 1st-order low-pass
filter of, say, 100Hz, the signal strength will be down 3dB at 100Hz, 9dB at 200Hz, 15dB at 400Hz, 21dB at 800Hz, etc. A 2nd-order filter has a slope of 12dB per octave; a 3rd-order, 18dB per octave,
and so on.
In a 2nd-order filter at 100Hz, 100Hz is still down 3dB, but 200Hz is down 15dB, 400Hz is down 27dB, etc. With a 3rd-order, 200Hz is down 21dB, 400Hz is down 39dB, etc.
Higher-order (means 'higher than 1st-order') filters get rid of 'unwanted' signal to a greater degree than 1st-order filters; they're often used to increase power-handling capacity of a driver and
hence a system. For instance, if one had a woofer that sounded really poor in the lower midrange because it was slow to respond to faster signals, one could use a 3rd- or 4th-order filter to keep
more of the midrange out of that woofer. However, higher-order filters have increasing amounts of phase error, so there's a price to pay.
I think 1st is 6 db (decibal) an octave roll-off, 2nd is 12, 3rd is 18, and 4th is 24db an octave roll-off. That's the slope at which the signal is rolling-off With 90 degrees phase shift on each one
going up also.
OK, so the second part of my question was..
What should I be listening for to establish the proper slope for my processor?
Please be as straight forward as possible.
Thanks for the help!
M, is this a high- or low-pass filter? Do you have large, full-range speakers or ones with relatively limited bass output? Do the main speakers have plenty of power-handling capability full range or
can you overdrive them at YOUR normal, high listening levels?
My speakers are full range Thiel 2.3's. They are capable of full range bass, but I cross them over at 80Hz to my sub. I find it allows the Thiels to do what they do best.
The slope options available are 6db, 12db, 18db and 24db
Hope this helps.
So you're talking about the high-pass filter to the Thiels?
If so, my 1st response is 'why bother'? Using any of these filters introduces phase errors into the bass and midrange frequencies. Also, it sounds as if you're using one subwoof, so filtering the
bass from the Thiels makes the low- and midbass mono instead of stereo.
My 2nd response is the less 'slope' (and phase errors) the better. If you feel you MUST use a hi-pass filter, use as gentle a one as you can select.
But try running them fullrange and filling just the bottom octave with the subwoof.
Does the Casablanca give you the choice of sending the full signal to the subwoofer channels (i.e. SLOPE OFF)? If so, then why not set the subs to receive the full signal and let the sub's filter let
in what it needs.
But if you want to bypass the Thiels completely, start with the 80 hz frequency at 24 db. This steep roll off may result in a "hole" at 80 hz. Keep going down the slope decade until the sound is
coherent, that is, the sound levels at that frequency are similar between the sub and the speakers. Ideally, they should be within 3db at 60,80 and 100 hz but not more than 6 db. A sound pressure
level meter will greatly help. The bass should sound fast and quickly disappear - not resonating or reverberating.
If you go to 6 db, then the speaker woofer may overrun the sub and you'll get reinforcement (louder, bass boom, fat bass, etc.) at that frequency. Sorry, but this is probably a trial and error
process. Also note that the Thiels have a 6 db slope - but that is with its own drivers and will probably not work so great with a sub at that slope setting.
BTW - make sure your speakers are properly set up before trying out the different slopes or you may not notice a difference.
Ezmeralda, Jeffrey -- thanks for the the clarification on the slope values. My memory doesn't seem to work quite as well as it once did ;-) Next time I'll confirm before I post.
"Ezmeralda, Jeffrey -- thanks for the the clarification on the slope values. My memory doesn't seem to work quite as well as it once did ;-) Next time I'll confirm before I post."
You're welcome. Happens to us all. I had a poor memory when I was young, and I think as I got older, it got worse...but I can't remember. Now at 60, I have too many 'senior moments'.
Thanks guys!!
I figured it out with your help.
System sounds great. | {"url":"https://d2dve11u4nyc18.cloudfront.net/discussions/what-is-quot-slope-quot","timestamp":"2024-11-05T20:07:36Z","content_type":"text/html","content_length":"114791","record_id":"<urn:uuid:62898719-d783-40c3-944b-2cde329467ff>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00388.warc.gz"} |
Dally, Shark & Ruslan workbench
- Electron pulse burst speed is set by the Katcher voltage [recall e = mc2] where e=energy, m=electron mass and c=speed of light, but in this case the velocity is v^2 set by the Katcher high
It is more convenient for the readers to have a ready formula for the electron velocity in vacuum as a function of its kinetic energy, instead of the mere relation between its energy and mass.
= (2E
= E
* 593073.5m/s
= electron's kinetic energy in Electonvolts [eV]
... = electron's velocity in [m/s]
= electron's rest mass in [kg]
= the speed of light (299792458 m/s)
The Newtonian formula above deviates from reality by only 1% for electron energies less than 24kV.
If this is unacceptable or the energies considered are higher, then use the more precise relativistic formula listed below:
= c*(1-(1/(1+(E
The average drift velocity of electrons in air is 100s times slower than in vacuum ...and it doesn't depend only on acceleration voltage.
the square part is bonus...
Why ? | {"url":"https://www.overunityresearch.com/index.php?topic=3926.msg96260","timestamp":"2024-11-09T07:03:24Z","content_type":"application/xhtml+xml","content_length":"92384","record_id":"<urn:uuid:45bdcd2e-69bb-4d20-90c0-0ff820a5bb1e>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00490.warc.gz"} |
The Algorithms (ALGO) cluster studies the design and analysis of algorithms and data structures, one of the core areas within computer science. Research in our cluster ranges from curiosity-driven to
motivated by concrete applications, and from purely theoretical to experimental. In all cases, the goal is to understand the underlying principles of the developed solutions and to formally prove
their properties. Our approaches frequently combine the rigorous methods from algorithmic theory – which give performance guarantees with respect to both the quality of solutions and the running time
of algorithms – with efficient engineering to achieve results of both theoretical and practical significance.
Within the broad field of algorithms, ALGO specializes in various areas:
Computational geometry is the area within algorithms research dealing with the design and analysis of algorithms and data structures for spatial data. It combines clever algorithmic techniques with
beautiful geometric concepts to obtain efficient solutions to algorithmic problems involving spatial data. We study fundamental algorithmic as well as combinatorial problems in this area, with the
aim of developing an algorithmic foundation to tackle geometric problems arising in other domains.
Over the past years the availability of devices that can be used to track moving objects – GPS satellite systems, mobile phones, and more – has increased dramatically, leading to an explosive growth
in movement data. Naturally the goal is not only to track objects but also to extract information from the resulting data. We develop algorithms both for basic analysis tasks, such as clustering, and
to answer questions from domain scientists. Our work encompasses both point trajectory data (cars, animals, …) and moving non-point objects (hurricanes, river networks, …). A particular focus are
models and algorithms for data uncertainty and stability (small changes in the data should lead to small changes in the result).
Computing the similarity between moving curves Proc. 23rd European Symposium on Algorithms (ESA), pp. 928-940, LNCS 9294, 2015.
Topological stability of k-centers. (Work in progress.)
Parameterized complexity offers a rigorous toolset to develop exact algorithms for NP-hard problems. The goal is to develop algorithms which are provably efficient on inputs whose complexity, as
measured by natural parameters, is limited. This gives much needed insights into many computationally hard problems that would be unreachable by the standard toolset. An important subfield is
kernelization, which is the formal study of efficient preprocessing. It develops reduction rules which reduce the size of an input without changing the answer. Applying such reduction rules as a
preprocessing step can speed up algorithms by several orders of magnitude.
Fine-grained complexity of k-OPT in bounded-degree graphs for solving TSP Proc. 27th European Symposium on Algorithms (ESA), pp. 23:1-23:14, 2019.
A Turing kernelization dichotomy for structural parameterizations of F-minor-free deletion Proc. 45th International Workshop on Graph-Theoretic Concepts in Computer Science (WG), pp. 106-119, 2019.
Turing kernelization for finding long paths and cycles in restricted graph classes Journal of Computer and System Sciences (85), pp. 18-37, 2017.
Computing the chromatic number using graph decompositions via matrix rank Theoretical Computer Science (795), pp. 520-539, 2019.
Maps are effective tools for visualizing and communicating spatial data. Algorithms play an important role in automated cartography and geovisualization, for example, in placing elements and
connections, and creating scale-appropriate representations. Our work encompasses the design and computation both of thematic maps and of spatially informed visualizations, that move beyond the
typical geographically accurate maps. A particular focus are schematic geovisualizations where we apply a deliberate, controlled deformation of space to give context and accentuate structure in the
data, while maintaining key spatial aspects.
Big data is everywhere. One increasingly important type of big data is in the form of networks: gene regulatory networks, disease networks, and social networks. Most of these networks are not static:
the data constantly evolves according to some unknown but measurable dynamics. As the network behavior evolves over time, the resulting data becomes too large and comes in too fast to even store in
the computer’s memory, requiring (sketching and sampling-based) algorithms (1) to manipulate the data immediately as it streams by using relatively little memory, or (2) to parallelize the data on a
big number of machines having moderate memory by using few communication rounds.
We explore cross-disciplinary applications of computational geometry to engineering problems motivated by mobile agents. These include path planning and routing of single- and multi-agent systems,
assembly and reconfiguration of modular systems, and coordinated distributed computation for programmable matter. Our goal is to develop an algorithmic foundation that supports the design of
effective solutions for mobile agent systems.
Networks for communication, transportation, finance and energy form the backbone of modern society. Hence, it is important to design and use networks in the most efficient manner. This leads to many
challenging algorithmic questions concerning the design and maintenance of good network structures as well as the performance of processes running on the network. For example, how can we design
networks that are sparse while still providing a relatively short path between every pair of nodes? We study algorithmic questions for abstract networks as well as for networks that are embedded in
geometric spaces.
The increased digitization of cultural heritage artifacts, such as books or manuscripts, as well as the prevalence of social media, create an ever-growing set of highly complex data which humanities
researchers aim to analyze and understand. Recent advances in the accuracy of language analysis technologies, relying on generic language models trained on vast amounts of data, enable automated
analysis of humanities data, but also pose a challenge: language models are essentially black boxes and it is unclear what exactly they learn and how. Our recent work focuses on visual analytics
solutions for data exploration as well as topological analysis of high dimensional semantic spaces. | {"url":"https://algo.win.tue.nl/","timestamp":"2024-11-10T14:30:28Z","content_type":"text/html","content_length":"30704","record_id":"<urn:uuid:56cdecb4-c01a-4d0a-aaea-cb987a69008a>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00438.warc.gz"} |
[QSMS Seminar 2021.09.30] Schur-Weyl duality, old and new & Quantum symmetric pairs and Schur-type dualities
Date : September 30, AM 10:30~11:30, 11:40~12:40 in Seoul Time
AM 09:30~10:30, 10:40~11:40 in Taiwan time
Place : Zoom online
Speaker : Chun-Ju Lai (Academia Sinica)
Title & Abstract
1. Schur-Weyl duality, old and new
In the beginning stage of representation theory, Schur's 1901 thesis on polynomial and rational representations of the general linear groups has played an influential role. A double centralizer
property with the symmetric groups (i.e., Weyl groups of type A), now known as the Schur-Weyl duality, has been utilized to study the representation theory of the general linear Lie algebra and of
the symmetric groups simultaneously. Interests in Schur's ideas still continue nowadays, in a modern setting. In this talk I will focus on the quantum Schur algebras of type A and go over the
connection with the quantum groups via coordinate construction, with the Hecke algebras, and with rational Cherednik algebras.
2. Quantum symmetric pairs and Schur-type dualities
There are various (distinct) generalizations of the theory regarding Schur dualities beyond type A. For finite and affine classical types, the quantum symmetric pairs arise naturally from the double
centralizer properties with Hecke algebras of finite and affine classical types. I will talk about a type B/C generalization joint with Nakano and Xiang, as well as an affine type C generalization
joint with Fan, Li, Luo, Wang and Watanabe. | {"url":"https://qsms.math.snu.ac.kr/index.php?mid=board_sjXR83&sort_index=title&order_type=desc&page=6&l=en&document_srl=1776","timestamp":"2024-11-11T16:59:25Z","content_type":"text/html","content_length":"67686","record_id":"<urn:uuid:e22d7360-fcb4-4a37-b7a6-3daeaf0c4107>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00555.warc.gz"} |
Answers related to Statistical optics - Given the | SolutionInn
Questions and Answers of Statistical Optics
• Show that any two statistically independent random variables have a correlation coefficient that is zero.
• Given the random variables\[ \begin{aligned} & U=\cos \Theta \\ & V=\sin \Theta \end{aligned} \]with\[ p_{\Theta}(\theta)= \begin{cases}1 / \pi & -\frac{\pi}{2}
• Prove the following properties of characteristic functions:(a) Every characteristic function has value unity at the origin.(b) The second-order characteristic function \(\mathbf{M}_{U
• A random variable \(U\) is uniformly distributed on the interval \((-A, A)\) and its probability density function is zero otherwise.(a) Find an expression for \(A\) in terms of the variance
• Show that if the moment \(\overline{u^{n} v^{m}}\), if it exists, can be found from the joint characteristic function \(\mathbf{M}\left(\omega_{U}, \omega_{V}\right)\) by the formula\[
• Find the probability density function of the random variable \(Z\) in terms of the known density function \(p_{U}(u)\) when(a) \(z=a u+b\), where \(a\) and \(b\) are real constants.(b)
• Using the method given in Eq. (2.5-25), find the joint probability density function \(p_{W Z}(w, z)\) when\[ \begin{align*} & w=u^{2} \\ & z=u+v \tag{p.2-1} \end{align*} \]and \(p_{U V}(u,
• Consider two independent, identically distributed random variables \(\Theta_{1}\) and \(\Theta_{2}\), each of which obeys a probability density function\[ p_{\Theta}(\theta)=\left\{\begin{array}
• Consider two identically distributed and independent random variables \(X_{1}\) and \(X_{2}\) with common probability density function \(p_{X}(x)\). Show that the probability density function of
• Consider the random phasor sum of Section 2.9 with the single change that the phases \(\phi_{k}\) are uniformly distributed on \((-\pi / 2, \pi / 2)\). Find the following quantities: \(\bar{r},
• Let the random variables \(U_{1}\) and \(U_{2}\) be jointly Gaussian, with zero means, equal variances, and correlation coefficient \(ho eq 0\). Consider the new random variables \(V_{1}\) and
• Consider \(n\) independent random variables \(U_{1}, U_{2}, \ldots, U_{n}\), each of which obeys a Cauchy density function,\[ p_{U}(u)=\frac{1}{\pi
• A certain computer contains a random number generator that generates random numbers with uniform relative frequencies (or probability density) on the interval \((0,1)\). Suppose, however, that it
• Let the real-valued random process U(t)U(t) be defined byU(t)=Acos(2πvt−Φ)U(t)=Acos(2πvt−Φ)where vv is a known constant, ΦΦ is a random variable uniformly distributed on
• Consider the random process \(U(t)=A\), where \(A\) is a random variable uniformly distributed on \((-1,1)\).(a) Sketch some sample functions of this process.(b) Find the time autocorrelation
• An ergodic real-valued random process \(U(t)\) with autocorrelation function \(\Gamma_{U}(\tau)=\left(N_{0} / 2\right) \delta(\tau)\) is applied to the input of a linear, time-invariant filter
• Consider the random process \(Z(t)=U \cos \pi t\), where \(U\) is a random variable with probability density function\[ p_{U}(u)=\frac{1}{\sqrt{2 \pi}} \exp \left(-\frac{u^{2}}{2}\right) \](a)
• Find the statistical autocorrelation function of the random process\[ U(t)=a_{1} \cos \left(2 \pi v_{1} t-\Phi_{1}\right)+a_{2} \cos \left(2 \pi v_{2} t-\Phi_{2}\right) \]where \(a_{1}, a_{2},
• A certain random process \(U(t)\) takes on equally probable values +1 or 0 with changes occurring randomly in time. The probability that \(n\) changes occur in time \(\tau\) is known to be\[
• A certain random process \(U(t)\) consists of a sum of (possibly overlapping) pulses of the form \(p\left(t-t_{k}\right)=\operatorname{rect}\left(\left(t-t_{k}\right) / b\right)\) occurring with
• Assuming that the random process \(U(t)\) is wide-sense stationary, with mean \(\bar{u}\) and variance \(\sigma^{2}\), which of the following functions represent possible structure functions for
• Prove that the Hilbert transform of the Hilbert transform of a function \(u(t)\) is \(-u(t)\), up to a possible additive constant.
• (a) Show that for an analytic signal representation of a real-valued, narrowband random process, the autocorrelation function of the resulting complex process \(\mathbf{U}(t)\) (assumed
• Let \(\mathbf{V}(t)\) be a linearly filtered complex-valued, wide-sense stationary random process with sample functions given by\[ \mathbf{v}(t)=\int_{-\infty}^{\infty} \mathbf{h}(t-\tau)
• Find the power spectral density of a doubly stochastic Poisson impulse process having a rate process \(\Lambda(t)\) described by\[ \Lambda(t)=\lambda_{0}[1+\cos (2 \pi \bar{v} t+\Phi)] \]where
• Starting with Eq. (4.1-10), show that if \(\Delta v \ll \bar{v}\) and \(r \ll 2 c / \Delta v\) for all \(P_{1}\), then\[ \mathbf{u}\left(P_{0}, t\right) \approx \iint_{\Sigma} \frac{e^{j 2 \pi(r
• (a) Show that the Jones matrix of a polarization analyzer set at angle \(\alpha\) to the \(X\)-axis is given by\[ \underline{\mathbf{L}}(\alpha)=\left[\begin{array}{cc} \cos ^{2} \alpha & \sin
• By finding the trace of the appropriate coherency matrix, show that the average intensity transmitted by a polarization analyzer set at \(+45^{\circ}\) to the \(X\)-axis can be expressed as\[
• By finding the trace of the appropriate coherency matrix, show that the average intensity transmitted by a quarter-wave plate followed by a polarization analyzer set at \(+45^{\circ}\) to the
• Consider a light wave that has \(X\) - and \(Y\)-polarization components of its electric field at point \(P\) given by\[ \begin{aligned} & \mathbf{u}_{X}(t)=\exp \left[-j 2
• Show that the characteristic function of the intensity of polarized thermal light is given by\[ \mathbf{M}_{I}(\omega)=\frac{1}{1-j \omega \bar{I}} \]
• Show that the standard deviation \(\sigma_{I}\) of the instantaneous intensity of partially polarized thermal light is\[ \sigma_{I}=\sqrt{\frac{1+\mathcal{P}^{2}}{2}} \bar{I} \]
• When light falls on a balanced detector (i.e., a detector pair, with one detector output subtracted from the other's output), the output current is proportional to the difference of the
• Let the field emitted by a multimode laser oscillating in \(N\) equals strength and independent modes be represented by\[ \mathbf{u}(t)=\sum_{n=1}^{N} \exp \left[-j\left(2 \pi v_{n}
• Consider a single-mode laser emitting light described by the analytic signal\[ \mathbf{u}(t)=\exp (-j[2 \pi \bar{v} t-\theta(t)]) \](a) Assuming that \(\Delta
• Show that the second moment \(\overline{I^{2}}\) of the intensity of a wave is not equal to the fourth moment \(\overline{\left[u^{(r)}\right]^{4}}\) of the real amplitude of that wave, the
• An idealized model of the normalized power spectral density of a gas laser oscillating in \(N\) equal-intensity axial modes is\[ \widehat{\mathcal{G}}(v)=\frac{1}{N} \sum_{n=-(N-1) / 2}^{(N-1) /
• The gas mixture in a helium-neon laser (end mirrors removed) emits light at \(633 \mathrm{~nm}\) with a Doppler-broadened spectral width of about \(1.5 \times 10^{9} \mathrm{~Hz}\). Calculate the
• (Lloyd's mirror) A point source of narrowband light is placed at distance \(s\) above a perfectly reflecting planar mirror. At distance \(d\) away, the interference fringes are observed on a
• Consider a Young's interference experiment performed with broadband light.(a) Show that the field incident on the observing screen can be expressed as\[ \mathbf{u}(Q, t)=\tilde{K}_{1} \frac{d}{d
• As shown in Fig. 5-7p, a positive lens with focal length \(f\) is placed in contact with a pinhole screen in a Young's experiment. The lens and pinhole plane are at distance \(z_{1}\) from the
• Consider a Michelson interferometer that is used in a Fourier spectroscopy experiment. To obtain high resolution in the computed spectrum, the interferogram must be measured out to large
• In the Young's interference experiment shown in Fig. 5-5-9pp, the normalized power spectral density \(\widehat{\mathcal{G}}(v)\) of the light is measured at point \(Q\) by a spectrometer. The
• A monochromatic, unit-amplitude plane wave falls normally on a "sandwich" of two diffusers. The diffusers are moving in opposite directions with equal speeds, as shown in Fig. 5-5-10pp. The
• Show that under quasimonochromatic conditions, mutual intensity \(\mathbf{J}\left(P_{1}, P_{2}\right)\) obeys a pair of Helmholtz equations\[ \begin{aligned} & abla_{1}^{2} \mathbf{J}\left(P_{1},
• Consider an incoherent source radiating with spatial intensity distribution \(I(\xi, \eta)\).(a) Using the Van Cittert-Zernike theorem, show that the coherence area of the light (mean wavelength
• A Young's interference experiment is performed in the geometry shown in Fig. 5-14p. The pinholes are circular and have finite diameter \(\delta\) and spacing \(s\). The source has bandwidth \(\
• The sun subtends an angle of about \(32 \mathrm{~min}\) of arc ( 0.0093 radians) on Earth. Assuming a mean wavelength of \(550 \mathrm{~nm}\), calculate the coherence diameter and coherence area
• Show that for quasimonochromatic, stationary thermal light, the fourth-order coherence function\[ \boldsymbol{\Gamma}_{1234}\left(t_{1}, t_{2}, t_{3}, t_{4}\right)=E\left[\mathbf{u}\left(P_{1},
• The output of a single-mode, well-stabilized laser is passed through a spatially distributed phase modulator (or a phase-only spatial light modulator that is changing with time). The field
• Show that for light with a Lorentzian spectral profile, the parameter \(\mathcal{M}\) is given by\[ \mathcal{M}=\left[\frac{\tau_{c}}{T}+\frac{1}{2}\left(\frac{\tau_{c}}{T}\right)^{2}\left(e^{-2
T /
• Examination of Fig. 6.5 shows that a relatively abrupt threshold in values of \(\tilde{\lambda}_{n}\) occurs as \(n\) is varied. In particular, as a rough approximation,\[ \tilde{\lambda}_{n} \
• Suppose that we wish to know the standard deviation of the phase of \(\mathbf{J}_{12}(T)\) under the condition that the measurement time \(T\) is sufficiently long compared with the correlation
• Under the same conditions described in the previous problem, the fluctuations of the length of \(\mathbf{J}_{12}(T)\) are caused mainly by the fluctuations in the real part of the noise. Making
• Compare the rms signal-to-(self) noise ratios achievable using amplitude interferometry and intensity interferometry to measure \(\mu_{12}\) under the assumption that \(T / \tau_{c} \gg 1 /
• Find what modification of Eq. (6.3-32) must be made if the light incident on the detectors is unpolarized thermal light. SM rms M/2 1+2 B/b - b 4B (6.3-32)
• Given that the condenser lens of Fig. 7.4 is circular with diameter \(D\), specify the diameter required of a circular incoherent source to assure that the approximation of Eq. (7.2-7) is valid.
• In the optical system shown in Fig. 7-4p, a square incoherent source ( \(L \times L\) meters) lies in the source plane. The object consists of two pinholes separated in the \(\xi\) direction by
• A partially polarized thermal light wave is incident on a photodetector. The total incident integrated intensity can be regarded to consist of two statistically independent components, \(W_{1}\)
• Light from a partially polarized pseudothermal source is found to have a coherence matrix of the form\[ \underline{\mathbf{J}}=\bar{I}\left[\begin{array}{ll} 1 / 2 & -1 / 6 \\ -1 / 6 & 1 / 2
• Assuming factorability of the complex degree of coherence of polarized thermal light, demonstrate that when the photodetector area is much larger than the coherence area of the incident light,
• Given the assumption that the energy levels of an harmonic oscillator can take on only the values \(n h v\) and given the Maxwell-Boltzmann distribution of occupation numbers, show that the | {"url":"https://www.solutioninn.com/study-help/life-sciences-statistical-optics","timestamp":"2024-11-13T03:12:49Z","content_type":"text/html","content_length":"80207","record_id":"<urn:uuid:eb5410c3-c2bc-4c09-8ea7-00d1b00d3940>","cc-path":"CC-MAIN-2024-46/segments/1730477028303.91/warc/CC-MAIN-20241113004258-20241113034258-00360.warc.gz"} |
Category Archives: Uncategorized
My second column appears in the May 2019 issue of the Notices of the American Mathematical Society. The article talks about reviewers – the thousands of people who put the “Reviews” in “Mathematical
Reviews”! Besides giving some general information, there … Continue reading
A recent installment of the My Favorite Theorem podcast by Kevin Knudson and Evelyn Lamb features Ursula Whitcher, who is an Associate Editor at Mathematical Reviews. Listen to the interview here.
Laure Saint-Raymond is a mathematician working in partial differential equations, fluid mechanics, and statistical mechanics. She is a professor at l’École Normale Supérieure de Paris and the
Université Pierre et Marie Curie (also known as Paris VI). In 2013, she became … Continue reading
Jonathan Borwein passed away on August 1st. He was a prolific mathematician, with 427 publications as of this writing. He was also quite broad, publishing in number theory, operations research,
calculus of variations, and many other subjects. Many people knew … Continue reading
Over the course of Mathematical Reviews’ 75-year history, there have been many famous researchers who were also active reviewers. As pointed out by Norman Richert in his article Mathematical Reviews
Celebrates 75 Years, the first issue of Mathematical Reviews (in January … Continue reading
Earlier, I posted an announcement that Mathematical Reviews is looking to hire a new Associate Editor to start in spring or summer 2016. This is the re-post I promised. And now everyone knows that
this is an opportunity to work in a … Continue reading | {"url":"https://blogs.ams.org/beyondreviews/category/uncategorized/","timestamp":"2024-11-04T21:49:37Z","content_type":"text/html","content_length":"51881","record_id":"<urn:uuid:46fafc88-cf43-4c7e-9158-86ef42e7093b>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.16/warc/CC-MAIN-20241104194528-20241104224528-00468.warc.gz"} |
Day trading success rate - Breaking Down Finance
Day trading success rate
The assessment of the predictability of a (day) trading system is essential in designing a successful trading rule. This is not only because higher success rates lead to higher portfolio
returns. Also the decrease the variance of the performance is valuable. This can be very valuable in highly volatile markets.
Day trading success rate formula
The assessment of significant predicitability can be done by using a normal approximation of the binomial distribution to assess significant difference between the proportions of 2 categories. First,
you need to define the success rate. This is simply the amount of successes registered, S, divided by the total amount of recordings N:
Secondly, the standard deviation of the proportion can be estimated based on the following formula:
Since we are only interested in whether the system is better than a rondom 50% prediction 1-sided confidence intervals are used. The 1-sided confidence interval is calculated by using the
observed success rate, the estimated standard deviation and the z-score on a certain significance level α.
Portfolio success rate example
Consider the following day trading system which tries to beat the market. Over the past 100 days, the trading system realized a higher return than the market in 63 out of 100 cases. In 37 cases, the
return was lower. The million dollar question which can be asked is: is this trading system based on the recorded data able to predict the direction better than randomly going long or short. In other
words, is the probability of success, 63% significantly larger than 50%?
Portfolio success rate interpretation
Using these confidence intervals, it is possible to know whether the success rate of a portfolio is significantly higher than 50%. In other words, it is possible to assess whether a trading system
has significant predictive ability. This is simply done by assessing whether 50% lies in the range of the confidence interval for a given significance level. If it is not, than this means that a
trading system has predictive ability.
Assessing the predictability of a trading system is essential in the implementation of a sound trading strategy. This can be assessed by using a normal approximation of the binomial distribution to
assess difference in proportions between to categories. Trading systems with higher predictability can have both higher and less volatile returns.
Want to have an implementation in Excel? Download the Excel file: Day trading success rate | {"url":"https://breakingdownfinance.com/finance-topics/performance-measurement/day-trading-success-rate/","timestamp":"2024-11-09T13:51:01Z","content_type":"text/html","content_length":"239542","record_id":"<urn:uuid:4afd6551-340a-4b20-aec1-0247f8de324d>","cc-path":"CC-MAIN-2024-46/segments/1730477028118.93/warc/CC-MAIN-20241109120425-20241109150425-00619.warc.gz"} |
The possessive case 2
Nouns and pronouns (me, you, her, him, it, them, and us, to name a few) used
in the
objective case
function as direct objects, indirect objects, and objects
of the preposition.
The direct object is a noun or pronoun that answers the question ‘‘who?’’ or
‘‘what?’’ after an action verb.
➲ You asked me an interesting question. (What did you ask me?—an interesting
question. Thus, question is the direct object.)
➲ The dog drank the water and the lemonade. (What did the dog drink?—
the water and the lemonade. Thus, water and lemonade are the compound
direct objects.)
The indirect object is a noun or pronoun that answers the question ‘‘for
whom?’’ or ‘‘to whom?’’ after an action verb. If a sentence includes an indirect
object, itmust also have a direct object.
➲ George brought his mom some groceries. (Mom is the indirect object,
and groceries is the direct object.)
➲ We gave her and him a new car. (The two pronouns, her and him, answer
the question ‘‘to whom?’’ did we give a new car. Therefore, her and him
are the compound indirect objects, and car is the direct object.)
The object of the preposition is a noun or pronoun that usually ends the
phrase begun by the preposition.
➲ Sherry walked into the cafeteria. (The prepositional phrase, into the cafeteria,
includes the object of the preposition, cafeteria.)
➲ They sat beside her and me. (The prepositional phrase, beside her and me,
includes the compound objects of the preposition, her and me.) | {"url":"https://examaxe.com/English-Grammar/674","timestamp":"2024-11-05T17:14:59Z","content_type":"text/html","content_length":"13700","record_id":"<urn:uuid:784122b9-5307-472c-87de-352df7df9a53>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00657.warc.gz"} |
How do you solve 14-5x-x^2<0? | HIX Tutor
How do you solve # 14-5x-x^2<0#?
Answer 1
Just to make things easier, we multiply both sides by #-1# and flip the inequality sign: #x^2+5x-14>0#. This way the coefficient before #x^2# is positive. (Note: this is not a necessary step.)
We factorize the expression: #x^2+5x-14=(x+7)(x-2)>0# to find the roots: #x=-7# or #2#.
These are the only two possible points where the expression can change signs. Thus, there are three regions that have the same sign:
Since there are no repeated roots, the signs of the region alternate. We substitute an arbitrary number in the second region to find the sign: #x=0#. Then #x^2+5x-14=0^2+5*0-14=-14<0#. Thus, the
second region is negative.
Therefore, the first and third regions are positive (you can use substitution to be sure). Looking at the inequality, #x^2+5x-14>0#, we notice that we need to find regions where the value is
The solution is the first and third regions: #x<-7 uu x>2#.
Sign up to view the whole answer
By signing up, you agree to our Terms of Service and Privacy Policy
Answer 2
To solve ( 14 - 5x - x^2 < 0 ):
1. Rearrange the equation to set it equal to zero: ( -x^2 - 5x + 14 < 0 ).
2. Factor the quadratic expression if possible.
3. Find the critical points by setting the expression equal to zero.
4. Use test points in each interval determined by the critical points to determine where the inequality holds true.
5. Write the solution set based on the intervals where the inequality is satisfied.
Alternatively, you can use the quadratic formula to find the roots of the quadratic equation and then analyze the sign of the expression within the intervals determined by the roots.
Sign up to view the whole answer
By signing up, you agree to our Terms of Service and Privacy Policy
Answer from HIX Tutor
When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some
When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some
When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some
When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some
Not the question you need?
HIX Tutor
Solve ANY homework problem with a smart AI
• 98% accuracy study help
• Covers math, physics, chemistry, biology, and more
• Step-by-step, in-depth guides
• Readily available 24/7 | {"url":"https://tutor.hix.ai/question/how-do-you-solve-14-5x-x-2-0-8f9af9386e","timestamp":"2024-11-02T21:20:17Z","content_type":"text/html","content_length":"571952","record_id":"<urn:uuid:17f89b60-b60d-40ec-bfdd-b8996f98f0ba>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00230.warc.gz"} |
Discrete and Algorithmic Mathematics
New trends in arithmetic combinatorics and related fields, Instituto de Matemáticas de la Universidad de Granada, Granada, June 1-5, 2025.
Arithmetic combinatorics has kept developing and diversifying its topics to the present day. In particular, the last five years, and especially 2023, have seen outstanding advances. This workshop
will bring together many world-leading researchers who are working at the cutting edge of these latest developments across several fields.
The 8th Workshop on Design Theory, Hadamard Matrices and Applications (Hadamard 2025), Sevilla, 26-30 May, 2025.
The purpose of the workshop is to bring together researchers and students interested in design theory, especially as it relates to Hadamard matrices and their applications, as well as in related
areas in coding theory, association schemes, sequences, finite geometry, difference sets, quantum information theory, theoretical physics and computer security. The audiences would learn about the
latest developments in these areas, discuss the latest findings, take stock of what remains to be done on classical problems and explore different visions for setting the direction for future work.
Hadamard 2025 is the eighth in a series of conferences.
XIV Encuentro Andaluz de Matemática Discreta, Universidad de Almería, January 22-24, 2025.
The Department of Mathematics of the University of Almeria, the research group TIC 146 Supercomputación-Algoritmos (SAL) and the Center for Development and Transfer of Mathematical Research to
Industry (CDTIME), organize the XIV Andalusian Meeting of Discrete Mathematics, a space to bring together different research groups from Andalusia, Spain and the rest of the world working in the
field of Discrete Mathematics.
BGSMath Advanced Course: Threshold phenomena in random structures, Centre de Recerca Matemàtica - Universitat Politècnica de Catalunya, Barcelona, November 11-22, 2024.
The course will present the notion of threshold, a phenomenon consisting on a sudden behavioural change in the random structure produced by a small increase of the modelling parameter (temperature,
energy, probability, … ). We will survey the most recent advances in the area as well as explore applications in other fields such as Sociology, Biology, Statistics and Computer Science. The last
part of the course will be devoted to covering two very recent breakthroughs in the area: (1) the Kahn-Kalai conjecture, now Park-Pham theorem, asserting that thresholds can not differ too much from
expected thresholds [10]. We plan to give a full proof of it; and (2) The satisfiability conjecture, now the Ding-Sly-Sun theorem, that pins down the thresh- old for a random k-SAT formula being
satisfiable. The course is aimed at master and doctoral students, and at postdoctoral researchers in the areas of Mathematics, Computer Science and Physics, but is also open to all early career
researchers and faculty in other areas that are interested in the fundamentals of threshold phenomena and its applications to different disciplines.
Interplays between Algebra, Combinatorics and Proof Formalization, Centre de Recerca Matemàtica, Barcelona, July 15 , 2024.
Exploratory workshop on popular formal logic systems used to formalize mathematical proofs, in the context of Algebra and Combinatorics. The workshop will explore the recent advances in between the
areas, pionnered by June Huh. This is a collaborative initiative between the Centre de recherches mathématiques (CRM) in Montreal and the Centre de Recerca Matemàtica in Barcelona.
Combinatorial Designs and Codes (CODESCO’24), Sevilla, July 8-12, 2024.
The goal of the conference is to bring together researchers interested in combinatorial design theory, coding theory, graph theory, algebraic combinatorics and finite geometry, with particular
emphasis on establishing new synergies among them, and new applications to other fields and to the real world, including artificial intelligence, communication networks, cryptography and machine
learning. The conference is a satellite event of the 9th European Congress of Mathematics.
International Meeting on Numerical Semigroups 2024 (IMNS2024), Jerez de la Frontera, 8-12 July, 2024.
This is the eighth meeting of a series that started in 2008, with the aim of gathering researchers working on numerical semigroups, or related subjects, from different points of view: Factorization
Theory, Algebraic Geometry,Combinatorics, Commutative Algebra, Coding Theory, Number Theory, Semigroup Theory. This is a satellite event of the 9th European Congress of Mathematics.
Discrete Mathematics Days 2024, Alcalá de Henares, July 3-5, 2024.
The Discrete Mathematics Days (DMD 2024) will be held on July 3-5, 2024, at the Universidad de Alcalá, in Alcalá de Henares (Spain). The main focus of this international conference is on current
topics in Discrete Mathematics. Known as Discrete Mathematics Days since 2016, this conference inherits the tradition of the Jornadas de Matemática Discreta y Algorítmica (JMDA), the Spanish biennial
meeting on Discrete Mathematics started in 1998. The program consists consist of four plenary talks, a number of shorter contributed talks in two parallel sessions, and a poster session. The plenary
speakers are: Julia Böttcher (London School of Economics), Irit Dinur (Weizmann Institute), Arnau Padrol (Universitat de Barcelona) and Alex Scott (University of Oxford). The Ramon Llull prize on
Discrete Mathematics will be awarded during the conference. The conference is a satellite event of the 9th European Congress of Mathematics.
VI Encuentro Conjunto RSME-SMM, Valencia, July 1-5, 2024.
Two sessions on Discrete and Algorithmic Mathematics will be held within the event, sponsored by the network. The first one, “Gráficas y gráficos” is organised by K. Knauer, A. Montejano and G.
Perarnau and the, second one “Geometría discreta y matroides”, is organised by K. Knauer, L. Martinez and E. Roldan.
Congreso Biennal RSME, Pamplona, January 22-26, 2024.
Special Session devoted to Discrete and Algorithmic Mathematics within the Congreso Bienal de la Real Sociedad Matemática Española (RSME) 2024. The invited speakers of the session are: Maria
Bras-Amorós (U. Rovira i Virgili), Simon Briend (U. Pompeu Fabra), Pablo Candela (U. Autònoma Madrid), Pedro A. García-Sánchez (U. Granada), Delia Garijo (U. Sevilla), María Ángeles Hernández Cifre
(U. Murcia), Kolja Knauer (U. Barcelona), Julio J. Moyano-Fernández (U. Jaume I), Marc Noy (U. Politècnica Catalunya), Mariana Rosas-Ribeiro (U. Rovira i Virgili), Francisco Santos (U. Cantabria).
The session is organised by Delia Garijo, Marc Noy and Francisco Santos.
Santander Workshop on Geometric and Algebraic Combinatorics, Santander, January 15-19, 2024.
The Santander Workshop on Geometric and Algebraic Combinatorics will have a hybrid format of mini-courses and invited lectures, plus contributions from participants. The workshop will take place at
Universidad de Cantabria in Santander, Spain. It is partially funded by the Spanish Research Agency and the Bank of Santander.
Barcelona Math Days, Barcelona, November 2-3,2023. Special Session devoted to Discrete Mathematics in the third edition of the trienal conference of the Societat Catalana de Matemàtiques. Kolja
Knauer (UB) is organising the session, and the speakers are: Felipe Rincón (Queen Mary), Vincent Pilaud (UB), Fiona Skerman (Uppsala U.), Amanda Montejano (U. Nacional Autónoma de México), Alexandra
Wesolek (Technische U. BerlIn) and Ignacio García Marco (U. De la Laguna).
Encuentro andaluz de Matemática Discreta 2024, Cádiz, July 5-7, 2023.
The 13th edition of the Encuentro Andaluz de Matemática Discreta takes place in Cádiz, organized by the Departments of Mathemtaics and Statistics and OR.
XX Spanish Meeting on Computational Geometry, Santiago de Compostela, July 3-5, 2023
The XX Spanish Meeting on Computational Geometry took place from July 3 to 5, 2023, at Universidad de Santiago de Compostela, Santiago de Compostela, Spain. The core of this international conference
is composed by the most current issues in the field of Discrete and Computational Geometry, both in its theoretical and applied aspects. The EGC series began in 1990.
Discrete Mathematics Days 2022, Santander, July 4-6, 2022.
The Discrete Mathematics Days (DMD20/22) will be held on July 4-6, 2022, at Facultad de Ciencias of the Universidad de Cantabria (Santander, Spain). The main focus of this international conference is
on current topics in Discrete Mathematics. The previous editions were held in Sevilla in 2018 and in Barcelona in 2016, inheriting the tradition of the Jornadas de Matemática Discreta y Algorítmica
(JMDA), the Spanish biennial meeting (since 1998) on Discrete Mathematics. | {"url":"https://dam-network.github.io/activities/","timestamp":"2024-11-09T00:41:06Z","content_type":"text/html","content_length":"17232","record_id":"<urn:uuid:4d905f12-7e91-4569-a235-b4852074486e>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00719.warc.gz"} |
College Physics Problem Genesis Mission
• Thread starter aamartineng17
• Start date
In summary, the question asks for the average rate at which the 210-kg Genesis Mission Capsule did work on the desert when it crashed at a speed of 311 km/h and buried itself 81.0 cm deep. The
provided equation for solving this type of problem is W = F*d = m * a *d, and additional help is suggested in the form of the equation v2=v20+2aΔx.
Physics Question
When the 210-Kg Genesis Mission Capsule Crashed with a speed of 311 Km/h, it buried itself 81.0 cm deep in the desert floor.
Assuming constant acceleration during the crash, at what average rate did the capsule do work on the desert?
Given: m = 210 kg Find W =?
vo = 311 km /h = 86.4 m/s
x = 81.0 cm = 0.81 m
I know the following equation: W = F*d = m * a *d
*Note: I am stuck on this Solving this type of Physics Problem, could somebody please provide me some insight on how to calculate this problem. I would really appreciate any help you can give me.
Try adding v^2=v^2[0]+2aΔx to your arsenal and attack it again ...
Last edited:
FAQ: College Physics Problem Genesis Mission
What is the "College Physics Problem Genesis Mission"?
The "College Physics Problem Genesis Mission" is a research project that aims to develop a comprehensive database of college-level physics problems that can be used by students and educators for
learning and teaching purposes.
Who is involved in the "College Physics Problem Genesis Mission"?
The "College Physics Problem Genesis Mission" is a collaborative effort between a team of scientists, educators, and experts in the field of physics and education. It is supported by various
institutions, including universities and scientific organizations.
Why is the "College Physics Problem Genesis Mission" important?
The "College Physics Problem Genesis Mission" is important because it provides a valuable resource for students and educators to access a wide range of physics problems that are relevant and
challenging. It also promotes collaboration and innovation in the field of physics education.
How will the "College Physics Problem Genesis Mission" benefit students?
The "College Physics Problem Genesis Mission" will benefit students by providing them with a diverse selection of physics problems to practice and enhance their understanding of the subject. It also
encourages critical thinking and problem-solving skills.
How can educators use the "College Physics Problem Genesis Mission"?
Educators can use the "College Physics Problem Genesis Mission" as a tool to supplement their teaching materials and create engaging and challenging physics problems for their students. They can also
use it as a source of inspiration for developing new teaching strategies and approaches. | {"url":"https://www.physicsforums.com/threads/college-physics-problem-genesis-mission.583607/","timestamp":"2024-11-03T08:53:50Z","content_type":"text/html","content_length":"74690","record_id":"<urn:uuid:8d82ac1a-f3e2-4276-999a-560396262756>","cc-path":"CC-MAIN-2024-46/segments/1730477027774.6/warc/CC-MAIN-20241103083929-20241103113929-00445.warc.gz"} |
Income inequality falls in 2018 - The Right Facts
• The Gini index, a measure of income inequality, fell 0.7 percent in 2018.
• This indicates that the level of wealth inequality has declined.
Economist Corner:
The Gini index is a statistical measure of income inequality where 0 indicates all households having the same share of income and 1 indicates one household has all the income. Thus, a reduction in
the Gini index indicates that the distribution of income between households has become more equal from 2017 to 2018. | {"url":"https://therightfacts.org/2019/11/12/income-inequality-falls-in-2018/","timestamp":"2024-11-09T06:50:13Z","content_type":"text/html","content_length":"51830","record_id":"<urn:uuid:45aeefd2-cebf-4877-be07-cbd01e22f276>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00548.warc.gz"} |
Cosmic Embryo #1: My Erdös Number Is 2i
Icon for Cosmic Embryo: Erupting star V838 Monocerotis
”On 2010-09-26, at 5:01 AM, Frithjof A.S. Sterrenburg wrote:
Dear Richard!
Sorry to hear you have been in the claws of the barber-surgeons. Hope everything is well now!… As for your itinerant existence, you begin to resemble the mathematician Erdös! Cheers, boy, get well
My Erdös Number is 2i where i is the imaginary number: √-1.
This is because I post-doc'ed with Stanislaw Ulam 1967-68, who published with Paul Erdös (Erdös&Ulam, 1968), but I didn’t see eye to eye with Ulam at all, and thus published nothing with him. Erdös
was an itinerant mathematician who wandered the world doing mathematics with colleagues who would house and feed him in exchange (Hoffman, 1998). A great cloud of mathematicians centered on Erdös has
formed in ether space (Grossman, Ion&De Castro, 2010), each counting their Erdös number as the minimal published paper distance to the wandering master (De Castro&Grossman, 1999).
I have, indeed, become an itinerant theoretical biologist. As I like to say, I have two scientific careers, embryology and computed tomography, and one scientific hobby: diatoms. I also occasionally
plunge into social issues such as HIV/AIDS, the science granting system, sustainable energy, and refurbishing the university libraries of Afghanistan with up to date medical
and other books with a network of volunteers called Books With Wings.
Frithjof Sterrenburg is a diatomist of the long tradition of published “amateurs”, meaning that he doesn’t get a paycheck for his fine academic work (“Gentlemen diatomists of independent means” in:
(Gordon et al., 2009)). But he stays put in Heiloo, The Netherlands with wife Jos while I wander North America with my colleague/wife Natalie, our two guard dogs Fred and Trusty, and one crazy cat
Klinger, in our 28’ Keystone Passport trailer, pulled by our Ford pickup truck
with an inverted Grumman canoe on its Leer cap, a rough approximation to the flying amphibious turtledove submarine proposed in the late 1950s for the military in Mad Magazine. These adventures are
starting to bear fruit, and thus on the prodding of David Deamer, I have decided to begin writing this blog, Cosmic Embryo.
Figure 1.1. Our “wheel house”, named such by our grandchildren, parked here in Panacea, Florida.
So what went wrong between me and Ulam (who had I learned might be the “real father” of the H-bomb (Gorelik, 2009))? It’s actually rather simple. I was a young man then, with a PhD in Chemical
Physics at age 23, eager to solve biological problems. Ulam sought interesting mathematical problems inspired by biology. These proved opposite and incompatible motivations.
I had published a crude model for morphogenesis that produced a two dimensional (2D) “snail” (Gordon, 1966) (Figure 1.2), half of my thesis, inspired by Magorah Maruyama (Maruyama, 1963), who had
heard a talk by Ulam on the growth of patterns (Ulam, 1962). So Ulam hired me sight unseen for my very first postdoc, 1967-1968. I was hired as a “go between”. It was my
job to act as a translator between friends Stanislaw Ulam, the great mathematician, and Theodore Puck, who had solved the problem of how to grow mammalian cells in tissue culture (Puck, 1959).
I was later to encounter another (biologist, mathematician) pair, Conrad Waddington and René Thom, who though they thought they spoke a common language, I doubt ever understood
one another at all.
I was working on the problem of the movement of multiple ribosomes across messenger RNA (mRNA) (Gordon, 1969), and Ulam regarded the math as trivial, which it was. I held my ground, more intrigued by
my first attempt at understanding a central component of biology, but the “collaboration” came to naught. Thus 2i am I.
Figure 1.2. Computer simulated snail with feedback control (Gordon, 1966).
De Castro, R.&J.W. Grossman (1999). Famous trails to Paul Erdös. Mathematical Intelligencer 21(3), 51-63.
Erdös, P.&S. Ulam (1968). On equations with sets as unknowns. Proceedings
of the National Academy of Sciences of the United States of America 60(4), 1189-1195.
Gordon, R. (1966). On stochastic growth and form. Proc. Natl. Acad. Sci. USA 56, 1497-1504.
Gordon, R. (1969). Polyribosome dynamics at steady state. J. Theor. Biol. 22(3), 515-532.
Gordon, R., D. Losic, M.A. Tiffany, S.S. Nagy&F.A.S. Sterrenburg (2009). The Glass Menagerie:
diatoms for novel applications in nanotechnology [invited]. Trends in Biotechnology 27(2), 116-127.
Gorelik, G. (2009). The paternity of the H-bombs: Soviet-American perspectives. Physics in Perspective 11(2), 169-197.
Grossman, J., P. Ion&R. De Castro. (2010). The Erdös Number Project. http://www.oakland.edu/enp/
Hoffman, P. (1998). The Man Who Loved Only Numbers: The Story of Paul Erdös and the Search for Mathematical Truth. New York, Hyperion.
Maruyama, M. (1963). The second cybernetics: deviation-amplifying mutual causal processes. Amer. Sci. 51(2), 164-179.
Puck, T.T. (1959). Quantitative studies on mammalian cells in vitro. Reviews of Modern Physics 31(2), 433-448.
Ulam, S. (1962). On some mathematical problems connected with patterns of growth of figures. Symposium in Applied Mathematics 14, 215-224. | {"url":"https://www.science20.com/cosmic_embryo/cosmic_embryo_1_my_erd%C3%B6s_number_2i","timestamp":"2024-11-13T19:24:33Z","content_type":"text/html","content_length":"38862","record_id":"<urn:uuid:e231349b-ed7e-49b0-90ba-73f261b3e15a>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00710.warc.gz"} |
A. Sorguç, "Architecture of Mathematics, Mathematics in Architecture," Arredemento Mimarlık , no.256, pp.67-72, 2012
Sorguç, A. 2012. Architecture of Mathematics, Mathematics in Architecture. Arredemento Mimarlık , no.256 , 67-72.
Sorguç, A., (2012). Architecture of Mathematics, Mathematics in Architecture. Arredemento Mimarlık , no.256, 67-72.
Sorguç, ARZU. "Architecture of Mathematics, Mathematics in Architecture," Arredemento Mimarlık , no.256, 67-72, 2012
Sorguç, ARZU. "Architecture of Mathematics, Mathematics in Architecture." Arredemento Mimarlık , no.256, pp.67-72, 2012
Sorguç, A. (2012) . "Architecture of Mathematics, Mathematics in Architecture." Arredemento Mimarlık , no.256, pp.67-72.
@article{article, author={ARZU SORGUÇ}, title={Architecture of Mathematics, Mathematics in Architecture}, journal={Arredemento Mimarlık}, year=2012, pages={67-72} } | {"url":"https://avesis.metu.edu.tr/activitycitation/index/1/ed7bdecd-1371-4a3d-8e21-263f43d940aa","timestamp":"2024-11-09T03:39:59Z","content_type":"text/html","content_length":"10157","record_id":"<urn:uuid:f8d1b7ce-5967-4069-99fc-e6cfb038df43>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00768.warc.gz"} |
What is Function Notation Formula? Examples
A day full of math games & activities. Find one near you.
A day full of math games & activities. Find one near you.
A day full of math games & activities. Find one near you.
A day full of math games & activities. Find one near you.
Function Notation Formula
Being a critical part of mathematics, functions and function notation formula exists at the center of mathematical analysis studies. Function notation is a symbolic representation of a function.
Function notation helps in describing lengthy functions in a simpler way and their representations help us to understand them in a much easier way. Let's understand the function notation formula.
What is Function Notation Formula?
A function is an operator which operates on an input variable and produces an output. Functions arise whenever one quantity depends on another. Let's look into the usage of the function notation
formula to understand the relationship between the input and output variables of a function.
In general, a function is represented using the letter 'f'. Also, other lower case letters such as 'g' or 'h' are also used to represent the function. 'f' along with the input variable enclosed
inside parentheses() where the input variable is usually represented as 'x' forms the function notation formula.
Let's take an example to understand the function notation formula.
Consider the relation y = x^2 where x is any real number. This equation tells us that y is dependent on x, because y is the square of x. In technical terms, y is a function of x, and this is
specified using function notation formula as follows:
y = f(x) or f : X → Y
• f denotes the function name
• x is an element from the domain set X
• y or f(x) is an element from the range set Y
• The arrow indicates the mapping of input to the output
In other words, x is the input variable producing an output y or f(x).
As we know that, y = x^2 thus, our function notation formula will be:
f(x) = x^2
Let's look into some examples to understand the application of the function notation formula.
Break down tough concepts through simple visuals.
Math will no longer be a tough subject, especially when you understand the concepts through visualizations with Cuemath.
Solved Examples Using Function Notation Formula
Example 1: y is a function of x, and the function definition is given as follows:
\[y = f\left( x \right) = \frac{1}{{1 + {x^2}}}\]
Find the output values of the function for \(x = 0\), \(x = - 1\) and \(x = \sqrt 2 \) using function notation formula.
The function notation formula given is:
\(y = f\left( x \right) = \frac{1}{{1 + {x^2}}}\)
Thus, by substituting the values of x we have,
\(&f\left( 0 \right) = \frac{1}{{1 +{{\left( 0 \right)}^2}}} = \frac{1}{1} = 1\\&f\left( { - 1} \right) = \frac{1}{{1+ {{\left( { - 1} \right)}^2}}} = \frac{1}{2}\\&f\left( {\sqrt 2 } \right) =\frac
{1}{{1 + {{\left( {\sqrt 2 } \right)}^2}}} = \frac{1}{{1 + 2}} =\frac{1}{3}\)
Answer: Thus, f(0) = 1, f(-1) = 1/2 and f(√2) = 1/3
Example 2: A cone has a variable height h and a variable base radius r, but the sum of h and r is fixed. The cone is made of a material of density \(\rho \). Express the mass m of cone as a function
of its height h.
The volume V of a cone is given by:
\(V = \frac{1}{3}\pi {r^2}h\)
Let the (fixed) sum of h and r be k. Thus, \(r = k - h\), and so:
\(V = \frac{1}{3}\pi h{\left( {k - h} \right)^2}\)
Using function notation formula, the mass of the cone can now we expressed as a function of h; m will be \(\rho \) times the volume V:
\(m = f\left( h \right) = \rho V = \frac{1}{3}\pi\rho h{\left( {k - h} \right)^2}\)
Answer: Thus, \(m = f\left( h \right) = \rho V = \frac{1}{3}\pi\rho h{\left( {k - h} \right)^2}\) is the required function.
Math worksheets and
visual curriculum | {"url":"https://www.cuemath.com/function-notation-formula/","timestamp":"2024-11-02T21:50:49Z","content_type":"text/html","content_length":"199427","record_id":"<urn:uuid:e010fc62-4603-40d5-b84b-61c2027ada93>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00291.warc.gz"} |
Warm-up: Number Talk: Division (10 minutes)
The purpose of this Number Talk is for students to demonstrate strategies and understandings they have for finding whole-number quotients of whole numbers with up to four-digit dividends and
two-digit divisors. These understandings help students develop fluency and will be helpful later in this lesson when students will develop their own number talk activity.
• Display one problem.
• “Give me a signal when you have an answer and can explain how you got it.”
• 1 minute: quiet think time
• Record answers and strategy.
• Keep problems and work displayed.
• Repeat with each problem.
Student Facing
Find the value of each expression mentally.
• \(28\div14\)
• \(70\div14\)
• \(98\div14\)
• \(350\div14\)
Activity Synthesis
• “What did the writer of this activity have to pay attention to when they designed this activity?” (The expressions need to be done mentally so they can't be too complex.)
• “Where do we see those things in how the expressions change during the Number Talk?” (The first problem helps to do the second one and the first 3 help to do the last one.)
• “Imagine this number talk continued with a fifth expression. How does \(700 \div 28\) fit in with this number talk?” (It doubles the dividend and the divisor from the previous expression, which
means the quotient is the same, 25.)
Activity 1: Number Talk: Design 1 (15 minutes)
The purpose of this activity is for students to reason about strategies for finding whole-number quotients of whole numbers with up to four-digit dividends and two-digit divisors. Students add one
expression to a partially-completed Number Talk activity. If there is time, students can facilitate their Number Talk with another group.
MLR8 Discussion Supports: Prior to solving the problems, invite students to make sense of the situations and take turns sharing their understanding with their partner. Listen for and clarify any
questions about the context.
Advances: Reading, Representing
Engagement: Provide Access by Recruiting Interest. Synthesis: Optimize meaning and value. Invite students to share how they chose the last expression to complete the number talk with their favorite
math teacher.
Supports accessibility for: Conceptual Processing, Language
• Groups of 2 or 4
• “Now you will work with your group to complete a Number Talk activity. This activity has one expression missing. Decide on an expression that would complete the Number Talk and write it on the
blank line.”
• 10 minutes: small-group work time
• As students work, monitor for groups who discuss and design an expression based on some of the following:
□ They keep the same divisor.
□ They add the dividends, but keep the same divisor.
□ They subtract the dividends and keep the same divisor.
□ They make the dividend smaller to get \(15 \div 15\) or \(3 \div 15\).
□ They use a combination of multiples, half, double, or triple the dividend and divisor to change both the dividend and divisor. For example, \(15 \div 60\).
Student Facing
Write an expression to complete the number talk. Be prepared to explain how you chose the last expression.
• \(30\div15\)
• \(45 \div15\)
• \(300\div15\)
Activity Synthesis
• Choose small groups to share who had different reasons for their fourth expression.
• Ask students to share their completed Number Talk and ask the class to share reasons for the last expression.
• As each group shares, continually ask others in the class if they agree or disagree and the reasons why.
Activity 2: Number Talk: Design 2 (15 minutes)
The purpose of this activity is for students to reason about strategies for finding whole-number quotients of whole numbers with up to four-digit dividends and two-digit divisors. Students add two
expressions to a partially-completed Number Talk activity. If there is time, students can facilitate their Number Talk with another group.
• Groups of 2 or 4
• “Now you will work with your group to complete a Number Talk activity. This activity has two options that each have two expressions missing. Decide on expressions that would complete the Number
Talk and write them in the blank lines.”
• 10 minutes: small-group work time
• As students work, monitor for groups who discuss and design expressions based on some of the following:
□ They adjust the dividend but keep the divisor.
□ They adjust the divisor and keep the dividend.
□ Use partial quotients either through addition or subtraction.
□ Use multiplicative relationships such as halving and doubling and reason how that impacts the quotient.
□ Use multiplicative relationships such as multiplying by 2 or 10 and dividing by 2 or 10 and reason how that impacts the quotient.
Student Facing
Choose one of the number talks to complete. Be prepared to share your reasoning for the expressions you chose.
Option 1:
• \(220 \div 22\)
• \(66 \div 22\)
• ________________
• ________________
Option 2:
• \(260 \div 26\)
• \(260 \div 13\)
• ________________
• ________________
Activity Synthesis
• Choose small groups to share that had different reasons for their expressions.
• Ask students to share their completed Number Talk and ask the class to share reasons for their expressions.
• As each group shares, continually ask others in the class if they agree or disagree and the reasons why.
Activity 3: Number Talk: Design 3 (15 minutes)
The purpose of this activity is for students to reason about strategies for finding whole-number quotients of whole numbers with up to four-digit dividends and two-digit divisors. Students add three
expressions to a partially-completed Number Talk activity. If there is time, students can facilitate their Number Talk with another group.
• Groups of 2 or 4
• “Now you will work with your group to complete a Number Talk activity. This activity has three expressions missing. Decide on expressions that would complete the Number Talk and write them in the
blank lines.”
• 10 minutes: small-group work time
• As students work, monitor for groups who discuss and design expressions based on some of the following:
□ They adjust the dividend but keep the divisor.
□ They adjust the divisor and keep the dividend.
□ Use partial quotients either through addition or subtraction.
□ Use multiplicative relationships such as multiplying by 2 or 10 and dividing by 2 or 10 and reason how that impacts the quotient.
Student Facing
Write expressions to complete the Number Talk. Be prepared to share your reasoning for the expressions.
• \(430 \div 43\)
• __________
• __________
• __________
Activity Synthesis
• Choose small groups to share that had different reasons for their expressions.
• Ask students to share their completed Number Talk and ask the class to share reasons for their expressions.
• As each group shares, continually ask others in the class if they agree or disagree and the reasons why.
Lesson Synthesis
“What were the most important things about your expressions you had to consider as you created your Number Talk? Why were these things important?” (I needed to make sure my expressions could be
evaluated mentally. I also needed to make sure my reasoning to connect the expressions was also visible to others.)
Cool-down: Reflection (5 minutes) | {"url":"https://curriculum.illustrativemathematics.org/k5/teachers/grade-5/unit-8/lesson-16/lesson.html","timestamp":"2024-11-14T10:21:46Z","content_type":"text/html","content_length":"84575","record_id":"<urn:uuid:b2302ca4-81ac-45dd-b4ad-b747855f58ac>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00439.warc.gz"} |
Feature Column from the AMS
Complex Networks
7. References
Aingworth, D. and C. Chekuri, P. Indyk, R. Motwani, Fast estimation of diameter and shortest paths (without matrix multiplication), SIAM J. Comput., 28 (1999) 1167-1181.
Albert, B and A.-L. Barabási, Topology of evolving networks: Local events and universality, Phys. Rev. Lett., 85 (2000) 5234-5237.
Alon, N. and J. Spencer, The Probabilistic Method, John Wiley, New York, 1992.
Barabási. A., Linked, Perseus, New York, 2002.
Bavelas, A., A mathematical model for group structures, Human Organization, 7 (1948) 16-30.
Bavelas, A., Communications patterns in task oriented groups, J. Acoust. Soc. Amer., 10 (1965) 271-282.
Beauchamp, M., Elements of Mathematical Sociology, Random House, New York, 1970.
Bollobás, B., Random Graphs, Academic Press, London, 1985 (2nd edition, 2001).
Bollobás, B. and F. Chung, The diameter of a cycle plus a random matching, SIAM J. Discrete Math., 1 (1988) 328-
Chung, F. and M. Garey, Diameter bounds for altered graphs, J. Graph Theory, 8 (1984) 511-534.
Coleman, J., Introduction to Mathematical Sociology, Free Press, New York, 1964.
Dezsõ, Z. and A. Barabási, Halting viruses in scale free networks, Physical Review E, 65 (2002) 055103R.
Diestel, R., Graph Theory, Springer-Verlag, New York, 1997.
Dorogovtsev, S. and J. Mendes, Evolution of Networks: From Biological Nets to the Internet and WWW, Oxford U. Press, Oxford, 2003.
Eppstein, D. and J. Wang, Fast approximation of centrality, in Proceeding of the Twelfth Annual ACM-SIAM Symposium on Discrete Algorithms, ACM-SIAM, New York and Philadelphia, 2001, p. 228-229,
Erdös, P., Some remarks on the theory of graphs, Bull. Amer. Math. Soc., 53 (1947) 292-294.
Erdös, P. and A. Rényi, On the evolution of random graphs, Mat. Kutató Int. Közl., 5 (1960) 17-60.
Erdös, P. and G Szekeres, A combinatorial problem in geometry, Compositio Math., 2 (1935) 463-470.
Flament, C., Applications of Graph Theory to Group Structure, Prentice-Hall, Englewood Cliffs, 1963.
Freeman, L., Centrality in social networks: I. conceptual clarification, Social Networks, 1 (1979) 215-239.
Frieze, A. and G. Grimmett, The shortest-path problem for graphs with random arc-lengths, Discrete Applied Math., 10 (1985) 57-77.
Guare, J., Six Degrees of Separation: A Play, Vintage Books, New York, 1990.
Hage, P. and F. Harary, Structural Models in Anthropology, Cambridge U. Press, New York, 1963.
Harary, F. and R. Norman, D. Cartwright, Structural Models, John Wiley, New York, 1965.
Hayes, B., Graph theory in practice: Part I, Amer. Sci., 88 (2000) 9-13.
Hayes, B., Graph theory in practice: Part: II, Amer. Sci., 88 (2000) 104-109.
Janson, S. and T. Luczak, A. Rucinski, Random Graphs, John Wiley, New York 1999.
Killworth, P. and H. Barnard, Reverse small world experiment, Social Networks 1 (1978) 159-192.
Kleinberg, J., The small-world phenomenon: An algorithmic perspective, in Proc. 32nd Annual ACM Symposium in the Theory of Computing, ACM, New York, 2000, p. 163-170.
Kleinberg, J., Navigation in a small world, Nature 406 (2000), p. 485.
Kochen, M., (ed.), The Small World, Ablex, Norwood, 1989.
Leik, R. and B. Meeker, Mathematical Sociology, Prentice-Hall, Englewood Cliffs, 1975.
Milgram, S., The small world problem, Psychology Today 2 (1967) 60-67.
Milo, R. and S. Itzkowitz, N. Kashtan, R. Levitt, S. Shen-Orr, I. Ayzenshtat, M. Sheffer, U. Alon, Superfamiles of evolved and designed networks, Science, 303 (2004) 1538-1542.
Newman, M., The structure and function of complex networks, SIAM Review, 45 (2003) 167-256.
Palmer, E., Graphical Evolution, John Wiley, New York, 1985.
Travers, J., and S. Milgram, An experimental study of the small world problem, Sociometry 32 (1969) 425-443.
West, D., Introduction to Graph Theory, 2nd. edition, Prentice-Hall, Upper Saddle River, 2001.
Those who can access JSTOR can find some of the papers mentioned above there. For those with access, the American Mathematical Society's MathSciNet can be used to get additional bibliographic
information and reviews of some these materials. Information related to some of the papers above can accessed via the ACM Portal. | {"url":"http://www.ams.org/publicoutreach/feature-column/fcarc-networks7","timestamp":"2024-11-11T10:24:56Z","content_type":"text/html","content_length":"50218","record_id":"<urn:uuid:7047eb8d-c8af-4a6d-afb5-df7e41953bcf>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00807.warc.gz"} |
The structure labeled with the pin is called the | GradePack
The structure labeled with the pin is called the
The structure lаbeled with the pin is cаlled the
(9 pоints) Wаter аt 1 аtm is heated in a mechanically pоlished stainless-steel pan, with its surface is electrically heated tо Ts = 110°C. The pan’s diameter is 20 cm. What is
the power required to boil the water? Use the following properties for your calculations: Tsat = 100°C, liquid phase’s density = 957.9 kg/m3, vapor phase’s density = 0.5956 kg/m3, liquid phase’s
specific heat = 4217 J/(kg×K), liquid phase’s dynamic viscosity = 279×10-6 Pa×s, liquid phase’s Prandtl number = 1.76, hfg = 2,257,000 J/kg, Surface tension = 0.0589 N/m.
The mechаnism оf weight gаin in pаtients оn antipsychоtics relates to the
following effect: 1. Direct effect on D2
receptors2. Increased sensitivity to leptin 3. H1 antagonism 4. Prolactin suppression 5. HT2c agonism
Which оf the fоllоwing аreаs of concern or feаr would most likely be the main focus for a toddler scheduled for surgery? | {"url":"https://gradepack.com/the-structure-labeled-with-the-pin-is-called-the/","timestamp":"2024-11-09T21:08:07Z","content_type":"text/html","content_length":"40711","record_id":"<urn:uuid:71a2e383-7936-48e9-8998-416c0185c7e7>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00574.warc.gz"} |
Disk Intersection Graphs: Models, Data Structures, and Algorithms
Paul Seiferth:
Disk Intersection Graphs: Models, Data Structures, and Algorithms
Let P be a set of n point sites in the plane. The unit disk graph UD(P) on P has vertex set P and an edge between two sites p,q of P if and only if p and q have Euclidean distance |pq| <= 1. If we
interpret P as centers of disks with diameter 1, then UD(P) is the intersection graph of these disks, i.e., two sites p and q form an edge if and only if their corresponding unit disks intersect. Two
natural generalizations of unit disk graphs appear when we assign to each point p of P an associated radius r_p > 0. The
first one is the disk graph D(P), where we put an edge between p and q if and only if |pq| <= r_p + r_q, meaning that the disks with centers p and q and radii r_p and r_q intersect. The second one
yields a directed graph on P, called the transmission graph of P. We obtain it by putting a directed edge from p to q if and only if |pq| <= r_p, meaning that q lies in the disk with center p and
radius r_p. For disk and transmission graphs we define the radius ratio Psi to be the ratio of the largest and the smallest radius that is assigned to a site in P. It turns out that the radius ratio
is an important measure of the complexity of the graphs and some of our results will depend on it.
For these three classes of disk intersection graphs we present data structures and algorithms that solve four types of graph theoretic problems: dynamic connectivity, routing, spanner construction,
and reachability oracles; see below for details. For disk and unit disk graphs, we improve upon the currently best known results, while the problems we consider for transmission graphs abstain
non-trivial solutions so far.
Dynamic Connectivity:
First, we present a data structure that maintains the connected components of a unit disk graph UD(P) when P changes dynamically. It takes O(log^2 n) amortized time to insert or delete a site in P
and O(log(n)/loglog(n)) worst-case time to
determine if two sites are in the same connected component. Here, n is the maximum size of P at any time. A simple variant improves the amortized update time to O(log(n)loglog(n)) at the cost of a
slightly increased worst-case query time of O(log(n)).
Using more advanced data structures, we can extend our approach to disk graphs. While the worst-case query time remains
O(log(n)/loglog(n)), an update now requires O(Psi^2 2^(alpha(n))log^(10)(n)) amortized expected time, where Psi is the radius ratio of the disk graph and alpha(n) is the inverse Ackermann function.
As the second problem, we consider routing in unit disk graphs. A routing scheme R for UD(P) assigns to each site s of P a
label l(s) and a routing table rho(s). For any two sites s and t of P, the scheme R must be able to route a packet from s to t in the following way: given a current site r (initially, r = s), a
header h (initially empty), and the target label l(t), the scheme R may consult the current routing table rho(r) to compute a new site r' and a new header h', where r' is a neighbor of r in UD(P).
The packet is then routed to r', and the process is repeated until the packet reaches t. The resulting sequence of sites is called the routing path. The stretch of R is the maximum ratio of the
(Euclidean) length of the routing path produced by R and the shortest path in UD(P), over all pairs of distinct sites in P.
For any given eps > 0, we show how to construct a routing scheme for UD(P) with stretch 1+eps using labels of O(log(n)) bits and routing tables of O(eps^(-5)log^2(n)log^2(D)) bits, where D is the
(Euclidean) diameter of UD(P). The header size is O(log(n)log(D)) bits.
Next, we construct sparse approximations of transmission and disk graphs. Let G be a transmission graph. A t-spanner for G is
a subgraph H of G with vertex set P so that for any two sites p and q of P, we have d_H(p, q) <= td_G(p, q), where d_H and
d_G denote the shortest path distance in H and G (with Euclidean edge lengths). We show how to compute a t-spanner for G with O(n) edges in O(n(log(n) + log(Psi))) time, where Psi is the radius ratio
of P. Utilizing advanced data structures, we obtain a
construction that runs in O(n log^5(n)) time, independent of Psi. This construction can be adapted to disk graphs and gives a t-spanner for D(P) in expected time O(n2^(alpha(n))log^(10)(n)), where
alpha(n) is the inverse Ackermann function.
As an application we show that our t-spanner can be used to find a BFS tree in a transmission or disk graph for any given start vertex in O(n log(n)) additional time.
Reachability Oracles:
Finally, we compute reachability oracles for transmission graphs. These are data structures that answer reachability queries: given two sites p and q, is there a directed path between them? The
quality of an oracle is measured by the space S(n), the query time Q(n), and the preproccesing time. We present three reachability oracles whose quality depends on the radius ratio
Psi: the first one works only for Psi < sqrt(3) and achieves Q(n) = O(1) with S(n) = O(n) and preprocessing time O(n log(n));
the second data structure gives Q(n) = O(Psi^3 sqrt(n)) and S(n) = O(Psi^3 n^(3/2)); the third data structure is randomized with
Q(n) = O(n^(2/3)(log(n) + log(Psi))) and S(n) = O(n^(5/3)(log(n) + log(Psi))) and answers queries correctly with high probability.
As a second application for our spanners, we employ them to achieve a fast preprocessing time for our reachability oracles. | {"url":"https://www.mi.fu-berlin.de/inf/groups/ag-ti/theses/phd_finished/seiferth_paul/index.html","timestamp":"2024-11-03T00:02:39Z","content_type":"text/html","content_length":"31386","record_id":"<urn:uuid:98f329a9-278f-47cc-b46d-265f653e27dd>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00329.warc.gz"} |
Aggarwal, V and Dharni, K (2020). Deshelling the Shell Companies Using Benford’s Law: An Emerging Market Study. Vikalpa 45(3), pp. 160-169. DOI:10.1177/0256090920979695.
Alves, AD, Yanasee, HH and Soma, NY (2016). An analysis of bibliometric indicators to JCR according to Benford’s law. Scientometrics 107(3), pp. 1489–1499. DOI:10.1007/s11192-016-1908-3.
Alves, AD, Yanasse, HH and Soma, NY (2014). Benford's Law and articles of scientific journals: comparison of JCR® and Scopus data. Scientometrics 98, pp. 173-184. ISSN/ISBN:0138-9130.
Burgos, A and Santos, A (2021). The Newcomb–Benford law: Scale invariance and a simple Markov process based on it (Previous title: The Newcomb–Benford law: Do physicists use more frequently
the key 1 than the key 9?). Preprint arXiv:2101.12068 [physics.pop-ph]; last accessed August 8, 2022; Published Am. J. Phys. 89, pp. 851-861.
Egbunike, FC and Amakor, CI (2013). Fraud & auditors analytical procedure: A test of Benford’s law. EBS Journal of Management Sciences 1(1), pp. 14-31.
Egghe, L (2011). Benford’s law is a simple consequence of Zipf’s law. ISSI Newsletter 7(3), pp. 55–56.
Egghe, L and Guns, R (2012). Applications of the generalized law of Benford to informetric data. Journal of the American Society for Information Science and Technology 63(8), pp. 1662-1665.
ISSN/ISBN:1532-2882. DOI:10.1002/asi.22690.
Hürlimann, W (2015). On the uniform random upper bound family of first significant digit distributions. Journal of Informetrics, Volume 9, Issue 2, pp. 349–358. DOI:10.1016/j.joi.2015.02.007.
Ileanu, B-V, Ausloos, M, Herteliu, C and Cristescu, MP (2019). Intriguing behavior when testing the impact of quotation marks usage in Google search results. Quality & Quantity 53(5), pp.
2507-2519. DOI:10.1007/s11135-018-0771-0.
Mangalagiri, J, Jyothi, CSP and Ramya, P (2018). Benford’s Law and Stock Market - The Implications for Investors: The Evidence from India Nifty Fifty. Jindal Journal of Business Research 7(2),
pp. 103–121 . DOI:10.1177/2278682118777029.
Mir, TA (2014). The Benford law behavior of the religious activity data. Physica A 408, pp. 1-9. DOI:10.1016/j.physa.2014.03.074.
Mir, TA (2016). Citations to articles citing Benford's law: a Benford analysis. arXiv:1602.01205; posted Feb 3, 2016.
Pröger, L, Griesberger, P, Hackländer, K, Brunner, N and Kühleitner, M (2021). Benford’s Law for Telemetry Data of Wildlife. Stats 4(4), pp. 943–949. DOI:10.3390/ stats4040055.
Tseng, H-C, Huang, W-N and Huang, D-W (2017). Modified Benford’s law for two-exponent distributions. Scientometrics 110(3), pp. 1403–1413. DOI:DOI 10.1007/s11192-016-2217-6. | {"url":"https://benfordonline.net/references/up/1131","timestamp":"2024-11-05T23:01:35Z","content_type":"application/xhtml+xml","content_length":"20986","record_id":"<urn:uuid:86fb9e34-5e75-45b7-a59b-597ce812555a>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00346.warc.gz"} |
ball mill charge volume calculation
The ball mill rotates clockwise at various constant fractions N of the critical speed of rpm, at. Charge behaviour. Fig. 1 shows typical charge shapes predicted for our 'standard' 5 m ball mill
and charge (described above) filled to 40% (by volume) for four rotation rates that span the typical range of operational speeds.
WhatsApp: +86 18838072829
Generally, filling the mill by balls must not exceed 30%35% of its volume. The productivity of ball mills depends on the drum diameter and the relation of ∫ drum diameter and length. The optimum
ratio between length L and diameter D, L: D, is usually accepted in the range
WhatsApp: +86 18838072829
The starting point for ball mill media and solids charging generally starts as follows: 50% media charge. Assuming 26% void space between spherical balls (nonspherical, irregularly shaped and
mixedsize media will increase or decrease the free space) 50% x 26% = 13% free space. Add to this another 10%15% above the ball charge for total of 23% ...
WhatsApp: +86 18838072829
Mill Volume Filling Calculation. This calculation will estimate the volume percentage of charge in a mill based upon the number of lifters exposed above the mill charge. Enter the total number of
lifters installed in the mill. Enter the number of lifters that are exposed above the mill charge. Fractions are acceptable and should be used in the ...
WhatsApp: +86 18838072829
Ball Mill Design/Sizing Calculator *** Evaluate plant performance using CUSUM Chart Example; Charge Volume of a Grinding Mill (Method 1) *** Charge Volume of a Grinding Mill (Method 2) ***
Calculating Motor Mill Size (Short Tons) Calculating Motor Mill Size (Metric Tons) Mill Critical Speed; Example Mill Metallurgy Summary/Report; Rod Mill ...
WhatsApp: +86 18838072829
Results show that with the six parameters abovementioned estimated, the charge mixture is fully characterized with about 5 10 % deviation. Finally, the estimated ... Breakage mechanism in
tumbling ball mills 20 Impact breakage 21 Abrasion breakage 21 Breakage by attrition 22 Population balance model 23
WhatsApp: +86 18838072829
The ball charge is determined by the operator targeting the balance between grind and throughput, the higher the ball charge the more aggressive the milling becomes. With softer oxide ores lower
ball charges are usual however the harder sulphides does need more media energies to meet the P80 target and an acceptable throughput.
WhatsApp: +86 18838072829
The original ball load in the mill was 6614 lb. (3000 kg.) and the load at the end of the 694 hr. was 6338 lb. ( kg.). During this time, 590 lb. ( kg.) of balls less than 3 in. ( mm.) in diameter
were discarded from the mill. The screen analysis of the ball charge at the end of the operation is shown in Table 20.
WhatsApp: +86 18838072829
V — Effective volume of ball mill, m3; G2 — Material less than in product accounts for the percentage of total material, %; G1 — Material less than in ore feeding accounts for in the percentage
of the total material, %; q'm — Unit productivity calculated according to the new generation grade (), t/(). The values of q'm are determined by ...
WhatsApp: +86 18838072829
BALL MILL VOLUME LOADING. Chamber Unit H (ceiling height) m D (effective diameter) m Chamber length m Reffective m H D. VL % Media volume m3 Media density t/m3 grinding media bed
WhatsApp: +86 18838072829
In an article by Anselm translated by Pearson a method for the calculation of the time of passage of cement through a ball mill was given. The basis of the method is that the volume rate of
throughput is known and it is assumed that the surface of the powder is coincident with the surface of the ball charge. It follows that the average ...
WhatsApp: +86 18838072829
If so, half of the occupied volume would be filled by the charge, the other half by the balls real volume (BCVR = 1:1 v/v), and the total occupied volume would be 73% (27% left empty). The
maximum ...
WhatsApp: +86 18838072829
For wet grinding with grinding balls < 3 mm the ball charge should make up 60 % of the jar volume, while the sample amount should be 30 %. The density of the grinding ball materials is used to
calculate the mass of the required amount of grinding balls. High Energy Ball Mill Emax
WhatsApp: +86 18838072829
Maximum ball size (MBS) Please Enter / Stepto Input Values. Mill Feed Material Size F, mm. Specific weight of Feed SG, g/cm 3.
WhatsApp: +86 18838072829
Reading Lecture In ball mills, steel balls or hard pebbles to break particle based on impact and attrition. A rotating mill charged with media and ore is lifted against the inside perimeter. Some
of the media falls and impacts the ore particles at the bottom of the mill.
WhatsApp: +86 18838072829
Figure 11 — SAG mil l power, charge mass, and mill charge relation ship s at 10 % and 15 % ball charge The major difficul ty with this approach is that the mill load reported to the control room
WhatsApp: +86 18838072829
Ball mill filling volume is calculated using Equation (6), assuming that the bed porosity of balls is 40%. ... The mill charge consisted of kg of steel balls, of mm, mm and mm monosizes. ... ;
MenéndezAguado, Product size distribution function influence on interpolation calculations in the Bond ball mill ...
WhatsApp: +86 18838072829
If a ball mill contained only coarse particles, then 100% of the mill grinding volume and power draw would be applied to the grinding of coarse particles. In reality, the mill always contains
fines: these fines are present in the ball mill feed and are produced as the particles pass through the mill.
WhatsApp: +86 18838072829
In the present invention a method, apparatus and computer program are presented, with which the amount of balls among ore material contained in a grinding mill is estimated as a percentage by
volume of the total volume of the mill. Preferably, the invention relates to semiautogenous mills. In the invention, an expanded Kalman filter is used to estimate the ball charge such that
measurements ...
WhatsApp: +86 18838072829
Closed Circuit = W 2. Open Circuit, Product Topsize not limited = W 3. Open Circuit, Product Topsize limited = W to W Open circuit grinding to a given surface area requires no more power than
closed circuit grinding to the same surface area provided there is no objection to the natural topsize.
WhatsApp: +86 18838072829
Could you recommend a suitable calculation for ball mill charge volume using the chord length of the ball charge? We are currently using the calculation for distance between charge to top of
mill. We would like to use the chord length method as well, and take the average of the two for our final charge volume result. ...
WhatsApp: +86 18838072829
Ball top size (bond formula): calculation of the top size grinding media (balls or cylpebs):Modification of the Ball Charge: This calculator analyses the granulometry of the material inside the
mill and proposes a modification of the ball charge in order to improve the mill efficiency:
WhatsApp: +86 18838072829
We can calculate the steel charge volume of a ball or rod mill and express it as the % of the volume within the liners that is filled with grinding media. While the mill is stopped, the charge
volume can be gotten by measuring the diameter inside the liners and the distance from the top of the charge to the top of the mill.
WhatsApp: +86 18838072829
A ball mill is a type of grinder widely utilized in the process of mechanochemical catalytic degradation. It consists of one or more rotating cylinders partially filled with grinding balls (made
WhatsApp: +86 18838072829
Investigations were carried out in a laboratory ball mill having the size of DxL = 160x200 mm with a ribbed inside surface of the drum. The mill ball loading was 40% by volume, the rotation rate
was equal to 85% of the critical speed. Balls were made from steel: S4146, extra high quality, having hardness 62 ± 2 HRC according to Rockwell.
WhatsApp: +86 18838072829
The ball mill was grinding to a P 80 of 50 to 70 µm, therefore the traditional marker size (75 µm) was not ..., 1961. Crushing and Grinding Calculations Parts 1 and 2. British Chemical
Engineering 6, 378385, 543548. Morrell, S., 2009. Predicting the overall specific energy requirement of crushing, high pressure grinding roll and ...
WhatsApp: +86 18838072829
Calculation method and its application for energy consumption of ball mills in ceramic industry based on power feature deployment February 2020 Advances in Applied Ceramics 119(4):112
WhatsApp: +86 18838072829
Long Tons of Solids: N = W x T/40 x C (R/L + 1/S) Short Tons of Solids: N = W x T/45 x C (R/L + 1/S) In the above formulas, no allowance is made for the degree of aeration of the pulp nor the
decrease in the volume of same, during the flotation operations.
WhatsApp: +86 18838072829
To calculate the charge volume of a ball mill, you will need to know the internal diameter of the mill and the distance between the top of the charge and the top of the mill. The...
WhatsApp: +86 18838072829
Mill Ball (%) = 100{[(ball volume in the m3 sample)* ] / .4}/ (3) According to the results (Table 1), the ball filling percentage was obtained %. However, these results may have some field
errors. Acquired Results from Abrasion Test Ball abrasion was calculated in 4 conditions which are as follows: Ball charge program ...
WhatsApp: +86 18838072829
Wet Ball Mill Calculations For Fill Volume,can You Spared . . Get Price And Support Online; AMIT 135: Lesson 7 Ball Mills Circuits Mining Mill . ... How to Calculate Charge Volume in Ball or Rod
Mill . ... the charge level will be greater than if the mill had been ... material in the ball mill or when the ... » Learn More.
WhatsApp: +86 18838072829
Then calculate the volume of the mill using the formula for the volume of a cylinder (V = πr²h). Measure the height of the charge (grinding media plus ore) using a measuring tape or ruler.
WhatsApp: +86 18838072829
ball charge is close to 95 kg (209 lb), occupying 34 percent of the mill volume. Balls are used that represent a fully graded charge based on constant wear rate in a continuously operating mill.
Slurry volume occupies the 40 percent voids volume in the 34 percent ball charge. The drive shaft is torquemetered and records once per second.
WhatsApp: +86 18838072829 | {"url":"https://www.sokoldamaslawek.pl/2021-Apr-30/2706.html","timestamp":"2024-11-03T00:30:29Z","content_type":"application/xhtml+xml","content_length":"28786","record_id":"<urn:uuid:3b67ec07-e5d7-4c70-ab92-a29e35acf368>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00782.warc.gz"} |
The Elements of Euclid, with many additional propositions, and explanatory notes, by H. Law. Pt. 2, containing the 4th, 5th, 6th, 11th, & 12th books
Popular passages
... have an angle of the one equal to an angle of the other, and the sides about those angles reciprocally proportional, are equal to une another.
... if the segments of the base have the same ratio which the other sides of the triangle have to one another...
If two triangles have two angles of the one equal to two angles of the other, each to each, and also one side of the one equal to the corresponding side of the other, the triangles are congruent.
From the point A draw a straight line AC, making any angle with AB ; and in AC take any point D, and take AC the same multiple of AD, that AB is of the part which is to be cut off from it : join BC,
and draw DE parallel to it : then AE is the part required to be cut off.
Equiangular parallelograms have to one another the ratio which is compounded of the ratios of their sides.
Convertendo, by conversion ; when there are four proportionals, and it is inferred, that the first is to its excess above the second, as the third to its excess above the fourth.
A and B are not unequal ; that is, they are equal. Next, let C have the same ratio to each of the magnitudes A and B ; then A shall be equal to B.
For the same reason, CD is likewise at right angles to the plane HGK. Therefore AB, CD are each of them at right angles to the plane HGK.
FB ; (i. 4.) for the same reason, CF is equal to FD : and because AD is equal to BC, and AF to FB, the two sides FA, AD are equal to the two FB, BC, each to each ; and the base DF was proved equal to
the base FC ; therefore the angle FAD is equal to the angle FBC: (i. 8.) again, it was proved that GA is equal to BH, and also AF to FB; therefore FA and AG are equal...
C, they are equiangular, and also have their sides about the equal angles proportionals (def. 1. 6.). Again, because B is similar to C, they are equiangular, and have their sides about the equal
angles proportionals (def. 1. 6.) : therefore the figures A, B, are each of them equiangular to C, and have the sides about the equal angles of each of them, and of C, proportionals. Wherefore the
rectilineal figures A and B are equiangular (1. Ax. 1.), and have their sides about the equal angles proportionals...
Bibliographic information | {"url":"https://books.google.com.jm/books?id=wzADAAAAQAAJ&dq=editions:UOM39015064332870&lr=&source=gbs_book_other_versions_r&cad=3","timestamp":"2024-11-14T07:03:13Z","content_type":"text/html","content_length":"57652","record_id":"<urn:uuid:03b630a6-653b-4814-9870-3a324a93c6fa>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00344.warc.gz"} |
Solar Radio Astronomy - CESRAAcceleration and Storage of Energetic Electrons in Magnetic Loops in the Course of Electric Current Oscillations by V.V. Zaitsev and A.V. Stepanov
Acceleration and Storage of Energetic Electrons in Magnetic Loops in the Course of Electric Current Oscillations
by V.V. Zaitsev and A.V. Stepanov
There are long-lived radio events on the Sun and stars like in type IV solar radio bursts with sudden reductions and pulsating type III bursts (Slottje, 1972; Huang et al. 2016) as well as intriguing
intense radio emission from ultracool stars that lasts for several rotation periods (Hallinan et al. 2007). This can be the result of the multiple injections of accelerated electrons into the coronal
magnetic loops. The idea of the acceleration and storage of energetic electrons in the magnetic loops in the course of electric current oscillations was suggested by Zlotnik et al. (2003). This idea
is developed here on the base of the analogy of the coronal loop with a RLC-circuit and on the modern observations.
Excitation of Induced Electric Field
Convective motions of the photosphere matter interacting with the magnetic field near the loop legs generate the electric current in the loop. The current-carrying coronal loop can be presented as an
electric (RLC) circuit whose eigen frequency depends on the magnitude of the electric current $I_0$, the electron density $n$, the loop radius $r_0$, and the length $l$ of the loop (Zaitsev et al.
\nu_{\rm{RLC}}=\frac{c}{2\pi \sqrt{LC(I_0)}} \approx \frac{1}{2\pi \sqrt{2\pi \zeta}} \frac{I_0}{c r_0^2 \sqrt{nm_i}}, \zeta=\ln (4l/\pi r_0) – (7/4).
Electric current oscillations are connected with the oscillations of the azimuthal component of the magnetic field $B_{\varphi}(r,t)=2rI_z (t)/cr^2_0$. These oscillations in accordance with the
equation $\rm{curl}\mathbf{E}=-(1/c)\partial \mathbf{B}_{\varphi}/\partial t$ lead to generation of an electric field directed along the component of the magnetic field $B_z$, and therefore it should
efficiently accelerate charged particles. Assuming $I_z(t)=I_0+\Delta I \sin (2\pi \nu _{\rm{RLC}} t)$ one can obtain the mean value of the electric field along the loop radius:
\overline{E}=\frac{4}{3}\frac{\nu_{\rm{RLC}}I_0}{c^2} \frac{\Delta I}{I_0} \propto I^2_0 (\Delta I/I_0).
The self-consistent equation for the electric current oscillations in an equivalent electric circuit can be written in the form (Khodachenko et al. 2009):
\frac{d^2 y}{d\tau ^2} – \epsilon \left( \delta -2y-y^2 \right) \frac{dy}{d\tau} + \left( 1+\frac{3}{2} y+ \frac{1}{2} y^2 \right) y=0, y=( I – I_0) / I_0.
Here $\delta = \big[ (\left| V_r \right| l_1) / c^2 r_1 R(I_0) \big] -1$, $V_r$ is the radial component of the convergent flow of the photosphere matter in the loop legs, $l_1$ and $r_1$ are the
length and radius of the flux tube near loop footpoints. Because Q-factor of RLC-oscillation $G \gg 1$ the small parameter $\epsilon =1/Q \ll 1$ in the last equation makes it possible to apply the
Van der Pol method and the solution has the form
y=2\sqrt{\delta}\cos \big[ 2\pi \nu_{\rm{RLC}}(1+\frac{3}{4} \delta) t \big].
Thus, the nonlinearity leads to the establishment of a finite amplitude of oscillations, as well as to a small shift in the oscillation frequency. Note that our lumped circuit approach suggests that
electric field oscillations must be in-phase at all points of the loop. On the other hand, the electric current variations propagate along the loop with the Alfvén speed. Therefore, for the in-phase
condition, the Alfvén time $\tau_A = l/V_A \approx 100$ s must be less than the period of RLC oscillations $(\nu_{\rm{RLC}})^{-1}$.
Energization Rate and Energy of Accelerated Electrons
The induced electric field $\overline{E}$ accelerates some part of electron population to a velocity exceeding $V > (E_D / E_z)^{1/2} V_{Te}$, where $V_{Te}$ is the electron thermal velocity, $E_D=e
\Lambda \omega_p^2 / V_{Te}^2$ is the Dricer field, $\Lambda$ is the Coulomb logarithm, and $\omega_p$ is the Langmuir frequency. In the case of sub-Dreicer field, $x=E_D / E_z \gg 1$, the theory
yields the number rate of runaway electrons accelerated by a DC-electric field: $\dot{N}_s=0.35 n \nu_{ei} V_a x^{3/8} \exp (-\sqrt{2x} –x/4)$, where $\nu_{ei} = 5.5n\Lambda / T^{3/2}$ is the
effective frequency of electron-ion collisions, $T$ is the plasma temperature, and $V_a$ is the volume of the acceleration region.
Estimation for the flare on 19 July, 2012 with the set of type III bursts displaying in-phase oscillations with period 270 s ($\nu_{\rm{RLC}}\approx 3.7 \times 10^{-3}$ Hz) within the range of 0.7-3
GHz (Huang et al. 2016) and with $I_0=4 \times 10^{9}$ A, $\Delta I/I_0=10^{-3}$ gives the electron energization rate $\dot{N}_s \approx 3\times 10^{33}$~s$^{-1}$. This is compatible with the
energization rate obtained from the RHESSI data. The energy gain for $\overline{E} \approx 2 \times 10^{-5}V$ cm$^{-1}$ at the distance $\Delta l= 10^9$ cm is $\epsilon \approx 20$ keV. For the TVLM
513-46546, a young radio-active M8.5V dwarf with $M_*=0.07M_{\odot}$, $R_* \approx 0.1R_{\odot}$, and an effective temperature $T_{\rm{eff}} \approx 2200$ K, the frequency of high-Q oscillations was
estimated as $\nu_{\rm{RLC}} \approx 8\times 10^{-3}$ Hz (period $\approx 130$ s) and the electric field is $\overline{E}_z \approx 8\times 10^{-4} V$ cm$^{-1}$ (Zaitsev and Stepanov 2017). With this
electric field, electrons can be accelerated to an energy $\approx 800$ keV at a distance $\approx 10^9$ cm. Energization rate can be as high as $\dot{N}_s \approx 10^{34}$ s$^{-1}$. With this
energization rate one can expect quite high level of HXR emission from ultracool stars.
Several models have been suggested to interpret quasi-periodic electron acceleration in coronal magnetic loops. Quasi-periodic acceleration may be associated with the bursty regime of spontaneous
magnetic reconnection. The MHD oscillation of flare loops could be a possible option to explain the features under question. However, the sausage and kink MHD modes are not capable of providing
synchronous pulsations in a wide frequency interval and have quite low Q-factor. The mechanism of acceleration and storage of electrons driven by the electric current oscillations in a loop as an
equivalent RLC-circuit was first suggested by Zlotnik et al. (2003). Here, we develop this idea, but for periodic groups of type III bursts and for the oscillation period exceed 100 s. By this the
individual type III bursts forming the periodic groups can be triggered by the bursty regime of magnetic reconnection in the fine loop structure, the flux tubes with cross-section area of about $10^
{14} – 10^{15}$ cm$^{2}$. The model developed here can explain the electron acceleration in long-lived type IV radio continuum with the fine structures such as type III bursts and sudden reductions.
Proposed model can explain also the origin of accelerated electrons in ultracool stars. Note that if $\nu_{\rm{RLC}}$ coincides with the frequency of the loop MHD oscillations, the ratio $\Delta I/
I_0$ grows and the acceleration and storage processes can be more effective.
Based on a recently published article: Zaitsev V.V., Stepanov A.V., Acceleration and Storage of Energetic Electrons in Magnetic Loops in the Course of Electric Current Oscillations, Solar Physics,
292, 141-152 (2017). doi: 10.1007/s11207-017-1168-2
Hallinan, G. et al. 2007, ApJL, 663, L25. doi: 10.1086/519790
Huang, J. et al. 2016, ApJ, 831, 119. doi: 10.3847/0004-637X/831/2/119
Khodachenko, M.L. et al. 2009, Space Sci. Rev. 149, 83. doi: 10.1007/s11214-009-9538-1
Slottje, C. 1972 Solar Phys. 25, 210. doi: 10.1007/BF00155758
Zaitsev, V.V. et al. 1998, A&A, 337, 887. Bibl. code:1998A&A…337..887Z
Zaitsev, V.V., Stepanov, A.V. 2017, R&QE, 59, 867. doi: 10.1007/s11141-017-9757-3
Zlotnik, E.Y. et al. 2003, A&A, 410, 1011. doi: 10.1051/0004-6361:20031250 | {"url":"https://www.astro.gla.ac.uk/users/eduard/cesra/?p=1593","timestamp":"2024-11-04T20:25:26Z","content_type":"text/html","content_length":"61910","record_id":"<urn:uuid:b632991d-cbef-40f3-8a07-472f5facf1de>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.16/warc/CC-MAIN-20241104194528-20241104224528-00653.warc.gz"} |
Uncrossing lines
This is the sixth post in an open-ended series on proving the correctness of TLA specifications using Isabelle/HOL:
I recently came across a video lecture by Dijkstra on reasoning about program correctness in which he discusses two little algorithms that have interesting correctness proofs. The first has a
slightly surprising safety (i.e. partial correctness) property, but is very easy to show that it terminates, whereas the second has an obvious safety property but a trickier termination proof. I
thought it’d be interesting to code the second up in TLA and play around with it.
The setup is that there are n red points and n blue points in the plane, where each red point is joined to a blue point by a straight line segment:
The objective is to find a way of pairing up the red points with the blue points so that none of the line segments cross. The algorithm to do this starts with any pairing and repeatedly selects a
pair of line segments which cross and “uncrosses” them, swapping the pairings between the two red points and the corresponding blue points:
Since the algorithm repeats until there are no crossing line segments, if it terminates then it has successfully found an arrangement with no crossing segments. In other words, it’s easy to see that
this algorithm has a good partial correctness property. However it’s not immediately obvious that this algorithm does ever terminate. It’s tempting to want to try something like induction on the
number of crossing pairs of segments: the act of uncrossing a pair of segments certainly removes one crossing, but the problem with this approach is that an uncrossing can also introduce arbitrarily
many other crossings, so this induction doesn’t work:
As there are only finitely many points there are only finitely many arrangements from which to choose, so if the algorithm fails to terminate then it must repeatedly visit the same state. The trick
is to notice that uncrossing two crossing lines always makes the total length of all the lines strictly shorter, so it’s not possible to visit any state more than once. This is because of the
triangle inequality:
Actually it isn’t always true that the total length always gets shorter: the triangle inequality is only strict if the triangle is not degenerate, i.e. if its vertices are not collinear.
And indeed it’s possible to find cases where the four points involved in an uncrossing are all collinear, the lines intersect, and yet swapping the pairings over does not decrease the total length of
the lines or remove the intersection:
Does this count as “crossing”? Arguably no, two line segments are “crossing” only if there is some of each line segment lies on both sides of the other, which isn’t the case if the endpoints are
collinear. But they do intersect, which is a little more general, and there are some other cases where the line segments intersect (but don’t strictly cross) and where swapping the pairing of the
endpoints does remove the intersection:
Dijkstra avoids this whole issue by insisting that of all the red and blue points no three of them are collinear, which excludes the problematic case but also the case above; also if only three of
the four endpoints involved in an intersection are collinear then swapping the pairing always decreases the total length and removes the intersection:
It turns out that we can be a bit more precise about the conditions under which this algorithm works.
This post is a tour of EWD-pairings.thy in my TLA examples Github repository.
Geometry in Isabelle/HOL
There seems to be a surprisingly small amount of geometry included in the Isabelle/HOL standard library. There’s a theory of real numbers, and of vector spaces which specialises to ℝ², including the
(non-strict) triangle inequality, but I couldn’t find much about lines and their intersections and so on, so I started with these definitions:
type_synonym point = "real × real"
definition lineseg :: "point ⇒ point ⇒ real ⇒ point"
where "lineseg p0 p1 l ≡ (1-l) *R p0 + l *R p1"
(* The *R operator is scalar multiplication *)
definition closed_01 :: "real set"
where "closed_01 ≡ {x. 0 ≤ x ∧ x ≤ 1}"
definition segment_between :: "point ⇒ point ⇒ point set"
where "segment_between p0 p1 ≡ lineseg p0 p1 ` closed_01"
definition segments_cross :: "point ⇒ point ⇒ point ⇒ point ⇒ bool"
where "segments_cross r0 b0 r1 b1
≡ segment_between r0 b0 ∩ segment_between r1 b1 ≠ {}"
In words, lineseg p0 p1 is a function real ⇒ point whose range is the line through p0 and p1, and segment_between p0 p1 is the segment of this line between p0 and p1 (inclusive). The property we care
about is whether the two segments intersect, given by segments_cross.
Starting from these definitions I did manage to show that, if no three points are collinear, then uncrossing two crossing segments does indeed decrease the sums of their lengths, but it was quite a
long and unilluminating proof so I started casting around for examples of other geometry that has been developed in Isabelle/HOL to see if there were any better ideas.
Signed areas
I came across Laura I. Meikle’s PhD thesis which uses formalisation of geometric results as a running example to motivate various usability improvements in automated proof tooling. Included is a
substantial formalisation of a convex hull algorithm, including novel work to show that the algorithm works if some points are collinear, which seems to be considered as something of a special case
in computational geometry even though I imagine it occurs quite often in practice. This motivated me to try a little harder and dig into the cases where some of the points could be collinear and try
and isolate just the problematic cases of collinearity.
The basic idea of Meikle’s formalisation, apparently from Knuth’s Axioms and Hulls, is to use the idea of signed area to classify arrangements of three points. The signed area is the area of the
triangle whose vertices are the three points, if ordered anticlockwise, or the negation of its area if the points are clockwise. This has a particularly simple formula in terms of the coordinates of
the points:
definition signedArea :: "point ⇒ point ⇒ point ⇒ real"
where "signedArea p0 p1 p2 ≡ case (p0, p1, p2) of
((x0,y0), (x1,y1), (x2,y2)) ⇒ (x1-x0)*(y2-y0) - (x2-x0)*(y1-y0)"
(In fact this gives twice the area of the triangle in question, but really we mostly only care about the sign, and the extra factor of ½ makes everything harder, so it’s better just to leave it out).
The key point is that if signedArea p0 p1 p2 is negative then p2 is to the left of the directed line segment from p0 to p1 and if it is positive then p2 is to the right:
lemma signedArea_left_example: "signedArea (0,0) (1,0) (1, 1) > 0" by (simp add: signedArea_def)
lemma signedArea_right_example: "signedArea (0,0) (1,0) (1,-1) < 0" by (simp add: signedArea_def)
If the signed area is zero then the points are collinear. Since we mostly only care about the sign of the signed area, it makes sense to throw away its magnitude like this:
datatype Turn = Left | Right | Collinear
definition turn :: "point ⇒ point ⇒ point ⇒ Turn"
where "turn p0 p1 p2 ≡ if 0 < signedArea p0 p1 p2 then Left
else if signedArea p0 p1 p2 < 0 then Right
else Collinear"
One cute fact about signed areas is that they interact nicely with the linear interpolation used in lineseg. Recalling that lineseg r1 b1 l ≡ (1-l) *R r1 + l *R b1, it follows that:
lemma signedArea_lineseg:
"signedArea r0 b0 (lineseg r1 b1 l)
= (1-l) * signedArea r0 b0 r1
+ l * signedArea r0 b0 b1"
Classifying crossing segments
A bunch of preliminary results lead to a quite neat classification of crossing segments in terms of their turn directions:
lemma segments_cross_turns:
assumes "segments_cross r0 b0 r1 b1"
shows "collinear {r0, b0, r1, b1}
∨ (turn r0 b0 r1 ≠ turn r0 b0 b1
∧ turn r1 b1 r0 ≠ turn r1 b1 b0)"
This means that two line segments intersect either when all four points are collinear, or else when the two endpoints of each line segment are on different sides of the other line, and what’s really
nice is that this includes cases where some of the points are collinear: “different sides” means “left and right” or “left and collinear” or “right and collinear”. In this latter case the segments do
genuinely cross …
lemma turns_segments_cross:
assumes "turn r0 b0 r1 ≠ turn r0 b0 b1"
assumes "turn r1 b1 r0 ≠ turn r1 b1 b0"
shows "segments_cross r0 b0 r1 b1"
… and swapping the endpoints does decrease the length:
lemma non_collinear_swap_decreases_length:
assumes distinct: "r0 ≠ r1" "b0 ≠ b1"
assumes distinct_turns: "turn r0 b0 r1 ≠ turn r0 b0 b1"
"turn r1 b1 r0 ≠ turn r1 b1 b0"
shows "dist r0 b1 + dist r1 b0 < dist r0 b0 + dist r1 b1"
The side-conditions r0 ≠ r1 and b0 ≠ b1 are necessary: without them, swapping endpoints doesn’t change the line segments. In the algorithm we do not allow line segments to share endpoints.
Dealing with collinear points
The cases where the segments intersect and all four points are collinear are easy enough to enumerate:
Of these, the only cases in which swapping the endpoints does not decrease the total length, or remove the intersection, is the last two in which both of the blue points are on the same side of the
two red points. This suggests the following definition to try and exclude the problematic case:
definition badly_collinear :: "point ⇒ point ⇒ point ⇒ point ⇒ bool"
where "badly_collinear r0 b0 r1 b1
≡ ({b0,b1} ⊆ lineseg r0 r1 ` { l. 1 < l }
∨ {b0,b1} ⊆ lineseg r1 r0 ` { l. 1 < l })"
It’s not too hard to show that this is a very problematic case, in the sense that it means that the line segments intersect …
lemma badly_collinear_intersects:
assumes "badly_collinear r0 b0 r1 b1"
shows "segments_cross r0 b0 r1 b1"
… you can’t escape by swapping the endpoints …
lemma badly_collinear_swap_still_badly_collinear:
assumes "badly_collinear r0 b0 r1 b1"
shows "badly_collinear r0 b1 r1 b0"
… and swapping the endpoints doesn’t change the total length of the segments …
lemma badly_collinear_swap_preserves_length:
assumes bad: "badly_collinear r0 b0 r1 b1"
shows "dist r0 b1 + dist r1 b0 = dist r0 b0 + dist r1 b1"
… which means that this case definitely prevents the algorithm from terminating. However, it’s also possible to show that this is the only situation that can prevent the algorithm from terminating:
lemma swap_decreases_length:
assumes distinct: "distinct [r0, b0, r1, b1]"
assumes segments_cross: "segments_cross r0 b0 r1 b1"
assumes not_bad: "¬ badly_collinear r0 b0 r1 b1"
shows "dist r0 b1 + dist r1 b0 < dist r0 b0 + dist r1 b1"
The proof of this is a slightly fiddly set of nested case splits of the form that Isabelle excels at compared with doing this manually. It’s really hard to convince yourself you’ve got all the cases,
particularly when working geometrically, as some of the case splits yield geometrically-impossible situations which can’t be drawn but which you still have to prove to be impossible.
Setting up the algorithm
The analysis above suggests the following setup for modelling the algorithm itself:
locale UncrossingSetup =
fixes redPoints bluePoints :: "point set"
assumes finite_redPoints: "finite redPoints"
assumes finite_bluePoints: "finite bluePoints"
assumes cards_eq: "card redPoints = card bluePoints"
assumes red_blue_disjoint: "redPoints ∩ bluePoints = {}"
assumes not_badly_collinear:
"⋀r1 r2. ⟦ r1 ∈ redPoints; r2 ∈ redPoints ⟧
⟹ card (bluePoints ∩ lineseg r1 r2 ` {l. 1 < l}) ≤ 1"
The condition not_badly_collinear says that if you draw a line segment between any two red points, and extend it past one of the red points, then it only ever hits at most one blue point. This allows
for a lot more situations than simply forbidding any collinearity. For instance, all the red points can be collinear, as can all the blue points:
It’s also permissible to have a bunch of blue points on the line between two red points and vice versa:
In this setup, any uncrossing reduces the total length of the line segments involved:
context UncrossingSetup
lemma uncross_reduces_length:
fixes r1 r2 b1 b2
assumes colours: "r1 ∈ redPoints" "r2 ∈ redPoints" "b1 ∈ bluePoints" "b2 ∈ bluePoints"
assumes distinct: "r1 ≠ r2" "b1 ≠ b2"
assumes segments_cross: "segments_cross r1 b1 r2 b2"
shows "dist r1 b2 + dist r2 b1 < dist r1 b1 + dist r2 b2"
Moreover the total length of any pairing of the red points and the blue points is in this set…
definition valid_total_lengths :: "real set"
where "valid_total_lengths
≡ (λpairs. ∑ pair ∈ pairs. dist (fst pair) (snd pair))
` Pow (redPoints × bluePoints)"
… which is finite …
lemma finite_valid_total_lengths: "finite valid_total_lengths"
… so this means that transitions which reduce the total length form a wellfounded relation, suitable for induction:
definition valid_length_transitions :: "(real × real) set"
where "valid_length_transitions ≡ Restr {(x,y). x < y} valid_total_lengths"
lemma wf_less_valid_total_length: "wf valid_length_transitions"
The algorithm
At this point we are in a position to look at the algorithm itself in TLA:
definition swapPoints :: "point ⇒ point ⇒ point ⇒ point"
where "swapPoints p0 p1 p ≡ if p = p0 then p1 else if p = p1 then p0 else p"
locale Uncrossing = UncrossingSetup +
fixes blueFromRed :: "(point ⇒ point) stfun"
assumes bv: "basevars blueFromRed"
fixes blueFromRed_range :: stpred
defines "blueFromRed_range
≡ PRED ((op `)<blueFromRed,#redPoints> = #bluePoints)"
fixes step :: action
defines "step ≡ ACT (∃ r0 r1 b0 b1.
#r0 ∈ #redPoints
∧ #r1 ∈ #redPoints
∧ #r0 ≠ #r1
∧ #b0 = id<$blueFromRed,#r0>
∧ #b1 = id<$blueFromRed,#r1>
∧ #(segments_cross r0 b0 r1 b1)
∧ blueFromRed$ = (op ∘)<$blueFromRed,#(swapPoints r0 r1)>)"
fixes Spec :: temporal
defines "Spec ≡ TEMP (Init blueFromRed_range ∧ □[step]_blueFromRed
∧ WF(step)_blueFromRed)"
We have a single state variable, blueFromRed, which assigns the corresponding blue point to each red point. The predicate blueFromRed_range holds initially, and each (non-stuttering) transition finds
two intersecting line segments and uncrosses them. The expression
blueFromRed$ = (op ∘)<$blueFromRed,#(swapPoints r0 r1)>)"
is the Isabelle translation of what would be written in TLA+ something like
blueFromRed' = blueFromRed ∘ (swapPoints r0 r1)
i.e. that updating the blueFromRed function simply swaps an assignment over and leaves everything else alone, and the predicate blueFromRed_range is a translation of
blueFromRed ` redPoints = bluePoints
i.e. that blueFromRed is a surjection from the red points onto the blue points (and these sets are finite and have equal cardinalities which means it’s a bijection). It’s not too hard to show that
this predicate is an invariant of the algorithm:
lemma blueFromRed_range_Invariant: "⊢ Spec ⟶ □blueFromRed_range"
Furthermore it implies that the total length of all the line segments is in valid_total_lengths:
definition total_length :: "real stfun"
where "total_length s ≡ ∑ r ∈ redPoints. dist r (blueFromRed s r)"
lemma blueFromRed_range_valid_total_length:
"⊢ blueFromRed_range ⟶ total_length ∈ #valid_total_lengths"
We can define what it means for all the line segments to be uncrossed:
definition all_uncrossed :: stpred
where "all_uncrossed ≡ PRED (∀ r0 r1 b0 b1.
#r0 ∈ #redPoints
∧ #r1 ∈ #redPoints
∧ #r0 ≠ #r1
∧ #b0 = id<blueFromRed,#r0>
∧ #b1 = id<blueFromRed,#r1>
⟶ ¬ #(segments_cross r0 b0 r1 b1))"
It follows that the algorithm can only stutter once all line segments are uncrossed:
lemma stops_when_all_uncrossed:
"⊢ Spec ⟶ □($all_uncrossed ⟶ blueFromRed$ = $blueFromRed)"
All of the preliminary work above allows us to show that a step transition causes the change in total_length to be in valid_length_transitions:
lemma step_valid_length_transition:
assumes "(s,t) ⊨ step"
assumes "s ⊨ blueFromRed_range"
shows "(total_length t, total_length s) ∈ valid_length_transitions"
From this, and because valid_length_transitions is a wellfounded relation, there can only be finitely many step transitions, which means that eventually the algorithm does terminate as required.
lemma "⊢ Spec ⟶ ◇□all_uncrossed"
This algorithm is interesting because its correctness proof relies on a geometric argument, something that I’d not seen before. The actual computational content (i.e. the interaction with TLA) was
quite small and simple, once the geometry was all sorted out.
I’ve never done any formalisation of geometry before, and the idea of signed areas helped to show me a way of working that was a lot more elegant than the bare-hands coordinate geometry that I
initially tried. It also helped massively in the analysis of the problematic situations involving collinear points. For these situations it was very useful to work in Isabelle as there were a few
places where I had to proceed by enumerating cases and it’s all to easy to miss one case when working by hand, especially when working with geometry and doubly so when trying to consider degenerate
cases. The proof of swap_decreases_length in particular involved nested case splits that would have been tricky to cover exhaustively by hand.
This algorithm works in terms of real numbers, which made it simple to analyse but means it cannot be implemented as-is. Naively replacing all the real numbers with floating-point double variables
seems like it’ll give rise to some problems, particularly for points that are close to collinear, in part because the formula for signedArea has lots of subtractions of quantities that could be quite
similar. This needs more thought as I haven’t yet been able to construct an obviously-bad situation for double variables, so bad behaviour is just a hunch. An obvious cop-out is to compute exactly
using integer or rational coordinates instead. | {"url":"https://davecturner.github.io/2018/07/24/uncrossing-lines.html","timestamp":"2024-11-05T20:05:50Z","content_type":"text/html","content_length":"32441","record_id":"<urn:uuid:71293c75-dab5-422e-87c8-c47dd1d960c1>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00672.warc.gz"} |
New Probit Models: U.S. Recession Risk is Currently Low
Last week I wrote about using statistical tools to forecast recessions and referenced James Picerno, who provided the inspiration for this idea through his articles on the Capital Spectator Economic
Trend Index (CS-ETI) and the use of probit models to estimate the probability of a recession. I used Picerno's explanatory variables as a starting point for developing two new recession forecasting
models, which I will describe in this article.
Review of the Capital Spectator Framework
Before discussing the new models, let's review the Capital Spectator (CS) framework. Picerno uses 18 variables (see Figure 1 below) to construct a diffusion index. The CS diffusion index represents
the percentage of the 18 independent variables that are increasing, which is indicative of an expansion. If all of the variables were trending higher, the value of the diffusion index would be 1.0
or 100 percent. If nine of the 18 variable were rising, the value of the diffusion index would be 0.50 or 50 percent.
Picerno then uses a probit model to convert the diffusion index values into actual recession probability estimates. Picerno explains his process:
"I'm using a standard probit model that uses the monthly data for the 3-month average of CS-ETI as the independent variable and NBER's monthly recession readings (0=no recession, 1=recession). I
estimate the cumulative normal distribution of the alpha and beta coefficients via a maximum likelihood technique in both Excel and R. It's all relatively conventional statistics."
Motivation for New Models
As I mentioned last week, I was intrigued by the power of these tools and wanted to experiment with a number of different variable combinations. I was also interested in the possibility of using the
recession probability estimates with my existing trading strategies, which are based primarily on technical analysis.
I also have a more practical reason for wanting to develop my own objective, systematic recession forecasting models. Over the past year, I placed undue weight on ECRI's fall 2011 recession call,
which caused me to overrule subsequent buy signals from my systematic trading strategies - all of which would have been quite profitable. While these decisions did not generate any actual losses,
there were definitely opportunity costs.
ECRI has an excellent reputation and track record, but their methodology is proprietary and it now appears that their 2011 recession call for the U.S. was premature at best and possibly unwarranted.
Given my experience over the past year, I could no longer justify relying solely on ECRI's proprietary black box for recession forecasts. I needed to develop my own systematic, transparent model for
forecasting recessions to augment my existing technical strategies.
New Model Construction
Picerno's work was an excellent place to start. His model was easy to understand and has performed very well historically. Nevertheless, I had a few ideas for improvements. First, instead of
building one model, I created two. The first model predicts the probability of being in a recession, as defined by the National Bureau of Economic Research (NBER). This is the same dependent
variable used by Picerno at CapitalSpectator.com.
The second model was more difficult to estimate, but is potentially more useful. Markets typically peak before recessions begin and reach their troughs before recessions end. As a result, the
second model attempts to forecast the probability of being between the peak and trough of an NBER-defined recession. Peaks and troughs not associated with NBER recessions were ignored.
To calculate the diffusion index, Picerno uses a 12-month look-back period for every variable, which eliminates possible seasonal adjustment biases, but does not necessarily minimize forecasting
errors. Instead, I calculated the optimal look-back period and threshold for each independent variable in isolation - to best explain the behavior of the dependent variables. Based on this
research, I discarded several of Picerno's variables and added two new variables that were derived from leading economic indicators.
Picerno's diffusion index (CS-ETI) represents the percentage of independent variables that are trending higher. I reversed this convention for my models. My diffusion index represents the
percentage of independent variables that are indicating a recession. Both of my new models use the same diffusion index, which is based on a common set of 15 independent or explanatory variables.
Picerno's probit model uses a single independent variable: "the 3-month average of CS-ETI." Instead, I use the most recent value of the diffusion index, which allows the model to respond faster.
I also added a second independent variable to both of my probit models: the change in the diffusion index over the past several months. This means that the models can differentiate between entering
a recession (when the diffusion index is increasing) and exiting a recession (when the diffusion index is decreasing). This allows the probit models to respond directly to changes in the diffusion
index, which was especially advantageous in estimating the peak-trough model.
Diffusion Model Results
Figure 2 below is a graph of the resulting diffusion index from 1960 to early November 2012. The red line is the value of the diffusion index, which uses the left vertical axis. The gray shaded
regions denote the NBER recessions. Finally, the blue line represents the S&P 500 index, which uses a log scale on the right vertical axis.
The values of the diffusion index are not as intuitive as the probit model probability forecasts, but the historical diffusion index data suggest a possible rule-of-thumb: the probability of an NBER
recession is high when the value of the diffusion index exceeds 40 percent. This threshold also eliminates some false signals. Currently only one out of 15 variables is indicating a recession; the
resulting diffusion index value is only 6.7 percent (1/15).
Recession Probit Model Results
The first probit model uses the value of the diffusion index and the change in the diffusion index over the past few months to estimate the probability that the U.S. economy is currently in a
recession. The historical probit model estimates are depicted in Figure 3 below (red line - left vertical axis). Again, the gray shaded regions represent NBER recessions and the blue line
represents the log value of the S&P 500 index.
The model fits the data extremely well, but you will notice the NBER recessions begin after the S&P 500 peaks and end after the S&P 500 bottoms. Nevertheless, the model is still a very useful tool.
When the recession probability estimate exceeds 50 percent, a U.S. recession is a virtual certainty. Based on the most recent probit model forecast, the probability that the U.S. is currently in a
recession is less than 1.0 percent.
Peak-Trough Probit Model Results
The peak-trough probit model (Figure 4 below) does not fit the data as well as the recession model. As you would expect, it is much more difficult to predict market peaks and troughs using economic
data. However, a 40-50 percent threshold would eliminate most of the false signals, but would still provide more warning than the probit recession model.
The peak-trough estimates (red line - left vertical axis) represent the probability that the S&P 500 is currently between a peak and trough associated with a NBER recession. These probability
estimates should increase before a recession begins and fall before a recession ends.
The gray shaded regions in Figure 4 below represent the peak-trough periods associated with NBER recessions - NOT the recession periods that were depicted in the previous charts. To determine the
peak-trough periods, I identified the highs and lows of the S&P 500 within 6-9 months of the NBER recession periods.
The probability that the S&P 500 is currently between a peak and trough associated with a NBER recession in the U.S. is only 11.1 percent, well below the suggested warning threshold.
Practical Considerations
I have been careful to note that the above models forecast U.S. recessions, not global recessions. Until I analyzed the worldwide economic data in more detail, I did not fully appreciate the
importance of this distinction.
I have written several posts about the JP Morgan's Global Manufacturing PMI, which is an excellent leading indicator for global recessions. The weakness in the Global Manufacturing PMI has been
prophetic; the majority of OECD (Organization for Economic Co-operation and Development) countries are currently undergoing contractions.
While that raises the risk of a U.S. recession, it is not a foregone conclusion. Every U.S. recession in the past 50 years occurred during an OECD global recession, but not every OECD global
recession resulted in a U.S. recession. Given the precarious state of the global economy, if a U.S. recession were to occur, it could come on quickly and it could be severe. As a result, the probit
models should be monitored closely for signs of any deterioration.
The above models use revised economic data, which means the forecasts will change when new data revisions are released. It also means that the historical forecasts presented above would have been
different in real-time. However, I did incorporate appropriate lags into the data, accounting for the release dates for each data series.
The other important caveat is that significant equity market pullbacks are not limited to recessionary environments. On Black Monday in October 1987, the S&P 500 Index declined by over 20 percent in
a single day. And the economy was not in a recession at the time. Even if cycle forecasts were perfect, they could not prevent trading losses. However, when combined with technical analysis and
market sentiment, they represent a formidable arsenal of decision-making resources.
Here are several examples of how recession probability forecasts could be combined with technical tools and strategies:
• Verifying recession probabilities are low when entering bullish trend-following trades
• Ensuring recession probabilities are high when entering bearish trend-following trades
• Buying pullbacks when recession risk is low
• Selling market spikes when recession risk is high
I am pleased with the initial models, but I plan to continue to research new explanatory variables that could lead to improved performance. Now that I have the historical data, it would be very easy
to add new variables and re-estimate the model coefficients.
I have also developed neural network (NN) models in the past and would like to attempt to create a NN model to forecast the peak-trough probabilities. Unfortunately, my previous NN development
platform is incompatible with the Windows operating system on my new computer. If I can find a reasonably-priced NN software package with an adequate feature set, I may pursue this approach as well.
I will closely monitor the forecasts for both of the new probit models going forward and I also plan to continue to track the CS-ETI. The advantages of having continual U.S. recession and
peak-trough probability estimates cannot be overstated.
The effects of recessions are not limited to equities. Recessions affect all markets: equities, bonds, currencies, energy, metals, grains, meats, and softs. These tools have the potential to
increase returns and reduce risk across all sectors and strategies.
Your comments, feedback, and questions are always welcome and appreciated. Please use the comment section at the bottom of this page or send me an email.
Do you have any questions about the material? What topics would you like to see in the future?
If you found the information on www.TraderEdge.Net helpful, please pass along the link to your friends and colleagues or share the link with your social or professional networks.
The "Share / Save" button below contains links to all major social and professional networks. If you do not see your network listed, use the down-arrow to access the entire list of networking sites.
Thank you for your support.
Brian Johnson
Copyright 2012 - Trading Insights, LLC - All Rights Reserved.
This entry was posted in Economic Indicators, Fundamental Analysis, In-Depth Article, Market Commentary, Market Timing, Recession Forecasting Model and tagged economic cycle, economic cycle
forecasting, probit model, recession, recession forecast 2012, recession forecasting, recession modeling, trader, traders. Bookmark the permalink.
5 Responses to New Probit Models: U.S. Recession Risk is Currently Low
1. According to the Congressional Budget Office the probability of a near term recession is directly related to the outcome of the current sequestration problem. Would the probit model address that
kind of circumstance.
Best regards
Pete Kasper
□ Pete,
Thanks for the insightful and timely question. The answer to your question is both yes and no. The beginning and end of NBER recessions are defined (after the fact) by the trends in the
economic data, which should be captured by the probit recession model.
However, if the recession were triggered by an instantaneous rise in taxes and a discrete drop in spending (the fiscal cliff), it would be very difficult if not impossible for the peak-trough
probit model to provide any advance warning of the market decline.
A failure to resolve the fiscal cliff would greatly accelerate the normal cyclical transition from expansion to contraction, which would compromise the performance of the peak-trough model
and reduce the value of the recession model. Nevertheless, if the recession were long and severe (which it could be given the fragile state of the global economy), both models should still
provide some downside protection. They would also be useful in timing the end of the recession, regardless of the cause.
Unfortunately, forecasting models in general cannot deal with exogenous shocks that are not directly captured by the data. This includes terrorist attacks, conflict in the Middle East,
political disruptions, exits from the Euro, country defaults, etc.
I should have covered this more fully in the article. Thanks for the question and for your continued interest in Trader Edge.
Brian Johnson | {"url":"https://traderedge.net/2012/11/08/new-probit-models-us-recession-risk-is-currently-low/","timestamp":"2024-11-13T13:02:45Z","content_type":"text/html","content_length":"81426","record_id":"<urn:uuid:ebd04cf7-8e05-441a-8635-8a4454ef0027>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00342.warc.gz"} |
Inference using the revdbayes Package
We consider two uses of posterior predictive inference. In the first we simulate from the posterior predictive distribution many replicates of the observed data. A comparison of these replicates to
the observed data provides a way to assess the fit of the model: systematic differences in behaviour between the observed and replicated data suggest features of the data that are not represented
well by the model. For greater detail see Chapter 6 of Gelman et al. (2014). Secondly, we define as the variable of interest the largest value \(M_N\) to be observed over a future time period of
length \(N\) years and estimate under the extreme value model the conditional distribution of \(M_N\) given the observed data. This accounts for uncertainty in model parameters and for uncertainty
owing to the variability of future observations.
We produce posterior samples for later use. The argument nrep = 50 to rpost results in 50 simulated replicates of the data being returned in object$data_rep.
### GEV model for Port Pirie Annual Maximum Sea Levels
mat <- diag(c(10000, 10000, 100))
pn <- set_prior(prior = "norm", model = "gev", mean = c(0,0,0), cov = mat)
gevp <- rpost(n = n, model = "gev", prior = pn, data = portpirie, nrep = 50)
### GP model for Gulf-of-Mexico significant wave heights
u <- quantile(gom, probs = 0.65)
fp <- set_prior(prior = "flat", model = "gp", min_xi = -1)
gpg <- rpost(n = 1000, model = "gp", prior = fp, thresh = u, data = gom, nrep = 50)
Posterior predictive model checking
The pp_check method pp_check.evpost provides an interface to the posterior predictive checking graphics available in the bayesplot package (Gabry and Mahr 2017). For details see the bayesplot
vignette Graphical posterior predictive checks. bayesplot functions return a ggplot object that can be customised using the ggplot2 package (Wickham 2009).
#> This is bayesplot version 1.11.1
#> - Online documentation and vignettes at mc-stan.org/bayesplot
#> - bayesplot theme set to bayesplot::theme_default()
#> * Does _not_ affect other ggplot2 plots
#> * See ?bayesplot_theme_set for details on theme setting
We show three examples of the graphical posterior predictive checks that are available from bayesplot.
Overlaid density and distribution functions
Calling pp_check with type = "overlaid" produces plots in which either the empirical distribution functions or kernel density estimates of the observed and replicated data are compared. If the model
fits well then the observed data should look like a typical replication from the posterior predictive distribution, which seems to be the case here.
Multiple plots
Using type = multiple produces multiple plots, rather than overlaid plots, with subtype indicating the type of plots to be drawn. By default only 8 plots of replicated data are drawn, but this can be
changed using the nrep argument. Again, the plot of the observed data is not obviously different from those of the replicated data.
Posterior predictive test statistics
The default setting for pp_check is to produce a plot that compares the values of test statistics for the observed and replicated data. The argument stat defines the test statistic (or a pair of test
statistics) to use, using either the name of a standard function or a user-defined function. For a model that fits well the value of the statistic calculated from the observed data should not be
unusual compared to the values calculated from the replicated data. In this example, the plots do not suggest clear lack-of-fit.
#> `stat_bin()` using `bins = 30`. Pick better value with `binwidth`.
pp_check(gpg, stat = "max")
#> `stat_bin()` using `bins = 30`. Pick better value with `binwidth`.
pp_check(gpg, stat = c("min", "max"))
iqr <- function(y) diff(quantile(y, c(0.25, 0.75)))
pp_check(gpg, stat = "iqr")
#> `stat_bin()` using `bins = 30`. Pick better value with `binwidth`.
Posterior predictive extreme value inference
Posterior predictive inferences for the largest value \(M_N\) to be observed over a future time period of length \(N\) years is performed by the predict method for the objects of class evpost
returned from rpost. Objects returned from predict.evpost have class evpred. The plot method for these objects produces graphical summaries of the output from predict.
To perform predictive inference about \(M_N\) we need to provide, or assume, information about the timescale covered by the data. To do this we need to supply the mean number npy of non-missing
observations per year, either in the call to rpost or the call to predict. For the GEV and OS models it is often the case that the input data are annual maxima, so npy = 1 is the default value if npy
is not supplied. For the PP model a similar assumption is made: see the documentation for predict.evpost for details. For the binomial-GP model (note that we need the binomial part in order to
account for the rate at which the threshold is exceedance) npy must be provided by the user.
Let \(F_{M_N}(z; \theta)\) denote the distribution function of \(M_N\) conditional on the parameters \(\theta\) of an extreme value model and let \(\pi(\theta \mid x)\) be the posterior distribution
of \(\theta\) given the observed data. Then, if we assume that given \(\theta\) future observations are independent of the data \(x\) then posterior predictive distribution function of \(M_N\) given
only the data \(x\) is given by \[ P(M_N \leqslant z \mid x) = \int F_{M_N}(z; \theta) \, \pi(\theta \mid x) {\rm ~d}\theta. \] We estimate \(P(M_N \leqslant z)\) using \[ \hat{P}(M_N \leqslant z \
mid x) = \frac1m \sum_{j=1}^m F_{M_N}(z; \theta_j), \] where \(\theta_j, j=1, \ldots, m\) is a sample from the posterior distribution \(\pi(\theta \mid x)\).
The form of \(F_{M_N}(z; \theta)\) depends on the extreme value model. The GEV, OS and PP models are each parameterised by the location, scale and shape parameters of a GEV distribution. Let \(G(z; \
theta)\) be the GEV distribution function for an annual maximum under a given such model, after appropriate adjustment for the value of npy. Then, \(F_{M_N}(z; \theta) = G(z; \theta)^{N}\). For the
binomial-GP model, provided that \(z\) is the greater than the threshold \(u\), \(F_{M_N}(z; \theta) = F(z; \theta)^{n_{py} N}\), where \(n_{py}\) is the value of npy and \[ F(z; \theta) = 1-p_u \
left\{1+\xi\left(\frac{z-u}{\sigma_u}\right)\right\}^{-1/\xi}. \] See Northrop, Attalides, and Jonathan (2017) for greater detail.
We repeat the posterior simulation for the Gulf-of-Mexico example, changing model = gp to model = bingp in the call to rpost to add inferences about the probability of threshold exceedance based on a
binomial distribution.
### Binomial-GP model for Gulf-of-Mexico significant wave heights
u <- quantile(gom, probs = 0.65)
fp <- set_prior(prior = "flat", model = "gp", min_xi = -1)
bp <- set_bin_prior(prior = "jeffreys")
# We need to provide the mean number of observations per year (the data cover a period of 105 years)
npy_gom <- length(gom)/105
bgpg <- rpost(n = 1000, model = "bingp", prior = fp, thresh = u, data = gom,
bin_prior = bp, npy = npy_gom, nrep = 50)
Posterior predictive density and distribution functions
We plot, for the analyses of the portpirie (GEV) and gom (binomial-GP) datasets, the estimated posterior predictive density functions and distributions functions of \(M_N\) for \(N = 100\) and \(N =
1000\). The argument n_years gives the value(s) of \(N\).
# GEV (Portpirie)
plot(predict(gevp, type = "d", n_years = c(100, 1000)), cex = 0.7)
plot(predict(gevp, type = "p", n_years = c(100, 1000)), cex = 0.7)
# binGP (Gulf-of-Mexico)
plot(predict(bgpg, type = "d", n_years = c(100, 1000)), cex = 0.7)
plot(predict(bgpg, type = "p", n_years = c(100, 1000)), cex = 0.7)
As one would expect there is greater uncertainty for \(N=1000\) than for \(N=100\). In making inferences over a period of length 1000 years a large degree of uncertainty is to be expected. For the
Gulf-of-Mexico data in particular, this results in appreciable probability on values of significant wave height thought by experts to be physically unrealistic.
Posterior predictive intervals
The default setting for predict.evpost is to estimate posterior predictive intervals for \(M_N\). In addition to n_years we can specify the probability level level of the interval(s), that is, the
estimated probability that \(M_N\) lies within the interval. Two types of interval can be calculated. One is an equi-tailed interval in which the estimated probability that \(M_N\) lies below the
lower limit of the interval is equal to the estimated probability that \(M_N\) lies above the upper limit of the interval. Typically, there will exist intervals of the required probability level that
is shorter than the equi-tailed interval. predict.evpost can search for the shortest such interval. If the estimated posterior predictive distribution is unimodal then this interval is the highest
predictive density (HPD) interval, with the property that at all values inside the interval the estimated posterior predictive density is greater than at all values outside the interval. If hpd =
FALSE then only equi-tailed intervals are returned. If hpd = TRUE then both types of interval are returned.
The plot.evpred method plots the estimated posterior predictive intervals for each value in n_years and each level in level. The argument which_int controls which of the two types of interval are
included. The dashed lines give the HPD intervals and the solid lines the equi-tailed intervals. Each line is labelled by the level of the interval.
i_gevp <- predict(gevp, n_years = c(100, 1000), level = c(50, 95, 99), hpd = TRUE)
plot(i_gevp, which_int = "both")
#> Warning in graphics::par(u): argument 1 does not name a graphical parameter | {"url":"https://cran.mirror.garr.it/CRAN/web/packages/revdbayes/vignettes/revdbayes-c-predictive-vignette.html","timestamp":"2024-11-03T15:41:51Z","content_type":"text/html","content_length":"94038","record_id":"<urn:uuid:2dab42a3-ca0e-496d-b041-01ebac875d50>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00156.warc.gz"} |
New stub hom-connection. I should figure it out once. While tensor product is involved in many constructions in algebra, some are dual with Hom instead, for example there are contramodules in
addition to comodules over a coring. In similar vain hom-connections were devised, but there are some really intriguing examples (including superconnections, right connections of Manin etc.) and
there are relations to examples of noncommutative integration of various kind. | {"url":"https://nforum.ncatlab.org/discussion/2184/homconnection/","timestamp":"2024-11-09T10:27:19Z","content_type":"application/xhtml+xml","content_length":"12215","record_id":"<urn:uuid:ff35ec38-44b9-46ec-aa32-459ebc32ac6d>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00515.warc.gz"} |
Perceptrons, Explained – R-Craft
Perceptrons, Explained
This article is originally published at https://www.sharpsightlabs.com
Unless you’ve been living in cave somewhere in remote Eurasia, you should know that deep learning is very popular, and very powerful.
A variety of tools from self driving cars to Chat GPT use deep learning; they complex neural networks with many hidden layers.
In the modern Tech environment, it can be very valuable to understand deep learning and neural networks.
But before you understand deep learning – which is sort of an advanced topic – it helps to understand the simple foundations of neural networks.
In particular, it’s helpful to first learn about the simplest neural network structure, the artificial neuron, which we call a perceptron.
So being the generous guy that I am, I’m going to explain perceptrons in this blog post. I’ll explain what perceptrons are, how they’re structured, and how they fit into the bigger picture of deep
learning and neural networks.
If you need something specific, just click on one of the following links.
Table of Contents:
A Quick Introduction to Perceptrons
You’re probably familiar with deep learning and artificial neural networks or at least heard the terms.
Perceptrons are related to deep learning and ANNs.
In fact, perceptrons are one of the simplest types of artificial neuron, so we can think of them as the foundation of neural networks.
Perceptrons are Based on Biological Neurons
Originally proposed in 1957, perceptrons were one of the earliest components of artificial neural networks.
The structure of the perceptron is based on the anatomy of neurons.
Neurons have several parts, but for our purposes, the most important parts are the dendrites, which receive inputs from other neurons, and the axon, which produces outputs.
The outputs of an axon can then serve as the inputs to the dendrites of another neuron (which allow neurons to operate in networks, like in the human brain).
Importantly, a neuron will produce an output only if it receives a sufficient number of input signals at the dendrites. Remember: neurons in a brain are organized into a network of many neurons. So
the dendrites of a neuron typically receive inputs from many other neurons.
Neuron Activation
Neurons “fire” – that is, produce an output – in an all or nothing way. The outputs of a neuron are essentially 0 or 1. On or off.
A neuron will “fire” if the input signals at the dendrites are sufficiently large, collectively. If the amount of input signal at the dendrites is high enough, the neuron will “fire” an produce an
output. But if the amount of input signal is insufficient, the neuron will not produce a output. Put simply, the neuron sums up the inputs, and if the collective input signals meet a certain
threshold, then it will produce an output. If the collective input signals are under the threshold, it will not produce an output.
This is, of course, a very simple explanation of how a neuron works (because they are very complex at the chemical level), but it’s roughly accurate.
Why is this important?
Because scientists suggested that we would be able to create artificial neurons, modeled after animal neurons, which would be able to process input data in a similar way.
How Perceptrons are Structured
Perceptrons are essentially artificial neurons.
They have a very similar structure to biological neurons.
There are inputs, the inputs are weighted, and if the weighted inputs collectively reach a certain threshold, then the perceptron produces an output.
Let’s quickly discuss a few of these parts of a perceptron.
Perceptrons have multiple numeric input values,
In a perceptron, these input x values can be real numbers.
However, in other artificial neuron models, the allowed input values are constrained. For example, in the McCulloch-Pitts neuron (which is slightly different from a perceptron), the input values are
binary 0/1 values.
In a perceptron, every input value is multiplied by a weight,
So collectively, we have the inputs
Alternatively, if you think of the input x values as a vector
The perceptron takes the product of the inputs and the weights, and sums them up.
This can be represented with the equation:
But in a perceptron and other forms of artificial neurons, we typically apply a function to these summed weighted inputs, which brings us to the activation function.
Step Function (i.e., Activation Function)
As you learn more about artificial neurons and neural networks, you’ll learn a lot more about activation functions.
An activation function is a function that’s applied to the summed inputs times weights,
There are actually a variety of different activation functions, but the simplest is arguably the step function.
The step function, as the name implies, visually looks like a step.
In the step function visualized above, if the input value is greater than 0, then the output of the step function is 1.
If the input value is less than 0 or equal to 0, then the output of the step function is 0.
So the step function forces the output into a binary 0/1 range.
The step activation function is important for the final output, which we’ll see in the next section.
We can model the output of a perceptron like this:
That’s sort of an English-language version.
A slightly more mathematical version is as follows:
Again though, this assumes that we’re strictly using a step function as our activation function (i.e., that we’re modeling a traditional perceptron).
Perceptrons in Machine Learning and Deep Learning
So now that we’ve looked at the structure of a Perceptron, let’s talk about how we use them and where they sit in the overall discussion of machine learning and deep learning.
Perceptrons are a type of Binary Linear Classifier
Perceptrons are a type of simple machine learning model.
Most specifically, perceptrons are a type of classifier.
Recall what we discussed earlier about the step function output of Perceptrons. Perceptrons output either a 0 or 1, depending on whether the inputs collectively reach a threshold.
So, we can characterize a perceptron as a simple type of binary linear classifier.
By themselves, they can solve classification problems with 2 possible target classes, where the data are linearly separable.
We Can Combine Perceptrons into Networks
Although it’s possible to use a perceptron all by itself as a machine learning technique, they’re much more interesting when we combine them together.
One simple neural network architecture that uses perceptrons is the multilayer perceptron.
Put simply, a multilayer perceptron combines together multiple perceptrons into an architecture that has multiple “layers.”
A multilayer perceptron has an input layer and an output layer (much like a lone perceptron).
But importantly, multilayer perceptrons also have what’s called a “hidden layer.”
What’s a hidden layer?
A hidden layer is a layer of neurons between the input layer and the output layer.
So a multilayer perceptron architecture must have at least one hidden layer.
The addition of these hidden layers allows multilayer perceptrons to solve more complex problems.
Perceptrons and Multilayer Perceptrons Form the Foundation of Deep Learning
So why does all this matter?
Perceptrons are the foundation of multilayer perceptrons, and multilayer perceptrons form the foundation of deep learning.
One of the simplest deep network architectures is a multilayer perceptron with many stacks of hidden layers. A “deep” stack of hidden layers (that’s why we call it “deep” learning).
There’s More to Learn
I’ve tried to give you an overview of perceptrons in this blog post.
But, I’ve left a lot out.
Particularly when we start talking about multi layer perceptrons and deep learning, there are a lot of additional, relevant details.
So if you want to understand and use neural networks, there’s a lot more to learn.
For more tutorials about machine learning and deep learning, sign up for our email list
All that said, if you want to learn more about machine learning and deep learning, then sign up for our email list.
Here at Sharp Sight, we publish multiple tutorials per month about:
• NumPy
• Pandas
• Base Python
• Scikit learn
• Machine learning
• Deep learning
• … and more.
And when you sign up for our email list, you’ll get these tutorials delivered direct to your inbox.
Thanks for visiting r-craft.org
This article is originally published at https://www.sharpsightlabs.com
Please visit source website for post related comments. | {"url":"https://r-craft.org/perceptrons-explained/","timestamp":"2024-11-08T21:18:13Z","content_type":"text/html","content_length":"132385","record_id":"<urn:uuid:d2d4de66-d4fc-4b92-9743-a807496c1a90>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00765.warc.gz"} |
Worksheet On Converting Improper Fractions To Mixed Numbers 2024 - NumbersWorksheets.net
Worksheet On Converting Improper Fractions To Mixed Numbers
Worksheet On Converting Improper Fractions To Mixed Numbers – Portion Amounts Worksheets are a very good way to apply the thought of fractions. These worksheets are meant to educate individuals in
regards to the inverse of fractions, and may assist them to be aware of the connection among fractions and decimals. Many students have trouble converting fractions to decimals, but they can benefit
from these worksheets. These computer worksheets will assist your university student to get far more informed about fractions, and they’ll make sure you have fun carrying out them! Worksheet On
Converting Improper Fractions To Mixed Numbers.
Free mathematics worksheets
Consider downloading and printing free fraction numbers worksheets to reinforce their learning if your student is struggling with fractions. These worksheets may be tailored to match your person
requirements. Additionally they include solution keys with comprehensive recommendations to steer your university student throughout the process. A lot of the worksheets are split up into distinct
denominators so your pupil can exercise their expertise with an array of issues. After, individuals can recharge the site to acquire a different worksheet.
These worksheets help individuals comprehend fractions by generating comparable fractions with assorted denominators and numerators. They have got lines of fractions which are counterpart in worth
and each row carries a lacking denominator or numerator. The scholars fill the lacking numerators or denominators. These worksheets are useful for rehearsing the expertise of reducing fractions and
discovering fraction procedures. They come in distinct quantities of problems, starting from very easy to moderate to difficult. Every worksheet consists of in between ten and thirty difficulties.
Free of charge pre-algebra worksheets
No matter if you are in need of a free pre-algebra portion numbers worksheet or you will need a printable version to your individuals, the Internet can present you with many different choices. Some
offer you cost-free pre-algebra worksheets, by incorporating notable exceptions. When several of these worksheets may be customized, a couple of cost-free pre-algebra fraction numbers worksheets
could be acquired and printed for added process.
One particular good source for down loadable free of charge pre-algebra fraction figures worksheet will be the University or college of Maryland, Baltimore County. Worksheets are free to use, but you
should be careful about uploading them on your own personal or classroom website. You are free to print out any worksheets you find useful, and you have permission to distribute printed copies of the
worksheets to others. You can use the free worksheets as a tool for learning math facts, or as a stepping stone towards more complex concepts.
Cost-free math worksheets for course VIII
If you are in Class VIII and are looking for free fraction numbers worksheets for your next maths lesson, you’ve come to the right place! This assortment of worksheets is based on the CBSE and NCERT
syllabus. These worksheets are fantastic for scrubbing on the concepts of fractions to be able to do far better with your CBSE exam. These worksheets are really easy to use and deal with each of the
concepts which are necessary for reaching great spots in maths.
Many of these worksheets involve evaluating fractions, ordering fractions, simplifying fractions, and surgical procedures using these phone numbers. Use genuine-daily life good examples during these
worksheets which means your students can relate to them. A dessert is simpler to correspond with than 50 % of a rectangular. An additional fantastic way to practice with fractions is using
counterpart fractions types. Try using the real world cases, say for example a half-dessert plus a rectangular.
Cost-free arithmetic worksheets for changing decimal to fraction
If you are looking for some free math worksheets for converting decimal to a fraction, you have come to the right place. These decimal to portion worksheets may be found in a variety of formats. You
can download them inhtml and PDF, or random format. Most of them come with a solution crucial and can even be colored by little ones! You can use them for summertime discovering, arithmetic centres,
or as part of your normal arithmetic course load.
To transform a decimal into a small percentage, you have to simplify it first. Decimals are written as equivalent fractions if the denominator is ten. Furthermore, you can also get worksheets on the
way to convert combined numbers to some small fraction. Cost-free arithmetic worksheets for transforming decimal to fraction come with combined numbers and examples of these two conversion processes.
However, the process of converting a decimal to a fraction is easier than you might think. Adopt these measures to begin.
Gallery of Worksheet On Converting Improper Fractions To Mixed Numbers
Leave a Comment | {"url":"https://www.numbersworksheets.net/worksheet-on-converting-improper-fractions-to-mixed-numbers/","timestamp":"2024-11-13T00:00:20Z","content_type":"text/html","content_length":"63833","record_id":"<urn:uuid:a4a06763-60c7-4178-a139-015039d836e6>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00295.warc.gz"} |
The Ultimate Guide to Using a Savings Interest Calculator - Age calculator
The Ultimate Guide to Using a Savings Interest calculator
When it comes to saving money, one of the most significant factors you need to consider is how to make the most of your financial resources. One way to do that is by utilizing a savings interest
calculator. A savings interest calculator is a valuable tool that enables you to know the potential growth of your savings and aids you in making sound financial decisions.
This article serves as the ultimate guide to utilizing a savings interest calculator to assist you in understanding its usefulness and how to use it effectively.
What Is a Savings Interest calculator?
A savings interest calculator is a simple online tool that calculates the estimated interest earned on your savings account over a specific period. Usually, savings interest calculators require you
to input the initial balance, the amount you plan to save regularly, the interest rate, and the number of years you plan to save. Once you input all the necessary details, the calculator will
generate the total amount of your savings, the total interest earned, and the final balance.
How to Use a Savings Interest calculator
Using a savings interest calculator is easy and straightforward. Below are the steps:
1. Find a savings interest calculator
There are many savings interest calculators available online. You can search for one that is reputable and easy to use.
2. Input the initial balance
The first step is to input the amount of money you have in your savings account. If you don’t have a savings account yet, you can input zero or the amount you intend to deposit initially.
3. Input regular savings
If you plan to save regularly, you can input the amount of money you intend to save periodically. For instance, if you intend to save $100 every month, you can input that in the calculator.
4. Input the interest rate
The interest rate is the rate at which your money will grow when you save it in a bank account. Input the interest rate of your bank account.
5. Set the savings period
Input the number of years you plan to save your money. This helps to determine the total amount you will have saved at the end of the savings period.
6. Calculate
Once you input all the necessary details, click the calculate button, and the calculator will generate the total amount of your savings, the total interest earned, and the final balance.
Benefits of Using a Savings Interest calculator
There are several benefits of utilizing a savings interest calculator. Some of them include:
1. Helps in making informed decisions
A savings interest calculator aids you in making wise financial decisions. It enables you to have a better understanding of how much you can save for future projects and how long it will take to
reach your financial goals.
2. Encourages savings
Having a clear understanding of the growth of your savings encourages you to save more. With a savings interest calculator, you can see the total interest earned and adjust your savings amount to
achieve your financial goals faster.
3. Helps in choosing the right bank account
A savings interest calculator can assist you in picking the right bank account that offers the best interest rate. It is always advisable to compare several bank accounts and choose the one that
aligns with your financial goals.
4. Easy to use
Savings interest calculators are easy to use and do not require any technical knowledge. All you need to do is input the details, and the calculator does the rest.
Frequently Asked Questions (FAQs)
1. What is the difference between simple and compound interest?
Simple interest is calculated based on the initial balance. For instance, if you have $10,000 in your savings account with a 5% interest rate, you will earn $500 in interest annually. Compound
interest, on the other hand, is calculated based on the total balance, including interest earned from previous periods. This means that the interest for the second year will be calculated based on
$10,500, assuming the interest is compounded annually.
2. Can I change the interest rate and savings period while using the calculator?
Yes, you can change the interest rate and savings period while using the calculator. This is useful when comparing different bank accounts or when you want to see how changing your savings period
affects the growth of your savings.
3. What is the best way to maximize savings?
There are many ways to maximize your savings. One way is to start saving early and make it a habit. Another way is to minimize your expenses and increase your income. Also, investing your money in
various portfolios can yield a higher rate of return.
Savings interest calculators are a useful tool that can help you make informed financial decisions. Using a savings interest calculator is easy, and it only takes a few minutes to calculate the
estimated growth of your savings. The benefits of using a savings interest calculator cannot be overemphasized, and it is advisable to utilize it regularly to track your savings progress. Always
remember that the more you save, the faster you can achieve your financial goals.
Recent comments | {"url":"https://age.calculator-seo.com/the-ultimate-guide-to-using-a-savings-interest-calculator/","timestamp":"2024-11-04T05:55:30Z","content_type":"text/html","content_length":"304349","record_id":"<urn:uuid:5705da72-6b2b-4ba1-ad26-484a633c4b70>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00019.warc.gz"} |
The Wilkins pendulum mystery resolved
The Wilkins pendulum mystery resolved
Last March, I pointed out that:
• John Wilkins had defined a natural, decimal system of measurements,
• that he had done this in 1668, about 110 years before the Metric System, and
• that the basic unit of length, which he called the "standard", was almost exactly the same length as the length that was eventually adopted as the meter
("John Wilkins invents the meter", 3 March 2006.)
This article got some attention back in July, when a lot of people were Google-searching for "john wilkins metric system", because the UK Metric Association had put out a press release making the
same points, this time discovered by an Australian, Pat Naughtin.
For example, the BBC Video News says:
According to Pat Naughtin, the Metric System was invented in England in 1668, one hundred and twenty years before the French adopted the system. He discovered this in an ancient and rare book...
Actually, though, he did not discover it in Wilkins' ancient and rare book. He discovered it by reading The Universe of Discourse, and then went to the ancient and rare book I cited, to confirm that
it said what I had said it said. Remember, folks, you heard it here first.
Anyway, that is not what I planned to write about. In the earlier article, I discussed Wilkins' original definition of the Standard, which was based on the length of a pendulum with a period of
exactly one second. Then:
Let d be the distance from the point of suspension to the center of the bob, and r be the radius of the bob, and let x be such that d/r = r/x. Then d+(0.4)x is the standard unit of measurement.
(This is my translation of Wilkins' Baroque language.)
But this was a big puzzle to me:
Huh? Why 0.4? Why does r come into it? Why not just use d? Huh?
Soon after the press release came out, I got email from a gentleman named Bill Hooper, a retired professor of physics of the University of Virginia's College at Wise, in which he explained this
puzzle completely, and in some detail.
According to Professor Hooper, you cannot just use d here, because if you do, the length will depend on the size, shape, and orientation of the bob. I did not know this; I would have supposed that
you can assume that the mass of the bob is concentrated at its center of mass, but apparently you cannot.
The usual Physics I calculation that derives the period of a pendulum in terms of the distance from the fulcrum to the center of the bob assumes that the bob is infinitesimal. But in real life the
bob is not infinitesimal, and this makes a difference. (And Wilkins specified that one should use the most massive possible bob, for reasons that should be clear.)
No, instead you have to adjust the distance d in the formula by adding I/md, where m is the mass of the bob and I is the moment of intertia of the bob, a property which depends on the shape, size,
and mass of the bob. Wilkins specified a spherical bob, so we need only calculate (or look up) the formula for the moment of inertia of a sphere. It turns out that for a solid sphere, I = 2mr^2/5.
That is, the distance needed is not d, but d + 2r^2/5d. Or, as I put it above, d + (0.4)x, where d/r = r/x.
Well, that answers that question. My very grateful thanks to Professor Hooper for the explanantion. I think I might have figured it out myself eventually, but I am not willing to put a bound of less
than two hundred years on how long it would have taken me to do so.
One lesson to learn from all this is that those early Royal Society guys were very smart, and when they say something has a mysterious (0.4)x in it, you should assume they know what they are doing.
Another lesson is that mechanics was pretty well-understood by 1668.
[Other articles in category /physics] permanent link | {"url":"https://blog.plover.com/physics/pendulum-mystery.html","timestamp":"2024-11-12T16:20:31Z","content_type":"text/html","content_length":"28566","record_id":"<urn:uuid:56cb5fb7-0719-4296-b7b1-ddc37667a626>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00685.warc.gz"} |
An analysis of the uncertainties in critical loads and target loads of sulphur (S) and nitrogen (N) for 182 European forest soils was carried out using the Very Simple Dynamic (VSD) model. The VSD
model was calibrated with a Bayesian approach using prior probability functions for model parameters based on literature data, data from 200 Dutch forest sites and from simulated denitrification
rates from a detailed ecosystem model. The calibration strongly improved the fit of the model to observed soil and soil solution concentrations, especially for pH and base saturation. Calibration
also narrowed down the ranges in input parameters. The uncertainty analysis showed which parameters contribute most to the uncertainty in the critical loads and target loads. Base cation weathering
and deposition and the parameters describing the H-Al equilibrium in the soil solution determine the uncertainty in the maximum critical loads for S, CLmax(S), when based on the aluminium to base
cation (Al/Bc) criterion. Uncertainty in CLmax(S) based on an acid neutralizing capacity (ANC) criterion is completely determined by base cation inputs alone. The denitrification fraction is the most
important source of uncertainty for the maximum critical loads of N, CLmax(N). N uptake and N immobilisation determine the uncertainties in the critical load for N as a nutrient, CLnut(N).
Calibration of VSD reduced the uncertainty: the coefficient of variation (CV) was reduced for all critical loads and criteria. After calibration, the CV for CLmax(S) was below 0.4 for almost all
plots; however for CLmax(N) high values occurred for plots with high denitrification rates. Model calibration also improved the robustness of target load estimates: after calibration, no target loads
were needed in any of the simulations for 40% of the plots, with the uncalibrated model there was a positive probability for the need of a target load for almost all plots. | {"url":"https://www.wsl.ch/walddyn/lwf_publ_abstract.php?ref=25522","timestamp":"2024-11-04T14:27:59Z","content_type":"text/html","content_length":"3391","record_id":"<urn:uuid:039be100-0a0c-4b2b-a40f-3fda35151610>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00212.warc.gz"} |
Efficiency of 2021 HVAC Systems: Is it Worth Upgrading? | | Epic Energy
Reading Time: 5 minutes
Efficiency of 2021 HVAC Systems: Is it Worth Upgrading?
“HVAC” refers to the combination of systems that heat and cool your home, including air conditioners, heat pumps, and furnaces. The age of the HVAC system determines its energy efficiency. If your
HVAC unit is a few years old, installing a new one now might save you money in the long run.
Why upgrade?
According to the US Department of Energy (DOE), “building energy codes are projected to save $126 billion cumulatively by 2040.” Additionally, “cumulative utility bill savings [due to appliance and
equipment standards] to consumers are estimated to be more than … $2 trillion by 2030.”
This two trillion dollars is split between all energy consumers who upgrade their systems to comply with new standards.. These standards include, amongst other things, upgrading HVAC systems to be
more energy-efficient. Who wouldn’t want to jump on that two-trillion-dollar bandwagon?
The DOE also reports that, “building energy codes have a 40-year history of reducing consumer energy bills. Today’s energy codes provide more than 30% savings compared to those of less than a decade
The typical utility bill for a US homeowner is 50% heating and cooling. A 30% savings on half your energy bill is significant.
The math: keeping it general
The DOE predicted in 2016 that upgrading ten-year-old systems would save you $500 a year. Now in 2020, upgrading from a ten-year-old system would save you about the same amount of money–efficiencies
have continued to increase at a similar rate.
An older HVAC model.
While it is impossible to compare the efficiency of each individual furnace here, HVAC systems are rated using their “SEER” rating, or “Seasonal Energy Efficiency Ratio.” While the SEER rating of a
unit tends to decrease over time, comparing SEER ratings across HVAC units can still be a useful metric for deciding whether replacing your system is worth it.
The minimum SEER rating in 1992 was 10. Since 2015, the minimum has been 14.
An increase in SEER from 14 to 17 saves an average of $66 a year, but qualifies you for a $300 federal tax credit. The highest SEER rating in a commercial central air conditioner is 26 (the Lennox
XC25). Upgrading from a system with a SEER 14 to the Lennox XC25 would save you $167 a year in addition to the tax credit. The numbers start to look juicy. (Keep in mind that if your HVAC system you
purchased 10 years ago had an initial SEER rating of 14, the rating has likely decreased since then.)
The math: up-close and personal
You may still be asking yourself: but is an upgrade worth it for me?
Here is a formula to calculate how much installing an improved HVAC unit will save you annually (based on Coolray’s formula).
1. Calculate how much energy each air conditioner will use annually using this equation:
[ (Size of AC system in tons x 12,000) / SEER ] * 1500 = number of watt-hours used annually
For example, if you realize your old, less efficient system is too large for your home, you might replace a 3-ton, 14-SEER system with a 2-ton, 17 SEER system, the equations would be:
[(3*12000)/14]*1500 = 3,857,143 watt-hours per year
[(2*12000)/17]*1500 = 2,117,647 watt-hours per year
2. Find the difference between these two annual energies.
3,857,143 – 2,117,647 = 1,739,496 watt-hours per year
3. Convert this to kilowatt-hours by dividing by 1000
1,739,496 / 1000 = 1,739.5 kWh per year
4. Multiply this number by your area’s electricity rate. For Sacramento, this is 15.34¢ per kWh.
1,739.5 * 0.1534 = $266.84
So replacing a 3-ton, 14-SEER system with a 2-ton, 17-SEER system would save you about $267 per year.
The most energy efficient central air conditioners and air source heat pumps as of 2020 can be found on the Energy Star website. Comparing these products with the cost/energy use of your current
model can help you see exactly how much money upgrading could save you.
A newer HVAC model
TL;DR: skipping the math
Putting aside the numbers, here are some specific situations where replacing your HVAC unit is probably the way to go:
• If your heat pump/air conditioner/HVAC is more than 10 years old. Efficiency of a unit decreases over time, and the efficiencies of HVAC units available on the market increase quickly. A general
guideline is that units over 10 years old are worth replacing.
• If your equipment requires frequent repairs. Eventually buying a new unit is more economically feasible than constantly repairing an old one.
• If your energy bill is consistently going up. This is a sign that the efficiency of your unit is decreasing, or that a part of it is failing, which means it’s time for a new unit.
• If your equipment heats or cools your home unevenly. This is again a sign that its efficiency is decreasing.
• If you do not have a programmable thermostat and leave the house for long, regular periods of time. Updating your system and getting one with a programmable thermostat would allow it to idle
while you’re away, therefore decreasing its energy consumption.
• If your system is noisy; noise typically signals an inefficiency in the system.
• If your score on the Home Energy Yardstick is below five (or below average compared to other homes surveyed in the U.S. DOE’s Residential Energy Consumption Survey (RECS)). This is just another
metric way to say that the efficiency of your HVAC unit could be much better.
• If your HVAC unit is improperly sized for your home; in the case of HVAC, bigger is not better. Bigger HVAC units turn on and off frequently as they heat or cool your home, which increases their
energy consumption; they require more energy to turn on, run on high, and turn back off than they do to run constantly at a lower level.
If in doubt, you could always ask a home energy auditor to assess your home.
Hopefully these few tips help you decide whether it is worth replacing your old HVAC unit with a new one. | {"url":"https://www.thinkepic.com/hvac/efficiency-of-2020-hvac-systems-is-it-worth-upgrading/","timestamp":"2024-11-14T11:15:32Z","content_type":"text/html","content_length":"116791","record_id":"<urn:uuid:32c8def9-5ba0-4aed-9702-8d51b99f4f6f>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00402.warc.gz"} |
5.5 - Analysis | STAT 507
Analytic Methods for Non-Matched Case-Control Studies Section
With case-control studies, we essentially work down the columns of the 2 × 2 table. Cases are identified first, then controls. The investigator then determines whether cases and controls were exposed
or not exposed to the risk factor.
2 × 2 Table for Non-Matched Case-Control Data:
Case Controls Total Exposure
Category (Number) (Number) (Number)
Exposed A B Total[Exposed]
Not Exposed C D Total[NotExposed]
Total Total[Cases] Total[Controls] Total
We calculate the odds of exposure among cases (A/C) and the odds of exposure among controls (B/D). The odds ratio is then (A/C)/(B/D), which simplifies after cross-multiplication to (A*D)/(B*C).
For case-control studies, since the ratio of cases to controls is not necessarily representative of the ratio in the population, the odds ratio must be used as the summary measure. The relative risk
is not an accurate measure in this type of study.
Analytic methods for non-matched case-control studies include:
• Chi-square 2 × 2 analysis;
• Mantel-Hanszel statistic (This test takes into account the possibility that there are different effects for the different strata (e.g., effect modification))
• Fisher’s Exact test (This test is used if the expected cell size is <5)
• Unconditional logistic regression (The method is used to simultaneously adjust for multiple confounders; a multivariable analysis).
For the obesity and microscopic colitis example (Obesity is associated with decreased risk of microscopic colitis in women), the data from table 2 can be used to construct this 2x2 table for the
comparison of microscopic colitis between those with low and high BMI.
Comparison of Microscopic Colitis between those with Low and High
Case Controls Total Exposure
Category (Number) (Number) (Number)
Exposed 22 105 Total[Exposed] (127)
(BMI >=30)
Not Exposed 50 73 Total[NotExposed] (123)
(BMI < 25)
Total Total[Cases] Total[Controls] Total (250)
(72) (178)
OR = (22*73)/(50*105) = 0.31
As we see in the text: As shown in Table Table2, the risk for microscopic colitis was lower for … BMI > 30 kg/m2 (OR 0.31, 95%CI: 0.17-0.55) compared to under- or healthy weight (BMI < 25 kg/m2) as
the reference.
To review, for a simple non-matched case-control study, you find a case, then determine whether the person is exposed or not. Find a control; determine their exposure status.
Analytic Methods of Matched Case-Control Studies Section
In an analysis of a matched study design, only discordant pairs are used. A discordant pair occurs when the exposure status of the case is different from the exposure status of the control. The most
commonly used analytic method for matched case-control studies is conditional logistic regression, conditioned upon the matching.
The matched case-control study has linked a case to a control based on the matching of one or more variables. The summary table will differ for a matched case-control study
2 × 2 Table for Matched Case-Control Data:
Controls Exposed Not-Exposed Total
(Number) (Number) (Number)
Exposed A B Total[ExposedControls]
(Number) (Concordant Pair) (Discordant Pair)
Not-Exposed C D Total[Not ExposedControls]
(Number) (Discordant Pair) (Concordant Pair)
Total Total[ExposedCases] Total[Not ExposedCases] Total
Let's look at an example. Suppose we plan to match cases to controls by gender and age (+/- 5 years). We first identify the following case:
Male, 45 years of age (Patient 1);
Exposure status: Exposed
If this was a non-matched study, the case would be counted in cell A in the non matched 2x2 table because he is exposed. However, in the age- and gender-matched case-control study we must also find a
male control within five years of age. Searching in the appropriate control population, we locate the following control:
Male 48 years of age (Person 47);
Exposure status: Exposed
If Person 47 were counted in an unmatched study, he would belong in cell B of the preceding table. In a matched case-control study, however, we are interested in results for the matched pair. The
data from Patient 1 and Person 47 are linked for the duration of the study. The appropriate table for the matched study is depicted below. Where do Patient 1 and Person 47 belong?
Patient 1 is a case and he is exposed so he fits into either cell A or cell C. Based upon his control's status we determine which cell is the correct placement for this pair. Patient 1's control is
exposed, therefore Patient 1 and Person 47 fit into cell A as a pair. This is a concordant pair because both are exposed. Concordancy is based upon exposure status. In a matched case-control study,
the cell counts represent pairs, not individuals. In the statistical analysis, only the discordant pairs are important. Cells B and C contribute to the odds ratio in a matched design. Cells A and D
do not contribute to the odds ratio. If the risk for disease is increased due to exposure, C will be greater than B. The odds ratio is then (B/C).
Comparing Matched and Non-Matched Case-Control Studies Section
Stop and Think!
Come up with an answer to the questions and then click on the button below to reveal the answer.
1. Can you think of more than one reason why a matched case-control study could take longer to complete than an unmatched study?
First, you must identify matched controls, sometimes more than one per case. Second, since only the discordant pairs contribute to the statistical analysis, achieving a desired statistical power
depends on obtaining a particular number of discordant pairs.
2. Why bother with matching if it means a longer case-control study?
When performing statistical analysis, the matched variables are not included in the statistical model.
(In a cohort study, confounding is dealt with by including the terms in the model to adjust for their effects. In a matched case-control study, the adjustment for this confounding can be made
through the matching.) | {"url":"https://online.stat.psu.edu/stat507/lesson/5/5.5","timestamp":"2024-11-12T09:14:14Z","content_type":"text/html","content_length":"81455","record_id":"<urn:uuid:7265f8e9-bdc9-4da5-aab9-8b6555d0c6e4>","cc-path":"CC-MAIN-2024-46/segments/1730477028249.89/warc/CC-MAIN-20241112081532-20241112111532-00854.warc.gz"} |
seminars - The proof of the l^2 decoupling conjecture and some applications
Decomposing a function is widely used tool to deal with many situations. Among them, one important tool is decomposing some specific functions(Fourier supported in a neighborhood of some
hyper-surfaces) with pieces of small supports in the Fourier side, rather than the physical side. After the decomposition, we need to control our original function with decomposed pieces. Precisely,
we can estimate the L^p norm of the function with the square mean of decomposed pieces by using Cauchy-Schwarz inequality. However, Cauchy-Schwarz inequality makes large coefficient in front of the
square mean. The conjecture is how small this coefficient can be. We will prove the l^2 decoupling conjecture for compact hyper-surfaces with positive definite second fundamental form. Although
decoupling conjecture is a weaker version of the square function estimate problem, there are various applications such as discrete restriction phenomena, Strichartz estimates for torus, and some
number theory problems. We will see how the decoupling conjecture used in these situations. | {"url":"https://www.math.snu.ac.kr/board/index.php?mid=seminars&l=en&sort_index=date&order_type=asc&page=36&document_srl=786740","timestamp":"2024-11-11T10:47:29Z","content_type":"text/html","content_length":"48187","record_id":"<urn:uuid:68cc0048-3f8c-4757-82e0-fffcbdb68346>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00710.warc.gz"} |
Simplify with sci/eng on hp 50g
Linear Mode
Simplify with sci/eng on hp 50g
« Next Oldest | Next Newest »
11-06-2012, 04:38 AM
When I change into engineering/scientific mode with a precision of e.g. 1 digit, '70E-9 *cos(0)' gets simplified into 1/e^16 (cas is set to exact). What is going on here?
11-06-2012, 05:37 PM
I am unable to reproduce your error. I get 7.0E-8
A couple of questions:
-In the answer, is that e the normal (2.71828...) e?
-Do you have E defined to be a variable with a value?
-Can you explain *exactly* what you're doing?
Example, I am in RPN mode, enter 70E-9 *cos(0) in single quotes, and hit EVAL (or simplify, whatever) and get 7.0E-8
11-06-2012, 06:37 PM
Set mode to engineering with precision 1, cas mode exact. Open equation editor, enter cos(0) * 0.01 (another example). Select whole expression and simplify it, or close the equation editor and invoke
SIMPLIFY (EVAL does not exhibit this behaviour). It gets simplified to 1/e^5 (yes, that's e, the exponential function).
11-06-2012, 06:50 PM
I have made a recording of this in the emulator:
11-07-2012, 06:15 AM
Hi Juraj,
I confirm I get the same strange result when using SIMPLIFY and low display digits in Exact Mode. This seems to be a problem with version 2.15 of the calculator firmware. In version 2.09 it will
request Approx. mode and display the correct answer.
To others,
Just entering the equation with EVAL it seems it always gets the right answer, irrespective of display digits setting.
To reproduce the error:
Have version 2.15 and set display to either FIX 1, SCI 1 or ENG 1
Enter the equation in Equation Writer, select it completely and use SIMP
enter the equation on the command line '70E-9·COS(0)' go to CONVERT -> REWRITE -> SIMPLIFY
Further to the above: it seems also to be a problem with the 50g association of integers and exact mode, and floating numbers and approx. mode.
Enter the equation as:
and it will simplify it correctly.
Thus the v. 2.09 response is correct to request Approx. Mode because of the 70E-9. It does not request it when 70/1000000000 is entered. The 50g is wrong by just giving an answer that is incorrect.
Something broke here between 2.09 & 2.15.
Edited: 7 Nov 2012, 6:48 a.m.
11-07-2012, 10:56 AM
In approx mode, 1•0.01 gets simplified right into 6.7E-3. This is really dangerous! How the hell did not anyone discover this earlier? That was on my 3rd or 4th day of owning a 50g....
Edited: 7 Nov 2012, 10:59 a.m.
11-07-2012, 11:25 AM
How do you do ?
In RPN mode I get the right answer
11-07-2012, 11:39 AM
Quote: How do you do ?
In RPN mode I get the right answer
Hi Gilles,
v2.09 does get the right answer, but v2.15 gets the answer that Juraj noted. (we are talking about SIMPLIFY command with low digit ENG/SCI/FIX mode).
Edited: 7 Nov 2012, 11:52 a.m.
11-07-2012, 11:44 AM
Quote: How the hell did not anyone discover this earlier? That was on my 3rd or 4th day of owning a 50g....
I guess most users like lots of numbers and do not normally use the SIMPLIFY function with such low digit requirements.
Quote: In approx mode, 1•0.01 gets simplified right into 6.7E-3.
It still seems to try and use the Exact techniques (in Exact mode it gives 1/e^5, which = 6.7E-3 with "-> NUM" in FIX 1). When entering it as '1·(1/100)' the correct answer is produced.
It seems to try and find the nearest answer representable with e. I tried several flag settings, but nothing seems to help. Perhaps others have better ideas. As I said, it seems something here broke
between v2.09 and 2.15.
Edited: 7 Nov 2012, 12:06 p.m.
11-08-2012, 10:34 AM
Interesting. I tested this on a emulator, though, which would have been an older version. I left my physical HP50g at home, so I'll have to check this on that later.
11-16-2012, 07:24 PM
Quote: Interesting. I tested this on a emulator, though, which would have been an older version. I left my physical HP50g at home, so I'll have to check this on that later.
It looks like my physical calculator does have this issue. Maybe I should try to install an older version?
11-08-2012, 06:26 PM
I can also confirm this behavior on the HP50G ROM 2.15
When I change into engineering/scientific mode with a precision of e.g. 1 digit, '70E-9 *cos(0)' gets simplified into 1/e^16 (cas is set to exact). What is going on here?
I am unable to reproduce your error. I get 7.0E-8
-In the answer, is that e the normal (2.71828...) e?
-Do you have E defined to be a variable with a value?
Example, I am in RPN mode, enter 70E-9 *cos(0) in single quotes, and hit EVAL (or simplify, whatever) and get 7.0E-8
Set mode to engineering with precision 1, cas mode exact. Open equation editor, enter cos(0) * 0.01 (another example). Select whole expression and simplify it, or close the equation editor and invoke
SIMPLIFY (EVAL does not exhibit this behaviour). It gets simplified to 1/e^5 (yes, that's e, the exponential function).
I have made a recording of this in the emulator:
Hi Juraj,I confirm I get the same strange result when using SIMPLIFY and low display digits in Exact Mode. This seems to be a problem with version 2.15 of the calculator firmware. In version 2.09 it
will request Approx. mode and display the correct answer.To others,Just entering the equation with EVAL it seems it always gets the right answer, irrespective of display digits setting.To reproduce
the error:Have version 2.15 and set display to either FIX 1, SCI 1 or ENG 1Enter the equation in Equation Writer, select it completely and use SIMPorenter the equation on the command line '70E-9·COS
(0)' go to CONVERT -> REWRITE -> SIMPLIFYEDIT:Further to the above: it seems also to be a problem with the 50g association of integers and exact mode, and floating numbers and approx. mode.Enter the
equation as:(70/1000000000)*COS(0)and it will simplify it correctly.Thus the v. 2.09 response is correct to request Approx. Mode because of the 70E-9. It does not request it when 70/1000000000 is
entered. The 50g is wrong by just giving an answer that is incorrect. Something broke here between 2.09 & 2.15.
In approx mode, 1•0.01 gets simplified right into 6.7E-3. This is really dangerous! How the hell did not anyone discover this earlier? That was on my 3rd or 4th day of owning a 50g....
Quote: How do you do ? In RPN mode I get the right answer
Quote: How the hell did not anyone discover this earlier? That was on my 3rd or 4th day of owning a 50g....
Quote: In approx mode, 1•0.01 gets simplified right into 6.7E-3.
Interesting. I tested this on a emulator, though, which would have been an older version. I left my physical HP50g at home, so I'll have to check this on that later.
Quote: Interesting. I tested this on a emulator, though, which would have been an older version. I left my physical HP50g at home, so I'll have to check this on that later.
It looks like my physical calculator does have this issue. Maybe I should try to install an older version?
I can also confirm this behavior on the HP50G ROM 2.15
Possibly Related Threads…
Thread Author Replies Views Last Post
HP Prime: Converting number to Sci notation and back BruceTTT 1 1,582 11-12-2013, 02:11 AM
Last Post: Phil Wipf
ENG button (like Casio calculators) on HP Prime uklo 3 2,071 11-04-2013, 09:45 PM
Last Post: LHH
HP 50g trilogy. The new HP 50g Blue (and II) Pablo P (Spain) 18 5,263 09-19-2011, 03:08 AM
Last Post: BruceH
HP 25c Apps Review: GO 25 SCI, rpn 25 Eddie W. Shore 14 3,912 03-29-2011, 05:10 PM
Last Post: Andrés C. Rodríguez (Argentina)
Why the resistance to make a true pocket Sci RPN calc Gerardo Rincon 10 2,631 11-15-2010, 01:20 AM
Last Post: Walter B
Interesting Blog Post (Sci Am, Sept 77) Les Bell 1 990 04-04-2010, 12:19 AM
Last Post: Eric Smith
Bug in SIMPLIFY? (hp 50g) Luca 6 1,881 08-20-2009, 05:03 PM
Last Post: Nigel J Dowrick
How to prevent automatic substitution during SIMPLIFY Luca 2 1,194 08-17-2009, 02:48 AM
Last Post: Luca
SCI-15C question. Pal G. 5 1,583 11-21-2008, 06:58 PM
Last Post: Eric Smith
Typograpic issue in Urroz sci and eng hp 49g vol 2? Eric Smith 1 894 09-12-2008, 03:44 AM
Last Post: Gordon Strickland | {"url":"https://archived.hpcalc.org/museumforum/thread-233668-post-233729.html","timestamp":"2024-11-04T22:02:51Z","content_type":"application/xhtml+xml","content_length":"58205","record_id":"<urn:uuid:8b3725a0-6d79-43a3-9f4d-be16d31a1ebb>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.16/warc/CC-MAIN-20241104194528-20241104224528-00636.warc.gz"} |
B. Silindir Yantır, "Integrable Soliton Extensions," University of Ostrava, Seminar of Department of Mathematics , Ostrava, Czech Republic, 2014
Silindir Yantır, B. 2014. Integrable Soliton Extensions. University of Ostrava, Seminar of Department of Mathematics , (Ostrava, Czech Republic).
Silindir Yantır, B., (2014). Integrable Soliton Extensions . University of Ostrava, Seminar of Department of Mathematics, Ostrava, Czech Republic
Silindir Yantır, BURCU. "Integrable Soliton Extensions," University of Ostrava, Seminar of Department of Mathematics, Ostrava, Czech Republic, 2014
Silindir Yantır, BURCU S. . "Integrable Soliton Extensions." University of Ostrava, Seminar of Department of Mathematics , Ostrava, Czech Republic, 2014
Silindir Yantır, B. (2014) . "Integrable Soliton Extensions." University of Ostrava, Seminar of Department of Mathematics , Ostrava, Czech Republic.
@conferencepaper{conferencepaper, author={BURCU SİLİNDİR YANTIR}, title={Integrable Soliton Extensions}, congress name={University of Ostrava, Seminar of Department of Mathematics}, city={Ostrava},
country={Czech Republic}, year={2014}} | {"url":"https://avesis.deu.edu.tr/activitycitation/index/1/fe55cc26-bf7c-4b43-ad1b-dd5e555073f6","timestamp":"2024-11-07T08:58:44Z","content_type":"text/html","content_length":"10880","record_id":"<urn:uuid:7645cf26-8131-484b-aa37-83c38a052b45>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00705.warc.gz"} |
Bayesian Probability Calculator - Online Calculators
To calculate Bayesian probability using Bayes’ Theorem, divide the product of the probability of event B given A and the prior probability of A by the probability of B. This helps you update the
probability of event A after considering new evidence (B).
The Bayesian Probability Calculator uses Bayes’ Theorem to update the probability of an event based on prior knowledge and new evidence. This is especially useful in decision-making, medical
diagnosis, and scientific research, where you need to update probabilities based on new information.
By applying this method, you can calculate the posterior probability (P(A|B)) of an event A happening given that event B has occurred. Bayes’ Theorem is a foundational concept in statistics and
probability, offering a structured approach to making informed decisions under uncertainty.
$P(A|B) = \frac{P(B|A) \times P(A)}{P(B)}$
Variable Description
**P(A B)**
**P(B A)**
P(A) Prior probability of A
P(B) Total probability of B
Solved Calculation:
Example 1:
Step Calculation
Prior Probability (P(A)) 0.3
Likelihood (P(B A))
Total Probability (P(B)) 0.5
Bayesian Probability Calculation $\frac{0.8 \times 0.3}{0.5}$
Result 0.48
Answer: The posterior probability $P(A|B)$ is 0.48.
Example 2:
Step Calculation
Prior Probability (P(A)) 0.6
Likelihood (P(B A))
Total Probability (P(B)) 0.9
Bayesian Probability Calculation $\frac{0.7 \times 0.6}{0.9}$
Result 0.467
Answer: The posterior probability $P(A|B)$ is 0.467.
What is Bayesian Probability Calculator?
The Bayesian Probability Calculator is a tool that helps calculate probabilities using Bayes’ Theorem, which revises the likelihood of an event based on new evidence. It’s widely used in research and
data analysis, especially when dealing with conditional probabilities.
For multiple events, tools like the Bayesian probability calculator for 3 events or an Excel-based Bayesian network calculator can help solve more complex scenarios.
These calculators simplify updating probabilities based on new data points, making them useful in fields like medical diagnosis, finance, and machine learning.
Final Words:
You can also use the Bayesian posterior probability calculator to find posterior probabilities and the Bayesian average calculator to determine weighted averages in certain datasets. These
calculators and formulas provide a step-by-step approach to solving problems using Bayes’ Rule. | {"url":"https://areacalculators.com/bayesian-probability-calculator/","timestamp":"2024-11-03T03:10:43Z","content_type":"text/html","content_length":"107456","record_id":"<urn:uuid:20523fb1-1685-43bb-8a04-6152fa78798e>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00802.warc.gz"} |
Complex Numbers Worksheets
Complex numbers are not that intricate or complex. And if they tend to trick you at all, we have you covered through our Complex Numbers Worksheets. Complex numbers are written in the form a + bi,
where a is called the real term and the coefficient of i is the imaginary part. Let’s explore this topic with our easy-to-use complex number worksheets that are tailor-made for students in high
school and is the perfect resource to introduce this new concept. This stack of free pdfs helps young learners identify the real and imaginary part of the complex number, find absolute value,
rationalize denominators, and many more.
This bundle of worksheets on complex numbers is designed for high school students.
Identifying the Real Part and the Imaginary Part
Showcase the unique two-dimensional nature of complex numbers by prompting high school students to recognize the real part and the imaginary part of the number. The numbers are represented in the
standard form a + bi, where a is the real part, and b is the imaginary part.
Write the Conjugate of Each Complex Number
Designed to tread seamlessly with CCSS, this printable worksheet helps students write the conjugate of the complex number, by just flipping the sign of the imaginary part of the complex number.
Aimed at high school students, this free pdf helps foster understanding of rationalizing denominators. Determine the conjugate of the denominator and multiply both the numerator and the denominator
by the conjugate. Simplify the expression if needed.
Simplifying Expressions with Complex Numbers
Become adept at adding, subtracting, multiplying, and dividing complex numbers with this free worksheet. Use basic rules like combining the like terms together, FOIL method, multiplying the top and
bottom by the complex conjugate of the denominator to simplify complex-valued expressions.
Make a significant first stride in complex numbers by finding the absolute values of complex numbers with this free worksheet. The absolute value of a complex number is nothing but the distance of
the point from the origin on a complex plane. | {"url":"https://www.tutoringhour.com/worksheets/complex-numbers/","timestamp":"2024-11-06T05:18:19Z","content_type":"text/html","content_length":"62166","record_id":"<urn:uuid:890252a3-9df7-4bce-9fdd-ea398742d829>","cc-path":"CC-MAIN-2024-46/segments/1730477027909.44/warc/CC-MAIN-20241106034659-20241106064659-00751.warc.gz"} |
Determinant based Mutual Information
kill two birds with one stone
Two birds:
1. Alice and Bob are rating 10 online products, how to reward them such that they are encouraged to tell the truth?
2. We want to train a medical-image-classifier. However, the given labels are noisy. How to design the loss function such that the training process is robust to the noise?
For the first question, the challenge is that Alice and Bob’ ratings are subjective thus we cannot verify it. One way to pay Alice is to use Bob’ ratings as a reference. For example, a natural
attempt is to pay Alice and Bob by the number of agreements they have. However, honest Alice and honest Bob may have very different opinions. Moreover, they can collude to give all products the
highest ratings to obtain the highest reward.
This question is called multi-task peer prediction. Peer prediction is initially proposed by Miller, Resnick, Zeckhauser 2004. Miller, Resnick, Zeckhauser 2004 consider a single-task setting where
the participants are assigned a single task (e.g. Do you like Panda express?). The multi-task setting (e.g. the multiple products rating setting) is proposed by Dasgupta and Gosh 2013. In the
multi-task setting, a series of works (e.g. Shnayder, Agarwal, Frongillo, Parkes 2016, Kong and Schoenebeck 2019, Liu, Wang, Chen 2020) successfully develop reward schemes where truth-telling is the
best for each participant, regardless of other participants’ strategy (dominantly truthful). However, they require the participants to perform an infinite number of tasks (e.g. rating an infinite
number of tasks).
For the second question, the challenge is very similar to the first one when we use noisy labels as a reference to evaluate the classifier. Cross-entropy loss does not work here since if we use
cross-entropy, we are training the classifier to predict the noisy labels rather than the true labels. This question is called noisy learning problem and studied by many works (e.g. Natarajan et al.
One stone:
The stone is a new mutual information measure, Determinant based Mutual Information (DMI), proposed by Kong 2020 (Talk). The high-level idea is:
1. reward Alice and Bob the Determinant based Mutual Information between their ratings.
2. evaluate the classifier by the Determinant based Mutual Information between the classifier’s outputs and the noisy labels. The training process returns the classifier with the highest evaluation.
Mutual Information
Intuitively, mutual information measures the amount of information shared by two random variables X, Y. For example, two independent random variables have zero mutual information. Two highly
correlated random variables have high mutual information. The first definition of mutual information is proposed by Shannon. Shannon Mutual Information is defined as the KL divergence between the
joint distribution over X, Y and the product of marginal distributions. One important property Shannon Mutual Information has is information-monotonicity:
When X and M(Y) are independent conditioning on Y, the mutual information between X and M(Y) is less than the mutual information between X and Y.
This property means that any data processing method operated on one side will decrease the mutual information.
Determinant based Mutual Information (DMI)
DMI is defined as the absolute value of the determinant of the joint distribution matrix.
Determinant based Mutual Information
Like Shannon mutual information, Determinant based Mutual Information is symmetric, non-negative and satisfies information-monotonicity
DMI is information-monotone.
Two additional special properties of DMI are
1. The square of DMI is a polynomial thus can be estimated unbiasedly by a finite number of samples.
2. Relative invariance: DMI(X;M(Y))=DMI(X;Y)|det(M)|
Two special properties of DMI
These two special properties allow DMI has applications in both peer prediction and noisy learning setting.
In the multi-task peer prediction setting, unlike previous mechanisms which require an infinite number of tasks to be dominantly truthful, DMI induces a simple dominantly truthful mechanism,
DMI-Mechanism, which only requires ≥2C number of tasks, where C is the number of rating options.
We divide the products into two batches arbitrarily to guarantee that each batch’s size is greater than the number of rating options. For example, if it is a binary choice rating, like/dislike, each
batch’s size should be greater than 2. For each batch, compute the empirical counts between Alice and Bob’s ratings within the batch. Finally, return the product of the determinant of the batches’
empirical counts matrices.
Analysis of DMI-Mechanism:
We assume the rating tasks are similar and independent, then
Alice and Bob’s honest ratings can be seen as i.i.d. samples from joint random variables (A,B).
We assume that Alice (Bob) performs the same (possibly random) strategy for all tasks. Then Alice’s strategy S_A can be seen as an operation on A and Bob’s strategy S_B can be seen as an operation on
B. Then
Alice and Bob’s ratings can be seen as i.i.d. samples from joint random variables (S_A(A),S_B(B)).
We also show that
In expectation, each counts matrix’s determinant is proportional to the determinant of the joint distribution over (S_A(A),S_B(B)).
Thus, we reward Alice and Bob the square of DMI between their ratings.
Due to information-monotonicity of DMI, truth-telling is the best for each participant, regardless of other participants’ strategy (dominantly truthful).
The implementation of DMI-Mechanism only requires each batch size ≥ C to guarantee that the expectation of the reward is not always zero. Previous work essentially designs the reward by Shannon
mutual information or other similar mutual information family designed by convex functions. Therefore, in those mechanisms, the dominantly truthful property requires an infinite number of tasks.
In the noisy learning setting, for each classifier h, we compute the empirical joint frequency matrix between h’s output and labels; return the absolute value of the matrix’s determinant as
evaluation for h. The training process returns the h with the highest evaluation.
Analysis of DMI-Scorer:
In the noisy learning setting, if the empirical frequency estimation is sufficiently good, the classifier h is evaluated via DMI between h’s outputs and the noisy labels, i.e., DMI(Classifier h;
Noise(Ground truth Labels)) where Noise(Ground-truth labels) is the noisy labels which are obtained by performing a noise-operation on the ground truth labels.
When we assume that the noise is independent of the classifier, the relative invariance property of DMI implies that the evaluation is robust to noise, that is,
If h has a higher DMI with ground truth labels than h’, it will also have a higher DMI with noisy labels than h’.
To apply DMI-Scorer in noisy learning in practice, Xu*, Cao*, Kong, Wang 2019 propose a loss function which is the negative logarithm of DMI-Scorer.
The biggest limitation of DMI is that it vanishes not only when X and Y are independent, but also when the joint distribution matrix is degenerate. In the asymmetric setting or high dimensional
setting when the number of rating options/label types is high, the direct DMI may not be applicable. One possible solution is to compress both sides into a small discrete set and compute DMI of the
compressed set. But this will not preserve the properties of DMI.
Determinant based Mutual Information is a new definition of mutual information, which not only shares some important properties with Shannon Mutual Information, but also has two additional special
properties. Interestingly, these two properties make DMI 1) significantly reduce the task complexity in the multi-task peer prediction setting; 2) induce a fully noise-robust loss function. Later we
will talk about geometric interpretation of DMI and generalization of DMI. | {"url":"https://yuqingkong.medium.com/determinant-based-mutual-information-40f2348c5ce4","timestamp":"2024-11-09T14:17:00Z","content_type":"text/html","content_length":"151938","record_id":"<urn:uuid:cf481a84-55aa-4b35-8033-d18fc8a7a2a2>","cc-path":"CC-MAIN-2024-46/segments/1730477028118.93/warc/CC-MAIN-20241109120425-20241109150425-00490.warc.gz"} |
Zeroing the diagonal of a matrix by multiplying by (1-I)
Asked by Marco Kim on
1 Answers
Answer by Mackenzie Fox
And I want to zero its main diagonal by multiplying it by (1-I), meaning by 1 minus the identity matrix. How can I do this in pytorch?, What is the name of a white-flowering shadow-loving plant?
,Thanks for contributing an answer to Stack Overflow!, 1 Does this answer your question? Zero diagonal of a PyTorch tensor? – iacob Mar 30 at 8:53
PyTorch has a built in function to do this inplace:
You can either apply this to your matrix directly, or if you really want 1 - I can apply it to a ones matrix:
1 - torch.eye(n)
# Alternatively
Source: https://stackoverflow.com/questions/65746836/zeroing-the-diagonal-of-a-matrix-by-multiplying-by-1-i | {"url":"https://www.devasking.com/issue/zeroing-the-diagonal-of-a-matrix-by-multiplying-by-1i","timestamp":"2024-11-13T06:14:14Z","content_type":"text/html","content_length":"122982","record_id":"<urn:uuid:1edea4c8-c917-43a2-859b-b44eda2fa6a8>","cc-path":"CC-MAIN-2024-46/segments/1730477028326.66/warc/CC-MAIN-20241113040054-20241113070054-00469.warc.gz"} |
Pearson and Spearman Correlations: A Guide to Understanding and Applying Correlation Methods
Correlation is a statistical tool used in Machine Learning to identify dependencies between several variables.There are several types of correlation. Find out more about the The Pearson correlation
and Spearman Correlation below.
For data analysis, a Data Scientist has several statistical tools at his disposal. One of these tools is correlation.Correlation is a particularly useful statistical measure, enabling the
relationship between two variables to be studied by calculating a correlation coefficient.
Correlation corresponds to the strength (indicated by the absolute value of the coefficient) as well as the direction (indicated by the sign of the coefficient) of the relationship between these
variables. The direction can be either positive (when x increases, y also increases) or negative (when x increases, y decreases or vice versa). There are several types of correlation. Among these
correlations, there are two that are particularly widely used: The Pearson correlation and Spearman Correlation. These two types of correlation will be discussed in greater detail later in this
The Pearson correlation and Spearman Correlation - The Pearson correlation
Pearson correlation, also known as linear correlation, measures the linear relationship between two continuous variables. Pearson correlation is indicated by the value of the correlation coefficient
r, calculated using the following formula :
Formule du coefficient de Pearson
Before calculating the Pearson coefficient, make sure that the data meet the following assumptions:
• Data sample is random (representative of the population)
• Variables are quantitative (continuous)
• Data are paired (each x value is associated with a y value)
• Observations are independent
• Data are normally distributed
• Variables are linearly related
• No outliers in the data
The value of the correlation coefficient r is between -1 and 1. There are several possible cases depending on the value of r :
• If r is close to 1, then the variables are linearly positively dependent.
• If r is close to 0, then there is no linear relationship between the variables.
• If r is close to -1, then the variables are linearly negatively dependent.
An example of the application of Pearson’s correlation would be the study of the relationship between meat consumption and life expectancy by country.
The Pearson correlation and Spearman Correlation - The Spearman's correlation
The Spearman correlation is a measure of correlation that measures a monotonic relationship between two variables based on the rank of the data. An example of data rank determination is: [58,70,40]
becomes [2,1,3]. Spearman correlation is often used for data consisting of outliers. To measure Spearman correlation, the indicator used is the Spearman coefficient rs, also known as the rank
coefficient, given by the formula below. In this formula, the variable n indicates the number of points in the data series. The variable d corresponds to the square of the difference in ranks between
each point with coordinates (x,y).
Before calculating the Spearman coefficient, it is necessary to ensure that the data satisfy the following assumptions:
• The relationship between variables is monotonic
• Data are associated in pairs
• Observations are independent
• There is a monotonic relationship between the variables
• Variables are ordinal or continuous.
An example of the application of Spearman’s correlation would be the study of the relationship between consumer preferences and product price.
The Pearson correlation and Spearman Correlation - Conclusion
The Pearson correlation and Spearman Correlation are two different correlation measures that apply in specific situations. Spearman correlation uses data rank to measure monotonicity between ordinal
or continuous variables. Pearson correlation, on the other hand, detects linear relationships between quantitative variables with data following a normal distribution. In the case of a Machine
Learning problem, it is often a question of using correlation matrices made up of the correlation coefficients between all the variables in a dataset. The notion of correlation is therefore important
for Machine Learning.
If you’d like to learn more about Machine Learning and go deeper into the field of data science, take a look at our Data Scientist and Data Analyst training courses. | {"url":"https://datascientest.com/en/pearson-and-spearman-correlations-a-guide-to-understanding-and-applying-correlation-methods","timestamp":"2024-11-08T21:59:33Z","content_type":"text/html","content_length":"431146","record_id":"<urn:uuid:aedcae7c-4725-423d-999f-27c2fa4e3d48>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00387.warc.gz"} |
Integer Set in Java
In this assignment, you will produce two implementations of an integer set with the same public interface, as given by the IntSet.java interface type. The set method adds an element to the set. The
clear method removes an element from the set. The test method tests whether an element is in the set. The min and max methods give the smallest and largest element of the set. Thus, one way of
looking at all elements of the set is:
for (int i = set.min(); i <= set.max(); i++)
if (set.test(i))
i is an element of set
The ArraySet class stores its elements in an array starting out with an array of length 10, and doubling the length whenever the array runs out of space. Append elements in the order in which they
are set. When clearing an element, replace it with the last element in the array. Don't shrink the array.
The BitSet class stores a sequence of bits to indicate whether a particular element is present or not. Pack 32 bits to an int. Start out with an array of length 10, which represents a set of up to
320 elements starting from the initially set element. If an element is set that is outside the representable range, allocate a larger array that is at least twice the length of the existing array.
For example, consider the statements:
IntSet set = new BitSet();
Now set.elements has length 10 and can represent numbers 100 ... 419.
Now set.elements has length 20 and can represent numbers 100...739.
Now set.elements has length 122 and can represent numbers 100...4003.
If the array needs to grow to hold values that are less than the current start position, then also grow to at least twice the current length. For example, consider
IntSet set = new BitSet();
Now set.elements has size 20 and can represent elements -220...419.
set.elements has size 45 and can represent elements -1020...419.
The clear operation simply clears a bit without shrinking the array.
Add test cases of your choice to the JUnit test class SetTest.java.
Need a custom answer at your budget?
This assignment has been answered 6 times in private sessions. | {"url":"https://codifytutor.com/marketplace/integer-set-in-java-1d72541a-3616-47c1-b357-daeea8604bab","timestamp":"2024-11-02T11:21:02Z","content_type":"text/html","content_length":"22114","record_id":"<urn:uuid:1b6ba5d4-9d70-4699-9720-814f64231b04>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00711.warc.gz"} |
Self-scattering on large, porous grains in protoplanetary disks with dust settling
Issue A&A
Volume 648, April 2021
Article Number A87
Number of page(s) 13
Section Interstellar and circumstellar matter
DOI https://doi.org/10.1051/0004-6361/202040033
Published online 15 April 2021
A&A 648, A87 (2021)
Self-scattering on large, porous grains in protoplanetary disks with dust settling
Institut für Theoretische Physik und Astrophysik, Christian-Albrechts-Universität zu Kiel, Leibnizstr. 15, 24118 Kiel, Germany
e-mail: rbrunngraeber@astrophysik.uni-kiel.de
Received: 1 December 2020
Accepted: 26 February 2021
Context. Observations of protoplanetary disks in the sub-millimetre wavelength range suggest that polarisation is caused by scattering of thermal re-emission radiation. Most of the dust models that
are used to explain these observations have major drawbacks: they either use much smaller grain sizes than expected from dust evolution models, or result in polarisation degrees that are lower than
Aims. We investigate the effect of dust grain porosity on the observable polarisation due to scattering at sub-millimetre wavelengths arising from grain size distributions up to millimetre sizes, as
they are expected to be present close to the midplane of protoplanetary disks.
Methods. Using the effective medium theory, we calculated the optical properties of porous dust and used them to predict the behaviour of the observable polarisation degree due to scattering.
Subsequently, Monte Carlo radiative transfer simulations for protoplanetary disks with porous dust grains were performed to analyse the additional effect of the optical depth structure, and thus the
effect of multiple scattering events and inhomogeneous temperature distributions on the net observable polarisation degree.
Results. We find that porous dust grains with moderate filling factors of about 10% increase the degree of polarisation compared to compact grains. For higher grain porosities, that is, grains with a
filling factor of 1% or lower, the extinction opacity decreases, as does the optical depth of a disk with constant mass. Consequently, the unpolarised direct radiation dominates the scattered flux,
and the degree of polarisation drops rapidly. Even though the simulated polarisation degree is higher than in the case of compact grains, it is still below the typical observed values for face-on
disks. However, the polarisation degree can be increased when crucial model assumptions derived from disk and dust evolution theories, for instance, dust settling and millimetre-sized dust grains,
are dropped. In the case of inclined disks, however, our reference model is able to achieve polarisation degrees of about 1%, and using higher disk masses, even higher than this.
Key words: radiative transfer / polarization / protoplanetary disks / scattering / radiation mechanisms: thermal
© ESO 2021
1 Introduction
Observations of the polarised continuum radiation of protoplanetary disks in the (sub-)millimetre (mm) wavelength range are considered an important source of information about the structure of
magnetic fields and the dust grain shape and size in these disks. In this context, two main polarisation mechanisms are important: dichroic emission and absorption by magnetically aligned
non-spherical grains, and polarisation due to scattering. Especially the so-called self-scattering mechanism, that is, scattering of thermal re-emission radiation by large dust grains at (sub-)mm
wavelengths, provides a promising explanation for many polarisation patterns that have recently been observed in protoplanetary disks (e.g. Harris et al. 2018; Lee et al. 2018; Mori et al. 2019;
Ohashi & Kataoka 2019; Sadavoy et al. 2019). The local polarisation degree in these spatially resolved maps has been found to be typically about 1–3%, with some rare outliers in both directions, for
example, HD 142527, VLA 1623, or AS 209 (Kataoka et al. 2016; Harris et al. 2018; Mori et al. 2019). To reproduce this relatively high value, it is often stated that dust grains with radii of about
100 μm contribute most to the observed flux (Bacciotti et al. 2018; Hull et al. 2018; Lin et al. 2020; Ohashi et al. 2020). However, this explanation appears to contradict commonly accepted scenarios
of grain growth in protoplanetary disks. There, grains are expected to grow rather fast to about mm sizes and eventually dominate the emission in this wavelength regime (e.g. Beckwith & Sargent 1991;
Testi et al. 2014). Recent radiative transfer simulations allowing such large grains, however, predict lower polarisation degrees than observed (Brunngräber & Wolf 2020; Yang & Li 2020). Furthermore,
due to size-dependent turbulent vertical mixing, larger grains will settle towards the midplane, which leads to even lower polarisation degrees (Brunngräber & Wolf 2020). This significant discrepancy
in the degree of polarisation between simulation and observation is a major drawback in the explanation of the observed polarisation with scattering.
To overcome this lack of polarisation in simulations, more complex dust models have been proposed, such as oblate, prolate, or fractal grain shapes, or different chemical compositions (Kirchschlager
& Bertrang 2020; Brunngräber & Wolf 2020; Yang & Li 2020). In the quoted studies, the polarisation after single scattering of an initially unpolarised wave is usually discussed, that is, the ratio of
the Müller matrix elements S[12]∕S[11] and the albedo ω. However, these studies lack full radiative transfer simulations to prove the applicability of these results in the environment of
protoplanetary disks, for instance. The effects of multi-scattering, attenuation of the radiation, and the contribution of unpolarised re-emission radiation are likely to change the degree of
polarisation drastically.
A particularly frequently proposed solution for the discrepancy of the degree of polarisation is the use of porous dust grains because the polarisation after single scattering increases with
increasing porosity (e.g. Tazaki et al. 2019; Brunngräber & Wolf 2020). Theoretical and laboratory studies of dust growth and evolution both suggest low filling factors Φ, that is, highly porous
grains, and/or fractal geometries (Blum et al. 2000; Blum & Wurm 2008; Kothe et al. 2013; Garcia & Gonzalez 2020). Furthermore, dust particles collected in the Solar System and the upper layers of
the Earth’s atmosphere show a fractal and/or porous structure as well (Brownlee et al. 1977; A’Hearn et al. 2005; Abe et al. 2006; Westphal et al. 2014). In this paper, we examine the effect of dust
grain porosity on the polarised fraction of scattered radiation at sub-mm wavelengths of protoplanetary disks. Therefore we focus at first on the optical properties that determine the level of
polarised scattered radiation in Sect. 2. This approach is commonly used because it isindependent of any underlying density distribution and thus requires no elaborate radiative transfer simulations.
At the same time, it allows a straightforward examination of a broad parameter space. The dust model is given by a grain size distribution composed of porous dust, and the effect of different filling
factors and maximum grain sizes is investigated. This dust model is described in Sect. 2.1, and the results are presented in Sect. 2.2 and discussed in Sect. 2.3. In the second part of this paper, in
Sect. 3, radiative transfer simulations are performed in the context of a protoplanetary disk where the largest grains have settled towards the midplane. For this purpose, we introduce the underlying
disk model in Sect. 3.1, and present the results of the Monte Carlo simulations in Sect. 3.2. The discussion of these results is provided in Sect. 4, and we summarize the entire paper in Sect. 5.
2 Porosity and the general effect on polarised scattered radiation
As a first step to investigate the effect of grain porosity on the degree of polarisation due to scattering, we examine selected optical quantities of the dust. This procedure is independent of any
spatial density distribution, therefore it is universal and only depends on the intrinsic attributes of the dust grains determining its optical properties, such as chemical composition, shape,
internal structure, and refractive index. This approach is therefore also applied in many studies concerning scattering at (sub-)mm wavelengths to explore the parameter space of dust composition and
maximum grain size in the size distribution with regard to the expected degree of polarisation (e.g. Kataoka et al. 2015; Tazaki et al. 2019; Yang & Li 2020; Kirchschlager & Bertrang 2020;
Brunngräber & Wolf 2020).
2.1 Dust model
Owing to growth and destruction processes in protoplanetary disks, dust grains are expected to have sizes that range over many orders of magnitude. Each grain size has a different
wavelength-dependent absorption, emission, and scattering behaviour. To account for this, we integrated these quantities weighted by the size-dependent abundance over the entire grain size
distribution, which was adopted to be a power law with n(s) ∝ s^q as found for the interstellar medium (Mathis et al. 1977), with s being the radius of a spherical grain. If not stated otherwise, the
size exponent q was set to −3.5. The lower boundary of the size distribution is s[min] = 5 nm and the upper boundary s[max] is a free parameter.
The dust model was chosen to consist only of spherical grains. For the chemical composition, we used the refractive indices of astronomical silicate (astrosil) and the two main orientations of
graphite (parallel and perpendicular, Draine & Malhotra 1993) from Draine (2003). To mimic porous dust grains in our study, we considered spherical grains with different filling factors Φ and
explored their effect on the scattering polarisation. The filling factor is defined as the ratio of the volume of the dust material V[dust] to the total grain volume V[total], $Φ = 1−P =
with the porosity $P$, with $0≤P≤1$, that is, a filling factor of Φ = 1 represents compact grains.
The complex refractive indices of these porous grains were approximated using the effective medium theory (EMT) and the Bruggeman mixing rule (Bruggeman 1935). The optical quantities, that is,
cross-sections of absorption and scattering C[abs] and C[sca], and the elements of the Müller matrix $S$, were calculated with the approximation solution of the Mie theory (Mie 1908) using the tool
miex (Wolf & Voshchinnikov 2004). These properties were calculated for grain size distributions with 101 logarithmically spaced different maximum grain sizes s[max] between 1 μm and 10 cm, and for
101 logarithmically spaced different filling factors between 10^−4 and 1. Each of these 10 201 calculations used 100 grain size bins between s[min] = 5 nm and s[max] to average the optical quantities
over the size distribution (Wolf & Voshchinnikov 2004, their Eqs. (28)–(30)), and 361 scattering angles in the range [0°, 180°].
2.2 Optical quantities
In this section, we restrict the discussion to astrosil because it is the most abundant component in protoplanetary disks. However, our general qualitative conclusions with respect to the effect of
porosity on the polarisation characteristics are also applicable for the case of graphite; see Appendix A for the corresponding plots for graphite.
Single scattering
First, we assumed an unpolarised wave, which is scattered exactly once by a spherical dust grain. In this simple case, the polarisation degree of the outgoing wave is only a function of the two
Müller matrix elements S[11] and S[12] of the dust, $p = −S12S11.$
We refer to p as the single scattering polarisation. It is the most commonly used quantity to describe and predict the scattering polarisation of different dust grains (e.g. Kataoka et al. 2015;
Tazaki et al. 2019; Yang & Li 2020; Kirchschlager & Bertrang 2020; Brunngräber & Wolf 2020). For spherical grains, the thermal re-emission of the dust is unpolarised and the highest contribution to
the net, that is, observable, polarised flux is indeed given by single-scattering events. Hence, the quantity p is a good qualitative indicator for the expected net polarisation degree. The Müller
matrix elements and thus p depend on the scattering angle θ. The predominant scattering angle is a function of the actual geometrical set-up and wavelength λ.
For the Rayleigh scattering regime (2πs ≪ λ), the single-scattering polarisation is maximised for a scattering angle of θ = 90 °. For larger grains (or smaller wavelengths), the single-scattering
polarisation p shows a fast-changing wavy pattern for different scattering angles (Brunngräber & Wolf 2019). We therefore focused on a scattering angle of θ = 90 °. In addition, scattering angles
close to this value also dominate the radiation of face-on protoplanetary disks, as shown in Sect. 3. The corresponding single-scattering polarisation p is shown in the upperleft panel of Fig. 1 for
grain size distributions with different maximum grain sizes between 1 μm and 10 cm and filling factors between 10^−4 and 1 at a wavelength of λ = 850 μm. We restrict the discussion here to this
wavelength, which is close to the wavelength of many of the polarisation observations performed with ALMA (e.g., Stephens et al. 2017; Hull et al. 2018; Lee et al. 2018; Dent et al. 2019; Lin et al.
2020). Almost the entire parameter space yields a single-scattering polarisation of unity. The polarisation is thus determined by Rayleigh scattering although the grain sizes are partially much
larger than the wavelength. Only for high filling factors (low porosities) is the degree of polarisation lower as a result of Mie scattering.
Fig. 1
Degree of polarisation for single scattering p = −S[12]∕S[11] for a scattering angle of θ = 90° (upper left), albedo ω (upper right), extinction opacity (lower left), and probability density
function Ŝ[11] for the scattering angle θ = 90° (lower right). All quantities are calculated for a grain size distribution with q = −3.5, s[min] = 5 nm and different upper grain size limits s[max]
at a wavelength of 850 μm for astrosil.
Scattering probability
However, in contrast to the above simplistic scenario, the total flux does not only include the scattered radiation, but also the direct thermal emission of dust, that is, unscattered radiation. As
this direct emission is unpolarised according to our dust model, the net polarisation degree is lower than the polarisation degree resulting from scattering alone. In order to derive a useful measure
for the total net polarisation degree under these more complex circumstances, we have to quantify whether a photon interacts with dust grains (scattering or absorption) on its path to the observer.
Depending on the optical depth τ ∝ κ[ext] ⋅ ρ[dust] along that path, the probability of interaction is given by 1 − exp(−τ) with the extinction opacity κ[ext] = C[ext]∕m[dust], the extinction
cross-sectionC[ext] = C[sca] + C[abs], the mean grain mass m[dust], and the spatial dust mass density ρ[dust]. If the observed density distribution is optically thin (τ ≪ 1), the observable flux is
determined by unpolarised direct thermal re-emission radiation. For a given optical depth, the ratio of scattered radiation to direct thermal radiation is also determined by the fraction of
scattering events among all interactions. Consequently, the polarisation degree of the emerging radiation is given by both the optical depth and the albedo ω, $ω = CscaCsca+Cabs.$
The albedo and extinction opacity, which determines the optical depth, are shown in the upper right and lower left panel of Fig. 1, respectively. For large maximum grain sizes, the opacity first
increases with decreasing filling factor and decreases again after reaching a maximum value. Whereas the grain mass is proportional to the filling factor, the extinction cross-section shows a more
complex behaviour. At first, it decreases much slower with decreasing filling factor than the grain mass. However, for filling factors Φ ≲ 10^−2, the cross-section is roughly proportional to Φ^2 and
thus decreases faster than the grain mass. Hence, the opacity is maximised at the transition of these two regions. In addition, the opacity is constant for a constant product s[max] ⋅ Φ, which has
been shown previously by Kataoka et al. (2014) and Tazaki et al. (2019), for example.
Fig. 2
Product of albedo ω, single-scattering polarisation p, and probability density function Ŝ[11] for the scattering angle θ = 90°. All quantities are calculated for a grain size distribution with q =
−3.5, s[min] = 5 nm and three different maximum grain sizes s[max] = 100 μm (blue), 1 mm (red), and 10 mm (green) at a wavelength of 850 μm for astrosil; see Sect. 2.3 for the corresponding
Scattering angle dependency
If a scattering event occurs, the actual intensity of the radiation scattered in the direction of the observer is also a function of the scattering angle θ. For single scattering, this dependency is
given by the first Müller matrix element S[11] (θ). The probability density function $S^11(θ) = S11(θ)∫0πS11sinθ dθ$ for a scattering angle of θ = 90° is shown in the lower right panel of Fig. 1. For
large maximum grain sizes, this probability decreases for increasing grain size because large grains tend to scatter predominantly in the forward direction, that is, θ ≈ 0°. With increasing grain
size, the intensity of the radiation that is scattered at 90 ° decreases. As a consequence, the flux ratio of scattered to direct radiation and hence the degree of polarisation decreases as well.
2.3 Conclusion
To evaluate the effect of the grain porosity of the observable polarisation, a combined analysis of all of the aforementioned quantities is mandatory. Lower filling factors increase the
single-scattering polarisation p and the albedo ω. At the same time, however, the extinction opacity, and thus the scattering probability, decreases as well as the probability Ŝ[11] for the
scattering angle θ = 90°. In Fig. 2, the product of albedo, single-scattering polarisation, and the scattering probability density function for θ = 90 ° as a function of the filling factor Φ is shown
for different maximum grain sizes s[max]. This product can be used as an advanced proxy for the polarisation degree due to scattering for optically thick environments. For grains in the Rayleigh
scattering regime, a decrease in filling factor Φ decreases this proxy and therefore the expected polarisation degree. For size distributions with large maximum grain sizes compared to the wavelength
and high filling factors, the usual wavy pattern of the Mie scattering regime is present, which also decreases this quantity. Thus, for the combination of sub-mm wavelengths and large grain sizes,
the degree of polarisation is expected to first increase with decreasing filling factor and after reaching a maximum value, to decrease again for even lower filling factors.
In summary, the analysis of these optical quantities derived from the complex refractive index already provides useful results in the context of polarisation due to (sub-)mm scattering off porous
dust grains. First, increasing the porosity, that is, decreasing the filling factor, results in a higher single-scattering polarisation p of up to 100%. In addition, the albedo increases up to unity
as well. Thus, the quantity p ⋅ ω, which is occasionally used in the literature as a simple proxy for expected polarisation degrees (e.g. Kataoka et al. 2015; Tazaki et al. 2019; Yang & Li 2020), is
Second, highly porous grains tend to scatter almost exclusively in the forward direction. For the considered spherical grains, forward scattering does not produce polarised radiation. This reduces
the scattering probability and polarised flux for larger scattering angles.
Third, above a certain threshold of the maximum grain size of about 0.3 mm for the size distribution, composition, and wavelength considered in Fig. 1, the extinction opacity first increases with
increasing filling factors but decreases after reaching a maximum value. For filling factors below this maximum, the opacity drops rather fast and the optical depth decreases. This reduces the ratio
of scattered to direct radiation and ultimately, the degree of polarisation. In addition, the opacity is roughly constant for a constant product s[max] ⋅ Φ.
And lastly, considering the single-scattering polarisation p alone is in general not sufficient to predict the net polarisation nor its qualitative behaviour, that is, trends as a function of grain
properties. Three more quantities result from the refractive index of the dust and contribute significantly to the polarisation degree: albedo, extinction opacity, and the fraction of radiation
scattered in the direction of the observer.
3 Radiative transfer simulations of a protoplanetary disk
In order to evaluate the effect of dust grain porosity on the scattering polarisation in the sub-mm wavelength regime in the environmentof protoplanetary disks, comprehensive numerical simulations
are mandatory to account for the complex processes that determine the observable polarisation. Therefore full 3D Monte Carlo radiative transfer simulations with the publicly available^1 software
POLARIS (Reissl et al. 2016) were conducted. In addition to the optical properties that were used in the simple approach in Sect. 2, these simulations also considered self-consistent dust temperature
for each grain size bin, the contributionof unpolarised direct re-emission, multiple scattering, local radiation field anisotropy, the scattering angle as a function of spatial density distribution
and viewing angle, and a size-dependent vertical density distribution resulting from dust settling. Forthis purpose, we first define the density distribution of our protoplanetary disk model
including dust settling in Sect. 3.1. In Sect. 3.2, the results of the radiative transfer simulations are discussed.
3.1 Model set-up
For high polarisation due to scattering, a high optical depth is necessary. Therefore self-scattering in the mm wavelength regime is almost exclusively observed in protoplanetary disks. In these
disks, grain sizes from a few nanometres up to centimetres are expected to exist (e.g. Beckwith & Sargent 1991; Testi et al. 2014). Due to gas drag, the larger grains settle towards the midplane of
the disk, whereas the smallest grains are coupled to the gas and thus show a much larger vertical extent (Safronov 1972; Goldreich & Ward 1973; Dubrulle et al. 1995; Dullemond & Dominik 2004; Woitke
et al. 2016). The radial and vertical density distribution of the dust is therefore a function of the grain size (Gräfe et al. 2013; Pinte et al. 2016; Guilloteau et al. 2016; Avenhaus et al. 2018;
Andrews et al. 2018; Villenave et al. 2020). Additionally, the scattering and polarisation efficiency also depend on grain size. To account for these effects, the radiative transfer simulations in
this study were performed on the basis of a protoplanetary disk model where the largest grains have already settled towards the midplane. A more detailed description of the disk model can be found in
Brunngräber & Wolf (2020). The gas density distribution ρ[gas] is given by (Lynden-Bell & Pringle 1974; Kenyon & Hartmann 1995; Hartmann et al. 1998) $ρgas(r,z) = Σgas(r)2π hgas(r)⋅exp[−12(zhgas(r))
2] ,$
with the vertically integrated surface density $Σgas(r) = 2π ρ0 href⋅(rR0)−γ⋅exp[−(rRtrunc)2−γ]$(2)
and the scale height $hgas(r) = href (rR0)β,$
with (r, z) being the usual cylindrical coordinates, R[0] is the radius where the gas scale height h[gas] reaches the reference scale height h[ref], R[trunc] is the truncation radius, and ρ[0] is the
density scaling parameter that defines the total gas mass.
To account for the dust settling, the dust mass was re-distributed vertically. The dust scale height h[dust](r, s, Φ) was downscaled according to the settling model of Dubrulle et al. (1995) and
Woitke et al. (2016) for grains with radius s and bulk density ρ[bulk], $hdust(r,s,Φ) = hgas(r)⋅f(r,s,Φ)1+f(r,s,Φ),$(3)
where the settling function f(r, s, Φ) is given by $f(r,s,Φ) = αsettle6π Σgas(r)s ϱbulk(Φ).$(4)
The parameter α[settle] regulates the strength of the settling and is determined by the disk viscosity and the strength of turbulent mixing (Dubrulle et al. 1995; Woitke et al. 2016). We wish to
emphasise that due to the dependency on $ϱbulk(Φ),$ the strength of the vertical settling is thus a function of porosity and is weaker for highly porous grains. The mass density distribution of the
dust is hence given by $ρdust(r,z,s,Φ) = Σdust(r)2π hdust(r,s,Φ)⋅exp[−12(zhdust(r,s,Φ))2]$(5)
and the dust surface density by $Σdust(r) = fd/g⋅Σgas(r)$
with thedust-to-gas ratio f[d/g] = 1∕100.
The dust model differs slightly from the model described in Sect. 2.1. Whereas in the previous section the optical quantities, for instance, the cross-sections, were averaged over the entire size
distribution for simplicity, during the radiative transfer simulation, the size as well as the corresponding optical quantities of a grain were determined probabilistically for each individual
interaction event (scattering or absorption). To achieve ample sampling, we used a total of 1000 logarithmically spaced grain sizes to represent the entire distribution. Furthermore, a mixture of
62.5% astrosil and 37.5% graphite (1∕3–2∕3 approximation; Draine & Malhotra 1993) was used.
To account for dust settling, the size-dependent scale height given by Eqs. (3) and (4) was calculated for ten logarithmically spaced size bins. These bins cover the entire range of grain sizes from
s[min] to s[max]. Thus, the net density distribution is given by superposition, that is, the sum of ten individual density distributions characterised by their individual grain size ranges; for
details, see also Brunngräber & Wolf (2020). The parameter values of the reference disk model and the central star are identical to those of the reference model of Brunngräber & Wolf (2020); see
Table 1.
Table 1
Disk and stellar parameter values of our reference case.
3.2 Results
In this section, we present the outcome of the radiative transfer simulations. In order to derive spatially resolved polarisation maps, we first calculated the temperature distribution of the dust.
The sole heating source is a central star with an effective temperature of T[⋆] = 4050 K and a luminosity of L[⋆] = 0.9 L[⊙], representinga low-luminous but typical T Tauri star (see e.g. the
compilation in Table F.1. in Varga et al. 2018). Based on the derived self-consistent temperature distribution, the radiative transfer of the thermal re-emission of the dust and thus the resulting
intensity maps were simulated atλ = 850 μm, the same wavelength as in Sect. 2. The observable flux represents a combination of the unpolarised direct emission and the scattered radiation. In the
following, we separate these two individual contributions. To compare the results of models with different filling factors, we present radial profiles of the degree of polarisation both for the
scattered and the total flux and the flux ratio of direct to scattered radiation. More details on the simulation process and extraction of the radial profiles can be found in Brunngräber & Wolf
First, we focus on the scattered radiation. In this way, the comparison to the results of the simple approach in Sect. 2 is straightforward. In the upper row of Fig. 3, radial profiles of the
polarisation degree for the scattered flux only for different filling factors Φ are shown. The polarisation of pure scattering increases with decreasing filling factor (different colours), which is a
direct consequence of an increasing ratio of the Müller matrix elements S[12] ∕S[11]; see also the upper left panel of Fig. 1. In addition, the general shape of the radial profiles is determined by
the degree of anisotropy of the local radiation field. An anisotropic radiation field is one of the main requirements for polarisation due to scattering of thermal re-emission radiation (Kataoka et
al. 2015; Heese et al. 2020; Brunngräber & Wolf 2020). In the inner regions of the disk, the radiation field is close to isotropic because of the axisymmetric density and temperaturedistribution.
However, the local radiation field becomes increasingly anisotropic with larger orbital radii r. This is caused by the decreasing temperature, and hence flux, for larger distances. Consequently, the
degree of polarisation increases towards the outer regions of the disk.
Both a lower disk mass (first column in Fig. 3) and a smaller maximum grain size (third column) increase the polarisation of scattered flux. In both cases, the temperature in the upper layers of the
disk is higher than in the reference case (solid lines in all four columns). The resulting steeper vertical temperature gradient causes the anisotropy of the local radiation field and thus the
polarisation degree to increase. Additionally, for a smaller maximum grain size, the single-scattering polarisation p is significantly higher than in the reference case; see the upper left panel of
Fig. 1. Similarly, the scattering polarisation decreases for increasing disk masses (second column) and maximum grain sizes (fourth column).
Nonetheless, a high degree of scattering polarisation is not the only prerequisite for a high net polarisation degree combining scattered and direct radiation. The flux ratio of scattered to direct
radiation is shown in the middle row of Fig. 3. A high flux ratio (≫ 1) represents the case in which the total flux is dominated by scattering and thus the degree of polarisation is similar to the
case with scattered flux only. However, for our simulations, the ratio is well below unity in most considered cases (down to ~ 210^−2 for compact dust and even lower for porous grains), and thus the
direct unpolarised radiation contributes much more to the overall flux than the scattered flux. In most of the cases shown in Fig. 3, the flux ratio decreases for a decreasing filling factor Φ
because of two reasons. First, the extinction cross-section decreases for more porous grains. Second, the product of albedo, single-scattering polarisation, and probability for a scattering angle of
90 ° also decreases for lower filling factors; see the lower left panels of Figs. 1 and 2. In addition, for constant optical properties, the ratio of scattered to direct flux is essentially governed
by the optical depth towards the observer. Thus, in the case of a lower or higher disk mass, the ratio decreases or increases, respectively, by the same factor as the mass. However, considering
different maximum grain sizes, the flux ratio is also a function of the extinction opacity κ[ext] and the product of albedo, single-scattering polarisation, and probability distribution of the
scattering angle. As a result, the flux ratio decreases by different amounts for most of the simulations with smaller and larger maximum grain sizes. One exception is the combination of a low filling
factor of Φ = 10^−2 and a maximum grain size of s[max] = 10 mm represented by the dashed red line in the fourth column of Fig. 3. Here, both the albedo and the extinction cross-section increase for
the larger maximum grain size.
The combination of the scattering polarisation degree and the flux ratio of scattered to direct radiation yields the overall degree of polarisation of the total flux; see the lower panels of Fig. 3.
For most disk models, the net polarisation first increases with decreasing filling factor, and after reaching a maximum value, it decreases again. This is the result of two counteracting effects: on
the one hand, there is the already mentioned steadily increasing degree of polarisation of pure scattering. On the other hand, there is the steep drop of the flux ratio with decreasing filling
factors. Theonly exception within the considered parameter space is the disk with a smaller maximum grain size of s[max] = 100 μm. Here, the maximum polarisation is produced by compact grains and is
reduced for more porous grains, which is also shown in Fig. 2. Compared to the reference case, the net polarisation increases for a higher disk mass because the disk becomes optically thick and for a
smaller maximum grain size due to the increase in scattering polarisation. In general, the net polarisation degree of the total flux is reduced significantly compared to the case of scattered
radiation only. Furthermore, the shape of the radial profile changes and now decreases towards larger radii after a maximum degree of polarisation. Its position in the disk is governed by the ratio
of the scattered to direct unpolarised flux. For optically thin disks, the maximum is located at the inner rim of the disk and is shifted farther out with increasing optical depth^2. The position of
maximum polarisation degree can therefore be used to determine the transition from optically thick to optically thin (Brunngräber & Wolf 2020).
Although the degree of polarisation increases for porous grains, it is still significantly lower than those observed, which typically yield polarisation degrees of about a few percent. The main
cause, as discussed, is the relatively low contribution of the scattered radiation to the total radiation. At first glance, an increase in this contribution and thus of the resulting polarisation
degree can be achieved by assuming an increased maximum grain size, and thus a greater effect of large grains with large extinction cross-sections. However, our simulations show that this conclusion
is not correct. In the left column of Fig. 4, the radial profiles for disks containing grains with different maximum radii and a filling factor of Φ = 10^−1 are directly compared for three different
total dust masses. For the chosen wavelength of λ = 850 μm, the scattered to direct flux ratio is very similar for disks containing dust distributions with maximum radii of s[max] = 1 mm (green) and
10 mm (red). This is because opacity and albedo are very similar as well. The main difference between these two simulations is the polarisation after single scattering p (upper left panel of Fig. 1),
which decreases from 0.87 to 0.55 for the larger maximum grain size. At the same time, more of the larger grains are present in the disk, which tends to have lower equilibrium temperatures. Thus, the
anisotropy of the radiation field is reduced as well. Both the lower temperature and the lower single-scattering polarisation lead to a reduced degree of polarisation for scattering only (upper left
panel of Fig. 4) and to a reduced overall degree of polarisation (lower left panel of Fig. 4). A smaller maximum grain size, however, may lead to an increased net polarisation degree. Although the
opacity, and thus the optical depth, is lower by about a factor 16 than the reference case for s[max] = 100 μm, the net polarisation is higher by a factor 1.15 due to the higher single-scattering
polarisation, the steeper temperature distribution of the smaller grains, and the resulting high radiation field anisotropy. A maximum grain size of only 100 μm does not agree either with the
commonly accepted dust and disk evolution models, however, nor do these models predict a sharp decrease of the grain size distribution beyond this value.
Another possibility to increase the degree of polarisation might be to change the grain size distribution. In our reference model, the size distribution has a power-law exponent of q = −3.5. In
Appendix A, the corresponding plots of the optical properties for different exponents can be found. The scattering polarisation steadily increases for steeper grain size distributions; see the upper
right panel of Fig. 4. This is again due to the larger amount of smaller grains, which have a steeper temperature distribution and a higher single-scattering polarisation. The flux ratio of scattered
to direct radiation, on the other hand, is only slightly altered. Thus, the overall degree of polarisation for compact and moderately porous grains increases for steeper grain size distributions as
well. Nevertheless, the polarisation degree is still below the observed values.
Brunngräber & Wolf (2020) showed that a larger scale height of the dust distribution increases the polarisation degree. This is even more pronounced for porous grains, as is shown in the left column
of Fig. 5. The net polarisation rises above 0.6% for an intermediate filling factor Φ = 10^−1 (green) and a scale height twice as large as the reference case h[ref] = 20 au (dashed). Although there
is observational evidence for such large scale heights, most studies tend towards smaller scale heights of about 10 au or smaller (Stapelfeldt et al. 1998; Andrews et al. 2010; Woitke et al. 2019;
Villenave et al. 2020). When only the scale heights of the largest grains are increased by discarding the dust settling, the polarisation also increases up to about 0.5%; see the left column of Fig.
5. This is again contrary to the expected dust distribution from evolutionary models.
Driven by the goal to achieve a higher opacity and single-scattering polarisation and thus to reproduce the generally observed polarisation degree in the order of a few percent, a high amount of
carbonaceous dust was considered by Yang & Li (2020). We partially confirm their findings. When we increased the fraction of graphite in our dust model from 37.5% to 60%, the polarisation indeed
increased, but only by a small amount, and the polarisation is still well below 1%. The maximum value is about 0.3%; see Fig. 6. This means that a large graphite fraction alone does not sufficiently
increase the polarisation degree.
As already shown in several studies (e.g. Yang et al. 2016, 2017), the polarisation degree also depends on the disk inclination. We confirm these findings as shown in Fig. 7 with our reference model
and an inclination of 45°. Whereas the polarisation of scattered flux only is very similar for both viewing angles, the ratio of the scattered to direct flux increases especially atthe near side of
the disk, and hence the increased net polarisation fraction up to 1.9% in the innermost regions for a filling factor of 0.1 (middle panel). This inclination-induced increase of the polarisation
arises because the scattering angle and the resulting polarisation is different for radiation parallel to the major disk axis than for radiation parallel to the minor disk axis. Thus, the
superposition of these contributions is less destructive and hence results in a lower decrease in polarisation degree than in the face-on case, where the scattering angles towards the observer at a
given point in the disk are very similar for radiation from all directions. This is described in detail in Yang et al. (2016). Thus, with an inclined disk consisting of moderately porous mm-sized
grains, we are able to achieve levels of polarisation that are comparable to selected observations (Bacciotti et al. 2018; Lee et al. 2018; Mori et al. 2019; Ohashi & Kataoka 2019). These trends
concerning filling factor, size distribution, and density distribution still hold for inclined disks. Additionally, we stress that the disk mass has a significant effect on the level and spatial
distribution of the polarisation degree for inclined disks through the optical depth. The optical depth and the mass under the assumption of a known dust model can therefore in principle be
constrained by spatially resolved polarisation observations; see Fig. 8.
Fig. 3
Radial profiles of different quantities for disks with different filling factors (colours). The reference disk model (solid lines) is shown in a comparison with different disk models (dashed
lines): a disk with a dust mass M[dust] = 10^−5 M[⊙] lower by a factor of 10 (first column), a disk with a dust mass M[dust] = 10^−3 M[⊙] higher by a factor ten (second column), a disk with a
maximum grain size s[max] = 100 μm smaller by a factor ten (third column), and a disk with a maximum grain size s[max] = 10 mm larger by a factor ten (fourth column). Top: degree of polarisation
for scattering only, i.e. without the unpolarised direct re-emission. Middle: flux ratio of scattered to direct re-emission. Bottom: net degree of polarisation of the total flux, i.e. scattered and
direct radiation, including vertical error bars.
Fig. 4
Same as Fig. 3, but for (left) disks with different dust masses (line style) and upper grain size limits (colours), and for (right) disks with different filling factors (colour) and exponents of
the grain size distribution (line style).
Fig. 5
Same as Fig. 3, but for disks with different filling factors (colour) and reference scale heights (left), and withoutdust settling (right).
Fig. 6
Same as Fig. 3, but for disks with different filling factors (colours) and mass fraction of graphite in the dust model (line style).
Fig. 7
Map of the polarisation degree with superimposed polarisation vectors for our reference disk with Φ = 1 (left), 10^−1 (centre), and 10^−2 (right), and an inclination of i = 0° (top) and 45° (bottom
). The lengths of the polarisation vectors are scaled to the polarisation degree, as indicated in the top left corner.
Fig. 8
Same as Fig. 7, but for a disk with a 10 times higher dust mass. The lengths of the polarisation vectors are scaled to the polarisation degree, as indicated in the top left corner.
4 Discussion
The degree of polarisation resulting from our simulations is in most cases lower by a factor of 2 to 3 than what is observed in several protoplanetary disks where the polarisation is thought to
result from scattering. Although both albedo ω and single-scattering polarisation p = −S[12]∕S[11] imply a high degree of polarisation for decreasing filling factors Φ, the net degree of polarisation
of the total radiation increases only for a small range of filling factors before decreasing towards even lower filling factors. This decrease in polarisation after reaching a maximum value at about
Φ = 0.1 is due to the decrease in extinction opacity of the dust grains and the resulting low optical depth, and hence a decrease in the scattering probability.
This might in principle be addressed by different modifications of the model. To increase the optical depth and thus the scattering probability, intuitive steps would be to increase the disk mass,
the maximum grain size, or use a more shallow grain size distribution. However, these changes do not necessarily lead to higher polarisation degrees, as seen in the previous section. Yang & Li (2020)
showed that a high abundance of carbonaceous material would increase the single-scattering polarisation as well. Appendix A shows that the opacity and albedo are indeed higher than in astrosil
grains. However, our reference dust model already contains 37.5% graphite (mass fraction). Even a larger fraction of 60% graphite increases the polarisation only by a small amount, the maximum degree
of polarisation is still below 0.3% for a face-on disk.
Although the degree of polarisation significantly increases for inclined disks, by a factor of about 4 for our reference disk and an inclination of 45°, it remains below most of the observed
polarisation degrees when compact grains are assumed. For a moderate filling factor of 0.1, our simulations reach polarisation degrees of 1.9% in the innermost disk regions, which is in agreement
with several observations. An only moderately porous dust phase is also in agreement with recent studies concerning the interstellar medium (Hirashita et al. 2021; Draine & Hensley 2021), and the
rotational disruption of dust grains in protoplanetary disks (Tatsuuma & Kataoka 2021). Furthermore, the polarisation of inclined disks follows the same trends concerning the filling factor as
mentioned above in the face-on case.
Nevertheless, the spectral index of our simulated disks between the wavelengths λ = 1.3 mm and 2 mm significantly increases for porous grains compared to compact grains. This is in contrast to many
observations of protoplanetary disks where the spectral index at mm wavelengths is found to be in the range between 2 and 2.5 which is commonly interpreted as evidence for large grains (Andrews &
Williams 2005). In our simulations, however, the spectral index of disks consisting of porous grains is larger than 3 and is as high as 3.8 for low-mass disks. Thus, our results suggest that the
grain porosity $P = 1−Φ$ in protoplanetary disks does not exceed 0.9 significantly and may be even lower. Although this is in agreement with Tazaki et al. (2019), for instance, it cannot be ruled out
that larger dust grains of about several cm in radius are present in the disk and depress the spectral index. At the same time, however, this would decrease the polarisation fraction.
A common suggestion for increasing the scattering polarisation in disk models without the inherent problem with the spectral index in the case of porous grains outlined above is introducing
non-spherical dust grains. Kirchschlager & Bertrang (2020) showed that non-spherical compact, that is, non-porous, silicate dust grains might show a higher single-scattering polarisation degree. The
complexity of the dust model increases significantly with non-spherical dust grains, and the prediction of whether the observable degree of polarisation would increase or decrease is not
straightforward. This uncertainty is even larger as the alignment efficiency and the dominating alignment mechanism in protoplanetary disks is still unclear. In addition to the polarisation degree,
the observed orientations place further constraints on the origin of the polarisation and need to be accounted for as well. A proper investigation including extensive Monte Carlo radiative transfer
simulations remains to be done.
5 Summary
We investigated the effect of dust grain porosity on the polarisation degree due to scattering of thermal re-emission radiation in the sub-mm wavelength range. For this purpose, we analysed the
optical properties of grain size distributions with different filling factors and upper grain size limits. This analysis approach is independent of any underlying density distribution and thus
independent of the local radiation field, and allows general discussions of the polarisation.
We showed that focussing on the single-scattering polarisation p = −S[12]∕S[11] and the albedo ω is not sufficient for reliable estimates of the overall degree of polarisation. We stress that the
extinction opacity and the flux distribution for different scattering angles have to be included in these considerations. Especially for larger grains, the scattered flux decreases dramatically for
scattering angles close to 90 ° as large grains tend to show a strong forward-scattering behaviour. Furthermore, increasing the upper grain size limit does not necessarily increase the opacity or
optical depth of the entire dust phase. Although the extinction cross-section is approximately proportional to the grain surface and thus to s^2, the actual number of dust grains is indirectly
proportional to the grain mass, that is, to s^−3, resulting in a decrease in opacity.
In the second part of our study, full radiative transfer simulations of typical protoplanetary disks with porous dust grains were conducted. Here, the largest grains are already settled towards the
midplane to account for dust settling. It was shown that introducing porosity may increase the degree of polarisation by up to a factor of four. However, decreasing the filling factor below ~ 0.1
will decrease thedegree of polarisation for even lower filling factors because of the low opacity and optical depth. The flux of the unpolarised direct emission of the dust becomes much higher than
the scattered flux, which effectively decreases the polarisation. Furthermore, the spectral index of disks consisting of porous grains is significantly larger than observed.
In the face-on case, increasing the anisotropy of the radiation field by larger temperature differences in the disk may lead to polarisation degrees in the order of what is observed in protoplanetary
disks. This can be achieved for instance by rather small upper grain size limits in the order of 100 μm or a significant overabundance of grains in that size range, or by a high anisotropy of the
radiation field due to rather extreme density distributions, caused by large scale heights, for example. Both are very unlikely scenarios for typical protoplanetary disks according to current disk
and dust evolution models.
For inclined disks, our simulations are able to reproduce polarisation degrees of 1.2% and higher with our reference disk model including mm-sized grains, moderate porosity, dust settling, and
reasonable (i.e. in agreement with accepted evolution theories and observations) geometrical disk properties. However, the spectral index of these simulations does not reproduce the observed values
in the mm wavelength range.
The mismatch between the spatially resolved polarimetric observations at sub-mm wavelengths, the observed spectral index and the predicted values resulting from theoretical models, and the
interpretation of a wealth of previous observations of dust in protoplanetary disks remains an open issue that urgently needs to be further addressed in future studies.
This research was funded through the DFG grant WO 857/18-1. This research made use of Astropy (http://www.astropy.org), a community-developed core Python package for Astronomy (Astropy Collaboration
Appendix A Optical properties for further dust mixtures
Fig. A.1
Degree of polarisation for single scattering p = −S[12]∕S[11] for a scattering angle of θ = 90° (upper left), albedo (upper right), extinction opacity (lower left), and probability density function
Ŝ[11] for the scattering angle θ = 90° (lower right). All quantities were calculated for a grain size distribution with q = −3.5, s[min] = 5 nm, and different upper grain size limits s[max] at a
wavelength of 850 μm for graphite (parallel).
Fig. A.2
Degree of polarisation for single scattering p = −S[12]∕S[11] for a scattering angle of θ = 90° (upper left), albedo (upper right), extinction opacity (lower left), and probability density function
Ŝ[11] for the scattering angle θ = 90° (lower right). All quantities were calculated for a grain size distribution with q = −3.5, s[min] = 5 nm, and different upper grain size limits s[max] at a
wavelength of 850 μm for graphite (perpendicular).
Fig. A.3
Degree of polarisation for single scattering p = −S[12]∕S[11] for a scattering angle of θ = 90° (upper left), albedo (upper right), extinction opacity (lower left), and probability density function
Ŝ[11] for the scattering angle θ = 90° (lower right). All quantities were calculated for a grain size distribution with q = −3.2, s[min] = 5 nm, and different upper grain size limits s[max] at a
wavelength of 850 μm for astrosil.
Fig. A.4
Degree of polarisation for single scattering p = −S[12]∕S[11] for a scattering angle of θ = 90° (upper left), albedo (upper right), extinction opacity (lower left), and probability density function
Ŝ[11] for the scattering angle θ = 90° (lower right). All quantities were calculated for a grain size distribution with q = −3.8, s[min] = 5 nm, and different upper grain size limits s[max] at a
wavelength of 850 μm for astrosil.
Fig. A.5
Degree of polarisation for single scattering p = −S[12]∕S[11] for a scattering angle of θ = 90° (upper left), albedo (upper right), extinction opacity (lower left), and probability density function
Ŝ[11] for the scattering angle θ = 90° (lower right). All quantities were calculated for a grain size distribution with q = −3.5, s[min] = 5 nm, and different upper grain size limits s[max] at a
wavelength of 1.3 mm for astrosil.
Fig. A.6
Degree of polarisation for single scattering p = −S[12]∕S[11] for a scattering angle of θ = 90° (upper left), albedo (upper right), extinction opacity (lower left), and probability density function
Ŝ[11] for the scattering angle θ = 90° (lower right). All quantities were calculated for a grain size distribution with q = −3.5, s[min] = 5 nm, and different upper grain size limits s[max] at a
wavelength of 2 mm for astrosil.
All Tables
Table 1
Disk and stellar parameter values of our reference case.
All Figures
Fig. 1
Degree of polarisation for single scattering p = −S[12]∕S[11] for a scattering angle of θ = 90° (upper left), albedo ω (upper right), extinction opacity (lower left), and probability density
function Ŝ[11] for the scattering angle θ = 90° (lower right). All quantities are calculated for a grain size distribution with q = −3.5, s[min] = 5 nm and different upper grain size limits s[max]
at a wavelength of 850 μm for astrosil.
In the text
Fig. 2
Product of albedo ω, single-scattering polarisation p, and probability density function Ŝ[11] for the scattering angle θ = 90°. All quantities are calculated for a grain size distribution with q =
−3.5, s[min] = 5 nm and three different maximum grain sizes s[max] = 100 μm (blue), 1 mm (red), and 10 mm (green) at a wavelength of 850 μm for astrosil; see Sect. 2.3 for the corresponding
In the text
Fig. 3
Radial profiles of different quantities for disks with different filling factors (colours). The reference disk model (solid lines) is shown in a comparison with different disk models (dashed
lines): a disk with a dust mass M[dust] = 10^−5 M[⊙] lower by a factor of 10 (first column), a disk with a dust mass M[dust] = 10^−3 M[⊙] higher by a factor ten (second column), a disk with a
maximum grain size s[max] = 100 μm smaller by a factor ten (third column), and a disk with a maximum grain size s[max] = 10 mm larger by a factor ten (fourth column). Top: degree of polarisation
for scattering only, i.e. without the unpolarised direct re-emission. Middle: flux ratio of scattered to direct re-emission. Bottom: net degree of polarisation of the total flux, i.e. scattered and
direct radiation, including vertical error bars.
In the text
Fig. 4
Same as Fig. 3, but for (left) disks with different dust masses (line style) and upper grain size limits (colours), and for (right) disks with different filling factors (colour) and exponents of
the grain size distribution (line style).
In the text
Fig. 5
Same as Fig. 3, but for disks with different filling factors (colour) and reference scale heights (left), and withoutdust settling (right).
In the text
Fig. 6
Same as Fig. 3, but for disks with different filling factors (colours) and mass fraction of graphite in the dust model (line style).
In the text
Fig. 7
Map of the polarisation degree with superimposed polarisation vectors for our reference disk with Φ = 1 (left), 10^−1 (centre), and 10^−2 (right), and an inclination of i = 0° (top) and 45° (bottom
). The lengths of the polarisation vectors are scaled to the polarisation degree, as indicated in the top left corner.
In the text
Fig. 8
Same as Fig. 7, but for a disk with a 10 times higher dust mass. The lengths of the polarisation vectors are scaled to the polarisation degree, as indicated in the top left corner.
In the text
Fig. A.1
Degree of polarisation for single scattering p = −S[12]∕S[11] for a scattering angle of θ = 90° (upper left), albedo (upper right), extinction opacity (lower left), and probability density function
Ŝ[11] for the scattering angle θ = 90° (lower right). All quantities were calculated for a grain size distribution with q = −3.5, s[min] = 5 nm, and different upper grain size limits s[max] at a
wavelength of 850 μm for graphite (parallel).
In the text
Fig. A.2
Degree of polarisation for single scattering p = −S[12]∕S[11] for a scattering angle of θ = 90° (upper left), albedo (upper right), extinction opacity (lower left), and probability density function
Ŝ[11] for the scattering angle θ = 90° (lower right). All quantities were calculated for a grain size distribution with q = −3.5, s[min] = 5 nm, and different upper grain size limits s[max] at a
wavelength of 850 μm for graphite (perpendicular).
In the text
Fig. A.3
Degree of polarisation for single scattering p = −S[12]∕S[11] for a scattering angle of θ = 90° (upper left), albedo (upper right), extinction opacity (lower left), and probability density function
Ŝ[11] for the scattering angle θ = 90° (lower right). All quantities were calculated for a grain size distribution with q = −3.2, s[min] = 5 nm, and different upper grain size limits s[max] at a
wavelength of 850 μm for astrosil.
In the text
Fig. A.4
Degree of polarisation for single scattering p = −S[12]∕S[11] for a scattering angle of θ = 90° (upper left), albedo (upper right), extinction opacity (lower left), and probability density function
Ŝ[11] for the scattering angle θ = 90° (lower right). All quantities were calculated for a grain size distribution with q = −3.8, s[min] = 5 nm, and different upper grain size limits s[max] at a
wavelength of 850 μm for astrosil.
In the text
Fig. A.5
Degree of polarisation for single scattering p = −S[12]∕S[11] for a scattering angle of θ = 90° (upper left), albedo (upper right), extinction opacity (lower left), and probability density function
Ŝ[11] for the scattering angle θ = 90° (lower right). All quantities were calculated for a grain size distribution with q = −3.5, s[min] = 5 nm, and different upper grain size limits s[max] at a
wavelength of 1.3 mm for astrosil.
In the text
Fig. A.6
Degree of polarisation for single scattering p = −S[12]∕S[11] for a scattering angle of θ = 90° (upper left), albedo (upper right), extinction opacity (lower left), and probability density function
Ŝ[11] for the scattering angle θ = 90° (lower right). All quantities were calculated for a grain size distribution with q = −3.5, s[min] = 5 nm, and different upper grain size limits s[max] at a
wavelength of 2 mm for astrosil.
In the text
Current usage metrics show cumulative count of Article Views (full-text article views including HTML views, PDF and ePub downloads, according to the available data) and Abstracts Views on
Vision4Press platform.
Data correspond to usage on the plateform after 2015. The current usage metrics is available 48-96 hours after online publication and is updated daily on week days.
Initial download of the metrics may take a while. | {"url":"https://www.aanda.org/articles/aa/full_html/2021/04/aa40033-20/aa40033-20.html","timestamp":"2024-11-14T15:25:40Z","content_type":"text/html","content_length":"223815","record_id":"<urn:uuid:e5070bc6-05e7-47fe-8ab7-10e718fec68d>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00297.warc.gz"} |
Commodity Return Predictability: Evidence from Implied Variance, Skewness and their Risk Premia
Markets Traded
Period of Rebalancing
Standard Deviation (Annual)
Original paper
This paper investigates the role of realized and implied and their risk premia (variance and skewness) for commodities’ future returns. We estimate these moments from high frequency and commodity
futures option data that results in forward-looking measures. Risk premia are computed as the difference between implied and realized moments. We highlight, from a cross-sectional and time series
perspective, the strong positive relation between commodity returns and implied skewness. Moreover, we emphasize the high performance of skewness risk premium. Additionally, we show that their
portfolios exhibit the best risk-return tradeoff. Most of our results are robust to other factors such as the momentum and roll yield.
Keywords:Â Commodity Forecast, Implied Volatility, Implied Skewness, Risk Premium
Trading rules
• Scope for investments: 8 commodities (agricultural: corn, soybeans, wheat; metal: copper, silver, gold; energy: oil, natural gas).
• Estimate implied skewness: Use Bakshi’s model-free approach with one-month options on futures contracts.
• Design portfolios with both long and short positions:
□ Go long: Commodities in the top quartile (25%) based on implied skewness.
□ Go short: Commodities in the bottom quartile (25%) based on implied skewness.
• Portfolio structure: 2 long and 2 short commodity futures contracts, equal weights.
• Duration and frequency of portfolio construction: Set up daily and held for an equivalent of 21 consecutive business days (approximately a month).
• Rebalancing: On a daily basis, sticking to 1/21 of the total portfolio weight.
Python code
import backtrader as bt
import numpy as np
class ImpliedSkewnessStrategy(bt.Strategy):
params = (
('lookback', 21),
('rebalance_days', 1),
def __init__(self):
self.rebalance_counter = 0
def next(self):
if self.rebalance_counter % self.params.rebalance_days == 0:
self.rebalance_counter += 1
def rebalance_portfolio(self):
implied_skewness = []
for d in self.getdatanames():
skewness = self.calculate_implied_skewness(d)
implied_skewness.append((d, skewness))
implied_skewness.sort(key=lambda x: x[1], reverse=True)
long_positions = implied_skewness[:2]
short_positions = implied_skewness[-2:]
target_weight = 1 / (2 * self.params.lookback)
for data_name, _ in long_positions:
data = self.getdatabyname(data_name)
position_size = target_weight * self.broker.getvalue() / data.close[0]
self.order_target_size(data, position_size)
for data_name, _ in short_positions:
data = self.getdatabyname(data_name)
position_size = -target_weight * self.broker.getvalue() / data.close[0]
self.order_target_size(data, position_size)
def calculate_implied_skewness(self, data_name):
# Implement Bakshi's model-free approach here
# This function should return the implied skewness of the given data
if __name__ == '__main__':
cerebro = bt.Cerebro()
# Add data feeds for the 8 commodities here
# ...
cerebro.adddata(data_feed, name='corn')
cerebro.adddata(data_feed, name='soybeans')
cerebro.adddata(data_feed, name='wheat')
cerebro.adddata(data_feed, name='copper')
cerebro.adddata(data_feed, name='silver')
cerebro.adddata(data_feed, name='gold')
cerebro.adddata(data_feed, name='oil')
cerebro.adddata(data_feed, name='natural_gas')
print('Starting Portfolio Value: %.2f' % cerebro.broker.getvalue())
print('Final Portfolio Value: %.2f' % cerebro.broker.getvalue())
This is a basic template for the Implied Skewness Strategy in Backtrader. Note that you will need to implement the Bakshi’s model-free approach to calculate implied skewness and add the data feeds
for the 8 commodities. | {"url":"https://wiki.paperswithbacktest.com/trading-strategies/commodities/commodity-return-predictability-evidence-from-implied-variance-skewness-and-their-risk-premia","timestamp":"2024-11-11T09:46:44Z","content_type":"text/html","content_length":"184647","record_id":"<urn:uuid:fc325783-b221-4bfb-b0de-77ecd4d79f8d>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00204.warc.gz"} |
Rotate Factor Loadings • Genstat v21
This dialog lets you rotate factor loadings from a principal components analysis according to either the Varimax or Quartimax criterion. Principal components analysis defines a set of dimensions
(sometimes called axes) that are linear combinations of the original variables. The individual coefficients of these combinations are called loadings, and can be used to interpret the dimensions.
With principal components analysis, the loadings must lie in the range [-1, +1]. When several dimensions are considered it is possible to define an equivalent set of new dimensions, whose loadings
are linear combinations of the original loadings. If the absolute values of the loadings for a new dimension are either close to 0 or close to 1, you can interpret the dimension as mainly
representing only those original variables with large positive (or negative) loadings. You may sometimes want new dimensions determined by loadings like these, because they are easier to interpret.
The methods by which these new dimensions can be obtained are generally known collectively as factor rotation because the new dimensions represent a rotation of the axes of the original dimensions.
This specifies which items of output are to be produced by the analysis.
Communalities Displays the communalities of the variables
Rotation Displays the rotated factors
Controls the method used for the factor rotation. The Varimax rotation, maximizes the variance of the squares of the loadings within each new dimension: the effect of this rotation should be to
spread out the squared-loadings to the extremes of their range. The Quartimax rotation uses the fourth power of the loadings instead of the second power.
Number of dimensions
This specifies the number of dimensions to rotate from the original loadings (the other dimensions are left unchanged).
Specifies graphical display of the results from the analysis.
Scatter plot of scores with Draws a scatter plot matrix of the principal component scores with the rotated axis. The axis dimensions to be displayed in the plot can be specified using the
rotated axis Dimensions to plot option
Dimensions to plot Controls the axis dimensions to be used in the plot. The axis dimension should be less than or equal to the number specified in the Number of dimensions field in
the principal components options
Display labels Specify a text containing labels for the individual points displayed in the plots. The text should be equal in length to the data variates
This lets you save results from the factor rotation in Genstat data structures. After selecting the appropriate boxes, you need to type the names for the identifiers of the data structures into the
corresponding In: fields.
Rotated loadings Matrix Of the rotated loadings
Rotated scores Matrix Of the rotated scores
Display in spreadsheet
Select this to display the results in a new spreadsheet window.
See also | {"url":"https://genstat21.kb.vsni.co.uk/knowledge-base/rotate-factor-loadings/","timestamp":"2024-11-02T18:21:04Z","content_type":"text/html","content_length":"41433","record_id":"<urn:uuid:9673147a-d086-40fe-8710-3d3e43465481>","cc-path":"CC-MAIN-2024-46/segments/1730477027729.26/warc/CC-MAIN-20241102165015-20241102195015-00550.warc.gz"} |
Simulation CPD Needs-Assessment
Simulation CPD Needs-Assessment
Thank you for taking approximately 5-10 minutes to inform us of your CPD needs. Based on this information, we will assist you in identifying the most suitable CPD course(s) from among the existing
programs in BC.
Question Title
* 1. Please indicate how useful the following Emergency Airway Management skills are to you:
Not at all Useful Somewhat Useful Useful Very Useful Essential
Bag Valve Mask
Direct Laryngoscopy
Supraglottic Devices (LMA, Combitube)
Video Laryngoscopy (ie. Glidescope)
Fiberoptic Intubation
Question Title
* 2. Please indicate how useful the following Procedural skills are to you:
Not at all Useful Somewhat Useful Useful Very Useful Essential
Surgical Airway Techniques
Central Line Vascular Access
Intraosseous Vascular Access
Chest Tube Placement
Guide-wire-assisted Chest Tube Placement
ED Thoracotomy
Transvenous Pacemaker
Question Title
* 3. Please indicate how useful the following Ultrasound-Guided Emergency Medicine Procedure skills are to you:
Not at all Useful Somewhat Useful Useful Very Useful Essential
US-guided Central Line Placement
US-guided Abscess Drainage
US-guided Foreign Body Localization and Removal
US-guided Thoracentesis
US-guided Lumbar Puncture
US-guided Pericardiocentesis
US-guided Arthrocentesis
US-guided Paracentesis
Question Title
* 4. Please indicate how useful the following Diagnostic Ultrasound skills are to you:
Not at all Useful Somewhat Useful Useful Very Useful Essential
Question Title
* 5. Please indicate how useful the following Trauma skills are to you:
Not at all Useful Somewhat Useful Useful Very Useful Essential
Approach to Initial ED management of Trauma
Trauma Clinical Practice Guidelines, Protocols and Pathways
Updates on Recent Changes in Trauma Resuscitation
Geriatric Trauma
Trauma in Pregnancy
Pediatric Trauma
Question Title
* 6. Please indicate how useful the following Cardiovascular Emergency skills are to you:
Not at all Useful Somewhat Useful Useful Very Useful Essential
Advanced Cardiac Life Support (ALS)
Pediatric ALS
Question Title
* 7. Please indicate how useful the following Decision-Making skills are to you:
Not at all Useful Somewhat Useful Useful Very Useful Essential
Cognitive Error Reduction Strategies
Question Title
* 8. Please indicate how useful the following Crisis Resource Management skills are to you:
Not at all Useful Somewhat Useful Useful Very Useful Essential
Role Clarity
Interdisciplinary Team Training (ie. Emergency Physicians training with Surgeon and Anesthetist)
Interprofessional Team Training (MD’s training with RN’s, RT’s, paramedics)
Resource Allocation
Question Title
* 9. Other topics of interest:
Question Title
* 10. List any CPD courses that you have completed within the past two years: | {"url":"https://www.surveymonkey.com/r/SLRN2GZ","timestamp":"2024-11-02T17:00:24Z","content_type":"text/html","content_length":"407810","record_id":"<urn:uuid:021aad8a-909a-4f92-883c-b180977b662d>","cc-path":"CC-MAIN-2024-46/segments/1730477027729.26/warc/CC-MAIN-20241102165015-20241102195015-00235.warc.gz"} |
Commutator - Wikipedia Republished // WIKI 2
In mathematics, the commutator gives an indication of the extent to which a certain binaryoperation fails to be commutative. There are different definitions used in grouptheory and ringtheory.
YouTube Encyclopedic
• 1/5
• GT7. The Commutator Subgroup
• Commutator Subgroup of a Group - Group Theory - lesson 49
• Commutator , commutator subgroup | Theoretical part | abstract algebra | math with Akash Tripathi
• Commutator of group and their theroms . And its property
Group theory
The commutator of two elements, g and h, of a group G, is the element
[g, h] = g^−1h^−1gh.
This element is equal to the group's identity if and only if g and h commute (that is, if and only if gh = hg).
The set of all commutators of a group is not in general closed under the group operation, but the subgroup of G generated by all commutators is closed and is called the derived group or the
commutatorsubgroup of G. Commutators are used to define nilpotent and solvable groups and the largest abelian quotientgroup.
The definition of the commutator above is used throughout this article, but many group theorists define the commutator as
[g, h] = ghg^−1h^−1.^[1]^[2]
Using the first definition, this can be expressed as [g^−1, h^−1].
Identities (group theory)
Commutator identities are an important tool in grouptheory.^[3] The expression a^x denotes the conjugate of a by x, defined as x^−1ax.
1. ${\displaystyle x^{y}=x[x,y].}$
2. ${\displaystyle [y,x]=[x,y]^{-1}.}$
3. ${\displaystyle [x,zy]=[x,y]\cdot [x,z]^{y}}$ and ${\displaystyle [xz,y]=[x,y]^{z}\cdot [z,y].}$
4. ${\displaystyle \left[x,y^{-1}\right]=[y,x]^{y^{-1}}}$ and ${\displaystyle \left[x^{-1},y\right]=[y,x]^{x^{-1}}.}$
5. ${\displaystyle \left[\left[x,y^{-1}\right],z\right]^{y}\cdot \left[\left[y,z^{-1}\right],x\right]^{z}\cdot \left[\left[z,x^{-1}\right],y\right]^{x}=1}$ and ${\displaystyle \left[\left[x,y\
right],z^{x}\right]\cdot \left[[z,x],y^{z}\right]\cdot \left[[y,z],x^{y}\right]=1.}$
Identity (5) is also known as the Hall–Witt identity, after PhilipHall and ErnstWitt. It is a group-theoretic analogue of the Jacobiidentity for the ring-theoretic commutator (see next section).
N.B., the above definition of the conjugate of a by x is used by some group theorists.^[4] Many other group theorists define the conjugate of a by x as xax^−1.^[5] This is often written ${\
displaystyle {}^{x}a}$. Similar identities hold for these conventions.
Many identities that are true modulo certain subgroups are also used. These can be particularly useful in the study of solvablegroups and nilpotentgroups. For instance, in any group, second powers
behave well:
${\displaystyle (xy)^{2}=x^{2}y^{2}[y,x][[y,x],y].}$
If the derivedsubgroup is central, then
${\displaystyle (xy)^{n}=x^{n}y^{n}[y,x]^{\binom {n}{2}}.}$
Ring theory
Rings often do not support division. Thus, the commutator of two elements a and b of a ring (or any associativealgebra) is defined differently by
${\displaystyle [a,b]=ab-ba.}$
The commutator is zero if and only if a and b commute. In linearalgebra, if two endomorphisms of a space are represented by commuting matrices in terms of one basis, then they are so represented in
terms of every basis. By using the commutator as a Liebracket, every associative algebra can be turned into a Liealgebra.
The anticommutator of two elements a and b of a ring or associative algebra is defined by
${\displaystyle \{a,b\}=ab+ba.}$
Sometimes ${\displaystyle [a,b]_{+}}$ is used to denote anticommutator, while ${\displaystyle [a,b]_{-}}$ is then used for commutator.^[6] The anticommutator is used less often, but can be used to
define Cliffordalgebras and Jordanalgebras and in the derivation of the Diracequation in particlephysics.
The commutator of two operators acting on a Hilbertspace is a central concept in quantummechanics, since it quantifies how well the two observables described by these operators can be measured
simultaneously. The uncertaintyprinciple is ultimately a theorem about such commutators, by virtue of the Robertson–Schrödingerrelation.^[7] In phasespace, equivalent commutators of function
star-products are called Moyalbrackets and are completely isomorphic to the Hilbert space commutator structures mentioned.
Identities (ring theory)
The commutator has the following properties:
Lie-algebra identities
1. ${\displaystyle [A+B,C]=[A,C]+[B,C]}$
2. ${\displaystyle [A,A]=0}$
3. ${\displaystyle [A,B]=-[B,A]}$
4. ${\displaystyle [A,[B,C]]+[B,[C,A]]+[C,[A,B]]=0}$
Relation (3) is called anticommutativity, while (4) is the Jacobiidentity.
Additional identities
1. ${\displaystyle [A,BC]=[A,B]C+B[A,C]}$
2. ${\displaystyle [A,BCD]=[A,B]CD+B[A,C]D+BC[A,D]}$
3. ${\displaystyle [A,BCDE]=[A,B]CDE+B[A,C]DE+BC[A,D]E+BCD[A,E]}$
4. ${\displaystyle [AB,C]=A[B,C]+[A,C]B}$
5. ${\displaystyle [ABC,D]=AB[C,D]+A[B,D]C+[A,D]BC}$
6. ${\displaystyle [ABCD,E]=ABC[D,E]+AB[C,E]D+A[B,E]CD+[A,E]BCD}$
7. ${\displaystyle [A,B+C]=[A,B]+[A,C]}$
8. ${\displaystyle [A+B,C+D]=[A,C]+[A,D]+[B,C]+[B,D]}$
9. ${\displaystyle [AB,CD]=A[B,C]D+[A,C]BD+CA[B,D]+C[A,D]B=A[B,C]D+AC[B,D]+[A,C]DB+C[A,D]B}$
10. ${\displaystyle [[A,C],[B,D]]=[[[A,B],C],D]+[[[B,C],D],A]+[[[C,D],A],B]+[[[D,A],B],C]}$
If A is a fixed element of a ring R, identity (1) can be interpreted as a Leibnizrule for the map ${\displaystyle \operatorname {ad} _{A}:R\rightarrow R}$ given by ${\displaystyle \operatorname {ad}
_{A}(B)=[A,B]}$. In other words, the map ad[A] defines a derivation on the ring R. Identities (2), (3) represent Leibniz rules for more than two factors, and are valid for any derivation. Identities
(4)–(6) can also be interpreted as Leibniz rules. Identities (7), (8) express Z-bilinearity.
From identity (9), one finds that the commutator of integer powers of ring elements is:
${\displaystyle [A^{N},B^{M}]=\sum _{n=0}^{N-1}\sum _{m=0}^{M-1}A^{n}B^{m}[A,B]B^{N-n-1}A^{M-m-1}=\sum _{n=0}^{N-1}\sum _{m=0}^{M-1}B^{n}A^{m}[A,B]A^{N-n-1}B^{M-m-1}}$
Some of the above identities can be extended to the anticommutator using the above ± subscript notation.^[8] For example:
1. ${\displaystyle [AB,C]_{\pm }=A[B,C]_{-}+[A,C]_{\pm }B}$
2. ${\displaystyle [AB,CD]_{\pm }=A[B,C]_{-}D+AC[B,D]_{-}+[A,C]_{-}DB+C[A,D]_{\pm }B}$
3. ${\displaystyle [[A,B],[C,D]]=[[[B,C]_{+},A]_{+},D]-[[[B,D]_{+},A]_{+},C]+[[[A,D]_{+},B]_{+},C]-[[[A,C]_{+},B]_{+},D]}$
4. ${\displaystyle \left[A,[B,C]_{\pm }\right]+\left[B,[C,A]_{\pm }\right]+\left[C,[A,B]_{\pm }\right]=0}$
5. ${\displaystyle [A,BC]_{\pm }=[A,B]_{-}C+B[A,C]_{\pm }=[A,B]_{\pm }C\mp B[A,C]_{-}}$
6. ${\displaystyle [A,BC]=[A,B]_{\pm }C\mp B[A,C]_{\pm }}$
Exponential identities
Consider a ring or algebra in which the exponential ${\displaystyle e^{A}=\exp(A)=1+A+{\tfrac {1}{2!}}A^{2}+\cdots }$ can be meaningfully defined, such as a Banachalgebra or a ring of
In such a ring, Hadamard'slemma applied to nested commutators gives: ${\textstyle e^{A}Be^{-A}\ =\ B+[A,B]+{\frac {1}{2!}}[A,[A,B]]+{\frac {1}{3!}}[A,[A,[A,B]]]+\cdots \ =\ e^{\operatorname {ad} _
{A}}(B).}$ (For the last expression, see Adjoint derivation below.) This formula underlies the Baker–Campbell–Hausdorffexpansion of log(exp(A) exp(B)).
A similar expansion expresses the group commutator of expressions ${\displaystyle e^{A}}$ (analogous to elements of a Liegroup) in terms of a series of nested commutators (Lie brackets), ${\
displaystyle e^{A}e^{B}e^{-A}e^{-B}=\exp \!\left([A,B]+{\frac {1}{2!}}[A{+}B,[A,B]]+{\frac {1}{3!}}\left({\frac {1}{2}}[A,[B,[B,A]]]+[A{+}B,[A{+}B,[A,B]]]\right)+\cdots \right).}$
Graded rings and algebras
When dealing with gradedalgebras, the commutator is usually replaced by the graded commutator, defined in homogeneous components as
${\displaystyle [\omega ,\eta ]_{gr}:=\omega \eta -(-1)^{\deg \omega \deg \eta }\eta \omega .}$
Adjoint derivation
Especially if one deals with multiple commutators in a ring R, another notation turns out to be useful. For an element ${\displaystyle x\in R}$, we define the adjoint mapping ${\displaystyle \mathrm
{ad} _{x}:R\to R}$ by:
${\displaystyle \operatorname {ad} _{x}(y)=[x,y]=xy-yx.}$
This mapping is a derivation on the ring R:
${\displaystyle \mathrm {ad} _{x}\!(yz)\ =\ \mathrm {ad} _{x}\!(y)\,z\,+\,y\,\mathrm {ad} _{x}\!(z).}$
By the Jacobiidentity, it is also a derivation over the commutation operation:
${\displaystyle \mathrm {ad} _{x}[y,z]\ =\ [\mathrm {ad} _{x}\!(y),z]\,+\,[y,\mathrm {ad} _{x}\!(z)].}$
Composing such mappings, we get for example ${\displaystyle \operatorname {ad} _{x}\operatorname {ad} _{y}(z)=[x,[y,z]\,]}$ and ${\displaystyle \operatorname {ad} _{x}^{2}\!(z)\ =\ \operatorname {ad}
_{x}\!(\operatorname {ad} _{x}\!(z))\ =\ [x,[x,z]\,].}$ We may consider ${\displaystyle \mathrm {ad} }$ itself as a mapping, ${\displaystyle \mathrm {ad} :R\to \mathrm {End} (R)}$, where ${\
displaystyle \mathrm {End} (R)}$ is the ring of mappings from R to itself with composition as the multiplication operation. Then ${\displaystyle \mathrm {ad} }$ is a Liealgebra homomorphism,
preserving the commutator:
${\displaystyle \operatorname {ad} _{[x,y]}=\left[\operatorname {ad} _{x},\operatorname {ad} _{y}\right].}$
By contrast, it is not always a ring homomorphism: usually ${\displaystyle \operatorname {ad} _{xy}\,eq \,\operatorname {ad} _{x}\operatorname {ad} _{y}}$.
General Leibniz rule
The generalLeibnizrule, expanding repeated derivatives of a product, can be written abstractly using the adjoint representation:
${\displaystyle x^{n}y=\sum _{k=0}^{n}{\binom {n}{k}}\operatorname {ad} _{x}^{k}\!(y)\,x^{n-k}.}$
Replacing ${\displaystyle x}$ by the differentiation operator ${\displaystyle \partial }$, and ${\displaystyle y}$ by the multiplication operator ${\displaystyle m_{f}:g\mapsto fg}$, we get ${\
displaystyle \operatorname {ad} (\partial )(m_{f})=m_{\partial (f)}}$, and applying both sides to a function g, the identity becomes the usual Leibniz rule for the nth derivative ${\displaystyle \
partial ^{n}\!(fg)}$.
See also
Further reading
• McKenzie,R.; Snow, J. (2005), "Congruencemodularvarieties:commutatortheory", in Kudryavtsev, V. B.; Rosenberg, I. G. (eds.), Structural Theory of Automata, Semigroups, and Universal Algebra,
NATO Science Series II, vol. 207, Springer, pp. 273–329, doi:10.1007/1-4020-3817-8_11, ISBN 9781402038174
External links
This page was last edited on 5 September 2024, at 18:26 | {"url":"https://wiki2.org/en/Commutator","timestamp":"2024-11-09T19:07:29Z","content_type":"application/xhtml+xml","content_length":"159852","record_id":"<urn:uuid:2e159bd7-6123-482d-93a3-dcb61a2fcf7a>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00416.warc.gz"} |
Safe Haskell None
Language Haskell2010
type Graph = Graph' Block Source #
A control-flow graph, which may take any of four shapes (O/O, OC, CO, C/C). A graph open at the entry has a single, distinguished, anonymous entry point; if a graph is closed at the entry, its entry
point(s) are supplied by a context.
data Graph' block (n :: Extensibility -> Extensibility -> *) e x where Source #
Graph' is abstracted over the block type, so that we can build graphs of annotated blocks for example (Compiler.Hoopl.Dataflow needs this).
GNil :: Graph' block n O O
GUnit :: block n O O -> Graph' block n O O
GMany :: MaybeO e (block n O C) -> Body' block n -> MaybeO x (block n C O) -> Graph' block n e x
Outputable (Graph CmmNode e x) Source #
Defined in PprCmm
class NonLocal thing where Source #
Gives access to the anchor points for nonlocal edges as well as the edges themselves
entryLabel Source #
:: thing C x
-> Label The label of a first node or block
successors Source #
:: thing e C
-> [Label] Gives control-flow successors
NonLocal CmmNode Source #
Defined in CmmNode
NonLocal n => NonLocal (Block n) Source #
Defined in Hoopl.Graph
mapGraph :: (forall e x. n e x -> n' e x) -> Graph n e x -> Graph n' e x Source #
Maps over all nodes in a graph.
mapGraphBlocks :: forall block n block' n' e x. (forall e x. block n e x -> block' n' e x) -> Graph' block n e x -> Graph' block' n' e x Source #
Function mapGraphBlocks enables a change of representation of blocks, nodes, or both. It lifts a polymorphic block transform into a polymorphic graph transform. When the block representation
stabilizes, a similar function should be provided for blocks.
revPostorderFrom :: forall block. NonLocal block => LabelMap (block C C) -> Label -> [block C C] Source #
Returns a list of blocks reachable from the provided Labels in the reverse postorder.
This is the most important traversal over this data structure. It drops unreachable code and puts blocks in an order that is good for solving forward dataflow problems quickly. The reverse order is
good for solving backward dataflow problems quickly. The forward order is also reasonably good for emitting instructions, except that it will not usually exploit Forrest Baskett's trick of
eliminating the unconditional branch from a loop. For that you would need a more serious analysis, probably based on dominators, to identify loop headers.
For forward analyses we want reverse postorder visitation, consider: A -> [B,C] B -> D C -> D Postorder: [D, C, B, A] (or [D, B, C, A]) Reverse postorder: [A, B, C, D] (or [A, C, B, D]) This matters
for, e.g., forward analysis, because we want to analyze *both* B and C before we analyze D. | {"url":"http://hackage-origin.haskell.org/package/ghc-8.10.2/docs/Hoopl-Graph.html","timestamp":"2024-11-11T03:44:03Z","content_type":"application/xhtml+xml","content_length":"23048","record_id":"<urn:uuid:691f1cda-eb3d-40fc-82d6-51eaffe6bfeb>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00691.warc.gz"} |
Data for strategic and operational decisions | Information Systems homework help
Using Analytic Techniques to Add Meaning to Data
Download data on a company’s stock history. From this data, create scatterplots, histograms, and calculate the mean, median, mode, and standard deviation of some data points. Write a 5 page report
including the graphs and descriptive statistics you have created.
Business analytics techniques are used to facilitate decision making by transforming large amounts of raw data into meaningful information. Many businesses rely on analysis of relevant historical
data to make key strategic and operational decisions. Therefore, understanding how to use techniques such as graphical representation and descriptive statistics to translate raw data into useful
information can be a valuable skill in an organization.
In this assessment and the next, you will have the opportunity to sharpen your analytics skills by locating and interpreting real-life stock data.
You have been learning about how to explore data. In this assessment, you will apply those skills by downloading a practical dataset and creating graphical representations of that data. The work you
do in this assessment will lay the foundation for future assessments in which you analyze and interpret those graphical representations. Since the purpose of business analytics is to make sense of
large quantities of raw data, this assessment helps you develop skills in applying analytics to business contexts by practicing the exploration and display of data.
In addition to graphical and tabular summary methods, numeric or quantitative variables and data can be summarized numerically using various techniques of description and display.
Descriptive methods, which describe existing data, are also methods for using a subset of the available data to estimate or test a theory about a measurement on a larger group. This larger group is
called the population, and the measurement being studied is the parameter. The smaller group, or subset, of the population that is taken in order to make an inference (to make an estimate or test a
theory) is referred to as the sample. The measurement taken on that sample is then referred to as the statistic, which is usually the best single-number estimate for the population parameter of
interest. Most often, however, the estimate should not be restricted to a single number that would be exactly correct or incorrect. Instead, it is preferable to calculate some range of possible
values between which there can be a certain percent confidence that the true population parameter falls. These are referred to as confidence intervals.
Business analytics techniques are used to facilitate decision making by transforming large amounts of raw data into meaningful information. Many businesses rely on analysis of relevant historical
data to make key strategic and operational decisions with the goal of gaining or maintaining competitive advantage. Therefore, understanding how to use techniques such as graphical representation and
descriptive statistics to translate raw data into useful information can be a valuable skill in an organization.
In this assessment and the next, you will have the opportunity to sharpen your analytics skills by locating and interpreting real-life stock data, creating a business report, and presenting the
information from the business report with your supervisor and colleagues as part of a decision-making effort.
Your Role
You are a member of a business analyst group interested in a publicly traded company. Your supervisor has asked you to create a presentation, including graphical representations from raw stock data.
From that raw stock data, you are to create a business report for a company-wide meeting at the end of the quarter. Your work and the work of others will result in a Business Report, which will be
utilized to help company leadership make decisions.
Your first task is to pick a publicly held company with only one business platform. So do not pick Apple, Amazon, Disney, et cetera. You want a company that plays in only one industry. Then you are
to provide an overview of the company, including business context. Remember that business context includes many aspects of the company, industry, competition, et cetera.
The second task is prepping stock history data from the business or company and creating scatter plots and a histograms.
The third task is to calculate mean, median, mode, and standard deviation of the adjusted daily closing stock price and the stock volume.
The fourth task is to provide a summary of the information you provide (including data analysis) without bias and with factual information including citations.
It is your responsibility to present visually and to interpret the data into meaningful information using analysis and descriptive statistics.
Select a publicly traded business or stock that plays in only one industry in which you have interest. Download the raw data on the company’s stock history.
Follow these steps to locate and download stock history from Yahoo! Finance:
Go to Yahoo Finance.
Search for and find the stock information of your chosen company. Remember do not use a company that plays in multiple industries.
Once you pull up the general data on the company, review the screen links throughout until you find the link for Historical Data. Click on the Historical Data link. Then select the following settings
above the table:
Select Time Period of one year.
Select “Historical Prices.”
Select Frequency as “Daily.”
Click Apply.
Click Download Data. Go to the bottom of your screen or your Downloads folder to open the Excel file you just downloaded. Open the Excel file. Check to be sure that you have enough lines to show the
whole year. If not, reset the settings at the top of the Historical Data chart and try again.
Once you are sure that you have a year’s worth of data, save the Excel file.
Using the Excel file with the year’s stock data, conduct descriptive analysis as follows:
Create a scatter plot of the highest stock price (in the column labeled “High”) against time. Write several sentences explaining the process/steps by which you created this graph.
Create a scatter plot of the lowest stock price (in the column labeled “Low”) against time. Write several sentences explaining the process/steps by which you created this graph.
Create a histogram of the adjusted daily closing stock price (in the column labeled “Adj Close”). Make sure the histogram is meaningful by adjusting the bin size so you can see the shape of the
histogram. Write several sentences explaining the process/steps by which you created this graph.
Create a histogram of the stock trading volume (in the column labeled “Volume”). Make sure the histogram is meaningful by adjusting the bin size so you can see the shape of the histogram. Write
several sentences explaining the process/steps by which you created each graph.
Complete the following for each of the four graphs:
Make sure the x and y axis have appropriate labels—“Stock Price in USD” or “Date D/M/Y” for example.
Change the title of the graph to communicated what the graph is communicating.
Add options—color, trendlines, legend, other?
Calculate the mean, median, mode, and standard deviation of the adjusted daily closing stock price.
Put answers of calculations in table format for easy review.
Write several sentences explaining the process by which you calculated these statistics.
Calculate the mean, median, mode, and standard deviation of the stock volume. Put in table format for easy review. Write a sentence explaining the process by which you calculated these statistics.
Prepare a 5-8 page report that you would present to your supervisor, including the following:
An APA-formatted title page.
A 1-2 page introduction describing the background of your chosen company and its practical extensive business context. You should use at least four sources of information on the company, industries
the company participates in, history, mission, platforms, products, competitive advantage, and competitors by industry.
A section labeled Graphical Representations of Data, in which you include the four graphs you created above and short descriptions of the process you used to create each graph.
A section labeled Descriptive Statistics, in which you include the statistics you calculated above and explanation of the procedures you followed to calculate the statistics.
A summary of what the data suggests. No opinion please. See textbook information on how to interpret data. Please remain unbiased in your summary. You may use additional resources (and cite) to help
you interpret the data. For example: What does Standard Deviation say about stock volatility?
Your paper should be APA-formatted with in-text citations and a corresponding references page. Remember to cite the sources of your financial data. Include at least four sources of information for
your page and reflect on reference page.
Walkthrough: You may view the following media piece to help you understand concepts addressed in this assessment:
Using Analytic Techniques to Add Meaning to Data Walkthrough.
Additional Requirements
Length: 5, double-spaced. Include a title page and the graphical representations of the data selected.
Written communication: Written communication should be free of errors that detract from the overall message.
By successfully completing this assessment, you will demonstrate your proficiency in the following course competencies through corresponding scoring guide criteria:
Competency 2: Use analytic and statistical techniques to make meaning of large quantities of data.
Create four different graphical representations of data.
Calculate descriptive statistics for two different variables.
Summarize the processes by which each graph and statistics were created and calculated.
Competency 4: Present the results of data analysis in clear and meaningful ways to multiple stakeholders.
Introduce the company and practical business context.
Correctly format citations and references using current APA style.
Write content clearly and logically with correct use of grammar, punctuation, and mechanics. | {"url":"https://www.essay-writing.com/data-for-strategic-and-operational-decisions-information-systems-homework-help/","timestamp":"2024-11-07T19:39:30Z","content_type":"text/html","content_length":"73219","record_id":"<urn:uuid:79250da5-b4bc-4eb6-809d-b2ae5ae970ad>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00051.warc.gz"} |
Imagination and the Impossible
Two more sources I’d like to draw from for this fall’s maths for designers-course:
1. Geometry and the Imagination
A fantastic collection of handouts for a two week summer workshop entitled ’Geometry and the Imagination’, led by John Conway, Peter Doyle, Jane Gilman and Bill Thurston at the Geometry Center in
Minneapolis, June 1991, based on a course ‘Geometry and the Imagination’ they taught twice before at Princeton.
Among the goodies a long list of exercises in imagining (always useful to budding architects) and how to compute curvature by peeling potatoes and other vegetables…
The course really shines in giving a unified elegant classification of the 17 wallpaper groups, the 7 frieze groups and the 14 families of spherical groups by using Thurston’s concept of orbifolds.
If you think this will be too complicated, have a look at the proof that the orbifold Euler characteristic of any symmetry pattern in the plane with bounded fundamental domain is zero :
Take a large region in the plane that is topologically a disk (i.e. without holes). Its Euler characteristic is $1$. This is approximately equal to $N$ times the orbifold Euler characteristic for
some large $N$, so the orbifold Euler characteristic must be $0$.
This then leads to the Orbifold Shop where they sell orbifold parts:
• a handle for 2 Euros,
• a mirror for 1 Euro,
• a cross-cap for 1 Euro,
• an order $n$ cone point for $(n-1)/n$ Euro,
• an order $n$ corner reflector for $(n-1)/2n$ Euro, if you have the required mirrors to install this piece.
Here’s a standard brick wall, with its fundamental domain and corresponding orbifold made from a mirror piece (1 Euro), two order $2$ corner reflectors (each worth $.25$ Euro), and one order $2$ cone
point (worth $.5$ Euro). That is, this orbifold will cost you exactly $2$ Euros.
If you spend exactly $2$ Euros at the Orbifold Shop (and there are $17$ different ways to do this), you will have an orbifold coming from a symmetry pattern in the plane with bounded fundamental
domain, that is, one of the $17$ wallpaper patterns.
For the mathematicians among you desiring more details, please read The orbifold notation for two-dimensional groups by Conway and Daniel Huson, from which the above picture was taken.
2. On the Cohomology of Impossible Figures by Roger Penrose
The aspiring architect should be warned that some constructions are simply not possible in 3D, even when they look convincing on paper, such as Escher’s Waterfall.
M.C. Escher, Waterfall – Photo Credit
In his paper, Penrose gives a unified approach to debunk such drawings by using cohomology groups.
Clearly I have no desire to introduce cohomology, but it may still be possible to get the underlying idea across. Let’s take the Penrose triangle (all pictures below taken from Penrose’s paper)
The idea is to break up such a picture in several parts, each of which we do know to construct in 3D (that is, we take a particular cover of our figure). We can slice up the Penrose triangle in three
parts, and if you ever played with Lego you’ll know how to construct each one of them.
Next, position the constructed pieces in space as in the picture and decide which of the two ends is closer to you. In $Q_1$ it is clear that point $A_{12}$ is closer to you than $A_{13}$, so we
write $A_{12} < A_{13}$.
Similarly, looking at $Q_2$ and $Q_3$ we see that $A_{23} < A_{21}$ and that $A_{31} < A_{32}$. Next, if we try to reassemble our figure we must glue $A_{12}$ to $A_{21}$, that is $A_{12}=A_{21}$,
and similarly $A_{23}=A_{32}$ and $A_{31}=A_{13}$. But, then we get
A_{13}=A_{31} < A_{32}=A_{23} < A_{21}=A_{12} < A_{13} \] which is clearly absurd. Once again, if you have suggestions for more material to be included, please let me know.
One Comment
1. Perhaps something on “making curved surfaces with straight lines”, i.e. hyperboloids made of sticks? Used for cooling towers, lighthouses, watertowers, … | {"url":"http://www.neverendingbooks.org/imagination-and-the-impossible/","timestamp":"2024-11-01T20:58:59Z","content_type":"text/html","content_length":"35776","record_id":"<urn:uuid:1466dfd4-1923-4196-8894-fffcc9d54b9b>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00271.warc.gz"} |
Share price formula capital gains
both dividend and capital gains taxes into share prices. Intuitively, equation (1) posits that firm value is a function of the book value of common equity and the 2 Jan 2019 Let's dive into what,
exactly, capital gains taxes entail – and what you need to know about them. Shares from the vest The future tax savings is what you might get if the stock price holds. Calculating Your Break Even
Price. (b) Demergers of Argos plc and Arjo Wiggins Teape Appleton plc (formerly To determine the capital gains tax base cost of BAT shares and AZ shares, the base shares bought on your behalf under
the DRIP, for calculating the chargeable
For the purposes of UK capital gains tax, the market values on February 15, of individual shareholders resident in the UK calculating their personal tax liability. Share prices have been restated
where necessary to reflect all capitalisation Find out how much capital gains tax - CGT you need to pay on shares is calculate the capital gain based on the amount of purchase and the sale price
you 8 Dec 2019 Capital gains are the primary source of returns from securities such as stocks. Six months later, the price of the stock rises to $65 per share. capital losses from capital gains
before calculating your capital gains tax liability. 9.7.1. Selling Shares with Manual Calculation of Capital Gain or Loss Table 9.7 . Selling Shares Split Scheme, Sale & Gain Combined, Gross
Pricing 26 Nov 2019 Hold the shares inside an IRA, 401(k) or other tax-advantaged account. Dividends and capital gains on stock held inside a traditional IRA are tax-
The first step in how to calculate long-term capital gains tax is generally to find the difference between what you paid for your property and how much you sold it for—adjusting for commissions or
fees. Depending on your income level, your capital gain will be taxed federally at either 0%, 15% or 20%.
The first step in how to calculate long-term capital gains tax is generally to find the difference between what you paid for your property and how much you sold it for—adjusting for commissions or
fees. Depending on your income level, your capital gain will be taxed federally at either 0%, 15% or 20%. Capital gain: Full sales value – (Brokerage at 0.5% + purchase price) = 1,80,000 – (900 +
1,00,000) = Rs. 79,100. Short-term capital gains tax: Short-term capital gain multiplied by Tax rate divided by 100 = 79,100 * 15 / 100 = Rs. 11,865. Debt-oriented mutual funds and preference shares,
however, do not fall under the purview of Section 111A. Subtract the per-share cost basis from the average sale price to calculate gain or loss. A negative figure is a loss, while a positive figure
is a gain. In the example, subtract the $21 cost basis from the $26 sale price to arrive at a gain of $5 per share. For example, if you sell two stocks in a year, one at a $1,000 profit and the other
at a $500 loss, you will report a net capital gain of $500 and only pay the capital gains tax on $500. Capital gains yield is the percentage price appreciation on an investment. It is calculated as
the increase in the price of an investment, divided by its original acquisition cost. For example, if a security is purchased for $100 and later sold for $125, the capital gains yield is 25%.
For example, if you sell two stocks in a year, one at a $1,000 profit and the other at a $500 loss, you will report a net capital gain of $500 and only pay the capital gains tax on $500.
27 Jan 2018 Capital gains yield is the percentage price appreciation on an on a share, an investor must combine the capital gains yield and the dividend The dividend discount model (DDM) is a method
of valuing a company's stock price based on The equation most widely used is called the Gordon growth model (GGM). Consider the dividend growth rate in the DDM model as a proxy for the growth of
earnings and by extension the stock price and capital gains. Consider Basically, if you buy shares for one price and sell them for another price then the difference between the two is your capital
gain or capital loss. In the event you Capital Gains Tax (CGT) on the sale, gift or exchange of an asset. Overview · What do you pay You might need to use the 'market value' instead of sale price or
purchase price. For example, if you gift an asset to Calculation of Mary's CGT 25 Jul 2019 Many investors focus their attention on how a stock's price changes over time. Instead of the $7 capital
gain per share, which translates to about 13%, We'll get into the calculation of annualized total returns later, but the You need to figure your capital gain or loss when you sell such investments.
That means, to figure the average basis, you must multiply the price per share by
25 Jul 2019 Many investors focus their attention on how a stock's price changes over time. Instead of the $7 capital gain per share, which translates to about 13%, We'll get into the calculation of
annualized total returns later, but the
Now, after 2 years, the price of the stock has appreciated to $120 per share. What is the Capital Yield on that particular stock? All we need to do is to put in the data Capital Gains Yield is the
price appreciation on an investment relative to the amount one initially invested. For example, if one buys a stock for $10 and the share
28 Jun 2019 Calculating the cost base for real estate · Shares, units and similar investments Press right to expand, left to close. You are in this area. CGT
17 Feb 2020 Capital gains earned on sale of property must be invested in the investments specified under the Income Tax Act before expiry of time limit and 17 May 2018 Capital Gain (Taxable Part)=
Sale Price – Purchase Price. = 700,000 – 500,000*. = 200,000. Calculation of purchase price: When Fair Market 1 Feb 2018 To calculate your total capital gain tax on shares you sold during the
previous For this reason, sometimes investors who only focus on price, rather Are you already calculating your capital gains taxes for the 2017 tax year? 28 Jun 2019 Calculating the cost base for
real estate · Shares, units and similar investments Press right to expand, left to close. You are in this area. CGT both dividend and capital gains taxes into share prices. Intuitively, equation (1)
posits that firm value is a function of the book value of common equity and the 2 Jan 2019 Let's dive into what, exactly, capital gains taxes entail – and what you need to know about them. Shares
from the vest The future tax savings is what you might get if the stock price holds. Calculating Your Break Even Price. (b) Demergers of Argos plc and Arjo Wiggins Teape Appleton plc (formerly To
determine the capital gains tax base cost of BAT shares and AZ shares, the base shares bought on your behalf under the DRIP, for calculating the chargeable
7 May 2018 Until financial year 2017-18, Long Term Capital Gain (LTCG) tax on equity clause the following method will be used for calculating LTCG. In this case the market value in January is not
only higher than the purchase price What is Capital Gains Tax (CGT)?. CGT is a tax on the profit or gain you make when you sell, or otherwise dispose of, an asset such as shares. General
information A capital gains yield is the rise in the price of a security, such as common stock. For common stock holdings, the CGY is the rise in the stock price divided by the original price of the
security. All we need to do is to put in the data into the formula for capital gains yield calculation. Capital Gains formula = (P 1 – P 0) / P 0; Or, Capital Gains = ($120 – $105) / $105; Or,
Capital Gains = $15 / $105 = 1/7 = 14.29%. That means, by using this formula, we understand that Ishita got 14.29% capital gains after 2 years of investment. At the end of the year, company ABC has a
market price of $105 per share. In addition, company ABC issues a dividend of $50 per share. The Capital Gains Yield for Mark’s investment is (105-100)/100 = 5%, which is much less than the 50% that
John receives. On a per-share basis, you have a long-term gain of $5 per share. Multiply this amount by 50 shares and you have a long-term capital gain (15% tax rate) of $250 (50 x $5). Investors
need to remember that if a stock splits, they must also adjust their cost price accordingly. Accordingly, they have to pay a 20% tax for no-equity assets after inflation indexation and 10% tax
without indexation. Indexation increases the purchase price and the capital gain decreases accordingly. You can apply the indexation formula on the purchase price and calculate its 20% tax, | {"url":"https://investingyals.netlify.app/spector57794jeje/share-price-formula-capital-gains-fi.html","timestamp":"2024-11-07T02:57:06Z","content_type":"text/html","content_length":"34159","record_id":"<urn:uuid:44fb3117-8a29-4360-817e-90d7c8fa4b88>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00123.warc.gz"} |
Coordinate Geometry Class 9 Notes – Chapter 3
CBSE Class 9 Maths Coordinate Geometry Notes:-Download PDF Here
Coordinate Geometry for class 9 notes is given here. Get the complete concept of coordinate geometry such as cartesian system, coordinate points, how to plot the points in the coordinate axes,
quadrants with signs and so on. Go through the below article to learn coordinate geometry for class 9.
Cartesian System
Cartesian plane & Coordinate Axes
Cartesian Plane: A cartesian plane is defined by two perpendicular number lines, A horizontal line(x−axis) and a vertical line (y−axis).
These lines are called coordinate axes. The Cartesian plane extends infinitely in all directions.
Origin: The coordinate axes intersect each other at right angles, The point of intersection of these two axes is called Origin.
To know more about Cartesian System, visit here.
The cartesian plane is divided into four equal parts, called quadrants. These are named in the order as I, II, III and IV starting with the upper right and going around in anticlockwise direction.
Points in different Quadrants.
Signs of coordinates of points in different quadrants:
I Quadrant: ‘+’ x – coordinate and ‘+’ y – coordinate. E.g. (2, 3)
II Quadrant: ‘-’ x – coordinate and ‘+’ y – coordinate. E.g. (-1, 4)
III Quadrant: ‘-’ x – coordinate and ‘-’ y – coordinate. E.g. (-3, -5)
IV Quadrant: ‘+’ x – coordinate and ‘-’ y – coordinate. E.g. (6, -1)
To know more about Quadrants, visit here.
Plotting on a Graph
Representation of a point on the Cartesian plane
Using the co-ordinate axes, we can describe any point in the plane using an ordered pair of numbers. A point A is represented by an ordered pair (x, y) where x is the abscissa and y is the ordinate
of the point.
To know more about Cartesian plane, visit here.
Plotting a point
The coordinate points will define the location in the cartesian plane. The first point (x) in the coordinates represents the horizontal axis, and the second point in the coordinates (y) represents
the vertical axis.
Consider an example, Point (3, 2) is 3 units away from the positive y-axis and 2 units away from the positive x-axis. Therefore, point (3, 2) can be plotted, as shown below. Similarly, (-2, 3), (-1,
-2) and (2, -3) are plotted. | {"url":"https://mathlake.com/article-228-Coordinate-Geometry-Class-9-Notes--Chapter-3.html","timestamp":"2024-11-13T09:24:32Z","content_type":"text/html","content_length":"15294","record_id":"<urn:uuid:b33b727a-f2a5-4047-aac2-809c941c5123>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00011.warc.gz"} |
3149 - conferences.cirm-math.fr
GAGTA conferences form an annual series of conferences devoted to the dialogue between different approaches on infinite groups: Geometry, combinatorics, decidability, logic and algorithms. These
conferences gather every year a group of involved specialists and focus on giving good opportunities to young researchers. They both contribute to long term research thematics and explore new trends.
Growth of groups, metric properties as initiated by M. Gromov, computation, algorithms and logic in infinite groups are the main lines of theses conferences. Cubic groups, contracting elements in
almost hyperbolic groups, random walks on groups, dynamical systems, various aspects of normal forms in specific groups, automata groups and self-similar groups are some of the current hot topics.
All in all, these conferences are a meeting point between algebra, geometry and computer science. In this venue of GAGTA, we will also emphasize the strong links with symbolic dynamics as well as the
strengths of Sage software for computing in group theory. A GAGTA conference already took place at CIRM in September 2015. More information about GAGTA can be found on the website: | {"url":"https://conferences.cirm-math.fr/3149.html","timestamp":"2024-11-08T05:08:05Z","content_type":"text/html","content_length":"92899","record_id":"<urn:uuid:c675dcc6-57f2-4273-a71a-0ce8b753b290>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00740.warc.gz"} |
Difference Between Commutative and Associative
Mathematics is a game of numbers and numbers are everywhere. And the rule of the game is the properties and rules associated with numbers. Properties help you calculate answers in your head quickly
and easily. Properties are nothing but special rules that numbers follow. There are three basic properties of numbers that every math system obeys: Commutative, Associative, and Distributive
properties. These properties are features of the four operations (add, subtract, multiply, and divide) that always apply regardless of the number you’re working with. But we will discuss only
commutative and associative properties in the following article.
Both commutative and associative properties are rules applied to addition and multiplication operations. These properties are laws used in algebra to help solve problems. The commutative property
comes from the term “commute” which means move around and it refers to being able to switch numbers that you’re adding or multiplying. The associative property comes from the word “associate” or
“group” and it refers to grouping of three or more numbers using parentheses, regardless of how you group them. The result remains the same, no matter how you re-group the numbers. Let’s take a look
at the two properties to better understand how they work.
What is Commutative?
For example; we know that adding 2 and 5 gives the same answer as adding 5 and 2. The order of the numbers in an addition problem can be changed without changing the result. This thing about numbers
and addition is called the commutative property of addition. So, we can say addition is a commutative operation. Similarly, multiplication is a commutative operation.
Commutative property of addition:
a + b = b + a
3 + 4 = 7 is the same as 4 + 3 = 7
The result will be the same regardless of the order of the numbers.
Commutative property of multiplication:
a × b = b × a
3 × 7 = 21 is the same as 7 × 3 = 21
Likewise, the result will be the same regardless of the order of the numbers.
What is Associative?
Associative is yet another property we use has to do with re-grouping. For instance, when adding 2 + 3 + 5, we can either add 2 and 3 first and then add 5, or we can add 3 and 5 first and then the 2.
Mathematically, it looks like this: 2 + 3 + 5 = 2 + (3 + 5) = (2 +3) + 5. Operations that behave in this manner are called associative operations. The result remains the same even if we change the
grouping of numbers.
Associative property of addition:
a + (b + c) = (a + b) + c = a + b + c
1 + (2 +3) = (1 +2) + 3 = 6
The result remains the same, no matter how you group the numbers.
Associative property of multiplication:
a × (b × c) = (a × b) × c
2 × (3 × 4) = 2 × 12 = 24
(2 × 3) × 4 = 6 × 4 = 24
So, the grouping in the numbers does not change the result.
Difference between Commutative and Associative
– The commutative property comes from the term “commute” which means ‘move around’ and it refers to being able to switch numbers that you’re adding or multiplying regardless of the order of the
numbers. The associative property, on the other hand, comes from the word “associate” or “group” and it refers to grouping of three or more numbers using parentheses, regardless of how you group
them. The result will be the same, no matter how you re-group the numbers or variables.
– The commutative rule of addition states, a + b = b + a, which means adding a and b gives the same result as adding b and a. The orders can be changes without changing the result. This rule of
addition is called the commutative property of addition. Similarly, multiplication is a commutative operation which means a × b will give the same result as b × a. The associative property, on the
other hand, is the rule that refers to grouping of numbers. The associative rule of addition states, a + (b + c) is the same as (a + b) + c. Likewise, the associative rule of multiplication says a ×
(b × c) is the same as (a × b) × c.
– The commutative property of addition: 1 + 2 = 2 +1 = 3
The commutative property of multiplication: 2 × 3 = 3 × 2 = 6
The associative property of addition: 5 + (3 + 7) = (5 + 3) + 7 = 15
The associative property of multiplication: 5 × (2 × 4) = (5 × 2) × 4 = 40
Commutative vs. Associative: Comparison Chart
In a nutshell, the commutative property is not to confuse with the associative property. The commutative property states that it’s okay to change of the order of the numbers in addition and
multiplication operations because the result will be the same, no matter the order. The associative property, on the other hand, states that the result will be the same, no matter how you group the
number or variables in addition/multiplication operations.
Latest posts by Sagar Khillar
(see all)
Search DifferenceBetween.net :
Email This Post
: If you like this article or our site. Please spread the word. Share it with your friends/family.
Leave a Response
References :
[0]Image credit: https://upload.wikimedia.org/wikipedia/commons/thumb/d/dc/Group_associative_categories.svg/500px-Group_associative_categories.svg.png
[1]Image credit: http://latex-cookbook.net/media/cookbook/examples/PNG/commutative-diagram.png
[2]McKeague, Charles P. Intermediate Algebra with Trigonometry. Cambridge, Massachusetts: Academic Press, 2014. Print
[3]Zegarelli, Mark. Basic Math and Pre-Algebra For Dummies. Hoboken, New Jersey: John Wiley & Sons, 2016. Print
[4]McKeague, Charles P. Beginning Algebra: A Text/Workbook. Amsterdam, Netherlands: Elsevier, 2014. Print
Articles on DifferenceBetween.net are general information, and are not intended to substitute for professional advice. The information is "AS IS", "WITH ALL FAULTS". User assumes all risk of use,
damage, or injury. You agree that we have no liability for any damages. | {"url":"https://www.differencebetween.net/science/mathematics-statistics/difference-between-commutative-and-associative/","timestamp":"2024-11-05T19:11:37Z","content_type":"application/xhtml+xml","content_length":"86037","record_id":"<urn:uuid:a1a67c2e-4447-432d-8721-47dfa4472507>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00058.warc.gz"} |
Math problem solver with work shown
Section 6-6 : Work. This is the final application of integral that we'll be looking at in this course. In this section we will be looking at the amount of work that is done by a force in moving an
object. In a first course in Physics you typically look at the work that a constant force, \(F\), does when moving an object over a distance of \(d\). Free
Worksheets - homeschoolmath.net Other math worksheet websites. DadsWorksheets.com - thousands of free math worksheets This site has over 5,000 different math worksheets from kindergarten to
pre-algebra and growing. Math Maze Generate a maze that practices any of the four operations. You can choose the difficulty level and size of maze. 10 Quickies Worksheets
Math Worksheets - Full List - SuperTeacherWorksheets Math Crossword Puzzles. Solve the math problems and use the answers to complete the crossword puzzles. Math Riddles. Solve the math problems to
decode the answer to funny riddles. Includes a wide variety of math skills, including addition, subtraction, multiplication, division, place value, rounding, and more. Mean (Averages) Worksheets This
App Can Scan and Solve Math Equations Instantly It looks like the app can also show step-by-step instructions for solving the problem. ... Really Hard Math Problems With ... places a call on an
FCC-approved radio frequency while driving to work.
Show students how easy it might be to misunderstand the problem. Upper Grades. Read word problems slowly and carefully several times so that all students comprehend. If possible, break up the problem
into smaller segments. Allow students to act out the word problems to better comprehend what they are being asked to solve.
How to Solve Algebra Problems Step-By-Step Solving Algebra word problems is useful in helping you to solve earthly problems. While the 5 steps of Algebra problem solving are listed below, this
article will focus on the first step, Identify the problem. Mathematics - WorksheetWorks.com Find worksheets about Mathematics. WorksheetWorks.com is an online resource used every day by thousands of
teachers, students and parents. Problem Solving Worksheets page 1 | abcteach
Photomath - Scan. Solve. Learn.
We're not JUST textbooks! Stuck on a homework problem? Ask. Q&A is easy and free on Slader. Our best and brightest are here to help you succeed in the classroom. ASK NOW About Slader. We know what
it's like to get stuck on a homework problem. We've been there before. Free Math Problem Solver Step By Step free math problem solver step by step QuickMath allows students to get instant solutions
to all kinds of math problems, from algebra and equation solving right through to calculus and matrices.This online solver will show steps and explanations for common math problems. Math Problem
Solver For Free - cheapgetwritingessay.com math problem solver for free Math games strengthen the understanding of kids who are already good at Math and ...Math Tutoring with a Club Z! Tutor. Club Z!
Tutoring of Allen, in home and online ...How to Solve Algebra Word ProblemsSolve calculus and algebra problems online with Cymath math problem solver with steps to show your work. Word Problems &
Problem Solving Printables Slideshow ...
WebMath - Solve Your Math Problem
PDF Problem Solving and Critical Thinking Problem solving and critical thinking refers to the ability to use knowledge, facts, and data to effectively solve problems. This doesn't mean you need to
have an immediate answer, it means you have to be able to think on your feet, assess problems and find solutions. The ability to develop a well thought out solution 8th Grade Math | Khan Academy
Learn for free about math, art, computer programming, economics, physics, chemistry, biology, medicine, finance, history, and more. Khan Academy is a nonprofit with the mission of providing a free,
world-class education for anyone, anywhere. Show your work - Math Central
Word problems (or story problems) allow kids to apply what they've learned in math class to real-world situations. Word problems build higher-order thinking, critical problem-solving, and reasoning
skills. Click on the the core icon below specified worksheets to see connections to the Common Core Standards Initiative.
Math Equation Solver - Calculator Soup Solve equations with PEMDAS order of operations showing the work. See the steps to to solve math problems with exponents and roots using order of ... 7 Best
Math Problem Solver App for Android and iPhone | Mashtips 5 Jul 2018 ... This math question solver offers an experience of working with private mathematics tutor. Got any tough math problem, then
Mathway is here to ... Word Problems Calculator - MathCelebrity
Proficiency with word problems is important because they present real-life math issues that your youngster is likely to experience both professionally and personally. Even with strong math skills, a
child may have difficulty solving word problems without using reading and analytical skills in the process. Common Core Math Explained in 3 Minutes | U.S. Chamber of ... However, in addition to the
traditional way of learning math, students are being taught another approach to help them understand how and why math problems work—called "number sense". This new way shows students that numbers are
just flexible things made up of other numbers and makes solving math problems that much easier. Math Problem Solver Online - Math-Problmes After paying via Visa or PayPal, the professional online
math problem solver that has been assigned to you starts working on it. Upon completion, you will get a message asking you to download your work. Hire Our Math Problems Solver Today. We do not only
give you answers to your mathematical problems. | {"url":"https://coursestostp.netlify.app/insognia70664lik/math-problem-solver-with-work-shown-415","timestamp":"2024-11-04T19:07:37Z","content_type":"text/html","content_length":"22019","record_id":"<urn:uuid:f919a57b-639f-476f-a407-c0d2f847e468>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00449.warc.gz"} |
Golf Handicap Philosophy
“The USGA Handicap System™ enables golfers of all skill levels to compete on an equitable basis” - www.USGA.org
The Handicap system was developed to measure the potential ability of golf players and level the ground for them to compete. These potential ability is determined based on historical performance
through what, in my opinion, is one of the most comprehensive methods to measure a player's ability in all of sports. During this series of articles I won't go into the details of the mathematical
process, instead I will discuss the philosophy behind the system so you can understand why the system is developed in that way.
Below are the 4 principles that, in my opinion, dictate the design of the Handicap System
1. Measuring player’s potential
2. Comparing Apples to Apples
3. Removing noise
4. Trusting each other
During this article of the series, I will share with you the principle of "Measuring Player's Potential". The remaining principles will be discussed in future posts.
Measuring Player's Potential
Using Potential ability is more accurate than using Average ability
As you probably already know, the Handicap is determined by using the 10 best rounds of the last 20 rounds a golfer has played. Which brings the question of why the best 10 and not the average of the
last 20? or last 10? One could argue that the average ability is more representative of what the player is likely to shoot.
However, I actually believe that the decision of measuring potential ability instead of average ability is a pretty smart one. There are mainly three reasons 1) It reduces the intrinsic variability
of a golfer's historical performance, because golfers are more stable and focused when they are playing well and 2) It generates more excitement in golfers (I would rather say I am 10 Handicap than
a 14 Handicap).
I think (2) is pretty self explanatory. But let's do a quick exercise to prove (1). In the graph below you will observe my brother's last 20 rounds. Take a close look at (i) all 20 scores, (ii) the
best 10 scores and (iii) the worst 10 scores.
My brother is the typical once a week golfer with an avg. score of 90. In the Last 20 rounds the highest score is 104 and the lowest is 83, resulting in a spread of +14 or -7 off the average. You
can already tell that dramatically lowering your average is more difficult than dramatically shooting over your average (almost twice as much in his case).
Now, his 10 best scores are "“83,84,84,85,86,86,86,86,86,90”. All scores are within 3 strokes except that 90. And still the range is only 7 strokes. However, when looking at the remaining 10 scores
you get “104,97,96,95,95,93,92,92,91,90”. 14 Strokes between the highest and lowest but more importantly a pretty even distribution between them. This example is pretty common for higher handicap
players but it becomes less relevant with lower handicap players (<10) as their game becomes more predictable.
This exercise shows how players are more stable and focused when they are playing well. Therefore, it show why potential ability is more accurate than average ability.
In soon to come articles I will explain the remaining 3 principles of the Handicap System. Let me know your comments at contactus@thegrint.com
Enjoy your game! | {"url":"https://thegrint.com/range/post/golf-handicap-philosophy-series-players-potential-ability","timestamp":"2024-11-06T04:09:24Z","content_type":"text/html","content_length":"20831","record_id":"<urn:uuid:563996e5-cecc-4866-b653-e477c8c52e03>","cc-path":"CC-MAIN-2024-46/segments/1730477027909.44/warc/CC-MAIN-20241106034659-20241106064659-00788.warc.gz"} |
Common elements
Write a function
int common_elements(const vector<int>& X, const vector<int>& Y);
that given two vectors X and Y in increasing order, returns the number of common elements in the two vectors, that is, the number of integer numbers a such that a=X[i]=Y[j] for any i and j.
The two vectors are in strictly increasing order.
common_elements([3,5,7,8], [2,3,7,9,10]) = 2.
common_elements([1,2,3,4,5], [3,4,5,6,7,8]) = 3.
common_elements([1,2,3,4,5], [8,9]) = 0. | {"url":"https://jutge.org/problems/P12675_en","timestamp":"2024-11-05T19:50:57Z","content_type":"text/html","content_length":"18845","record_id":"<urn:uuid:a2b7963e-e6dc-4def-b997-1ade0573181d>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00716.warc.gz"} |
How to calculate compound monthly growth rate in excel
Compound Quarterly Growth Rate (CQGR) in Excel is rather easy to calculate. This can be very useful to forecast potential income or forecast personal returns. Before we begin, make sure you
understand how to calculate Compound Annual Growth Rates (CAGR). The spreadsheet also rearranges the formula so you can calculate the final amount (given the initial amount, CAGR, and number of
years) and the number of years (given the initial and final amount, and CAGR). You can also calculate the Compound Annual Growth Rate using Excel’s XIRR function – check out the screengrab below for
an example.
What is the definition of Sales 3y CAGR %? Sales growth shows the increase in sales over a specific period of time. The CAGR formula is the following: (current 30 Nov 2015 The monthly growth rate is
higher than 10% in all but two months, but calculate a growth rate of 40% per month on average or a compound monthly growth rate ( CMGR) of 35% without having to lie, and you can have Excel
Inconsistent growth is not visible in Compound Monthly Growth Rate (CMGR):. 18 Sep 2019 The compound annual growth rate (CAGR) provides the rate of return to calculate growth rates, instead of having
to do so manually in Excel. Compounded Annual Growth rate (CAGR) is a business and investing specific Actual or normalized values may be used for calculation as long as they retain 26 Jul 2019
Introduction to Compounding and CAGR. CAGR. What the heck is CAGR? Well, it's an acronym for Compound Annual Growth Rate, or in other 24 Apr 2018 Calculating percentage of monthly growth gives you a
way to track the changes in website visitors, social media likes or stock values over time.
Instantly calculate the compound annual growth rate (Excel RRI function) of an investment and see the step by step process used to solve the CAGR formula.
28 Jan 2006 Description: A simple MS Excel template (Office 2003) you can use to calculate CAGR given initial investment, ending investment, and # of 9 Feb 2017 Excel calculates the compound annual
growth rate using a manually entered formula or by employing the Power, Rate or GeoMean functions. 27 May 2016 CAGR. This tutorial walks through calculating a dynamic Compound Annual Growth Rate
(CAGR). By dynamic we mean The book covers topics applicable for both PowerBI and Power Pivot inside excel. I've personally read 21 Jan 2014 Before we dive into Excel, let's understand the how
calculate the compound annual growth rate. The formula is: CAGR = (Ending value Calculate compound annual growth rate with XIRR function in Excel. 1 . Create a new table with the start value and end
value as the following first screen shot shown: Note: In Cell F3 enter =C3, in Cell G3 enter 2 . Select a blank cell below this table, enter the below formula into it, and press
28 Jan 2006 Description: A simple MS Excel template (Office 2003) you can use to calculate CAGR given initial investment, ending investment, and # of
Compound interest is interest that's calculated both on the initial principal of a deposit or loan, and on all previously accumulated interest. For example, let's say you have a deposit of $100 that
earns a 10% compounded interest rate. The $100 grows into $110 after the first year, then $121 after the second year. The way to set this up in Excel is to have all the data in one table, then break
out the calculations line by line. For example, let's derive the compound annual growth rate of a company's sales over 10 years: The CAGR of sales for the decade is 5.43%.
To calculate AAGR in Excel: Select cell C3 by clicking on it by your mouse. Enter the formula =(B3-B2)/B2 to cell C3. Press Enter to assign the formula to cell C3. Drag the fill handle from cell C3
to cell C8 to copy the formula to the cells below. Column C will now have the yearly growth rates.
There's no CAGR function in Excel. However, simply use the RRI function in Excel to calculate the compound annual growth rate (CAGR) of an investment over a What is the formula for calculating
compound annual growth rate (CAGR) in Excel? Buy Now! doc cagr autotext. Calculate Average Annual Growth Rate in Excel. To
21 Jan 2014 Before we dive into Excel, let's understand the how calculate the compound annual growth rate. The formula is: CAGR = (Ending value
9 Feb 2017 Excel calculates the compound annual growth rate using a manually entered formula or by employing the Power, Rate or GeoMean functions.
26 Jul 2019 Introduction to Compounding and CAGR. CAGR. What the heck is CAGR? Well, it's an acronym for Compound Annual Growth Rate, or in other 24 Apr 2018 Calculating percentage of monthly growth
gives you a way to track the changes in website visitors, social media likes or stock values over time. 25 Sep 2014 The good news is that you can do these calculations yourself, using Excel to find
the Compound Annual Growth Rate, or CAGR, of your current 12 Apr 2018 The Function above applies an investment rate of 8%, over 10 years, It is harder to write a single cell formula/function when
you have a table of growth rates that vary over Compounding growth is very easy to do in Excel because you can I made a measure: (current months's assets – current month's 28 Jan 2006 Description: A
simple MS Excel template (Office 2003) you can use to calculate CAGR given initial investment, ending investment, and # of 9 Feb 2017 Excel calculates the compound annual growth rate using a
manually entered formula or by employing the Power, Rate or GeoMean functions. 27 May 2016 CAGR. This tutorial walks through calculating a dynamic Compound Annual Growth Rate (CAGR). By dynamic we
mean The book covers topics applicable for both PowerBI and Power Pivot inside excel. I've personally read | {"url":"https://topoptionsoafv.netlify.app/nicoson59258josu/how-to-calculate-compound-monthly-growth-rate-in-excel-321.html","timestamp":"2024-11-05T23:38:41Z","content_type":"text/html","content_length":"34778","record_id":"<urn:uuid:257799de-7ad7-4e43-9b1e-36a3cb358981>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00888.warc.gz"} |
BILL # HB 2888 TITLE: ready-to-drink spirits products; tax
SPONSOR: Biasiucci STATUS: As Introduced
PREPARED BY: Nate Belcher
Under current law, spirituous liquor is subject to luxury tax at a rate of $3 per gallon. The bill establishes a new statutory category of alcoholic beverages called "ready-to-drink spirits products"
and sets the luxury tax rate for this new category at a rate proportionate to $3 per gallon for the amount of spirituous liquor in these beverages. This new category consists of distilled spirits
mixed with other beverages, which do not exceed 10% alcohol by volume (ABV) and that are sold in the manufacturer's sealed, original packaging of no greater than 16 ounces.
The bill would become effective on the first day of the month following the general effective date.
Estimated Impact
We estimate that HB 2888 would reduce luxury tax revenues by $(2.2) million per year. Given the distribution formula for spirituous liquor tax revenues, this reduction would be allocated as follows:
$(1.5) million for the General Fund, $(0.4) million for the Corrections Fund, $(0.2) million for the Drug Treatment and Education Fund, and $(67,000) for the Corrections Revolving Fund.
According to the Department of Revenue, beverages which are commonly referred to as "canned cocktails" or "ready-to-drink cocktails" are currently taxed at the spirituous liquor luxury tax rate of $3
per gallon because they fall under the current definition of spirituous liquor. Under HB 2888, these products would instead fall under a new category called "ready-to-drink spirits" with an identical
luxury tax rate of $3 per gallon, but this rate would only apply to the volume of spirits mixed into a beverage instead of to the entire beverage volume. The bill would not impact other similar
products such as "hard seltzers", as those are classified in a separate category of malt-based products due to being created through the fermentation process.
The Distilled Spirits Council of the United States reports that $2.8 billion of ready-to-drink (RTD) cocktails were sold across the United States in 2023. According to the Council, the RTD cocktails
market is the fastest growing spirits category by revenue, with an increase of 35.8% in 2022 and 26.8% in 2023. Based on the current growth trend, we estimate that nationwide revenue from the sales
of RTDs will reach $3.3 billion in 2024. Assuming that Arizona's share of the national RTD cocktails market is proportionate to its 2.2% of the national population, as estimated by the Census Bureau,
we project that Arizona will have about $73 million [= $3.3 billion x 2.2%] in total RTD cocktails sales in 2024.
According to data reported by SevenFifty Daily, an online magazine that covers the business and culture of the alcoholic beverage industry, 12-ounce containers made up 31% of RTD cocktail sales
volume in 2021, up from 7% in 2019. Assuming that this trend has continued after 2021, we estimate that 40% of RTD cocktail sales in 2024 will include containers no larger than 16 ounces. Based on
this estimate, we project that the amount of RTD sales in Arizona subject to the tax rate under the bill would be $29.2 million [= $73 million x 40%] in 2024.
Given that 4-packs of 12-ounce cans are typically priced at around $10 to $14 in Arizona, we estimate an average retail price of $12 per 4-pack, or $32 per gallon for RTD cocktails in single-serving
containers. Applying the average retail price of $32 per gallon to our projected sales base of $29.2 million yields an estimated RTD cocktails sales volume subject to the tax rate under the bill of
about 912,500 gallons in Arizona in 2024.
Currently, RTD cocktails are taxed at $3 per gallon of final product. Under the bill, these products would be taxed at a fraction of $3 per gallon depending on the proportion of spirits to the total
volume of the drink not to exceed 16 ounces. Our research suggests that the ABV of RTD cocktails typically range between 5% and 10%. For the purpose of this analysis, we have assumed that the average
12-ounce can has an ABV of 7.5%, consisting of 2.25 ounces of liquor (with an ABV of 40%) and 9.75 ounces of a non-alcoholic mixer. Based on this assumption, the average luxury tax rate under the
bill would be $0.5625 per gallon [= 2.25 oz. / 12.0 oz. x $3 per gallon], which is $(2.4375) less per gallon than under current law. Applying this $(2.4375) per gallon tax reduction to the 912,500
gallons of RTD spirits sales volume yields an estimated revenue reduction of $(2.2) million in 2024.
Under current law, these products are taxed under the spirituous liquor category, which is distributed as follows: 70% to the General Fund, 20% to the Corrections Fund, 7% to the Drug Treatment and
Education Fund, and 3% to the Corrections Revolving Fund. According to our discussion with Legislative Council, those revenues would likely be distributed using the same formula as spirituous
liquors, but the bill does not currently specify a distribution schedule for RTD spirits revenues.
Local Government Impact | {"url":"https://www.azleg.gov/legtext/56leg/2R/fiscal/HB2888.DOCX.htm","timestamp":"2024-11-10T09:17:30Z","content_type":"text/html","content_length":"12194","record_id":"<urn:uuid:c70ff563-a8ac-4d91-a96b-9cb54575f8cf>","cc-path":"CC-MAIN-2024-46/segments/1730477028179.55/warc/CC-MAIN-20241110072033-20241110102033-00384.warc.gz"} |
On the Queue-Number of Graphs with Bounded Tree-Width
On the Queue-Number of Graphs with Bounded Tree-Width
Keywords: Graph layouts, Queue layouts, Tree-width,
A queue layout of a graph consists of a linear order on the vertices and an assignment of the edges to queues, such that no two edges in a single queue are nested. The minimum number of queues needed
in a queue layout of a graph is called its queue-number.
We show that for each $k\geq0$, graphs with tree-width at most $k$ have queue-number at most $2^k-1$. This improves upon double exponential upper bounds due to Dujmović et al. and Giacomo et al. As a
consequence we obtain that these graphs have track-number at most $2^{O(k^2)}$.
We complement these results by a construction of $k$-trees that have queue-number at least $k+1$. Already in the case $k=2$ this is an improvement to existing results and solves a problem of
Rengarajan and Veni Madhavan, namely, that the maximal queue-number of $2$-trees is equal to $3$. | {"url":"https://www.combinatorics.org/ojs/index.php/eljc/article/view/v24i1p65","timestamp":"2024-11-03T09:57:44Z","content_type":"text/html","content_length":"15043","record_id":"<urn:uuid:445b1f8d-7f07-491f-b907-8921d75ca436>","cc-path":"CC-MAIN-2024-46/segments/1730477027774.6/warc/CC-MAIN-20241103083929-20241103113929-00346.warc.gz"} |
A Unified Analysis Framework for Iterative Parallel-in-Time Algorithms | SIAM Journal on Scientific Computing
A Unified Analysis Framework for Iterative Parallel-in-Time Algorithms
Parallel-in-time integration has been the focus of intensive research efforts over the past two decades due to the advent of massively parallel computer architectures and the scaling limits of purely
spatial parallelization. Various iterative parallel-in-time algorithms have been proposed, like Parareal, PFASST, MGRIT, and Space-Time Multi-Grid (STMG). These methods have been described using
different notation, and the convergence estimates that are available are difficult to compare. We describe Parareal, PFASST, MGRIT, and STMG for the Dahlquist model problem using a common notation
and give precise convergence estimates using generating functions. This allows us, for the first time, to directly compare their convergence. We prove that all four methods eventually converge
superlinearly, and we also compare them numerically. The generating function framework provides further opportunities to explore and analyze existing and new methods.
Reproducibility of computational results.
This paper has been awarded the “SIAM Reproducibility Badge: code and data available”, as a recognition that the authors have followed reproducibility principles valued by SISC and the scientific
computing community. Code and data that allow readers to reproduce the results in this paper are available at
1. Introduction.
The efficient numerical solution of time-dependent ordinary and partial differential equations (ODEs/PDEs) has always been an important research subject in computational science and engineering.
Nowadays, with high-performance computing platforms providing more and more processors whose individual processing speeds are no longer increasing, the capacity of algorithms to run concurrently
becomes important. As classical parallelization algorithms start to reach their intrinsic efficiency limits, substantial research efforts have been invested to find new parallelization approaches
that can translate the computing power of modern many-core high-performance computing architectures into faster simulations.
For time-dependent problems, the idea to parallelize across the time direction has gained renewed attention in the last two decades.
Various algorithms have been developed; for overviews see the papers by Gander [
] and Ong and Schroder [
]. Four iterative algorithms have received significant attention, namely
] (474 citations since 2001),
Parallel Full Approximation Scheme in Space and Time
(PFASST) [
] (254 citations since 2012),
Multi-Grid Reduction in Time
(MGRIT) [
] (287 citations since 2014), and a specific form of
Space-Time Multi-Grid
(STMG) [
] (140 citations since 2016). Other algorithms have been proposed, e.g., the
Parallel (or Parareal) Implicit Time integration Algorithm
(PITA) [
] (275 citations since 2003), which is very similar to
, the diagonalization technique [
] (63 citations since 2008),
Revisionist Integral Deferred Corrections
(RIDC) [
] (114 citations since 2010),
] (103 citations since 2013), and
parallel Rational approximation of eEXponential Integrators
(REXI) [
] (28 citations since 2018).
, PFASST, MGRIT, and STMG have all been benchmarked for large-scale problems using large numbers of cores of high-performance computing systems [
]. They cast the solution process in time as a large linear or nonlinear system which is solved by iterating on all time steps simultaneously. Since parallel performance is strongly linked to the
rate of convergence, understanding convergence mechanisms and obtaining reliable error bounds for these iterative parallel-in-time (PinT) methods is crucial. Individual analyses exist for
], MGRIT [
], PFASST [
], and STMG [
]. There are also a few combined analyses showing equivalences between
and MGRIT [
] or connections between MGRIT and PFASST [
]. However, no systematic comparison of convergence behavior, let alone efficiencies, between these methods exists.
There are at least three obstacles to comparing these four methods: first, there is no common formalism or notation to describe them; second, the existing analyses use very different techniques to
obtain convergence bounds; third, the algorithms can be applied to many different problems in different ways with many tunable parameters, all of which affect performance [
]. Our main contribution is to address, at least for the Dahlquist test problem, the first two problems by proposing a common formalism to rigorously describe
, PFASST, MGRIT,
and the Time Multi-Grid (TMG) component
of STMG using the same notation. Then, we obtain comparable error bounds for all four methods by using the generating function method (GFM) [
]. GFM has been used to analyze
] and was used to relate
and MGRIT [
]. However, our use of GFM to derive common convergence bounds across multiple algorithms is novel, as is the presented unified framework. When coupled with a predictive model for computational cost,
this GFM framework could eventually be extended to a model to compare parallel performance of different algorithms, but this is left for future work.
Our manuscript is organized as follows. In section
, we introduce the GFM framework; in particular, in section
, we give three definitions (time block, block variable, and block operator) used to build the GFM framework and provide some examples using classical time integration methods. Section
contains the central definition of a
block iteration
and again examples. In section
, we state the main theoretical results and error bounds, and the next sections contain how existing algorithms from the PinT literature can be expressed in the GFM framework:
in section
, TMG in section
, and PFASST in section
. Finally, we compare in section
all methods using the GFM framework. Conclusions and an outlook are given in section
2. The generating function method.
We consider the Dahlquist equation
$$\frac{du}{dt} = \lambda u, \quad \lambda \in \mathbb{C}, \quad t \in (0, T], \quad u(0) = u_0 \in \mathbb{C}.$$
The complex parameter
allows us to emulate problems of parabolic (
\(\lambda \lt 0\)
), hyperbolic (
imaginary), and mixed type.
2.1. Blocks, block variables, and block operators.
We decompose the time interval \([0, T]\) into \(N\) time subintervals \([t_n, t_{n+1}]\) of uniform size \(\Delta t\) with \(n \in \{0, \ldots, N-1\}\) .
Definition 2.1 (time block).
time block
(or simply block) denotes the discretization of a time subinterval \([t_n, t_{n+1}]\) using \(M\gt 0\) grid points,
$$\tau_{n,m} = t_n + \Delta t \tau_{m}, \quad m\in \{1, \ldots, M\},$$
where the \(\tau_{m} \in [0,1]\) denote normalized grid points in time used for all blocks.
We choose the name “block” in order to have a generic name for the internal steps inside each time subinterval. A block could be several time steps of a classical time-stepping scheme (e.g.,
Runge–Kutta; cf. section
), the quadrature nodes of a collocation method (cf. section
), or a combination of both. But in every case, a block represents the time domain that is associated to one computational process of the time parallelization. A block can also collapse by setting
, so that we retrieve a standard uniform time discretization with time step
\(\Delta t\)
. The additional structure provided by blocks will be useful when describing and analyzing two-level methods which use different numbers of grid points per block for each level; cf. section
Definition 2.2 (block variable).
block variable
is a vector
$$\boldsymbol{u}_n = [u_{n,1},u_{n,2},\ldots,u_{n,M}]^T,$$
where \(u_{n,m}\) is an approximation of \(u(\tau_{n,m})\) on the time block for the time subinterval \([t_n, t_{n+1}]\) . For \(M=1\) , \(\boldsymbol{u}_n\) reduces to a scalar approximation of \(u
(\tau_{n,M})\equiv u(t_{n+1})\)
Some iterative PinT methods like
(see section
) use values defined at the interfaces between subintervals
\([t_n, t_{n+1}]\)
. Other algorithms, like PFASST (see section
), update solution values in the interior of blocks. In the first case, the block variable is the right interface value with
and thus
. In the second case, it consists of
values in the time block
\([t_n, t_{n+1}]\)
\(M\gt 1\)
. In both cases, PinT algorithms can be defined as
iterative processes updating the block variables.
Remark 2.3.
While the adjective “time” is natural for evolution problems, PinT algorithms can also be applied to recurrence relations in different contexts like deep learning [
] or when computing Gauss quadrature formulas [
]. Therefore, we will not systematically mention “time” when talking about blocks and block variables.
Definition 2.4 (block operators).
We denote as block operators the two linear functions \(\boldsymbol{\phi }:\mathbb{C}^M\rightarrow \mathbb{C}^M\) and \(\boldsymbol{\chi }:\mathbb{C}^M\rightarrow \mathbb{C}^M\) for which the block
variables of a numerical solution of
$$\boldsymbol{\phi }(\boldsymbol{u}_1) = \boldsymbol{\chi }(u_0\boldsymbol{{\mathbb{1}}}),\quad \boldsymbol{\phi }(\boldsymbol{u}_{n+1})= \boldsymbol{\chi }(\boldsymbol{u}_n),\quad n=1,2,\
with \(\boldsymbol{{\mathbb{1}}}:=[1,\dots,1]^T\) . The time integration operator \(\boldsymbol{\phi }\) is bijective and \(\boldsymbol{\chi }\) is a transmission operator. The time propagator
updating \(\boldsymbol{u}_n\) to \(\boldsymbol{u}_{n+1}\) is given by
$$\boldsymbol{\psi } := \boldsymbol{\phi }^{-1}\boldsymbol{\chi }.$$
2.1.1. Example with Runge–Kutta methods.
Consider numerical integration of (
) with a Runge–Kutta method with stability function
equidistant time steps per block, there are two natural ways to write the method using block operators:
The volume formulation
: set
. Setting
\(r:=R(\lambda \Delta t/\ell )^{-1}\)
, the block operators are the
\(M\times M\)
sparse matrices
$$\boldsymbol{\phi } := \begin{pmatrix} r & \\ -1 & r &\\ & \ddots & \ddots \end{pmatrix},\quad \boldsymbol{\chi } := \begin{pmatrix} 0 & \dots & 0 & 1\\ \vdots & & \vdots & 0\\ \vdots & & \vdots & \
vdots \end{pmatrix}.$$
The interface formulation
: set
so that
$$\boldsymbol{\phi }:=R(\lambda \Delta t/\ell )^{-\ell },\quad \boldsymbol{\chi }:=1.$$
2.1.2. Example with collocation methods.
Collocation methods are special implicit Runge–Kutta methods [
, Chap. 4, sect. 4] and instrumental when defining PFASST in section
. We show their representation with block operators. Starting from the Picard formulation for (
) in one time subinterval
\([t_n, t_{n+1}]\)
$$u(t) = u(t_n) + \int_{t_n}^{t} \lambda u(\tau )d\tau,$$
we choose a quadrature rule to approximate the integral. We consider only Lobatto or Radau-II type quadrature nodes where the last quadrature node coincides with the right subinterval boundary. This
gives us quadrature nodes for each subinterval that form the block discretization points
of Definition
, with
. We approximate the solution
at each node by
$$u_{n,m} = u_{n,0} + \lambda \Delta t \sum_{j=1}^{M} q_{m,j} u_{n,j} \quad \text{with}\quad q_{m,j} := \int_{0}^{\tau_m} l_j(s)ds,$$
are the Lagrange polynomials associated with the nodes
. Combining all the nodal values, we form the block variable
, which satisfies the linear system
$$(\mathbf{I}-\mathbf{Q}) \boldsymbol{u}_{n} = \begin{pmatrix} u_{n,0} \\ \vdots \\ u_{n,0} \end{pmatrix} = \begin{bmatrix} 0 & \dots & 0 & 1 \\ \vdots & & \vdots & \vdots \\ 0 & \dots & 0 & 1 \end
{bmatrix}\boldsymbol{u}_{n-1} =: \mathbf{H} \boldsymbol{u}_{n-1},$$
with the quadrature matrix
\(\mathbf{Q} := \lambda \Delta t (q_{m,j})\)
the identity matrix, and
sometimes called the transfer matrix that copies the last value of the previous time block to obtain the initial value
of the current block.
The integration and transfer block operators from Definition
then become
^6 \(\boldsymbol{\phi } := (\mathbf{I}-\mathbf{Q})\)
\(\boldsymbol{\chi } := \mathbf{H}\)
2.2. Block iteration.
Having defined the block operators for our problem, we write the numerical approximation (
) of (
) as the
all-at-once global problem
$$\mathbf{A}\boldsymbol{u} := \begin{pmatrix} \boldsymbol{\phi } & & &\\ -\boldsymbol{\chi } & \boldsymbol{\phi } & &\\ & \ddots & \ddots &\\ & & -\boldsymbol{\chi } & \boldsymbol{\phi } \end
{pmatrix} \begin{bmatrix} \boldsymbol{u}_1\\\boldsymbol{u}_2\\\vdots \\\boldsymbol{u}_N \end{bmatrix} = \begin{bmatrix} \boldsymbol{\chi }(u_0\boldsymbol{{\mathbb{1}}})\\0\\\vdots \\0 \end{bmatrix}
=: \boldsymbol{f}.$$
Iterative PinT algorithms solve (
) by updating a vector
\(\boldsymbol{u}^k = [\boldsymbol{u}^k_1, \dots, \boldsymbol{u}^k_N]^T\)
until some stopping criterion is satisfied. If the global iteration can be written as a local update for each block variable separately, we call the local update formula a block iteration.
Definition 2.5 (primary block iteration).
A primary block iteration is an updating formula for \(n\geq 0\) of the form
$$\boldsymbol{u}^{k+1}_{n+1} = \mathbf{B}^0_1(\boldsymbol{u}^k_{n+1}) + \mathbf{B}^1_0\left (\boldsymbol{u}^{k+1}_{n}\right ) + \mathbf{B}^0_0\left (\boldsymbol{u}^{k}_{n}\right ), \quad \boldsymbol
{u}_{0}^k = u_0\boldsymbol{{\mathbb{1}}} \quad \forall k \in \mathbb{N},$$
where \(\mathbf{B}^0_1\) , \(\mathbf{B}^1_0\) , and \(\mathbf{B}^0_0\) are linear operators from \(\mathbb{C}^M\) to \(\mathbb{C}^M\) that satisfy the consistency condition ^7
$$(\mathbf{B}_1^0-\mathbf{I})\boldsymbol{\psi } + \mathbf{B}^1_0 + \mathbf{B}^0_0=0$$
with \(\boldsymbol{\psi }\) defined in
Note that a block iteration is always associated with an all-at-once global problem, and the primary block iteration (
) should converge to the solution of (
(left) shows a graphical representation of a primary block iteration using a
to represent the dependencies of
on the other block variables. The
-axis represents the block index
(time), and the
-axis represents the iteration index
. Arrows show dependencies from previous
indices and can only go from left to right and/or from bottom to top. For the primary block iteration, we consider only dependencies from the previous block
and iterate
Fig. 1. \(kn\) -graphs for a generic primary block iteration (left), damped Block Jacobi (middle), and Approximate Block Gauss–Seidel (right).
More general block iterations can also be considered for specific iterative PinT methods, e.g., MGRIT with FCF-relaxation (see Remark
). Other algorithms also consist of combinations of two or more block iterations, for example, STMG (cf. section
) or PFASST (cf. section
). But we show in those sections that we can reduce those combinations into a single primary block iteration, hence we focus here mostly on primary block iterations to introduce our analysis
We next describe the Block Jacobi relaxation (section
) and the approximate Block Gauss–Seidel iteration (section
), which are key components used to describe iterative PinT methods.
2.2.1. Block Jacobi relaxation.
A damped Block Jacobi iteration for the global problem (
) can be written as
$$\boldsymbol{u}^{k+1} = \boldsymbol{u}^k + \omega \mathbf{D}^{-1}(\boldsymbol{f}-\mathbf{A}\boldsymbol{u}^k),$$
is a block diagonal matrix constructed with the integration operator
\(\boldsymbol{\phi }\)
, and
\(\omega \gt 0\)
is a relaxation parameter. For
\(n\gt 0\)
, the corresponding block formulation is
$$\boldsymbol{u}_{n+1}^{k+1} = (1-\omega )\boldsymbol{u}_{n+1}^k + \omega \boldsymbol{\phi }^{-1}\boldsymbol{\chi }\boldsymbol{u}_{n}^k,$$
which is a primary block iteration with
. Its
-graph is shown in Figure
(middle). The consistency condition (
) is satisfied, since
$$((1-\omega )\mathbf{I}-\mathbf{I})\boldsymbol{\phi }^{-1}\boldsymbol{\chi } + 0 + \omega \boldsymbol{\phi }^{-1}\boldsymbol{\chi } = 0.$$
Note that selecting
simplifies the block iteration to
$$\boldsymbol{u}_{n+1}^{k+1} = \boldsymbol{\phi }^{-1}\boldsymbol{\chi }\boldsymbol{u}_{n}^k.$$
2.2.2. Approximate Block Gauss–Seidel iteration.
Let us consider a Block Gauss–Seidel type preconditioned iteration for the global problem (
$$\boldsymbol{u}^{k+1} = \boldsymbol{u}^k + \mathbf{P}_{GS}^{-1}(\boldsymbol{f}-\mathbf{A}\boldsymbol{u}^k),\quad \mathbf{P}_{GS} = \begin{bmatrix} \boldsymbol{\tilde{\phi }} & & \\ - \boldsymbol{\
chi } & \boldsymbol{\tilde{\phi }} & \\ & \ddots & \ddots \end{bmatrix},$$
where the block operator
\(\boldsymbol{\tilde{\phi }}\)
corresponds to an approximation of
\(\boldsymbol{\phi }\)
. This approximation can be based on time-step coarsening, but could also use other approaches, e.g., a lower order time integration method. In general,
\(\boldsymbol{\tilde{\phi }}\)
must be cheaper than
\(\boldsymbol{\phi }\)
, but it is also less accurate. Subtracting
in (
) and multiplying by
yields the block iteration of this
Approximate Block Gauss–Seidel
$$\boldsymbol{u}_{n+1}^{k+1} = \left [\mathbf{I} - \boldsymbol{\tilde{\phi }^{-1} }\boldsymbol{\phi }\right ] \boldsymbol{u}_{n+1}^{k} + \boldsymbol{\tilde{\phi }^{-1}}\boldsymbol{\chi }\boldsymbol
-graph is shown in Figure
(right). Note that a standard Block Gauss–Seidel iteration for (
) (i.e., with
\(\boldsymbol{\tilde{\phi }} = \boldsymbol{\phi }\)
) is actually a direct solver, the iteration converges in one step by integrating all blocks with
\(\boldsymbol{\phi }\)
sequentially, and its block iteration is simply
$$\boldsymbol{u}_{n+1}^{k+1} = \boldsymbol{\phi }^{-1}\boldsymbol{\chi }\boldsymbol{u}_{n}^{k+1}.$$
2.3. Generating function and error bound for a block iteration.
Before giving a generic expression for the error bound of the primary block iteration (
) using the GFM framework, we first need a definition and a preliminary result. The primary block iteration (
) is defined for each block index
\(n \geq 0\)
, thus we can define the following.
Definition 2.6 (generating function).
The generating function associated with the primary block iteration
is the power series
$$\rho_k(\zeta ) := \sum_{n=0}^{\infty } e^k_{n+1}\zeta^{n+1},$$
where \(e^k_{n+1} := \left \lVert \boldsymbol{u}^k_{n+1}-\boldsymbol{u}_{n+1}\right \rVert\) is the difference between the \(k^{th}\) iterate \(\boldsymbol{u}^k_{n+1}\) and the exact solution \(\
boldsymbol{u}_{n+1}\) for one block of
in some norm on \(\mathbb{C}^M\)
Since the analysis works in any norm, we do not specify a particular one here. In the numerical examples we use the \(L^{\infty }\) norm on \(\mathbb{C}^M\) .
Lemma 2.7.
The generating function for the primary block iteration
$$\rho_{k+1}(\zeta ) \leq \frac{\gamma + \alpha \zeta }{1 - \beta \zeta }\rho_{k}(\zeta ),$$
where \(\alpha := \left \lVert \mathbf{B}_0^0\right \rVert\) , \(\beta := \left \lVert \mathbf{B}_0^1\right \rVert\) , \(\gamma := \left \lVert \mathbf{B}_1^0\right \rVert\) , and the operator norm
is induced by the chosen vector norm.
We start from (
) and subtract the exact solution of (
$$\boldsymbol{u}^{k+1}_{n+1} - \boldsymbol{u}_{n+1} = \mathbf{B}^0_1(\boldsymbol{u}^k_{n+1}) + \mathbf{B}^1_0\left (\boldsymbol{u}^{k+1}_{n}\right ) + \mathbf{B}^0_0\left (\boldsymbol{u}^{k}_{n}\
right ) - \boldsymbol{\psi }(\boldsymbol{u}_{n}).$$
Using the linearity of the block operators and the consistency condition (
) with
, this simplifies to
$$\boldsymbol{u}^{k+1}_{n+1} - \boldsymbol{u}_{n+1} = \mathbf{B}^0_1(\boldsymbol{u}^k_{n+1}-\boldsymbol{u}_{n+1}) + \mathbf{B}^1_0\left (\boldsymbol{u}^{k+1}_{n}-\boldsymbol{u}_{n}\right ) + \mathbf
{B}^0_0\left (\boldsymbol{u}^{k}_{n}-\boldsymbol{u}_{n}\right ).$$
We apply the norm and use the triangle inequality and the operator norms defined above to get the recurrence relation
$$e^{k+1}_{n+1} \leq \gamma e^{k}_{n+1} + \beta e^{k+1}_n + \alpha e^{k}_n$$
for the error. We multiply this inequality by
and sum for
\(n\in \mathbb{N}\)
to get
$$\sum_{n=0}^{\infty }e^{k+1}_{n+1} \zeta^{n+1} \leq \gamma \sum_{n=0}^{\infty }e^{k}_{n+1} \zeta^{n+1} + \beta \sum_{n=0}^{\infty }e^{k+1}_{n} \zeta^{n+1} + \alpha \sum_{n=0}^{\infty }e^{k}_{n} \
Note that this is a formal power series expansion for
small in the sense of generating functions [
, section 1.2.9]. Using Definition
and that
for all
we find
$$\rho_{k+1}(\zeta ) \leq \gamma \rho_{k}(\zeta ) + \beta \zeta \sum_{n=1}^{\infty }e^{k+1}_{n} \zeta^{n} + \alpha \zeta \sum_{n=1}^{\infty }e^{k}_{n} \zeta^{n}.$$
Shifting indices leads to
$$(1-\beta \zeta )\rho_{k+1}(\zeta ) \leq (\gamma + \alpha \zeta )\rho_{k}(\zeta )$$
and concludes the proof.□
Theorem 2.8.
Consider the primary block iteration
and let
$$\delta := \underset{n=1, \ldots, N}{\max }\left \lVert \boldsymbol{u}^0_n-\boldsymbol{u}_n\right \rVert$$
be the maximum error of the initial guess over all blocks. Then, using the notation of Lemma 2.7, we have
$$e^k_{n+1} \leq \theta_{n+1}^k(\alpha, \beta, \gamma )\delta$$
for \(k\gt 0\) , where \(\theta_{n+1}^k\) is a bounding function defined as follows:
if only \(\gamma=0\) , then
\(\beta=0\) , then
\(\alpha=0\) , then
\(\alpha\) , nor \(\beta\) , nor \(\gamma\) is zero, then
$$\theta_{n+1}^k = \gamma^k \sum_{i=0}^{\min (n, k)} \sum_{l=0}^{n-i} \binom{k}{i}\binom{l+k-1}{l} \left (\frac{\alpha }{\gamma }\right )^i\beta^l.$$
We call any error bound obtained from one of these formulas a GFM-bound.
The proof uses Lemma
to bound the generating function at
$$\rho_0(\zeta ) \leq \delta \sum_{n=0}^{\infty } \zeta^{n+1},$$
which covers arbitrary initial guesses for defining starting values
for each block. For specific initial guesses,
\(\rho_{0}(\zeta )\)
can be bounded differently [
, Proof of Thm. 1]. The error bound is then computed by coefficient identification after a power series expansion. The full rather technical proof can be found in Appendix
In the numerical examples shown below, we find that the estimate from Theorem
is not always sharp; cf. section
. If the last time point of the blocks coincides with the right bound of the subinterval,
it is helpful to define the
interface error
at the right boundary point of the
block as
$$\bar{e}_{n+1}^k := |\bar{u}^k_{n+1}-\bar{u}_{n+1}|,$$
is the last element of the block variable
. We then multiply (
) by
to get
$$e_M^T (\boldsymbol{u}_{n+1}^{k+1}-\boldsymbol{u}_{n+1}) = \boldsymbol{b}_1^0 (\boldsymbol{u}_{n+1}^{k}-\boldsymbol{u}_{n+1}) + \boldsymbol{b}_0^1 (\boldsymbol{u}_{n}^{k+1}-\boldsymbol{u}_{n}) + \
boldsymbol{b}_0^0 (\boldsymbol{u}_{n}^{k}-\boldsymbol{u}_{n}),$$
is the last row of the block operator
. Taking the absolute value on both sides, we recognize the interface error
on the left-hand side. By neglecting the error from interior points and using the triangle inequality, we get the approximation
$$\bar{e}_{n+1}^{k+1} \lessapprox \bar{\gamma }\bar{e}_{n+1}^{k} + \bar{\beta }\bar{e}_{n}^{k+1} + \bar{\alpha }\bar{e}_{n}^{k},$$
\(\bar{\alpha }:=|\bar{b}_0^0|\)
\(\bar{\beta }:=|\bar{b}_1^0|\)
\(\bar{\gamma }:=|\bar{b}_0^1|\)
Corollary 2.9 (interface error approximation).
Defining for the initial interface error the bound \(\bar{\delta }:=\max_{n\in \{1,\dots, N\}} \left \lVert \bar{u}^0_n- \bar{u}_n\right \rVert\) , we obtain for the interface error the approximation
$$\bar{e}_{n+1}^{k} \lessapprox \bar{\theta }_{n+1}^k \bar{\delta },\quad \bar{\theta }_{n+1}^k:=\theta_{n+1}^k(\bar{\alpha }, \bar{\beta },\bar{\gamma }),$$
with \(\theta_{n+1}^k\) defined in Theorem 2.8
The result follows as in the proof of Lemma
using approximate relations.□
Remark 2.10.
For the general case, the error at the interface
\(\bar{e}_{n+1}^{k+1}\) is not the same as
the error for the whole block
. Only a block discretization using a single point (
) makes the two values identical. Furthermore, Corollary
is generally not an upper bound, but an approximation thereof.
3. Writing Parareal and MGRIT as block iterations.
3.1. Description of the algorithm.
algorithm introduced by Lions, Maday, and Turinici [
] corresponds to a block iteration update with scalar blocks (
), and its convergence was analyzed in [
]. We propose here a new description of
in the scope of the GFM framework, which states that
is simply a combination of two preconditioned iterations applied to the global problem (
), namely one block Jacobi relaxation without damping (section
), followed by an ABGS iteration (section
We denote by
the intermediate solution after the Block Jacobi step. Using (
) and (
), the two successive primary block iteration steps are
\boldsymbol{u}_{n+1}^{k+1/2} &= \boldsymbol{\phi }^{-1}\boldsymbol{\chi }\boldsymbol{u}_{n}^k,
\boldsymbol{u}_{n+1}^{k+1} &= \left [\mathbf{I} - \boldsymbol{\tilde{\phi }^{-1}} \boldsymbol{\phi }\right ] \boldsymbol{u}_{n+1}^{k+1/2} + \boldsymbol{\tilde{\phi }^{-1}}\boldsymbol{\chi }\
Combining both yields the primary block iteration
$$\boldsymbol{u}_{n+1}^{k+1} = \left [\boldsymbol{\phi }^{-1}\boldsymbol{\chi } - \boldsymbol{\tilde{\phi }^{-1}}\boldsymbol{\chi }\right ]\boldsymbol{u}_{n}^k + \boldsymbol{\tilde{\phi }^{-1}}\
boldsymbol{\chi }\boldsymbol{u}_{n}^{k+1}.$$
Now as stated in section
\(\boldsymbol{\tilde{\phi }}\)
is an approximation of the integration operator
\(\boldsymbol{\phi }\)
, which is cheaper to invert but less accurate.
In other words, if we define
$$\mathcal{F} := \boldsymbol{\phi }^{-1}\boldsymbol{\chi }, \quad \mathcal{G} := \boldsymbol{\tilde{\phi }^{-1}}\boldsymbol{\chi }$$
to be a fine and a coarse propagator on one block, then (
) becomes
$$\boldsymbol{u}_{n+1}^{k+1} = \mathcal{F} \boldsymbol{u}_{n}^k + \mathcal{G} \boldsymbol{u}_{n}^{k+1} - \mathcal{G} \boldsymbol{u}_{n}^k,$$
which is the
update formula derived from the approximate Newton update in the multiple shooting approximation in [
]. Iteration (
) is a primary block iteration in the sense of Definition
, and
. Its
-graph is shown in Figure
(left). The consistency condition (
) is satisfied, since
\((0 - \mathbf{I})\mathcal{F} + \mathcal{G} + (\mathcal{F}-\mathcal{G}) = 0\)
. If we subtract
in (
), multiply both sides by
\(\boldsymbol{\phi }\)
, and rearrange terms, we can write
as the preconditioned fixed-point iteration
$$\boldsymbol{u}^{k+1} = \boldsymbol{u}^k + \mathbf{M}^{-1}(\boldsymbol{f}-\mathbf{A}\boldsymbol{u}^k),\quad \mathbf{M}:= \begin{pmatrix} \boldsymbol{\phi }& &\\ -\boldsymbol{\phi }\boldsymbol{\tilde
{\phi }^{-1}}\boldsymbol{\chi } & \boldsymbol{\phi } &\\ & \ddots & \ddots \end{pmatrix},$$
with iteration matrix
\(\mathbf{R}_{\rm{P\small{ARAREAL}}} = \mathbf{I} - \mathbf{M}^{-1}\mathbf{A}\)
Fig. 2. \(kn\) -graphs for Parareal/MGRIT with F-relaxation (left) and MGRIT with FCF-relaxation/Parareal with overlap (right).
Remark 3.1.
It is known in the literature that
is equivalent to a two-level MGRIT algorithm with F-relaxation [
]. In MGRIT, however, one also often uses FCF-relaxation, which is a combination of
nondamped (
) Block Jacobi relaxation steps, followed by an ABGS step: denoting by
the intermediary Block Jacobi iterations, we obtain
\boldsymbol{u}_{n+1}^{k+1/3} &= \boldsymbol{\phi }^{-1}\boldsymbol{\chi }\boldsymbol{u}_{n}^k,
\boldsymbol{u}_{n+1}^{k+2/3} &= \boldsymbol{\phi }^{-1}\boldsymbol{\chi }\boldsymbol{u}_{n}^{k+1/3},
\boldsymbol{u}_{n+1}^{k+1} &= \left [\mathbf{I} - \boldsymbol{\tilde{\phi }^{-1}} \boldsymbol{\phi }\right ] \boldsymbol{u}_{n+1}^{k+2/3} + \boldsymbol{\tilde{\phi }^{-1}}\boldsymbol{\chi }\
Shifting the
index in the first Block Jacobi iteration, combining all of them and reusing the
notation then gives
$$\boldsymbol{u}^{k+1}_{n+1} = \mathbf{B}^0_{-1}(\boldsymbol{u}^k_{n-1}) + \mathbf{B}^1_0\left (\boldsymbol{u}^{k+1}_{n}\right ), \quad \mathbf{B}^0_{-1} = (\mathcal{F} - \mathcal{G})\mathcal{F},\; \
mathbf{B}^1_{0} = \mathcal{G},$$
which is the update formula of
with overlap, shown to be equivalent to MGRIT with FCF-relaxation [
, Thm. 4].
This block iteration, whose
-graph is represented in Figure
(right), not only links two successive block variables with time index
, but also uses a block with time index
. It is not a primary block iteration in the sense of Definition
anymore. Although it can be analyzed using generating functions [
, Thm. 6], we focus on primary block iterations here and leave more complex block iterations like this one for future work.
3.2. Convergence analysis with GFM-bounds.
In their convergence analysis of
for nonlinear problems [
], the authors obtain a double recurrence of the form
\(e_{n+1}^{k+1} \leq \alpha e_{n}^k + \beta e_{n}^{k+1}\)
, where
come from Lipschitz constants and local truncation error bounds. Using the same notation as in section
, with
\(\alpha =\left \lVert \mathcal{F}-\mathcal{G}\right \rVert\)
\(\beta = \left \lVert \mathcal{G}\right \rVert\)
, we find [
, Thm. 1] that
$$e_{n+1}^k \leq \delta \frac{\alpha^k}{k!} \bar{\beta }^{n-k}\prod_{l=1}^{k}(n+1-l), \quad \bar{\beta } = \max (1, \beta ).$$
This is different from the GFM-bound
$$e_{n+1}^{k} \leq \delta \frac{\alpha^k}{(k-1)!} \sum_{i=0}^{n-k}\prod_{l=1}^{k-1}(i+l)\beta^{i}$$
we get when applying Theorem
to the block iteration of
. The difference stems from an approximation in the proof of [
, Thm. 1] which leads to the simpler and more explicit bound in (
). The two bounds are equal when
, but for
\(\beta \neq 1\)
, the GFM-bound in (
) is sharper. To illustrate this, we use the interface formulation of section
: we set
and use the block operators
$$\boldsymbol{\phi } := R(\lambda \Delta t/\ell )^{-\ell }, \quad \boldsymbol{\chi } := 1, \quad \boldsymbol{\tilde{\phi }} := R_\Delta (\lambda \Delta t/\ell_\Delta )^{-\ell_\Delta }.$$
We solve (
) for
\(\lambda \in \{i, -1\}\)
\(t\in [0,2\pi ]\)
, using
\(\ell :=10\)
fine time steps per block, the standard fourth order Runge–Kutta method for
\(\boldsymbol{\phi }\)
, and
coarse time steps per block with Backward Euler for
\(\boldsymbol{\tilde{\phi }}\)
. Figure
shows the resulting error (dashed line) at the last time point, the original error bound (
), and the new bound (
). We also plot the linear bound obtained from the
\(L^{\infty }\)
norm of the iteration matrix
defined just after (
). For both values of
, the GFM-bounds coincide with the linear bound from
for the first iteration, and the GFM-bound captures the superlinear contraction in later iterations. For
\(\lambda =i\)
, the old and new bounds are similar since
is close to 1. However, for
is smaller than one, the new bound gives a sharper estimate of the error, and we can also see that the new bound captures well the transition from the linear to the superlinear convergence regime. On
the left in Figure
seems to converge well for imaginary
\(\lambda =i\)
. This, however, should not be seen as a working example of
for a hyperbolic type problem, but is rather the effect of the relatively good accuracy of the coarse solver using 20 points per wave length for one wavelength present in the solution time interval
we consider. Denoting by
error with respect to the exact solution, the accuracy of the coarse solver (
\(\epsilon_\Delta =\)
6.22e-01) allows the
error to reach the fine solver error (
\(\epsilon_\Delta =\)
8.16e-07) in
iterations. Since the ideal parallel speedup of
, neglecting the coarse solver cost, is bounded by
, sect. 4], this indicates, however, almost no speedup in practical applications (see also [
]). If we increase the coarse solver error, for instance by multiplying
by a factor 4 to have now four times more wavelength in the domain, and only 12.5 points per wavelength resolution in the coarse solver, the convergence of
deteriorates massively, as we can see in Figure
(left), while this is not the case for the purely negative real fourfold
Fig. 3. Error bounds for Parareal for (2.1). Left: \(\lambda =i\) ; right: \(\lambda=-1\) . Note that for \(\lambda =i\) , the GFM-bound and the original one are almost identical.
Fig. 4. Error bounds for Parareal for (2.1). Left: \(\lambda=4i\) ; right: \(\lambda=-4\) .
This illustrates how Parareal has great convergence difficulties for hyperbolic problems, already well-documented in the literature; see, e.g., [
]. This is analogous to the difficulties due to the pollution error and damping in multigrid methods when solving medium to high frequency associated time harmonic problems; see [
] and references therein.
4. Writing two-level Time Multi-Grid as a block iteration.
The idea of TMG goes back to the 1980s and 1990s [
]. Furthermore, not long after
was introduced, it was shown to be equivalent to a TMG method, independently of the type of approximation used for the coarse solver [
]. This inspired the development of other time multilevel methods, in particular MGRIT [
]. However,
and MGRIT are usually viewed as iterations acting on values located at the block interface, while TMG-based algorithms, in particular STMG [
], use an iteration updating volume values (i.e., all fine time points in the time domain). In this section, we focus on a generic description of TMG and show how to write its two-level form applied
to the Dahlquist problem as block iteration. In particular, we will show in section
that PFASST can be expressed as a specific variant of TMG. The extension of this analysis to more levels and comparison with multilevel MGRIT is left for future work.
4.1. Definition of a coarse block problem for Time Multi-Grid.
To build a coarse problem, we consider a coarsened version of the global problem (
), with an
matrix having
\(N\cdot M_C\)
rows instead of
\(N\cdot M\)
. For each of the
blocks, let
\((\tau^C_{m})_{1\leq m\leq M_C}\)
be the normalized
grid points
of a
block discretization, with
\(M_C \lt M\)
We can define a coarse block operator
\(\boldsymbol{\phi }_C\)
by using the same time integration method as for
\(\boldsymbol{\phi }\)
on every block, but with fewer time points. This is equivalent to geometric coarsening used for
-multigrid (or geometric multigrid [
]), e.g., when using one time step of a Runge–Kutta method between each time grid point. It can also be equivalent to spectral coarsening used for
-multigrid (or spectral element multigrid [
]), e.g., when one step of a collocation method on
points is used within each block (as for PFASST, see section
We also consider the associated transmission operator
\(\boldsymbol{\chi }_C\)
and denote by
the block variable on this coarse time block, which satisfies
$$\boldsymbol{\phi }_C(\boldsymbol{u}^C_1) = \boldsymbol{\chi }_C\mathbf{T}_F^C(u_0\boldsymbol{{\mathbb{1}}}),\quad \boldsymbol{\phi }_C\boldsymbol{u}^C_{n+1} = \boldsymbol{\chi }_C\boldsymbol{u}^C_
{n} ,\quad n=1, 2, \dots, N-1.$$
be the global coarse variable that solves
$$\mathbf{A}_C\boldsymbol{u}^C := \begin{pmatrix} \boldsymbol{\phi }_C & & &\\ -\boldsymbol{\chi }_C & \boldsymbol{\phi }_C & &\\ & \ddots & \ddots &\\ & & -\boldsymbol{\chi }_C & \boldsymbol{\phi }
_C \end{pmatrix} \begin{bmatrix} \boldsymbol{u}^C_{1}\\\boldsymbol{u}^C_{2}\\\vdots \\\boldsymbol{u}^C_{N} \end{bmatrix} = \begin{bmatrix} \boldsymbol{\chi }_C\mathbf{T}_F^C(u_0\boldsymbol{{\mathbb
{1}}})\\0\\\vdots \\0 \end{bmatrix} =: \boldsymbol{f^C}.$$
is a block restriction operator, i.e., a transfer matrix from a fine (F) to a coarse (C) block discretization. Similarly, we have a block prolongation operator
, i.e., a transfer matrix from a coarse (C) to a fine (F) block discretization.
Remark 4.1.
While both
\(\boldsymbol{\phi }_C\)
\(\boldsymbol{\tilde{\phi }}\)
are approximations of the fine operator
\(\boldsymbol{\phi }\)
, the main difference between
\(\boldsymbol{\phi }_C\)
\(\boldsymbol{\tilde{\phi }}\)
is the size of the vectors they can be applied to (
). Furthermore,
\(\boldsymbol{\phi }_C\)
itself does need the transfer operators
to compute approximate values on the fine time points, while
\(\boldsymbol{\tilde{\phi }}\)
alone is sufficient (even if it can hide some restriction and interpolation process within). However, the definition of a Coarse Grid Correction in the classical multigrid formalism needs this
separation between transfer and coarse operators (see [
, sect. 2.2.2]), which limits the use of
\(\boldsymbol{\tilde{\phi }}\)
and requires the introduction of
\(\boldsymbol{\phi }_C\)
4.2. Block iteration of a Coarse Grid Correction.
Let us consider a standalone Coarse Grid Correction (CGC), without pre- or postsmoothing,
of a two-level multigrid iteration [
] applied to (
). One CGC step applied to (
) can be written as
$$\boldsymbol{u}^{k+1} = \boldsymbol{u}^{k} + \mathbf{\bar{T}}_C^F \mathbf{A}_C^{-1} \mathbf{\bar{T}}_F^C (\boldsymbol{f}-\mathbf{A}\boldsymbol{u}^k),$$
denotes the block diagonal matrix formed with
on the diagonal, and similarly for
. When splitting (
) into two steps,
\mathbf{A}_C\boldsymbol{d} &= \mathbf{\bar{T}}_F^C (\boldsymbol{f}-\mathbf{A}\boldsymbol{u}^k),
\boldsymbol{u}^{k+1} &= \boldsymbol{u}^{k} + \mathbf{\bar{T}}_C^F\boldsymbol{d},
the CGC term (or defect)
appears explicitly. Expanding the two steps for
\(n\gt 0\)
into a block formulation and inverting
\(\boldsymbol{\phi }_C\)
leads to
\boldsymbol{d}_{n+1} &= \boldsymbol{\phi }_C^{-1}\mathbf{T}_F^C\boldsymbol{\chi }\boldsymbol{u}^{k}_{n} - \boldsymbol{\phi }_C^{-1}\mathbf{T}_F^C\boldsymbol{\phi }\boldsymbol{u}^{k}_{n+1} + \
boldsymbol{\phi }_C^{-1}\boldsymbol{\chi }_C\boldsymbol{d}_{n},
\boldsymbol{u}_{n+1}^{k+1} &= \boldsymbol{u}_{n+1}^{k} + \mathbf{T}_C^F\boldsymbol{d}_{n+1}.
Now we need the following simplifying assumption.
Assumption 4.2.
followed by restriction
leaves the coarse block variables unchanged, i.e.,
$$\mathbf{T}_F^C \mathbf{T}_C^F = \mathbf{I}.$$
This condition is satisfied in many situations (e.g., restriction with standard injection on a coarse subset of the fine points, or polynomial interpolation with any possible coarse block
Using it in (
) for block index
$$\boldsymbol{d}_{n} = \mathbf{T}_F^C\left (\boldsymbol{u}_{n}^{k+1} - \boldsymbol{u}_{n}^{k}\right ).$$
into (
) on the right and the resulting
into (
) leads to
$$\boldsymbol{u}_{n+1}^{k+1} = (\mathbf{I}-\mathbf{T}_C^F\boldsymbol{\phi }_C^{-1}\mathbf{T}_F^C\boldsymbol{\phi })\boldsymbol{u}^{k}_{n+1} + \mathbf{T}_C^F\boldsymbol{\phi }_C^{-1}\boldsymbol{\chi }
_C\mathbf{T}_F^C\boldsymbol{u}_{n}^{k+1} + \mathbf{T}_C^F\boldsymbol{\phi }_C^{-1}\Delta_\chi \boldsymbol{u}^{k}_{n}$$
\(\Delta_\chi := \mathbf{T}_F^C\boldsymbol{\chi } - \boldsymbol{\chi }_C\mathbf{T}_F^C\)
This is a primary block iteration in the sense of Definition
, and we give its
-graph in Figure
(left). We can simplify it further using a second assumption.
Fig. 5. \(kn\) -graphs for the CGC block iteration, with Assumption 4.2 only (left) and with both Assumptions 4.2 and 4.3 (right).
Assumption 4.3.
We consider operators
\(\boldsymbol{\chi }\)
\(\boldsymbol{\chi }_C\)
such that
$$\Delta_\chi = \mathbf{T}_F^C\boldsymbol{\chi } - \boldsymbol{\chi }_C\mathbf{T}_F^C=0.$$
This holds for classical time-stepping methods when both left and right time subinterval boundaries are included in the block variables, or for collocation methods using Radau-II or Lobatto type
This last assumption is important to define PFASST (cf. section
and see Bolten, Moser, and Speck [
, Remark 1] for more details) and simplifies the analysis of TMG, as both methods use this block iteration. Then, (
) reduces to
$$\boldsymbol{u}_{n+1}^{k+1} = (\mathbf{I}-\mathbf{T}_C^F\boldsymbol{\phi }_C^{-1}\mathbf{T}_F^C\boldsymbol{\phi })\boldsymbol{u}^{k}_{n+1} + \mathbf{T}_C^F\boldsymbol{\phi }_C^{-1}\mathbf{T}_F^C\
boldsymbol{\chi }\boldsymbol{u}_{n}^{k+1}.$$
Again, this is a primary block iteration for which the
-graph is given in Figure
(right). It satisfies the consistency condition
) since
\(((\mathbf{I}-\mathbf{T}_C^F\boldsymbol{\phi }_C^{-1}\mathbf{T}_F^C\boldsymbol{\phi }) - \mathbf{I}) \boldsymbol{\phi }^{-1}\boldsymbol{\chi } + \mathbf{T}_C^F\boldsymbol{\phi }_C^{-1}\mathbf{T}_F^C
\boldsymbol{\chi } = 0\)
4.3. Two-level Time Multi-Grid.
Gander and Neumüller introduced STMG for discontinuous Galerkin approximations in time [
], which leads to a similar system as (
). We describe the two-level approach for general time discretizations, following their multilevel description [
, sect. 3]. Consider a coarse problem defined as in section
and a damped Block Jacobi smoother as in section
with relaxation parameter
. Then, a two-level TMG iteration requires the following steps, each corresponding to a block iteration:
prerelaxation steps (
) with Block Jacobi smoother,
one CGC (
) inverting the coarse grid operators,
postrelaxation steps (
) with the Block Jacobi smoother.
If we combine all these block iterations we do not obtain a primary block iteration but a more complex expression, of which the analysis is beyond the scope of this paper. However, a primary block
iteration in the sense of Definition
is obtained when
holds, so that
only one prerelaxation step is used, \(\nu_1=1\) ,
and no postrelaxation step is considered, \(\nu_2=0\) .
Then, the two-level iteration reduces to the two block updates from (
) and (
\boldsymbol{u}_{n+1}^{k+1/2} &= (1-\omega )\boldsymbol{u}_{n+1}^k + \omega \boldsymbol{\phi }^{-1}\boldsymbol{\chi } \boldsymbol{u}_{n}^k,
\boldsymbol{u}_{n+1}^{k+1} &= \left (\mathbf{I}-\mathbf{T}_C^F\boldsymbol{\phi }_C^{-1}\mathbf{T}_F^C\boldsymbol{\phi }\right )\boldsymbol{u}^{k+1/2}_{n+1} + \mathbf{T}_C^F\boldsymbol{\phi }_C^{-1}\
boldsymbol{\chi }_C\mathbf{T}_F^C\boldsymbol{u}_{n}^{k+1},
as an intermediate index. Combining (
) and (
) leads to the primary block iteration
$$\boldsymbol{u}_{n+1}^{k+1} = \left (\mathbf{I}-\mathbf{T}_C^F\boldsymbol{\phi }_C^{-1}\mathbf{T}_F^C\boldsymbol{\phi }\right ) \left [(1-\omega )\boldsymbol{u}_{n+1}^k + \omega \boldsymbol{\phi }^
{-1}\boldsymbol{\chi } \boldsymbol{u}_{n}^k\right ] + \mathbf{T}_C^F\boldsymbol{\phi }_C^{-1}\boldsymbol{\chi }_C\mathbf{T}_F^C\boldsymbol{u}_{n}^{k+1}.$$
\(\omega \neq 1\)
, all block operators in this primary block iteration are nonzero, and applying Theorem
leads to the error bound (
). Since the latter is similar to the one obtained for PFASST in section
, we leave its comparison with numerical experiments to section
. For
we get the simplified iteration
$$\boldsymbol{u}_{n+1}^{k+1} = \left (\boldsymbol{\phi }^{-1}\boldsymbol{\chi }-\mathbf{T}_C^F\boldsymbol{\phi }_C^{-1}\mathbf{T}_F^C\boldsymbol{\chi }\right ) \boldsymbol{u}_{n}^k + \mathbf{T}_C^F\
boldsymbol{\phi }_C^{-1}\boldsymbol{\chi }_C\mathbf{T}_F^C\boldsymbol{u}_{n}^{k+1}$$
and the following result.
Proposition 4.4.
Consider a CGC as in section 4.2, such that the prolongation and restriction operators (in time) satisfy Assumption 4.2. If Assumption 4.3 also holds and only one Block Jacobi prerelaxation step
with \(\omega=1\) is used before the CGC, then two-level
is equivalent to Parareal, where the coarse solver \(\mathcal{G}\) uses the same time integrator as the fine solver \(\mathcal{F}\) but with larger time steps, i.e.,
$$\mathcal{G} := \mathbf{T}_C^F\boldsymbol{\phi }_C^{-1}\mathbf{T}_F^C\boldsymbol{\chi }.$$
This is a particular case of a result obtained before by Gander and Vandewalle [
, Theorem 3.1] but is presented here in the context of our GFM framework and the definition of
given in section
. In particular, it shows that the simplified two-grid iteration on (
) is equivalent to the preconditioned fixed-point iteration (
) of
if some conditions are met and
\(\boldsymbol{\tilde{\phi }^{-1}} := \mathbf{T}_C^F\boldsymbol{\phi }_C^{-1}\mathbf{T}_F^C\)
is used as the approximate integration operator.
However, the TMG iteration here updates also the fine time point values, using
to interpolate the coarse values computed with
\(\boldsymbol{\phi }_C\)
, hence applying the
to all volume values.
This is the only “difference” with the original definition of
in [
], where the update is only applied to the interface value between blocks.
One key idea of STMG that we have not described yet is the block diagonal Jacobi smoother used for relaxation. Even if its diagonal blocks use a time integration operator identical to those of the
fine problem (hence requiring the inversion of
\(\boldsymbol{\phi }\)
), their spatial part in STMG is approximated using one V-cycle multigrid iteration in space based on a pointwise smoother [
, sect. 4.3]. We do not cover this aspect in our description of TMG here, since we focus on time only, but describe in the next section a similar approach that is used for PFASST.
5. Writing PFASST as a block iteration.
PFASST is also based on a TMG approach using an approximate relaxation step, but the approximation of the Block Jacobi smoother is
done in time and not in space, in contrast to
STMG. In addition, the CGC in PFASST is also approximated, i.e., there is no direct solve on the coarse level to compute the CGC as in STMG. One PFASST iteration is therefore a combination of an
Approximate Block Jacobi
(ABJ) smoother (see section
), followed by one (or more) ABGS iteration(s) of section
on the coarse level to approximate the CGC [
, sect. 3.2]. While we describe only the two-level variant, the algorithm can use more levels [
]. The main component of PFASST is the approximation of the time integrator blocks using Spectral Deferred Correction (SDC) [
], from which its other key components (ABJ and ABGS) are built. Hence we first describe how SDC is used to define an ABGS iteration in section
, then ABJ in section
, and finally PFASST in section
5.1. Approximate Block Gauss–Seidel with Spectral Deferred Correction.
SDC can be seen as a preconditioner when integrating the ODE problem (
) with collocation methods; see section
. Consider the block operators
$$\boldsymbol{\phi } := (\mathbf{I}-\mathbf{Q}), \quad \boldsymbol{\chi } := \mathbf{H} \quad \Longrightarrow \quad (\mathbf{I}-\mathbf{Q}) \boldsymbol{u}_{n+1} = \mathbf{H} \boldsymbol{u}_{n}.$$
SDC approximates the quadrature matrix
$$\mathbf{Q}_\Delta = \lambda \Delta t \left (\tilde{q}_{m,j}\right ), \quad \tilde{q}_{m,j} = \int_{0}^{\tau_m} \tilde{l}_{j}(s)ds,$$
is an approximation of the Lagrange polynomial
. Usually,
is lower triangular [
, sect. 3] thus easy to invert.
This approximation is used to build the preconditioned iteration
$$\boldsymbol{u}_{n+1}^{k+1} = \boldsymbol{u}_{n+1}^{k} + [\mathbf{I} - \mathbf{Q}_\Delta ]^{-1}\left ( \mathbf{H} \boldsymbol{u}_{n} - (\mathbf{I}-\mathbf{Q})\boldsymbol{u}_{n+1}^k \right )$$
to solve (
), with
as unknown. We obtain the generic preconditioned iteration for one block,
$$\boldsymbol{u}_{n+1}^{k+1} = \left [\mathbf{I} - \boldsymbol{\tilde{\phi }^{-1}}\boldsymbol{\phi }\right ]\boldsymbol{u}_{n+1}^k + \boldsymbol{\tilde{\phi }^{-1}} \boldsymbol{\chi }\boldsymbol{u}_
{n} \quad \text{with} \quad \boldsymbol{\tilde{\phi }}:=\mathbf{I} - \mathbf{Q}_\Delta.$$
This shows that SDC inverts the
\(\boldsymbol{\phi }\)
operator approximately using
\(\boldsymbol{\tilde{\phi }}\)
block by block to solve the global problem (
), i.e., it fixes an
in (
), iterates over
until convergence, and then increments
by one. Hence SDC gives a natural way to define an approximate block integrator
\(\boldsymbol{\tilde{\phi }}\)
to be used to build ABJ and ABGS iterations. Defining the ABGS iteration (
) of section
using the SDC block operators gives the block updating formula
$$\boldsymbol{u}_{n+1}^{k+1} = \boldsymbol{u}_{n+1}^{k} + [\mathbf{I} - \mathbf{Q}_\Delta ]^{-1}\left ( \mathbf{H} \boldsymbol{u}_{n}^{k+1} - (\mathbf{I}-\mathbf{Q})\boldsymbol{u}_{n+1}^k \right ),$$
which we call
Block Gauss–Seidel SDC
, very similar to (
) except that we use the new iterate
and not the converged solution
. This is a primary block iteration in the sense of Definition
$$\begin{split} \mathbf{B}^0_1 &:= \mathbf{I} - [\mathbf{I} - \mathbf{Q}_\Delta ]^{-1}(\mathbf{I}-\mathbf{Q}) = [\mathbf{I} - \mathbf{Q}_\Delta ]^{-1}(\mathbf{Q}-\mathbf{Q}_\Delta ),\\ \mathbf{B}^0_0
&:= 0,\quad \mathbf{B}^1_0:= [\mathbf{I} - \mathbf{Q}_\Delta ]^{-1}\mathbf{H}, \end{split}$$
and its
-graph is shown in Figure
Fig. 6. \(kn\) -graphs for Block Jacobi SDC (left) and Block Gauss–Seidel SDC (right).
5.2. Approximate Block Jacobi with Spectral Deferred Correction.
Here we solve the global problem (
) using a preconditioner that can be easily parallelized (Block Jacobi) and combine it with the approximation of the collocation operator
\(\boldsymbol{\phi }\)
\(\boldsymbol{\tilde{\phi }}\)
defined in (
) and (
). This leads to the
preconditioned iteration
$$\boldsymbol{u}^{k+1} = \boldsymbol{u}^k + \mathbf{P}_{Jac}^{-1}(\boldsymbol{f}-\mathbf{A}\boldsymbol{u}^k),\quad \mathbf{P}_{Jac} = \begin{bmatrix} \boldsymbol{\tilde{\phi }} & & \\ & \boldsymbol{\
tilde{\phi }} & \\ & & \ddots \end{bmatrix}.$$
This is equivalent to the Block Jacobi relaxation in section
, except that the block operator
\(\boldsymbol{\phi }\)
is approximated by
\(\boldsymbol{\tilde{\phi }}\)
. Using the SDC block operators (
) gives the block updating formula
$$\boldsymbol{u}_{n+1}^{k+1} = \boldsymbol{u}_{n+1}^{k} + [\mathbf{I} - \mathbf{Q}_\Delta ]^{-1}\left ( \mathbf{H} \boldsymbol{u}_{n}^{k} - (\mathbf{I}-\mathbf{Q})\boldsymbol{u}_{n+1}^k \right ),$$
which we call
Block Jacobi SDC
. This is a primary block iteration with
$$\begin{split} \mathbf{B}^0_1 &:= \mathbf{I} - [\mathbf{I} - \mathbf{Q}_\Delta ]^{-1}(\mathbf{I}-\mathbf{Q}) = [\mathbf{I} - \mathbf{Q}_\Delta ]^{-1}(\mathbf{Q}-\mathbf{Q}_\Delta ),\\ \mathbf{B}^0_0
&:= [\mathbf{I} - \mathbf{Q}_\Delta ]^{-1}\mathbf{H},\quad \mathbf{B}^1_0 := 0. \end{split}$$
-graph is shown in Figure
(left). This block iteration can be written in the more generic form
$$\boldsymbol{u}_{n+1}^{k+1} = \left [\mathbf{I} - \boldsymbol{\tilde{\phi }^{-1}} \boldsymbol{\phi }\right ] \boldsymbol{u}_{n+1}^{k} + \boldsymbol{\tilde{\phi }^{-1}}\boldsymbol{\chi }\boldsymbol
This is similar to (
) except that we use the current iterate
from the previous block and not the converged solution
. Note that
\(\boldsymbol{\phi }\)
\(\boldsymbol{\tilde{\phi }}\)
do not need to correspond to the SDC operators (
) and (
). This block iteration does not explicitly depend on the use of SDC, hence the name
Approximate Block Jacobi
5.3. PFASST.
We now give a simplified description of PFASST [
] applied to the Dahlquist problem (
). In particular, this corresponds to doing only one SDC sweep on the coarse level. To write PFASST as a block iteration, we first build the coarse level as in section
. From that we can form the
quadrature matrix associated with the coarse nodes and the coarse matrix
, as we would have done if we were using the collocation method of section
on the coarse nodes. This leads to the definition of the
\(\boldsymbol{\phi }_C\)
\(\boldsymbol{\chi }_C\)
operators for the coarse level, combined with the transfer operators
, from which we can build the global matrices
; see section
. Then we build the two-level PFASST iteration by defining a specific smoother and a modified CGC.
The smoother corresponds to a Block Jacobi SDC iteration (
) from section
to produce an intermediate solution
$$\boldsymbol{u}_{n+1}^{k+1/2} = [\mathbf{I}-\mathbf{Q}_\Delta ]^{-1}(\mathbf{Q}-\mathbf{Q}_\Delta )\boldsymbol{u}_{n+1}^k + [\mathbf{I}-\mathbf{Q}_\Delta ]^{-1}\mathbf{H} \boldsymbol{u}_{n}^k,$$
denoted with iteration index
. Using a CGC as in section
would provide the global update formula
\mathbf{A}_C\boldsymbol{d} &= \mathbf{\bar{T}}_F^C (\boldsymbol{f}-\mathbf{A}\boldsymbol{u}^{k+1/2}),
\boldsymbol{u}^{k+1} &= \boldsymbol{u}^{k+1/2} + \mathbf{\bar{T}}_C^F\boldsymbol{d}.
Instead of a direct solve with
to compute the defect
, in PFASST one uses
Block Gauss–Seidel SDC iterations (or sweeps) to approximate it. Then (
) becomes
$$\mathbf{\tilde{P}}_{GS} \boldsymbol{d}^{\ell } = (\mathbf{\tilde{P}}_{GS} - \mathbf{A}_C)\boldsymbol{d}^{\ell -1} + \mathbf{\bar{T}}_F^C (\boldsymbol{f}-\mathbf{A}\boldsymbol{u}^{k+1/2}), \quad \
boldsymbol{d}^0=0,\quad \ell \in \{1,\ldots,L\},$$
and reduces for one sweep only (
) to
$$\mathbf{\tilde{P}}_{GS} \boldsymbol{d} = \mathbf{\bar{T}}_F^C (\boldsymbol{f}-\mathbf{A}\boldsymbol{u}^{k+1/2}), \quad \mathbf{\tilde{P}}_{GS} = \begin{bmatrix} \boldsymbol{\tilde{\phi }}_C & & \\
- \boldsymbol{\chi }_C & \boldsymbol{\tilde{\phi }}_C & \\ & \ddots & \ddots \end{bmatrix}.$$
correspond to the
preconditioning matrix, but written on the coarse level using an SDC-based approximation
\(\boldsymbol{\tilde{\phi }}_C\)
of the
\(\boldsymbol{\phi }_C\)
coarse time integrator. Combined with the prolongation on the fine level (
), we get the modified CGC update
$$\boldsymbol{u}^{k+1} = \boldsymbol{u}^{k+1/2} + \mathbf{\bar{T}}_C^F \mathbf{\tilde{P}}_{GS}^{-1} \mathbf{\bar{T}}_F^C (\boldsymbol{f}-\mathbf{A}\boldsymbol{u}^{k+1/2}), \quad \mathbf{\tilde{P}}_
{GS} = \begin{bmatrix} \boldsymbol{\tilde{\phi }}_C & & \\ - \boldsymbol{\chi }_C & \boldsymbol{\tilde{\phi }}_C & \\ & \ddots & \ddots \end{bmatrix},$$
and together with (
) a two-level method for the global system (
) [
, sect. 2.2]. Note that this is the same iteration we obtained for the CGC in section
, except that the coarse operator
\(\boldsymbol{\phi }_C\)
has been replaced by
\(\boldsymbol{\tilde{\phi }}_C\)
. Assumption
holds, since using Lobatto or Radau-II nodes means
has the form (
), which implies
$$\Delta_\chi = \mathbf{T}_F^C\mathbf{H} -\mathbf{\tilde{H}}\mathbf{T}_F^C=0.$$
Using similar computations as in section
and the block operators defined for collocation and SDC (cf. sections
) we obtain the block iteration
$$\boldsymbol{u}_{n+1}^{k+1} = [\mathbf{I}-\mathbf{T}_C^F (\mathbf{I} - \mathbf{\tilde{Q}}_\Delta )^{-1}\mathbf{T}_F^C(\mathbf{I} - \mathbf{Q})]\boldsymbol{u}^{k+1/2}_{n+1} + \mathbf{T}_C^F(\mathbf
{I} - \mathbf{\tilde{Q}}_\Delta )^{-1}\mathbf{T}_F^C\mathbf{H} \boldsymbol{u}_{n}^{k+1}$$
by substitution into (
). Finally, the combination of the two gives
$$\begin{split} \boldsymbol{u}_{n+1}^{k+1} &= [\mathbf{I}-\mathbf{T}_C^F (\mathbf{I} - \mathbf{\tilde{Q}}_\Delta )^{-1}\mathbf{T}_F^C(\mathbf{I} - \mathbf{Q})] [\mathbf{I}-\mathbf{Q}_\Delta ]^{-1}(\
mathbf{Q}-\mathbf{Q}_\Delta )\boldsymbol{u}_{n+1}^k \\ &\quad+ (\mathbf{I}-\mathbf{T}_C^F [\mathbf{I}-\mathbf{\tilde{Q}}_\Delta ]^{-1}\mathbf{T}_F^C(\mathbf{I}-\mathbf{Q})) [\mathbf{I}-\mathbf{Q}_\
Delta ]^{-1}\mathbf{H} \boldsymbol{u}_{n}^k \\ &\quad+ \mathbf{T}_C^F(\mathbf{I} - \mathbf{\tilde{Q}}_\Delta )^{-1}\mathbf{T}_F^C\mathbf{H} \boldsymbol{u}_{n}^{k+1}. \end{split}$$
Using the generic formulation with the
\(\boldsymbol{\phi }\)
operators gives
$$\begin{split} \boldsymbol{u}_{n+1}^{k+1} &= [\mathbf{I} - \mathbf{T}_C^F\boldsymbol{\tilde{\phi }}_C^{-1}\mathbf{T}_F^C \boldsymbol{\phi }] (\mathbf{I} - \boldsymbol{\tilde{\phi }^{-1}} \boldsymbol
{\phi })\boldsymbol{u}_{n+1}^k \\ &\quad+ (\mathbf{I} - \mathbf{T}_C^F\boldsymbol{\tilde{\phi }}_C^{-1}\mathbf{T}_F^C \boldsymbol{\phi }) \boldsymbol{\tilde{\phi }^{-1}}\boldsymbol{\chi }\boldsymbol
{u}_{n}^k + \mathbf{T}_C^F\boldsymbol{\tilde{\phi }}_C^{-1}\mathbf{T}_F^C\boldsymbol{\chi }\boldsymbol{u}_{n}^{k+1}. \end{split}$$
This is again a primary block iteration in the sense of Definition
, but in contrast to most previously described block iterations, all block operators are nonzero.
5.4. Similarities between PFASST, TMG, and Parareal.
From the description in the previous section, it is clear that PFASST is very similar to TMG. While TMG uses a (damped) Block Jacobi smoother for prerelaxation and a direct solve for the CGC, PFASST
uses instead an approximate Block Jacobi smoother and solves the CGC using one (or more) ABGS iterations on the coarse grid. This interpretation was obtained by Bolten, Moser, and Speck [
, Theorem 1] but is derived here using the GFM framework, and we summarize those differences in Table
. Changing only the CGC or the smoother in TMG with
in contrast to both like in PFASST produces two further PinT algorithms. We call those
(replacing the coarse solver by one step of ABGS) and
(replacing the fine Block Jacobi solver by ABJ). Note that
can be interpreted as
using an approximate integration operator and larger time step for the coarse propagator if we set
$$\mathcal{G} := \mathbf{T}_C^F\boldsymbol{\tilde{\phi }}_C^{-1}\mathbf{T}_F^C\boldsymbol{\chi }.$$
Thus, the version of
used in section
is equivalent to
and differs from PFASST only by the type of smoother used on the fine level.
Table 1 Classification of two-level TMG methods, depending on their smoother for fine-level relaxation and computation of the CGC.
5.5. Analysis and numerical experiments.
5.5.1. Convergence of PFASST iteration components.
Since Block Jacobi SDC (
) can be written as a primary block iteration, we can apply Theorem
to get the error bound
$$e_{n+1}^{k} \leq \begin{cases} \delta (\gamma + \alpha )^k \text{ if } k \leq n, \\ \displaystyle \delta \gamma^k \sum_{i=0}^{n} \binom{k}{i}\left (\frac{\alpha }{\gamma }\right )^i \text{
otherwise,} \end{cases}$$
\(\gamma := \left \lVert [\mathbf{I} - \mathbf{Q}_\Delta ]^{-1}(\mathbf{Q}-\mathbf{Q}_\Delta )\right \rVert\)
\(\alpha := \left \lVert [\mathbf{I} - \mathbf{Q}_\Delta ]^{-1}\mathbf{H} \right \rVert\)
. Note that
is proportional to
\(\lambda \Delta t\)
through the
term and for small
\(\Delta t\)
tends to
\(\left \lVert \mathbf{H} \right \rVert\)
which is constant. We can identify two convergence regimes: for early iterations (
\(k\leq n\)
), the bound does not contract if
\(\gamma +\alpha \geq 1\)
(which is generally the case). For later iterations (
\(k\gt n\)
), a small-enough time step leads to convergence of the algorithm through the
Similarly, for Block Gauss–Seidel SDC (
), Theorem
$$e^k_{n+1} \leq \delta \frac{\gamma^k}{(k-1)!} \sum_{i=0}^{n}\prod_{l=1}^{k-1}(i+l)\beta^{i},$$
\(\gamma := \left \lVert [\mathbf{I} - \mathbf{Q}_\Delta ]^{-1}(\mathbf{Q}-\mathbf{Q}_\Delta )\right \rVert\)
\(\beta := \left \lVert [\mathbf{I} - \mathbf{Q}_\Delta ]^{-1}\mathbf{H} \right \rVert\)
. This iteration contracts in early iterations if
is small enough. Since the value for
is the same for both Block Gauss–Seidel SDC and Block Jacobi-SDC, both algorithms have an asymptotically similar convergence rate.
We illustrate this with the following example. Let
\(\lambda :=i\)
, and let the time interval
\([0,\pi ]\)
be divided into
subintervals. Inside each subinterval, we use one step of the collocation method from section
Lobatto–Legendre nodes [
]. This gives us block variables of size
and we choose
as the matrix defined by a single Backward Euler step between nodes to build the
\(\boldsymbol{\tilde{\phi }}\)
operator. The starting value
for the iteration is initialized with random numbers starting from the same seed. Figure
(left) shows the numerical error for the last block using the
\(L^{\infty }\)
norm, the bound obtained with the GFM method, and the linear bound using the norm of the global iteration matrix. As for
in section
, the GFM-bound is similiar to the iteration matrix bound for the first few iterations, but much tighter for later iterations. In particular, the linear bound cannot show the change in convergence
regime of the Block Jacobi SDC iteration (after
) but the GFM-bound does. Also, we observe that while the GFM-bound overestimates the error, the interface approximation of Corollary
gives a very good estimate of the error at the interface; see Figure
Fig. 7. Comparison of numerical errors with GFM-bounds for Block Jacobi SDC and Block Gauss–Seidel SDC. Left: error on the block variables (dashed), GFM-bounds (solid), linear bound from the
iteration matrix (dotted). Right: error estimate using the interface approximation from Corollary 2.9. Note that the numerical errors on block variables (left) and at the interface (right) are close
but not identical (see Remark 2.10).
5.5.2. Analysis and convergence of PFASST.
The GFM framework provides directly an error bound for PFASST: applying Theorem
to (
) gives
$$e_{n+1}^k \leq \delta \gamma^k \sum_{i=0}^{\min (n, k)} \sum_{l=0}^{n-i} \binom{k}{i}\binom{l+k-1}{l} \left (\frac{\alpha }{\gamma }\right )^i\beta^l$$
\(\gamma := || [\mathbf{I}-\mathbf{T}_C^F (\mathbf{I} - \mathbf{\tilde{Q}}_\Delta )^{-1}\mathbf{T}_F^C(\mathbf{I} - \mathbf{Q})] [\mathbf{I}-\mathbf{Q}_\Delta ]^{-1}(\mathbf{Q}-\mathbf{Q}_\Delta )||
\(\beta := || \mathbf{T}_C^F(\mathbf{I} - \mathbf{\tilde{Q}}_\Delta )^{-1}\mathbf{\tilde{H}}\mathbf{T}_F^C ||\)
, and
\(\alpha := || (\mathbf{I}-\mathbf{T}_C^F [\mathbf{I}-\mathbf{\tilde{Q}}_\Delta ]^{-1}\mathbf{T}_F^C(\mathbf{I}-\mathbf{Q})) [\mathbf{I}-\mathbf{Q}_\Delta ]^{-1}\mathbf{H} ||\)
We compare this bound with numerical experiments. Let
\(\lambda :=i\)
. The time interval
\([0,2\pi ]\)
for the Dahlquist problem (
) is divided into
subintervals. Inside each subinterval we use
Lobatto–Legendre nodes on the fine level and
Lobatto nodes on the coarse level. The
operators use Backward Euler. In Figure
(left) we compare the measured numerical error with the GFM-bound and the linear bound from the iteration matrix. As in section
, both bounds overestimate the numerical error, even if the GFM-bound shows convergence for the later iterations, which the linear bound from the iteration matrix cannot. We also added an error
estimate built using the spectral radius of the iteration matrix, for which an upper bound was derived in [
]. For this example, the spectral radius reflects the asymptotic convergence rate for the last iterations better than GFM. This highlights a weakness of the current GFM-bound: applying norm and
triangle inequalities to the vector error recurrence (
) can induce a large approximation error in the scalar error recurrence (
) that is then solved with generating functions. Improving this is planned for future work.
Fig. 8. Comparison of numerical errors with GFM-bounds for PFASST. Left: error bound using volume values. Right: estimate using the interface approximation. Note that the numerical errors on block
variables (left) and at the interface (right) are very close but not identical (see Remark 2.10).
However, one advantage of the GFM-bound over the spectral radius is its generic aspect allowing it to be applied to many iterative algorithms, even those having an iteration matrix with spectral
radius equal to zero like
]. Furthermore, the interface approximation from Corollary
allows us to get a significantly better estimation of the numerical error, as shown in Figure
(right). For the GFM-bound we have
\((\alpha,\beta, \gamma ) = (0.16, 1, 0.19)\)
, while for the interface approximation we get
\((\bar{\alpha }, \bar{\beta }, \bar{\gamma }) = (0.16, 0.84, 0.02)\)
. In the second case, since
\(\bar{\gamma }\)
is one order smaller than the other coefficients, we get an error estimate that is closer to the one for
in section
. This similarity between PFASST and
(cf. section
) will be highlighted in the next section.
6. Comparison of iterative PinT algorithms.
Using the notation of the GFM framework, we provide the primary block iterations of all iterative PinT algorithms investigated throughout this paper in Table
. In particular, the first rows summarize the basic block iterations used as components to build the iterative PinT methods. While damped Block Jacobi (section
) and ABJ (section
) are more suitable for smoothing,
ABGS (section
) is mostly used as a solver (e.g., to compute the CGC). This allows us to compare the convergence of each block iteration, and we illustrate this with the following examples.
Table 2 Summary of all the methods we analyzed, and their block iteration operators. Note that TMG with \(\omega=1\) and \(\text{TMG}_c\) corresponds to Parareal with a specific choice of the coarse
Algorithm \(\mathbf{B}_1^0\) ( \(\boldsymbol{u}_{n+1}^k\) ) \(\mathbf{B}_0^0\) ( \(\boldsymbol{u}_{n}^k\) ) \(\mathbf{B}_0^1\) ( \(\boldsymbol{u}
_{n}^{k+1}\) )
Block \(\mathbf{I}-\omega \mathbf{I}\) \(\omega \boldsymbol{\phi }^{-1}\boldsymbol{\chi }\) –
ABJ \(\mathbf{I} - \boldsymbol{\tilde{\phi }^{-1}} \boldsymbol{\phi }\) \(\boldsymbol{\tilde{\phi }^{-1}}\boldsymbol{\chi }\) –
ABGS \(\mathbf{I} - \boldsymbol{\tilde{\phi }^{-1}} \boldsymbol{\phi }\) – \(\boldsymbol{\tilde{\phi }^{-1}}\
boldsymbol{\chi }\)
Parareal – \((\boldsymbol{\phi }^{-1}-\boldsymbol{\tilde{\phi }^{-1}})\boldsymbol{\chi \(\boldsymbol{\tilde{\phi }^{-1}}\
}\) boldsymbol{\chi }\)
\((1-\omega )(\mathbf{I} - \mathbf{T}_C^F\boldsymbol{\phi }_C^{-1}\mathbf \(\omega (\boldsymbol{\phi }^{-1}-\mathbf{T}_C^F\boldsymbol{\phi }_C^{-1}\ \(\mathbf{T}_C^F\boldsymbol{\phi }_C^
TMG {T}_F^C \boldsymbol{\phi })\) mathbf{T}_F^C)\boldsymbol{\chi }\) {-1}\mathbf{T}_F^C\boldsymbol{\chi }
\(\text \((\boldsymbol{\phi }^{-1}-\mathbf{T}_C^F\boldsymbol{\tilde{\phi }}_C^{-1}\ \(\mathbf{T}_C^F\boldsymbol{\tilde{\
{TMG}_c\) – mathbf{T}_F^C)\boldsymbol{\chi }\) phi }}_C^{-1}\mathbf{T}_F^C\
boldsymbol{\chi }\)
\(\text \((\mathbf{I} - \mathbf{T}_C^F\boldsymbol{\phi }_C^{-1}\mathbf{T}_F^C \ \((\boldsymbol{\tilde{\phi }^{-1}} - \mathbf{T}_C^F\boldsymbol{\phi }_C^{-1} \(\mathbf{T}_C^F\boldsymbol{\phi }_C^
{TMG}_f\) boldsymbol{\phi }) (\mathbf{I} - \boldsymbol{\tilde{\phi }^{-1}} \ \mathbf{T}_F^C \boldsymbol{\phi }\boldsymbol{\tilde{\phi }^{-1}}) \ {-1}\mathbf{T}_F^C\boldsymbol{\chi }
boldsymbol{\phi })\) boldsymbol{\chi }\) \)
\((\mathbf{I} - \mathbf{T}_C^F\boldsymbol{\tilde{\phi }}_C^{-1}\mathbf{T} \((\boldsymbol{\tilde{\phi }^{-1}} - \mathbf{T}_C^F\boldsymbol{\tilde{\phi \(\mathbf{T}_C^F\boldsymbol{\tilde{\
PFASST _F^C \boldsymbol{\phi }) (\mathbf{I} - \boldsymbol{\tilde{\phi }^{-1}} \ }}_C^{-1}\mathbf{T}_F^C \boldsymbol{\phi }\boldsymbol{\tilde{\phi }^{-1}}) \ phi }}_C^{-1}\mathbf{T}_F^C\
boldsymbol{\phi })\) boldsymbol{\chi }\) boldsymbol{\chi }\)
We consider the Dahlquist problem with
\(\lambda :=2i-0.2\)
. First, we decompose the simulation interval
\([0, 2\pi ]\)
subintervals. Next, we choose a block discretization with
Lobatto–Legendre nodes, a collocation method on each block for fine integrator
\(\boldsymbol{\phi }\)
; see section
. We build a coarse block discretization using
, and define on each level an approximate integrator using Backward Euler. This allows us to define the
\(\boldsymbol{\tilde{\phi }}\)
\(\boldsymbol{\phi }_C\)
, and
\(\boldsymbol{\tilde{\phi }}_C\)
integrators; see the legend of Table
for more details, where we show the maximum absolute error in time for each of the four propagators run sequentially. The high order collocation method with
\(\boldsymbol{\phi }^{-1} \boldsymbol{\chi }\)
is the most accurate. The coarse collocation method with
nodes interpolated to the fine mesh is still more accurate than the Backward Euler method with
\(\boldsymbol{\tilde{\phi }^{-1}} \boldsymbol{\chi }\)
or the Backward Euler method with
interpolated to the fine mesh. Then we run all algorithms in Table
, initializing the block variable iterate with the same random initial guess. The error for the last block variable with respect to the fine sequential solution is shown in Figure
(left). In addition, we show the same results in Figure
(right), but using the classical fourth order Runge–Kutta method as a fine propagator, second order Runge–Kutta (Heun method) for the approximate integration operator, and equidistant points using a
volume formulation as described in section
. Note that
, and
are each
algorithms using respectively
\(\boldsymbol{\tilde{\phi }^{-1}}\boldsymbol{\chi }\)
\(\mathbf{T}_C^F\boldsymbol{\phi }_C^{-1}\mathbf{T}_F^C\boldsymbol{\chi }\)
, and
\(\mathbf{T}_C^F\boldsymbol{\tilde{\phi }}_C^{-1}\mathbf{T}_F^C\boldsymbol{\chi }\)
as coarse propagator
(see Table
for their discretization error).
Fig. 9. Comparison of iterative methods convergence using the GFM framework. Left: collocation as fine integrator. Right: fourth order Runge–Kutta method as fine integrator. Parareal ( \(_{\omega=1}
\) ) and Parareal ( \(\text{TMG}_c\) ) denote a specific coarse propagator for Parareal.
Table 3 Maximum error over time for each block propagator run sequentially. The first column shows the error of the fine propagator, while the next three columns show the error of the three possible
approximate propagators. In the top row, \(\boldsymbol{\phi }\) corresponds to a collocation method with \(M=5\) nodes while \(\boldsymbol{\phi }_C\) is a collocation method with \(M=3\) nodes. \(\
boldsymbol{\tilde{\phi }}\) is a backward Euler method with \(M=5\) steps per block while \(\boldsymbol{\tilde{\phi }}_C\) is Backward Euler with \(M=3\) steps per block. In the bottom row, \(\
boldsymbol{\phi }\) corresponds to \(M=5\) uniform steps per block of a fourth order Runge–Kutta method, and \(\boldsymbol{\phi }_C\) is the same method with \(M=3\) steps per block. \(\boldsymbol{\
tilde{\phi }}\) is a second order Runge–Kutta method (Heun) with \(M=5\) uniform steps per block while \(\boldsymbol{\tilde{\phi }}_C\) is the same method with \(M=3\) uniform time steps per block.
\(\boldsymbol{\phi }^{-1}\ \(\boldsymbol{\tilde{\phi }^{-1}}\ \(\mathbf{T}_C^F\boldsymbol{\phi }_C^{-1}\mathbf{T}_F^ \(\mathbf{T}_C^F\boldsymbol{\tilde{\phi }}_C^{-1}\mathbf{T}
boldsymbol{\chi }\) boldsymbol{\chi }\) C\boldsymbol{\chi }\) _F^C\boldsymbol{\chi }\)
Figure 9 \(1.20e^{-5}\) \(3.57e^{-1}\) \(1.19e^{-2}\) \(4.87e^{-1}\)
Figure 9 \(3.14e^{-4}\) \(6.24e^{-2}\) \(5.14e^{-3}\) \(2.67e^{-1}\)
The TMG iteration converges fastest, since it uses the most accurate block integrators on both levels; cf. Table
. Keeping the same CGC but approximating the smoother,
improves the first iterations, but convergence for later iterations is closer to PFASST. This suggests that convergence for later iterations is mostly governed by the accuracy of the smoother since
and PFASST use ABJ. This is corroborated by the comparison of PFASST and
, which differ only in their choice of smoother. While the exact Block Jacobi relaxation makes
converge after
iterations (a well-known property of
), using the ABJ smoother means that PFASST does not share this property.
On the other hand, the first iterations are also influenced by the CGC accuracy. The iteration error is very similar for PFASST and
, which have the same CGC. This is more pronounced when using the fourth order Runge–Kutta method for
\(\boldsymbol{\phi }\)
, as we see in Figure
(right). Early iteration errors are similar for two-level methods that use the same CGC (TMG/
, and PFASST/
). Similarities of the first iteration errors can also be observed for
and ABGS. Both algorithms use the same
operator; see Table
. This suggests that early iteration errors are mostly governed by the accuracy of
, which is corroborated by the two-level methods (TMG and
use the same
operator, as PFASST and
Remark 6.1.
An important aspect of this analysis is that it compares only the convergence of each algorithm, and not their overall computational cost. For instance, PFASST and \(\text{TMG}_c\) appear to be
equivalent for the first iterations, but the block iteration of PFASST is cheaper than \(\text{TMG}_c\) , because an approximate block integrator is used for relaxation. To account for this and build
a model for computational efficiency, the GFM framework would need to be combined with a model for computational cost of the different parts in the block iterations. Such a study is beyond the scope
of this paper but is the subject of ongoing work.
7. Conclusion.
We have shown that the generating function method (GFM) can be used to compare convergence of different iterative PinT algorithms. To do so, we formulated popular methods like
, PFASST, MGRIT, and TMG in a common framework based on the definition of a primary block iteration. The GFM analysis showed that all these methods eventually converge superlinearly
due to the evolution nature of the problems. We confirmed this by numerical experiments, and our
code is publically available [
Our analysis opens up further research directions. For example, studying multistep block iterations like MGRIT with FCF-relaxation and more complex two-level methods without Assumption
would be a useful extension of the GFM framework. Similarly, an extension to multilevel versions of STMG, PFASST, and MGRIT would be very valuable. Finally, in practice PinT methods are used to solve
space-time problems. The GFM framework should be able to provide convergence bounds in this case as well, potentially even for nonlinear problems, considering GFM was used successfully to study
applied to nonlinear systems of ODEs [
Appendix A. Error bounds for primary block iterations.
A.1. Incomplete primary block iterations.
First, we consider
\text{(PBI-1)} :\quad \boldsymbol{u}^{k+1}_{n+1} &= \mathbf{B}^1_0\left (\boldsymbol{u}^{k+1}_{n}\right ) + \mathbf{B}^0_0\left (\boldsymbol{u}^{k}_{n}\right ),
\text{(PBI-2)} :\quad \boldsymbol{u}^{k+1}_{n+1} &= \mathbf{B}^0_1(\boldsymbol{u}^k_{n+1}) + \mathbf{B}^0_0\left (\boldsymbol{u}^{k}_{n}\right ),
\text{(PBI-3)} :\quad \boldsymbol{u}^{k+1}_{n+1} &= \mathbf{B}^0_1(\boldsymbol{u}^k_{n+1}) + \mathbf{B}^1_0\left (\boldsymbol{u}^{k+1}_{n}\right ),
where one block operator is zero. (PBI-1) corresponds to
, (PBI-2) to Block Jacobi SDC, and (PBI-3) to Block Gauss–Seidel SDC. We recall the notation
$$\alpha := \left \lVert \mathbf{B}^0_0\right \rVert, \quad \beta := \left \lVert \mathbf{B}^1_0\right \rVert, \quad \gamma := \left \lVert \mathbf{B}^0_1\right \rVert.$$
Application of Lemma
gives the recurrence relations
\text{(PBI-1)} :\quad \rho_{k+1}(\zeta ) &\leq \frac{\alpha \zeta }{1-\beta \zeta } \rho_{k}(\zeta ) \Longrightarrow \rho_{k}(\zeta ) \leq \alpha^{k}\left (\frac{\zeta }{1-\beta \zeta }\right )^{k}\
rho_{0}(\zeta ),
\text{(PBI-2)} :\quad \rho_{k+1}(\zeta ) &\leq (\gamma + \alpha \zeta ) \rho_{k}(\zeta ) \Longrightarrow \rho_{k}(\zeta ) \leq \gamma^{k}\left (1 + \frac{\alpha }{\gamma }\zeta \right )^{k} \rho_{0}
(\zeta ),
\text{(PBI-3)} :\quad \rho_{k+1}(\zeta ) &\leq \frac{\gamma }{1-\beta \zeta } \rho_{k}(\zeta ) \Longrightarrow \rho_{k}(\zeta ) \leq \gamma^{k}\frac{1}{(1-\beta \zeta )^{k}} \rho_{0}(\zeta )
for the corresponding generating functions. Using definition
) for
, we find that
\(\rho_{0}(\zeta ) \leq \delta \sum_{n=0}^{\infty }\zeta^{n+1}\)
. By using the binomial series expansion
$$\frac{1}{(1-\beta \zeta )^{k}} = \sum_{n=0}^{\infty } \binom{n+k-1}{n}(\beta \zeta )^n$$
\(k\gt 0\)
and the Newton binomial sum, we obtain for the three block iterations
\text{(PBI-1)} :\quad \rho_{k}(\zeta ) &\leq \delta \alpha^{k} \zeta \left [\sum_{n=0}^{\infty } \binom{n+k-1}{n}\beta^n\zeta^{n+k}\right ] \left [\sum_{n=0}^{\infty }\zeta^{n}\right ],
\text{(PBI-2)} :\quad \rho_{k}(\zeta ) &\leq \delta \gamma^{k} \zeta \left [\sum_{n=0}^{k} \binom{k}{n}\left (\frac{\alpha }{\gamma }\right )^n\zeta^{n}\right ] \left [\sum_{n=0}^{\infty }\zeta^{n}\
right ] ,
\text{(PBI-3)} :\quad \rho_{k}(\zeta ) &\leq \delta \gamma^{k} \zeta \left [\sum_{n=0}^{\infty } \binom{n+k-1}{n}\beta^n\zeta^{n}\right ] \left [\sum_{n=0}^{\infty }\zeta^{n}\right ].
Error bound for PBI-1.
We simplify the expression using
$$\left [\sum_{n=0}^{\infty } \binom{n+k-1}{n}\beta^n\zeta^{n+k}\right ] = \left [\sum_{n=k}^{\infty } \binom{n-1}{n-k}\beta^{n-k}\zeta^{n}\right ],$$
and then the series product formula
$$\left [\sum_{n=0}^{\infty } a_n\zeta^n\right ] \left [\sum_{n=0}^{\infty } b_n\zeta^n\right ] = \sum_{n=0}^{\infty } c_n\zeta^n, \quad c_n = \sum_{i=0}^{n} a_i b_{n-i},$$
\(b_n = 1\)
a_n = \begin{cases} 0 \text{ if } n \lt k, \\ \displaystyle \binom{n-1}{n-k}\beta^{n-k} \text{ otherwise.} \end{cases}
From this we get
$$c_n = \sum_{i=k}^{n} \binom{i-1}{i-k}\beta^{i-k} = \sum_{i=0}^{n-k} \binom{i+k-1}{i}\beta^{i} = \sum_{i=0}^{n-k} \frac{\prod_{l=1}^{k-1}(i+l)}{(k-1)!}\beta^i,$$
using the convention that the product reduces to one when there are no terms in it. Identifying coefficients in the power series and rearranging terms yields for
\(k\gt 0\)
$$\boxed{\text{(PBI-1)} :\quad e_{n+1}^{k} \leq \delta \frac{\alpha^k}{(k-1)!} \sum_{i=0}^{n-k}\prod_{l=1}^{k-1}(i+l)\beta^{i}. }$$
Following an idea by Gander and Hairer [
], we can also consider the error recurrence
\(e_{n+1}^{k+1} \leq \alpha e_{n}^k + \bar{\beta } e_{n}^{k+1}\)
\(\bar{\beta } = \max (1, \beta )\)
. Using the upper bound
\(\sum_{n=0}^{\infty }\zeta^n = \frac{1}{1-\zeta } \leq \frac{1}{1-\bar{\beta }\zeta }\)
, for the initial error, we avoid the series product and get
\(\rho_{k}(\zeta ) \leq \delta \alpha^k \frac{\zeta^k}{(1-\bar{\beta })^{k+1}}\)
as bound on the generating function. We then obtain the simpler error bound
$$e_{n+1}^k \leq \delta \frac{\alpha^k}{k!} \bar{\beta }^{n-k}\prod_{l=1}^{k}(n+1-l)$$
as in the proof of [
, Thm. 1].
Error bound for PBI-2.
We use (
) again with
\(b_n = 1\)
to get
a_n = \begin{cases} \displaystyle \binom{k}{n}\left (\frac{\alpha }{\gamma }\right )^n \text{ if } n \leq k, \\ 0 \text{ otherwise.} \end{cases}
From this we get
\(c_n = \sum_{i=0}^{\min (n,k)}\binom{k}{i}\left (\frac{\alpha }{\gamma }\right )^i\)
, which yields for
\(k\gt 0\)
the error bound
$${\boxed{\text{(PBI-2)} :\quad e_{n+1}^{k} \leq \begin{cases} \delta (\gamma + \alpha )^k \text{ if } k \leq n, \\ \displaystyle \delta \gamma^k \sum_{i=0}^{n} \binom{k}{i}\left (\frac{\alpha }{\
gamma }\right )^i \text{ otherwise.} \end{cases} }}$$
Error bound for PBI-3.
We use (
) with
for the series product to get
a_n = \binom{n+k-1}{n}\beta^n = \frac{\prod_{l=1}^{k-1}(n+l)}{(k-1)!}\beta^n,
which yields the error bound
$$\boxed{\text{(PBI-3)} :\quad e^k_{n+1} \leq \delta \frac{\gamma^k}{(k-1)!} \sum_{i=0}^{n}\prod_{l=1}^{k-1}(i+l)\beta^{i} }$$
\(k\gt 0\)
A.2. Full primary block iteration.
We now consider a primary block iteration (
) with all block operators nonzero,
$$\text{(PBI-Full)} :\quad \boldsymbol{u}^{k+1}_{n+1} = \mathbf{B}^0_1\left (\boldsymbol{u}^{k}_{n+1}\right ) + \mathbf{B}^1_0\left (\boldsymbol{u}^{k+1}_{n}\right ) + \mathbf{B}^0_0\left (\
boldsymbol{u}^{k}_{n}\right ),$$
, and
defined in (
). Applying Lemma
leads to
$$\rho_{k+1}(\zeta ) \leq \frac{\gamma + \alpha \zeta }{1-\beta \zeta } \rho_{k}(\zeta ) \quad \Longrightarrow \quad \rho_{k}(\zeta ) \leq \left (\frac{\gamma + \alpha \zeta }{1-\beta \zeta }\right )
^{k}\rho_{0}(\zeta ).$$
Combining the calculations performed for PBI-2 and PBI-3, we obtain
\rho_{k}(\zeta ) &\leq \delta \zeta \gamma^k \left [\sum_{n=0}^{k} \binom{k}{n}\left (\frac{\alpha }{\gamma }\right )^n\zeta^{n}\right ] \left [\sum_{n=0}^{\infty } \binom{n+k-1}{n}\beta^n\zeta^{n}\
right ] \left [\sum_{n=0}^{\infty }\zeta^{n}\right ]
&= \delta \zeta \gamma^k \left [\sum_{n=0}^{k} \binom{k}{n}\left (\frac{\alpha }{\gamma }\right )^n\zeta^{n}\right ] \left [\sum_{n=0}^{\infty } \sum_{i=0}^{n} \binom{i+k-1}{i}\beta^i \zeta^{n}\right
Then using (
) with
$$a_n = \begin{cases} \displaystyle \binom{k}{n}\left (\frac{\alpha }{\gamma }\right )^n \text{ if } n \leq k, \\ 0 \text{ otherwise,} \end{cases}\quad b_n = \sum_{i=0}^{n} \binom{i+k-1}{i}\beta^i,$$
we obtain
$$\rho_{k}(\zeta ) \leq \delta \zeta \gamma^k \sum_{n=0}^{\infty } c_n \zeta^n, \ \text{with}\ c_n = \sum_{i=0}^{\min (n, k)} \sum_{l=0}^{n-i} \binom{k}{i}\binom{l+k-1}{l} \left (\frac{\alpha }{\
gamma }\right )^i\beta^l.$$
From this we can identify the error bound
$${\boxed{\text{(PBI-Full)} :\quad e_{n+1}^k \leq \delta \gamma^k \sum_{i=0}^{\min (n, k)} \sum_{l=0}^{n-i} \binom{k}{i}\binom{l+k-1}{l} \left (\frac{\alpha }{\gamma }\right )^i\beta^l. }}$$
We greatly appreciate the very detailed feedback from the anonymous reviewers. It helped a lot to improve the organization of the paper and to make it more accessible.
Number of citations since publication, according to Google Scholar in April 2023.
We do not analyze in detail MGRIT with FCF relaxation, only with F relaxation, in which case the two-level variant is equivalent to
. Our framework could, however, be extended to include FCF relaxation; see Remark
Since we focus only on the time dimension, the spatial component of STMG is left out.
This specific form of the matrix \(\mathbf{H}\) comes from the use of Lobatto or Radau-II rules, which treat the right interface of the time subinterval as a node. A similar description can also be
obtained for Radau-I or Gauss-type quadrature rules that do not use the right boundary as node, but we omit it for the sake of simplicity.
The notation
is specific to SDC and collocation methods (see, e.g., [
]), while the
\(\boldsymbol{\chi }\)
notation from the GFM framework is generic for arbitrary time integration methods.
Condition (
) is necessary for the block iteration to have the correct fixed point.
This is the case for all time-integration methods considered in this paper, even if this is not a necessary condition to use the GFM framework.
For an interface block iteration (
\(M=1, \tau_{1}=1\)
), (
) becomes a rigorous inequality and Corollary
thus becomes an upper bound.
In the original paper [
], this approximation is done using larger time-steps, but many other types of approximations have been used since then in the literature.
It was shown in [
] that MGRIT with (FC)
\(^{\nu }\)
F-relaxation, where
\(\nu \gt 0\)
is the number of additional FC-relaxations, is equivalent to an overlapping version of
overlaps. Generalizing our computations shows that those algorithms are equivalent to
\((\nu -1)\)
nondamped Block Jacobi iterations followed by an ABGS step.
Those do not need to be a subset of the fine block grid points, although they usually are in applications.
The CGC is not convergent by itself without a smoother.
In some situations, e.g., when the transpose of linear interpolation is used for restriction (full-weighting), we do not get the identity in Assumption
but an invertible matrix. The same simplifications can be done, except one must take into account the inverse of
Note that the consistency condition is satisfied even without Assumption
Note that, even if
\(\mathbf{T}_C^F\boldsymbol{\phi }_C^{-1}\mathbf{T}_F^C\)
is not invertible, this abuse of notation is possible as (
) requires an approximation of
\(\boldsymbol{\phi }^{-1}\)
rather than an approximation of
\(\boldsymbol{\phi }\)
The notation
was chosen instead of
for consistency with the literature; cf. [
We implicitly use
\([\mathbf{I}-\mathbf{Q}_\Delta ]^{-1}(\mathbf{Q}-\mathbf{Q}_\Delta )=\mathbf{I} - [\mathbf{I} - \mathbf{Q}_\Delta ]^{-1}(\mathbf{I}-\mathbf{Q}) = \mathbf{I} - \boldsymbol{\tilde{\phi }^{-1}} \
boldsymbol{\phi }\)
; see (
Note that algorithms used as smoothers have \(\mathbf{B}_0^1=0\) , which is a necessary condition for parallel computation across all blocks.
This is due to the factorial term stemming from the binomial sums in the estimates (
The definition of \(\delta\) as maximum error for \(n\in \{0,\dots, N\}\) can be extended to \(n\in \mathbb{N}\) , as the error values for \(n\gt N\) do not matter and can be set to zero.
E. Aubanel, Scheduling of tasks in the Parareal algorithm, Parallel Comput., 37 (2011), pp. 172–182.
G. Bal, On the convergence and the stability of the parareal algorithm to solve partial differential equations, in Domain Decomposition Methods in Science and Engineering, Lect. Notes Comput. Sci.
Eng. 40, R. Kornhuber, et al., eds., Springer, Berlin, 2005, pp. 426–432.
M. Bolten, D. Moser, and R. Speck, A multigrid perspective on the parallel full approximation scheme in space and time, Numer. Linear Algebra Appl., 24 (2017), e2110.
M. Bolten, D. Moser, and R. Speck, Asymptotic convergence of the parallel full approximation scheme in space and time for linear problems, Numer. Linear Algebra Appl., 25 (2018), e2208.
J. Burmeister and G. Horton, Time-parallel multigrid solution of the Navier-Stokes equations, in Multigrid Methods III, Springer, Berlin, 1991, pp. 155–166.
A. J. Christlieb, C. B. Macdonald, and B. W. Ong, Parallel high-order integrators, SIAM J. Sci. Comput., 32 (2010), pp. 818–835.
P.-H. Cocquet and M. J. Gander, How large a shift is needed in the shifted Helmholtz preconditioner for its effective inversion by multigrid?, SIAM J. Sci. Comput., 39 (2017), pp. A438–A478.
V. Dobrev, T. Kolev, N. Petersson, and J. Schroder, Two-Level Convergence Theory for Multigrid Reduction in Time (MGRIT), Technical report LLNL-JRNL-692418, Lawrence Livermore National Laboratory,
A. Dutt, L. Greengard, and V. Rokhlin, Spectral deferred correction methods for ordinary differential equations, BIT, 40 (2000), pp. 241–266.
H. C. Elman, O. G. Ernst, and D. P. O’leary, A multigrid method enhanced by Krylov subspace iteration for discrete Helmholtz equations, SIAM J. Sci. Comput., 23 (2001), pp. 1291–1315.
M. Emmett and M. Minion, Toward an efficient parallel in time method for partial differential equations, Commun. Appl. Math. Comput. Sci., 7 (2012), pp. 105–132.
O. G. Ernst and M. J. Gander, Why it is difficult to solve Helmholtz problems with classical iterative methods, in Numerical Analysis of Multiscale Problems, Springer, Berlin, 2012, pp. 325–363.
O. G. Ernst and M. J. Gander, Multigrid methods for Helmholtz problems: A convergent scheme in 1d using standard components, Direct Inverse Probl. Wave Propagation Appl., 14 (2013).
R. D. Falgout, S. Friedhoff, Tz. V. Kolev, S. P. MacLachlan, and J. B. Schroder, Parallel time integration with multigrid, SIAM J. Sci. Comput., 36 (2014), pp. C635–C661.
C. Farhat and M. Chandesris, Time-decomposed parallel time-integrators: Theory and feasibility studies for fluid, structure, and fluid-structure applications, Internat. J. Numer. Methods Engrg., 58
(2003), pp. 1397–1434.
S. Friedhoff, R. Falgout, T. Kolev, S. MacLachlan, and J. Schroder, A multigrid-in-time algorithm for solving evolution equations in parallel, in Proceedings of the 16th Copper Mountain Conference on
Multigrid Methods, 2013.
M. J. Gander, Analysis of the Parareal algorithm applied to hyperbolic problems using characteristics, Bol. Soc. Esp. Mat. Apl., 42 (2008), pp. 21–35.
M. J. Gander, 50 years of time parallel time integration, in Multiple Shooting and Time Domain Decomposition Methods, T. Carraro, M. Geiger, S. Körkel, and R. Rannacher, eds., Springer, Berlin, 2015,
pp. 69–114.
M. J. Gander, I. G. Graham, and E. A. Spence, Applying GMRES to the Helmholtz equation with shifted Laplacian preconditioning: What is the largest shift for which wavenumber-independent convergence
is guaranteed?, Numer. Math., 131 (2015), pp. 567–614.
M. J. Gander and S. Güttel, PARAEXP: A parallel integrator for linear initial-value problems, SIAM J. Sci. Comput., 35 (2013), pp. C123–C142.
M. J. Gander and E. Hairer, Nonlinear convergence analysis for the Parareal algorithm, in Domain Decomposition Methods in Science and Engineering, Lect. Notes in Comput. Sci. Eng. 60, O. B. Widlund
and D. E. Keyes, eds., Springer, Berlin, 2008, pp. 45–56.
M. J. Gander, F. Kwok, and H. Zhang, Multigrid interpretations of the Parareal algorithm leading to an overlapping variant and MGRIT, Comput. Vis. Sci., 19 (2018), pp. 59–74.
M. J. Gander and T. Lunet, Toward error estimates for general space-time discretizations of the advection equation, Comput. Vis. Sci., 23 (2020), pp. 1–14.
M. J. Gander and T. Lunet, ParaStieltjes: Parallel computation of Gauss quadrature rules using a Parareal-like approach for the Stieltjes procedure, Numer. Linear Algebra Appl., 28 (2021), e2314.
M. J. Gander and M. Neumüller, Analysis of a new space-time parallel multigrid algorithm for parabolic problems, SIAM J. Sci. Comput., 38 (2016), pp. A2173–A2208.
M. J. Gander and S. Vandewalle, Analysis of the Parareal time-parallel time-integration method, SIAM J. Sci. Comput., 29 (2007), pp. 556–578.
W. Gautschi, Orthogonal Polynomials: Computation and Approximation, Oxford University Press, New York, 2004.
S. Götschel, M. Minion, D. Ruprecht, and R. Speck, Twelve ways to fool the masses when giving parallel-in-time results, in Springer Proc. Math. Stat., Springer, Berlin, 2021, pp. 81–94.
S. Günther, L. Ruthotto, J. B. Schroder, E. C. Cyr, and N. R. Gauger, Layer-parallel training of deep residual neural networks, SIAM J. Math. Data Sci., 2 (2020), pp. 1–23.
W. Hackbusch, Parabolic multi-grid methods, in Proceedings of the Sixth International Symposium on Computing Methods in Applied Sciences and Engineering, VI, North-Holland, Amsterdam, 1984,
pp. 189–197.
W. Hackbusch, Multi-grid Methods and Applications, Vol. 4, Springer, Berlin, 2013.
A. Hessenthaler, B. S. Southworth, D. Nordsletten, O. Röhrle, R. D. Falgout, and J. B. Schroder, Multilevel convergence analysis of multigrid-reduction-in-time, SIAM J. Sci. Comput., 42 (2020),
pp. A771–A796.
C. Hofer, U. Langer, M. Neumüller, and R. Schneckenleitner, Parallel and robust preconditioning for space-time isogeometric analysis of parabolic evolution problems, SIAM J. Sci. Comput., 41 (2019),
pp. A1793–A1821.
D. E. Knuth, The Art of Computer Programming. 1. Fundamental Algorithms, Addison-Wesley, Reading, MA, 1975.
M. Lecouvez, R. D. Falgout, C. S. Woodward, and P. Top, A parallel multigrid reduction in time method for power systems, in Proceedings of the Power and Energy Society General Meeting, IEEE, 2016,
pp. 1–5.
J.-L. Lions, Y. Maday, and G. Turinici, A “Parareal” in time discretization of PDE’s, C. R. Math. Acad. Sci. Paris, 332 (2001), pp. 661–668.
T. Lunet, J. Bodart, S. Gratton, and X. Vasseur, Time-parallel simulation of the decay of homogeneous turbulence using Parareal with spatial coarsening, Comput. Vis. Sci., 19 (2018), pp. 31–44.
Y. Maday and E. M. Rønquist, Parallelization in time through tensor-product space-time solvers, C. R. Math., 346 (2008), pp. 113–118.
M. L. Minion, R. Speck, M. Bolten, M. Emmett, and D. Ruprecht, Interweaving PFASST and parallel multigrid, SIAM J. Sci. Comput., 37 (2015), pp. S244–S263.
S. Murata, N. Satofuka, and T. Kushiyama, Parabolic multi-grid method for incompressible viscous flows using a group explicit relaxation scheme, Comput. & Fluids, 19 (1991), pp. 33–41.
B. W. Ong and J. B. Schroder, Applications of time parallelization, Comput. Vis. Sci., 23 (2020).
E. M. Rønquist and A. T. Patera, Spectral element multigrid. I. Formulation and numerical results, J. Sci. Comput., 2 (1987), pp. 389–406.
D. Ruprecht, Wave propagation characteristics of parareal, Comput. Vis. Sci., 19 (2018), pp. 1–17.
D. Ruprecht and R. Krause, Explicit parallel-in-time integration of a linear acoustic-advection system, Comput. & Fluids, 59 (2012), pp. 72–83.
D. Ruprecht and R. Speck, Spectral deferred corrections with fast-wave slow-wave splitting, SIAM J. Sci. Comput., 38 (2016), pp. A2535–A2557.
M. Schreiber, P. S. Peixoto, T. Haut, and B. Wingate, Beyond spatial scalability limitations with a massively parallel method for linear oscillatory problems, Internat. J. High Perform. Comput. Appl.
, 32 (2018), pp. 913–933.
B. S. Southworth, Necessary conditions and tight two-level convergence bounds for Parareal and multigrid reduction in time, SIAM J. Matrix Anal. Appl., 40 (2019), pp. 564–608.
R. Speck, D. Ruprecht, R. Krause, M. Emmett, M. L. Minion, M. Winkel, and P. Gibbon, A massively space-time parallel N-body solver, in Proceedings of the International Conference on High Performance
Computing, Networking, Storage and Analysis, IEEE, 2012, pp. 92:1–92:11.
G. A. Staff and E. M. Rønquist, Stability of the Parareal algorithm, in Domain Decomposition Methods in Science and Engineering, Lect. Notes Comput. Sci. Eng. 40, R. Kornhuber, et al., eds.,
Springer, Berlin, 2005, pp. 449–456.
U. Trottenberg, C. W. Oosterlee, and A. Schüller, Multigrid, Academic Press, New York, 2001.
G. Wanner and E. Hairer, Solving Ordinary Differential Equations II, Springer, Berlin, 1996.
Published In
SIAM Journal on Scientific Computing
Pages: A2275 - A2303
ISSN (online): 1095-7197
© 2023 SIAM. Published by SIAM under the terms of the Creative Commons 4.0 license. This work is licensed under a Creative Commons Attribution 4.0 International License.
Submitted: 28 March 2022
Accepted: 14 April 2023
Published online: 21 September 2023
Reproducibility of computational results.
This paper has been awarded the “SIAM Reproducibility Badge: code and data available”, as a recognition that the authors have followed reproducibility principles valued by SISC and the scientific
computing community. Code and data that allow readers to reproduce the results in this paper are available at
Funding Information
European High-Performance Computing Joint Undertaking (JU): 955701
German Federal Ministry of Education and Research (BMBF): 16HPC048
Funding: This project has received funding from the European High-Performance Computing Joint Undertaking (JU) under grant agreement 955701. The JU receives support from the European Union’s Horizon
2020 research & innovation program and from Belgium, France, Germany, and Switzerland. This project also received funding from the German Federal Ministry of Education and Research (BMBF), grant
If you have the appropriate software installed, you can download article citation data to the citation manager of your choice. Simply select your manager software from the list below and click | {"url":"https://epubs.siam.org/doi/10.1137/22M1487163","timestamp":"2024-11-11T14:15:28Z","content_type":"text/html","content_length":"322316","record_id":"<urn:uuid:9b06e0a4-cee9-4e0b-84e0-00477ec87597>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00464.warc.gz"} |
Easiest Sudoku Competition
... by: Ilmars
Friday 5-Nov-2010
For the puzzle with 31 clues - 22/21/7
For the puzzle with 30 clues - 19/19/13
REPLY TO THIS POST
... by: Ilmars
Friday 5-Nov-2010
--- At first, something for fun.
The puzzle 014000000009700605080002730205001004070050000600397008000680310097400080000005029 (with 32 clues) is solved in three rounds by solver - with number of solved cells in each 22/23/4.
Can someone find a puzzle with 32 clues which can be solved in two rounds?
--- Next...
How exactly are calculated the simplicity (grade) of such simple puzzles which can be solved only by solved cells? I feel such lack of knowledge not good for searching of easiest puzzles and also not
good for confidence about precisity of grader. I feeling confused.
--- Finally, the grader got flu or something similar. Now the easiest puzzles have grade no 1, but 13 or similar. I hope that it is change only in number, not in diference between two puzzles (i
mean, one puzzle is harder than another both with old grader and the "new").
Also because I found puzzle with 30 clues and "new" grade 11 - which is better than puzzle of David. :) But it is more for fun, not for founding really easiest puzzle - yes, because I'm confused
about calculation of grade.
--- PS.
I will post also puzzles for 31 and 30 clues which have fewest rounds (solving by solver) and bigger number of cells in first rounds. Again for fun - who can make better? :)
-- 31 clues --
-- 30 clues --
... by: David Filmer MA (Cantab) david@flockman.com
Saturday 21-Aug-2010
Wedding Sudoku: Grade 1 and only 30 clues!
Hello Andrew and fellow Sudoku creators.
It's my 55th Wedding Anniversary and a wet Saturday morning, so I am submitting one of several Sudokus I did that beat previous winners as under: -
This can also be set out as follows:
I have been working with Mats Anderbok (also a previous winner) on a simple method of solving gentle sudokus using a strategy advocated by the late Michael Mepham. Instead of normal computer methods,
this works like a human where one cell is solved AND ENTERED at a time. I worked out the logic and Mats made an excellent job of the programming in just 2 days! An advantage is that no "Candidate
numbers" are entered: the correct solution is entered directly.
The following print out shows how, and the order in which, each of the 51 blank cells is solved
[31,1] to [81,1]
"[31-81" shows the first/last cell solved : ",1]" shows the grade to date
SS1 (Subset) Total 6. SS1=Col 1-3: 3=Col 7-9: 4=Row A-C: 6=Row G-J
V0-V3: No of times a number occurs in a subset.V3=3:V2=2:V1=1:V0=0
You will see from the following, that the Grade on this method is also 1.
[31,1] SS1 V2: A intercept, B2 occupied: SOLUTION: 4 (C2)
[32,1] SS1 V2: J intercept, H2 occupied: SOLUTION: 5 (G2)
[33,1] SS1 V2: A intercept, B1 occupied: SOLUTION: 7 (C1)
[34,1] SS1 V2: D intercept, E1 occupied: SOLUTION: 8 (F1)
[35,1] SS1 V1: E,F intercept, D3 occupied: SOLUTION: 1 (D1)
[36,1] SS1 V1: C intercept, A3,B1,B3 occupied: SOLUTION: 3 (A1)
[37,1] SS1 V1: E,1 intercept, D3 occupied: SOLUTION: 3 (F3)
[38,1] SS1 V1: D intercept, E1,F1,F2 occupied: SOLUTION: 6 (E2)
[39,1] SS1 V1: G,2 intercept, J1 occupied: SOLUTION: 6 (H1)
[40,1] SS1 V1: C intercept, A3,B2,B3 occupied: SOLUTION: 9 (A2)
[41,1] SS1 V1: G,2 intercept, J3 occupied: SOLUTION: 9 (H3)
[42,1] SS1 V0: B intercept, A1,A2,A3,C1,C2 filled: SOLUTION: 2 (C3)
[43,1] SS1 V0: F,3 intercept, D1,E1,E2 occupied: SOLUTION: 2 (D2)
[44,1] SS1 V0: 2,3 intercept, H1,J1 occupied: SOLUTION: 2 (G1)
[45,1] SS1 V2: No intercept, H3,J3 occupied: SOLUTION: 1 (G3)
[46,1] SS2 V2: G intercept, J4 occupied: SOLUTION: 2 (H4)
[47,1] SS2 V2: E intercept, F4 occupied: SOLUTION: 9 (D4)
[48,1] SS2 V1: E,F intercept, D4 occupied: SOLUTION: 3 (D6)
[49,1] SS2 V1: J,6 intercept, H4 occupied: SOLUTION: 3 (G4)
[50,1] SS2 V1: A,C intercept, B5 occupied: SOLUTION: 4 (B6)
[51,1] SS2 V1: J,6 intercept, G5 occupied: SOLUTION: 4 (H5)
[52,1] SS2 V1: B,C intercept, A6 occupied: SOLUTION: 5 (A5)
[53,1] SS2 V1: D,5 intercept, F6 occupied: SOLUTION: 5 (E6)
[54,1] SS2 V1: A,C intercept, B5 occupied: SOLUTION: 7 (B4)
[55,1] SS2 V1: F,4 intercept, E5 occupied: SOLUTION: 7 (D5)
[56,1] SS2 V1: D,F intercept, E5 occupied: SOLUTION: 8 (E4)
[57,1] SS2 V1: H,4 intercept, G5 occupied: SOLUTION: 8 (J5)
[58,1] SS2 V0: D,E intercept, F4,F6 occupied: SOLUTION: 6 (F5)
[59,1] SS2 V0: A,5 intercept, B4,B6,C6 occupied: SOLUTION: 6 (C4)
[60,1] SS2 V0: G,H,4,5 intercept: SOLUTION: 6 (J6)
[61,1] SS2 V1: B intercept, A6,C4,C6 occupied: SOLUTION: 1 (A4)
[62,1] SS2 V1: G,4 intercept, J6 occupied: SOLUTION: 1 (H6)
[63,1] SS3 V2: A,C intercept: SOLUTION: 6 (B7)
[64,1] SS3 V1: A,B intercept, C7 occupied: SOLUTION: 1 (C9)
[65,1] SS3 V1: G,H,9 intercept: SOLUTION: 1 (J7)
[66,1] SS3 V1: A,C intercept, B7 occupied: SOLUTION: 3 (B8)
[67,1] SS3 V1: G,J,8 intercept: SOLUTION: 3 (H7)
[68,1] SS3 V1: E,F intercept, D7 occupied: SOLUTION: 4 (D8)
[69,1] SS3 V1: H,J,8 intercept: SOLUTION: 4 (G7)
[70,1] SS3 V1: D,E intercept, F8 occupied: SOLUTION: 5 (F9)
[71,1] SS3 V1: G,J,9 intercept: SOLUTION: 5 (H8)
[72,1] SS3 V1: D,F intercept, E9 occupied: SOLUTION: 7 (E8)
[73,1] SS3 V1: G,J,8 intercept: SOLUTION: 7 (H9)
[74,1] SS3 V1: A,B intercept, C9 occupied: SOLUTION: 8 (C8)
[75,1] SS3 V1: H,J,8 intercept: SOLUTION: 8 (G9)
[76,1] SS3 V0: B,C intercept, A7,A9 occupied: SOLUTION: 2 (A8)
[77,1] SS3 V0: D,F,8 intercept, E9 occupied: SOLUTION: 2 (E7)
[78,1] SS3 V0: G,H,7,8 intercept: SOLUTION: 2 (J9)
[79,1] SS3 V0: A,C intercept, B7,B8 occupied: SOLUTION: 9 (B9)
[80,1] SS3 V0: D,E,9 intercept, F8 occupied: SOLUTION: 9 (F7)
[81,1] SS3 V0: G,H,7,9 intercept: SOLUTION: 9 (J8)
... by: Mats Anderbok, Sweden
Thursday 8-Oct-2009
Here are my contributions to your "competition" for easiest sudoku with 32 clues. I produced around 40 puzzles which scored 2 on your grader, and these are two of the easiest ones:
They can be solved in two rounds using all independent singles, or in three rounds using only naked singles. The number of solved are 36/49-18/39/49 and 35/49-20/38/49 after each round. It is
probably possible to get score 2 with less than 32 clues or score 1 with 32, but I don't have the time and tools to investigate this further, because I don't know exactly how the grader computes the
May I suggest, as a different rule instead of the arbitrary 32 clues, that the puzzle should be minimal (no clue can be removed without multiple solutions)? It's kind of cheating to start with a
valid sudoku and add digits to make it easier.
Method: Starting with a list of puzzles with 17 clues, I filled 15 random cells, and then counted the number of naked singles available. I suppose your grader views naked singles as easier than
hidden (which I don't agree with as a general rule). From a total of more than 200 million generated, I tested around 5000 on the grader, most of them scoring 4-8. This is a tedious task, even if I
don't do it manually, because using the grader over the web takes a few seconds for each puzzle (I apologize for the server work load). In all, it took me two days.
Note: It is not always true that a "partially completed puzzle will have an easier grade", at least not a lower or equal score. This example has two additional clues and scores 4, which shows how
hard it is to implement a good grading algorithm.
... by: David Filmer
Saturday 26-Sep-2009
In your introduction to this competition you say "I'd be happy to list equal winners".
May I therefore submit the following, which also has a grade of 3.
Will Gibson's entry solves in 4 rounds when you click on "Take Step". In turn, the 4 steps solve 14, 11, 18 and 6 cells.
If you do the same for my entry, the 4 steps solve 14, 22, 12 and 1 cell in turn, so in that respect, I think that mine is easier although the grade is the same. I have produced a Sudoku which grades
2, but it has 33 clues, so breaks the rules. I am still struggling to find a Grade 2 Sudoku with 32 clues!
David Filmer | {"url":"https://www.sudokuwiki.org/Easiest_Sudoku_Competition","timestamp":"2024-11-04T05:55:53Z","content_type":"text/html","content_length":"26202","record_id":"<urn:uuid:df0d6be5-3292-47a0-b1f2-1183b3646f23>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00625.warc.gz"} |
What is the sin of 1 in radians?
The Value of the Inverse Sin of 1
(1) is 90° or, in radian measure, Π/2 . '1' represents the maximum value of the sine function . It happens at Π/2 and then again at 3Π/2 etc..
33 Related Question Answers Found
Gilles Nieblas
Sometimes sin^-^1 is called asin or arcsin. Likewise cos^-^1 is called acos or arccos. And tan^-^1 is called atan or arctan.
Mayelin Diaz De Tuesta
As stated, one radian is equal to 180/π degrees. Thus, to convert from radians to degrees, multiply by 180/π. Conversely, to convert from degrees to radians, multiply by π/180.
Olimpiu Buckolt
Sines and cosines for special common angles
Degrees Radians sine
60° π/3 √3 / 2
45° π/4 √2 / 2
30° π/6 1/2
0° 0 0
Geremias Tzedekah
The functions are usually abbreviated: sine (sin), cosine (cos), tangent (tan) cosecant (csc), secant (sec), and cotangent (cot). According to the standard notation for inverse functions (f^-^1), you
will also often see these written as sin^-^1, cos^-^1, tan^-^1 arccsc^-^1, arcsec^-^1, and arccot^-^1.
Ramdan Silber
The Value of the Inverse Sin of 1
As you can see below, the inverse sin^-^1 (1) is 90° or, in radian measure, Π/2 . '1' represents the maximum value of the sine function .It happens at Π/2 and then again at 3Π/2 etc..
Mariia Schmukler
The inverse is used to obtain the measure of an angle using the ratios from basic right triangle trigonometry. The inverse of sine is denoted as Arcsine or on a calculator it will appear as asin or
Catinca Gerspacher
(1, 0) = (x, y) = (cos 0, sin 0), cos 0 = 1, sin 0 = 0. The values of angles outside Quadrant I can be computed using reference angles, and the values of the other trigonometric functions can be
computed using the reciprocal and quotient identities.
Zaineb Agostino
For every trigonometry function such as sec, there is an inverse function that works in reverse. These inverse functions have the same name but with 'arc' in front. So the inverse of sec is arcsec
etc. When we see "arcsec A", we interpret it as "the angle whose secant is A".
Tamie Arrruazu
Sinh is the hyperbolic sine function, which is the hyperbolic analogue of the Sin circular function used throughout trigonometry. It is defined for real numbers by letting be twice the area between
the axis and a ray through the origin intersecting the unit hyperbola .
Hristiyan Craig
Inverse Sine, Cosine and Tangent. The inverse trigonometric functions (sin^-^1, cos^-^1, and tan^-^1) allow you to find the measure of an angle in a right triangle. All that you need to know are any
two sides as well as how to use SOHCAHTOA.
Servanda Schenman
The arctangent function is the inverse of the tangent function. The cotangent function is the reciprocal of the tangent function. The arctangent function is the inverse of the tangent function.
Ilhem Gaastra
1 Expert Answer
In sin^-^1x, the "-1" is NOT an exponent. It represents the inverse of the sine function. Recall f(x) and f^-^1(x). sin^-^1x means the same as arcsin x, i.e., the arc whose sine is x.
Idir Dworrak
pi/2 radians = 180/2 = 90 degrees. 22/7, 3.14 or 3.14159 etc are all approximate to pi but never equal. pi is the ratio of the circumference to the diameter (double radius = diameter) of a circle.
Osmel Entorf
Radian. A unit of measure for angles. One radian is the angle made at the center of a circle by an arc whose length is equal to the radius of the circle.
Anselm Lockyer
Why Radian Measure Makes Life Easier In Mathematics And Physics. The two most commonly used measures for angles are degrees and radians. When is the entire circumference of the circle, the
corresponding angle is that of the entire circle. Since the circumference of a circle is , the angle of a full circle is .
Brynn Amaral
Succinctly, pi—which is written as the Greek letter for p, or π—is the ratio of the circumference of any circle to the diameter of that circle. Regardless of the circle's size, this ratio will always
equal pi. In decimal form, the value of pi is approximately 3.14.
Babak Mazo
Because the length of the circumference of a circle is exactly 2*pi times the radius and by definition 1 radian is the angle subtended by a portion of the circumference equal in length to the radius.
Caifen Luthcke
Radians measure angles by distance traveled. or angle in radians (theta) is arc length (s) divided by radius (r). A circle has 360 degrees or 2pi radians — going all the way around is 2 * pi * r / r.
So a radian is about 360 /(2 * pi) or 57.3 degrees.
Ashley Arincon
Calculus is always done in radian measure. Degree (a right angle is 90 degrees) and gradian measure (a right angle is 100 grads) have their uses. Radians make it possible to relate a linear measure
and an angle measure. A unit circle is a circle whose radius is one unit.
Andre Yablonsky
So one radian = 180/ PI degrees and one degree = PI /180 radians. Therefore to convert a certain number of degrees in to radians, multiply the number of degrees by PI /180 (for example, 90º = 90 × PI
/180 radians = PI /2). To convert a certain number of radians into degrees, multiply the number of radians by 180/ PI .
Milada Glaria
Write down the formula for finding the circumference of a circle using the diameter. The formula is simply this: C = πd. In this equation, "C" represents the circumference of the circle, and "d"
represents its diameter. That is to say, you can find the circumference of a circle just by multiplying the diameter by pi.
Eldridge Wilkinson
The angle formed at the centre by a line to each of its ends is one radian. A radian is approximately equal to 57.29678 degrees. Since the circumference is 2.pi x the radius, there are 2pi radians to
a full circle (360 degrees). This may be referred to as "an angle of 2pi". 2.3k views · View 6 Upvoters.
Mahamadou Laverenz
As our first quadrant angle increases, the tangent will increase very rapidly. As we get closer to 90 degrees, this length will get incredibly large. At 90 degrees we must say that the tangent is
undefined (und), because when you divide the leg opposite by the leg adjacent you cannot divide by zero.
Ylena Vavrovsky
Important Angles: 30°, 45° and 60°
Angle Tan=Sin/Cos
30° 1 √3 = √3 3
45° 1
60° √3
Placentina Mittelhausser
Trigonometry - Pi Value - Contents
The pi value represents the ratio of the diameter of a circle divided by its circumference. This ratio is always the same number regardless of the size of the circle. A circle is a geometric shape
called a conic section.
Jiajun Olmo
The sine function, along with cosine and tangent, is one of the three most common trigonometric functions. In any right triangle, the sine of an angle x is the length of the opposite side (O) divided
by the length of the hypotenuse (H). In a formula, it is written as 'sin' without the 'e':
Haixia Serrana
Cosecant is the reciprocal of sine. Secant is the reciprocal of cosine. Cotangent is the reciprocal of tangent.
Adahy Reisch
Radians. Most of the time we measure angles in degrees. For example, there are 360° in a full circle or one cycle of a sine wave, and sin(30°) = 0.5 and cos(90°) = 0. But it turns out that a more
natural measure for angles, at least in mathematics, is in radians.
Gurvinder Szyfransk
Arctan definition
The arctangent of x is defined as the inverse tangent function of x when x is real (x∈ℝ). When the tangent of y is equal to x: tan y = x. Then the arctangent of x is equal to the inverse tangent
function of x, which is equal to y: arctan x= tan^-^1 x = y.
Jacquie Hollerieth
Cosecant, Secant and Cotangent
Cosecant Function: csc(θ) = Hypotenuse / Opposite
Secant Function: sec(θ) = Hypotenuse / Adjacent
Cotangent Function: cot(θ) = Adjacent / Opposite
Jonatan Stolowsk
What is 'cotx'? cot is a short way to write 'cotangent'. This is the reciprocal of the trigonometric function 'tangent' or tan(x). Therefore, cot(x) can be simplified to 1/tan(x). Using trigonometric
rules, an alternative way to write 1/tan(x) is cos(x)/sin(x).
Ganix Escutia
The tangent of x is defined to be its sine divided by its cosine: tan x = sin x cos x . The cotangent of x is defined to be the cosine of x divided by the sine of x: cot x = cos x sin x . | {"url":"https://everythingwhat.com/what-is-the-sin-of-1-in-radians","timestamp":"2024-11-02T22:03:40Z","content_type":"text/html","content_length":"61338","record_id":"<urn:uuid:beb591d2-b46e-4815-a79f-1e13e74495aa>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00356.warc.gz"} |
Addition Subtraction Multiplication Division Of Integers Worksheets
Mathematics, specifically multiplication, forms the keystone of various scholastic disciplines and real-world applications. Yet, for several students, grasping multiplication can pose a difficulty.
To resolve this difficulty, teachers and parents have actually welcomed a powerful tool: Addition Subtraction Multiplication Division Of Integers Worksheets.
Introduction to Addition Subtraction Multiplication Division Of Integers Worksheets
Addition Subtraction Multiplication Division Of Integers Worksheets
Addition Subtraction Multiplication Division Of Integers Worksheets -
This assortment of adding and subtracting integers worksheets have a vast collection of printable handouts to reinforce performing the operations of addition and subtraction on integers among 6th
grade 7th grade and 8th grade students
This page includes Mixed operations math worksheets with addition subtraction multiplication and division and worksheets for order of operations We ve started off this page by mixing up all four
operations addition subtraction multiplication and division because that might be what you are looking for
Relevance of Multiplication Technique Recognizing multiplication is critical, laying a strong foundation for innovative mathematical concepts. Addition Subtraction Multiplication Division Of Integers
Worksheets supply structured and targeted technique, promoting a much deeper understanding of this essential math procedure.
Development of Addition Subtraction Multiplication Division Of Integers Worksheets
Integers Worksheets Dynamically Created Integers Worksheets
Integers Worksheets Dynamically Created Integers Worksheets
Welcome to our Addition Subtraction Multiplication Division worksheets page The worksheets on this page each include a range of mixed addition subtraction multiplication and division calculations The
sheets include both mental calculation and calculations where the standard method is used These sheets are aimed at 3rd to 5th graders
Build your own mixed operations worksheets in seconds Choose from the topics below to create a variety of custom worksheets Mixed Worksheets Addition Subtraction Multiplication and Division Click the
Create Worksheet button to create worksheets for various levels and topics More Worksheets Create New Worksheet
From typical pen-and-paper workouts to digitized interactive styles, Addition Subtraction Multiplication Division Of Integers Worksheets have actually developed, catering to varied discovering styles
and choices.
Types of Addition Subtraction Multiplication Division Of Integers Worksheets
Basic Multiplication Sheets Straightforward exercises focusing on multiplication tables, assisting learners construct a strong arithmetic base.
Word Problem Worksheets
Real-life situations integrated right into issues, improving important thinking and application abilities.
Timed Multiplication Drills Examinations developed to improve rate and accuracy, helping in rapid mental math.
Benefits of Using Addition Subtraction Multiplication Division Of Integers Worksheets
Pin On Projects To Try
Pin On Projects To Try
2 2 4 2 2 0 2 2 4 2 2 0 Addition and subtraction Addition and subtraction are two primary arithmetic operations in Maths Besides these two operations multiplication and division are also two primary
operations that we learn in basic Maths The addition represents the values added to the existing value
Grade 5 Integers Division of integers Division of integers Negative numbers division Division of or by negative numbers can be conceptually difficult these integer worksheets provide extra practice
in both normal and long division form Horizontal Worksheet 1 Worksheet 2 Worksheet 3 Long division Worksheet 4 Worksheet 5 Worksheet 6
Boosted Mathematical Skills
Constant method develops multiplication efficiency, enhancing overall mathematics capabilities.
Enhanced Problem-Solving Talents
Word problems in worksheets develop analytical thinking and strategy application.
Self-Paced Understanding Advantages
Worksheets fit individual knowing rates, fostering a comfortable and adaptable understanding environment.
How to Develop Engaging Addition Subtraction Multiplication Division Of Integers Worksheets
Incorporating Visuals and Colors Lively visuals and colors catch attention, making worksheets visually appealing and engaging.
Including Real-Life Situations
Connecting multiplication to everyday scenarios includes importance and usefulness to exercises.
Customizing Worksheets to Different Ability Degrees Personalizing worksheets based upon varying effectiveness degrees ensures inclusive learning. Interactive and Online Multiplication Resources
Digital Multiplication Tools and Gamings Technology-based resources provide interactive learning experiences, making multiplication interesting and pleasurable. Interactive Internet Sites and Apps On
the internet systems give varied and available multiplication practice, supplementing standard worksheets. Customizing Worksheets for Different Learning Styles Aesthetic Learners Aesthetic help and
layouts help understanding for students inclined toward aesthetic learning. Auditory Learners Spoken multiplication issues or mnemonics cater to students that realize ideas via acoustic means.
Kinesthetic Students Hands-on activities and manipulatives sustain kinesthetic learners in recognizing multiplication. Tips for Effective Implementation in Discovering Uniformity in Practice Normal
technique enhances multiplication abilities, advertising retention and fluency. Stabilizing Rep and Range A mix of repetitive exercises and diverse trouble styles preserves interest and
comprehension. Giving Useful Feedback Feedback aids in determining areas of renovation, encouraging continued development. Challenges in Multiplication Practice and Solutions Motivation and
Engagement Obstacles Dull drills can cause uninterest; ingenious strategies can reignite inspiration. Getting Over Concern of Math Negative assumptions around math can hinder development; creating a
favorable learning setting is vital. Impact of Addition Subtraction Multiplication Division Of Integers Worksheets on Academic Performance Researches and Research Study Searchings For Study shows a
positive connection in between constant worksheet usage and enhanced math performance.
Final thought
Addition Subtraction Multiplication Division Of Integers Worksheets become versatile tools, fostering mathematical proficiency in learners while suiting diverse discovering styles. From basic drills
to interactive on the internet resources, these worksheets not just enhance multiplication abilities yet additionally promote important reasoning and problem-solving abilities.
15 Best Images Of Multiplying Integers Worksheets Grade 7 Adding Integers Printable Worksheet
NCERT Class 7 Solutions Integers Chapter 1 Exercise 1 4 Part 2 FlexiPrep
Check more of Addition Subtraction Multiplication Division Of Integers Worksheets below
An Instruction Sheet For Adding And Subtracting Ineggers To Solve The Problem
integers worksheets Dynamically Created integers worksheets 35 multiplication And division Of
Integers Worksheets Dynamically Created Integers Worksheets Integers Worksheet Integers
Pin On Math Aids Com
Adding Subtracting Integers Worksheets Free Homeshealth info Worksheet Template Tips And Reviews
Integer Worksheets addition subtraction multiplication And division Teaching Resources
Mixed Operations Math Worksheets Math Drills
This page includes Mixed operations math worksheets with addition subtraction multiplication and division and worksheets for order of operations We ve started off this page by mixing up all four
operations addition subtraction multiplication and division because that might be what you are looking for
Grade 6 Integers Worksheets free printable K5 Learning
These grade 6 worksheets cover addition subtraction multiplication and division of integers Integers are whole numbers no fractional or decimal part and can be negative or positive Sample Grade 6
Integers Worksheet What is K5 K5 Learning offers free worksheets flashcards and inexpensive workbooks for kids in kindergarten to grade 5
This page includes Mixed operations math worksheets with addition subtraction multiplication and division and worksheets for order of operations We ve started off this page by mixing up all four
operations addition subtraction multiplication and division because that might be what you are looking for
These grade 6 worksheets cover addition subtraction multiplication and division of integers Integers are whole numbers no fractional or decimal part and can be negative or positive Sample Grade 6
Integers Worksheet What is K5 K5 Learning offers free worksheets flashcards and inexpensive workbooks for kids in kindergarten to grade 5
integers worksheets Dynamically Created integers worksheets 35 multiplication And division Of
Adding Subtracting Integers Worksheets Free Homeshealth info Worksheet Template Tips And Reviews
Integer Worksheets addition subtraction multiplication And division Teaching Resources
Math Integers Worksheets 7th Grade 1356449 Adding Integers Worksheet Subtracting Integers
Multiplying And Dividing Integers Worksheets Grade 7
Multiplying And Dividing Integers Worksheets Grade 7
Rules For Adding Subtracting Multiplying And Dividing Positve And Negative integers Negative
FAQs (Frequently Asked Questions).
Are Addition Subtraction Multiplication Division Of Integers Worksheets suitable for every age groups?
Yes, worksheets can be tailored to different age and ability levels, making them adaptable for numerous learners.
Exactly how often should pupils practice utilizing Addition Subtraction Multiplication Division Of Integers Worksheets?
Regular method is crucial. Normal sessions, ideally a couple of times a week, can produce significant improvement.
Can worksheets alone enhance mathematics abilities?
Worksheets are an useful device but must be supplemented with diverse learning methods for extensive ability advancement.
Exist online platforms providing free Addition Subtraction Multiplication Division Of Integers Worksheets?
Yes, several educational web sites provide free access to a large range of Addition Subtraction Multiplication Division Of Integers Worksheets.
Exactly how can moms and dads sustain their children's multiplication technique in the house?
Encouraging consistent practice, giving assistance, and developing a positive learning setting are valuable actions. | {"url":"https://crown-darts.com/en/addition-subtraction-multiplication-division-of-integers-worksheets.html","timestamp":"2024-11-12T23:27:19Z","content_type":"text/html","content_length":"30227","record_id":"<urn:uuid:f684da5e-3e14-4a3a-a0ea-ff09cd9a9d0f>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00359.warc.gz"} |
Babbage Describes the Logic and Operation of Machinery by Means of Notation
In 1826 mathematician and engineer Charles Babbage published "On a Method of Expressing by Signs the Action of Machinery," Philosophical Transactions 111 (1826) 250-65, 4 plates. This was the first
publication of Babbage's exposition of his system of mechanical notation that enabled him to describe the logic and operation of his machines on paper as they would be fabricated in metal. Babbage
later stated that "Without the aid of this language I could not have invented the Analytical Engine; nor do I believe that any machinery of equal complexity can ever be contrived without the
assistance of that or of some other equivalent language. The Difference Engine No. 2 . . . is entirely described by its aid" (Babbage, Passages from the Life of a Philosopher [1864], 104).
Babbage considered his mechanical notation system to be one of his finest inventions, and thought it should be widely implemented. It was a source of frustration to him that no other machine designer
adopted it (probably because no other engineer during Babbage's time attempted to build machines as logically and mechanically complex as Babbage's). More than one hundred years later, in the 1930s,
when developments in logic were applied to switching systems in the earliest efforts to develop electromechanical calculators, Claude Shannon demonstrated that Boolean algebra could be applied to the
same types of problems for which Babbage had designed his mechanical notation system.
"While making designs for the Difference Engine, Babbage found great difficulty in ascertaining from ordinary drawings-plans and elevations-the state of rest or motion of individual parts as
computation proceeded: that is to say in following in detail succeeding stages of a machine's action. This led him to develop a mechanical notation which provided a systematic method for labeling
parts of a machine, classifying each part as fixed or moveable; a formal method for indicating the relative motions of the several parts which was easy to follow; and means for relating notations and
drawings so that they might illustrate and explain each other. As the calculating engines developed the notation became a powerful but complex formal tool. Although its scope was much wider than
logical systems, the mechanical notation was the most powerful formal method for describing switching systems until Boolean algebra was applied to the problem in the middle of the twentieth century.
In its mature form the mechanical notation was to comprise three main components: a systematic method for preparing and labeling complex mechanical drawings; timing diagrams; and logic diagrams,
which show the general flow of control" (Hyman, Charles Babbage [1982], 58).
Hook & Norman, Origins of Cyberspace (2001) no. 37. | {"url":"https://historyofinformation.com/detail.php?entryid=3179","timestamp":"2024-11-01T23:07:32Z","content_type":"text/html","content_length":"17424","record_id":"<urn:uuid:6d0757fd-7d65-4103-999d-d5b1d635c283>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00377.warc.gz"} |
Quantum Computing: The Next Computing Revolution
Quantum computing represents a paradigm shift in computational capability, harnessing quantum mechanics principles to perform calculations impossible for classical computers. This comprehensive
overview examines its fundamentals, challenges, and revolutionary potential.
Quantum Computing Fundamentals
Traditional computers process information using bits, which exist in either 0 or 1 states. Quantum computers utilize quantum bits or qubits, which can exist in multiple states simultaneously through
superposition. This property, combined with quantum entanglement, enables parallel processing capabilities far beyond classical computers.
Quantum Superposition and Entanglement
Superposition allows qubits to exist in multiple states simultaneously, dramatically increasing computational possibilities. Quantum entanglement creates correlations between qubits, enabling them to
share information instantaneously regardless of distance. These properties form the foundation of quantum computing's power.
Current Technology Status
Today's quantum computers remain in early development stages. Companies like IBM, Google, and Intel compete to increase qubit counts and reduce error rates. Quantum supremacy demonstrations show
quantum computers solving specific problems faster than classical supercomputers, though practical applications remain limited.
Technical Challenges
Maintaining quantum states presents significant challenges. Quantum systems are extremely sensitive to environmental interference, requiring sophisticated error correction methods. Decoherence, where
quantum states deteriorate over time, remains a major obstacle to building practical quantum computers.
Potential Applications
Quantum computers promise revolutionary advances in multiple fields. They could optimize supply chains, improve financial modeling, and accelerate drug discovery through molecular simulation.
Cryptography particularly faces disruption, as quantum computers could break current encryption methods while enabling unbreakable quantum encryption.
Impact on Scientific Research
Quantum computing could transform scientific research. Simulating quantum systems becomes feasible, potentially revolutionizing materials science and chemical engineering. Complex optimization
problems in physics and astronomy become solvable, potentially unlocking new discoveries about our universe.
Future Prospects
Experts predict significant quantum computing advances in coming decades. Improved error correction and increased qubit coherence time will enable more practical applications. However, quantum
computers will likely complement rather than replace classical computers, excelling at specific types of calculations. | {"url":"https://sciencestyle.com.au/quantum-computing","timestamp":"2024-11-02T21:49:30Z","content_type":"text/html","content_length":"14811","record_id":"<urn:uuid:3e84dbe0-0149-4cec-ad59-b51d222e0be8>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00701.warc.gz"} |
Dimensionality Reduction Methods
Dimensionality Reduction Methods#
This section demonstrates the use of dimensionality reduction methods on various datasets to show the workings of each of the methods. The methods used in this section include Principal Component
Analysis (PCA) and Kernel PCA. PCA is a linear dimensionality reduction method that is often applicable to very simple datasets and Kernel PCA is a nonlinear dimensionality reduction method that has
a wider range of applicability. In the dimensionality reduction problems shown here, the main aim is to discover the intrinsic structure that is present within the data and represent that structure
in a lower dimension than the original data.
To start off, the block of code below will import the required packages for this section.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_circles, make_swiss_roll, make_blobs
from sklearn.decomposition import PCA, KernelPCA
from sklearn.preprocessing import StandardScaler
from smt.sampling_methods import LHS
from smt.problems import CantileverBeam
Gaussian Clusters Dataset#
The first dataset is the Gaussian clusters dataset that consists of two clusters of points in two dimensions. The block of code below will plot this dataset for a 1000 samples and fixed random state.
The red and blue coloured points represent the two clusters. The aim of dimensionality reduction is to represent the structure of the data within a single dimension. The structure to look for in this
data is that there are two separable clusters in the data which must be visible in the single dimension.
# Generating half moon shape dataset
X, y = make_blobs(n_samples=1000, random_state = 20)
# Plotting the dataset
fig, ax = plt.subplots(1, 1, figsize=(8,6))
ax.scatter(X[y==0, 0], X[y==0, 1], color='red', alpha=0.5)
ax.scatter(X[y==1, 0], X[y==1, 1], color='blue', alpha=0.5)
ax.set_title('Isotropic Gaussian Clusters Dataset')
The block of code below uses PCA to reduce the dimension of the dataset to a single dimension. The plot shows the principal component that is obtained after dimensionality reduction and the
y-coordinate of the plot is essentially zero since the data is one dimensional. The red and blue colours are still used to denote the points from each half of the original dataset.
# Applying PCA to the dataset
transform = PCA(n_components = 1)
x_transform = transform.fit_transform(X)
x1 = x_transform[y==0, 0]
x2 = x_transform[y==1, 0]
# Plotting the embedding generated using PCA
fig, ax = plt.subplots(1, 1, figsize=(8,6))
ax.scatter(x_transform[y==0, 0], np.zeros((x1.shape[0],1)), color='red', alpha=0.5)
ax.scatter(x_transform[y==1, 0], np.zeros((x2.shape[0],1)), color='blue', alpha=0.5)
ax.set_title('Embedding generated using PCA')
ax.set_xlabel('Principal Component 1')
The plot above shows that in the single principal component chosen there is a clear separation between the points coming from each cluster of the dataset and PCA is able to fully separate the two
clusters of the dataset. This means that PCA can represent the underlying structure that is present within the original dataset.
The next block of code will apply Kernel PCA to the dataset and creates a similar plot as the one created for PCA. This block of code using the radial basis function kernel for Kernel PCA.
# Applying Kernel PCA to the dataset
transform = KernelPCA(n_components = 1, kernel = 'rbf', gamma = 0.25)
x_transform = transform.fit_transform(X)
x1 = x_transform[y==0, 0]
x2 = x_transform[y==1, 0]
# Plotting the embedding generated using PCA
fig, ax = plt.subplots(1, 1, figsize=(8,6))
ax.scatter(x_transform[y==0, 0], np.zeros((x1.shape[0],1)), color='red', alpha=0.5)
ax.scatter(x_transform[y==1, 0], np.zeros((x2.shape[0],1)), color='blue', alpha=0.5)
ax.set_title('Embedding generated using Kernel PCA')
ax.set_xlabel('Principal Component 1')
The above plot shows that Kernel PCA is able to separate the two clusters of the original data set as represented by a clear separation between the red and blue coloured points. This shows that
Kernel PCA is able to discover the intrinsic structure of the data set and separate the two clusters originally present in the data.
Concentric Circles Dataset#
The second data set used in this section is the concentric circles dataset. This is a noisy dataset that represents points within two concentric circles. Similar to the half-moon dataset, the
intrinsic structure of the dataset is the two distinct circular regions present within the dataset that must be separated when the data is reduced to a single dimension. The block of code below
generates 1000 samples from the noisy dataset and plots the samples in two dimensions.
# Generating concentric circles dataset
X, y = make_circles(n_samples=1000, random_state=123, noise=0.1, factor=0.2)
# Plotting the dataset
fig, ax = plt.subplots(1, 1, figsize=(8,6))
ax.scatter(X[y==0, 0], X[y==0, 1], color='red', alpha=0.5)
ax.scatter(X[y==1, 0], X[y==1, 1], color='blue', alpha=0.5)
ax.set_title('Concentric circles datset')
The next block of code will reduce the concentric circles dataset to a single dimension using PCA and a plot of the principal component obtained is shown.
# Applying PCA to the dataset
transform = PCA(n_components = 1)
x_transform = transform.fit_transform(X)
# Plotting the embedding generated using PCA
fig, ax = plt.subplots(1, 1, figsize=(8,6))
ax.scatter(x_transform[y==0, 0], np.zeros((500,1)), color='red', alpha=0.5)
ax.scatter(x_transform[y==1, 0], np.zeros((500,1)), color='blue', alpha=0.5)
ax.set_title('Embedding generated using PCA')
ax.set_xlabel('Principal Component 1')
Similar to the half moon dataset, PCA is unable to separate the points from the two circles and there is a significant overlap between the points in the single principal component that is obtained.
The block of code below will use Kernel PCA to reduce the dimension of the concentric circles dataset using the radial basis function as the kernel for reducing the dimension.
# Applying Kernel PCA to the dataset
transform = KernelPCA(n_components = 1, kernel = 'rbf', gamma = 15)
x_transform = transform.fit_transform(X)
# Plotting the embedding generated using PCA
fig, ax = plt.subplots(1, 1, figsize=(8,6))
ax.scatter(x_transform[y==0, 0], np.zeros((500,1)), color='red', alpha=0.5)
ax.scatter(x_transform[y==1, 0], np.zeros((500,1)), color='blue', alpha=0.5)
ax.set_title('Embedding generated using Kernel PCA')
ax.set_xlabel('Principal Component 1')
Kernel PCA is able to separate the points from the two circles in the single principal component and therefore, can represent the intrinsic structure of the dataset within a single dimension. In
general, both of these datasets have an intrinsic nonlinear that can be best represented using Kernel PCA rather than PCA. PCA will be more effective in simpler cases where the data has an intrinsic
linear structure. It is, therefore, important to analyze the data that is being used to understand if there is an intrinsic structure to the data that is well-suited to a dimensionality reduction
Swiss Roll Dataset#
The final dataset is the swiss roll dataset that consists of points in three dimensions that form the shape of a swiss roll. The intrinsic structure within the data is that the swiss roll is actually
a two dimensional plane that has been rolled into the shape of a swiss roll. A successful dimensionality reduction would be able to represent the two dimensional plane using two principal components.
The block of code below generates 1500 samples from the dataset and creates a three dimensional scatter plot of the data.
# Obtaining Swiss Roll dataset from scikit learn
sr_points, sr_color = make_swiss_roll(n_samples=1500, random_state=0)
# Plotting the dataset
fig = plt.figure(figsize=(8, 6))
ax = fig.add_subplot(111, projection="3d")
ax.scatter(sr_points[:, 0], sr_points[:, 1], sr_points[:, 2], c=sr_color, s=50, alpha=0.8)
ax.set_title("Swiss Roll Dataset")
ax.view_init(azim=-66, elev=12)
The next blocks of code apply PCA to reduce the dimension to two principal components and create a two dimensional scatter plot of the principal components.
# Applying PCA to the data
transform = PCA(n_components=2)
xprime = transform.fit_transform(sr_points)
fig, ax = plt.subplots(figsize=(8, 6))
ax.scatter(xprime[:, 0], xprime[:, 1], c=sr_color)
ax.set_xlabel("Principal Component 1")
ax.set_ylabel("Principal Component 2")
ax.set_title("PCA Embedding of the Swiss Roll Function")
Text(0.5, 1.0, 'PCA Embedding of the Swiss Roll Function')
The above scatter plot of the principal componets shows that PCA is not able to discover the two dimensional plane the underlies the swiss roll dataset. In fact, it represents the swiss roll spiral
directly in two dimensions which means it is unable to discover the underlying structure of the dataset. This was expected as the swiss roll dataset is a highly nonlinear dimensionality reduction
problem and PCA is not well-suited to this dataset.
The next block of code applies Kernel PCA to the dataset and creates a plot of the principal components. The radial basis function kernel is used in this case as well.
# Applying Kernel PCA to the data
transform = KernelPCA(n_components=2, kernel='rbf', gamma = 0.01)
xprime = transform.fit_transform(sr_points)
fig, ax = plt.subplots(figsize=(8, 6))
ax.scatter(xprime[:, 0], xprime[:, 1], c=sr_color)
ax.set_xlabel("Principal Component 1")
ax.set_ylabel("Principal Component 2")
ax.set_title("Kernel PCA Embedding of the Swiss Roll Function")
Text(0.5, 1.0, 'Kernel PCA Embedding of the Swiss Roll Function')
The above plot of the principal components shows that Kernel PCA could partially unravel the roll in the dataset and represent it in two dimensions. A full unraveling into a two dimensional plane
would be taking the edges with the yellow coloured and purple coloured points and pulling them apart to create a two dimensional plane. However, Kernel PCA can only unravel it to the point that it
appears similar to a circle in two dimensions. This shows that while Kernel PCA is a nonlinear method, it may not be well-suited for all nonlinear problems and in this case, a different
dimensionality reduction method is required to fully represent the intrinsic structure of the swiss roll dataset. Examples of dimensionality reduction methods that could prove more effective in this
case are isometric mapping (IsoMap) and locally linear embedding (LLE).
Choosing the number of principal components for PCA#
In the datasets shown above, the choice of principal components can seem trivial since the datasets are two or three-dimensional and there are few possibilities for the number of principal
components. However, choosing the exact number of principal components can be a challenge in datasets with much higher dimensionality. To correctly choose the principal components, it is necessary to
calculate the cumulative ratio of the total variance that is explained by each of the principal components. This cumulative explained variance ratio can be calculated as
\[\frac{\sum_{i=1}^d \lambda_i}{\sum_{j=1}^n \lambda_j},\]
where \(\lambda_{i,j}\) are the eigenvalues of the covariance matrix used in PCA, \(d\) is the number of principal components and \(n\) is the total dimensionality of the original dataset. The
cumulative explained variance ratio allows the different principal components to be ranked and the most important components can be identified. To determine the correct number of principal
components, a threshold value is set for the cumulative explained variance ratio. A good choice for the threshold value can be between 90% to 95%. The minimum number of principal components that can
satisfy this threshold value are used to perform dimensionality reduction.
This procedure of determining the correct number of principal components for a given dataset is demonstrated on the 30-dimensional Cantilever Beam dataset that is provided in smt. The block code
below defines the dataset using smt, generates a sampling plan using Latin Hypercube Sampling (LHS) and applies PCA to the dataset for many different number of principal components. At each number of
principal components, the cumulative explained variance ratio is calculated. A plot is created that shows the variation of the cumulative explained variance ratio against the number of principal
components. A threshold value of 95% is chosen to select the correct number of principal components.
# Defining Cantilever Beam problem
ndim = 30
problem = CantileverBeam(ndim=ndim)
# Generating data
sampler = LHS(xlimits=problem.xlimits, criterion='ese')
num_train = 50
xtrain = sampler(num_train)
ytrain = problem(xtrain)
# Pre-processing data before applying PCA
center = xtrain.mean(axis = 0)
xtrain_center = xtrain - center
# Calculating cumulative explained variance ratios
sum_explained_variance_ratio_ = []
n_components = np.arange(1,ndim,1)
for component in n_components:
# Applying PCA to the data
transform = PCA(n_components=component)
xprime = transform.fit_transform(xtrain_center)
fig, ax = plt.subplots(1, 1, figsize=(8,6))
ax.plot(n_components, sum_explained_variance_ratio_, marker='.')
ax.set_xlabel('Number of Principal Components', fontsize = 14)
ax.set_ylabel('Cumulative Explained Variance Ratio', fontsize = 14)
ax.axhline(0.95, xmin = 0.0, xmax = ndim, color='k', linestyle='--')
<matplotlib.lines.Line2D at 0x13d7ba100>
From the above plot, it can be seen that using 18 principal components approximately reaches the threshold of 95% that was set for this problem. This indicates that using 18 principal components is
the right choice to reduce the dimensionality of the 30-dimensional Cantilever Beam problem. It is important to look for the minimum integer value of the number of principal components that can be
used to meet the threshold of the cumulative explained variance ratio. | {"url":"https://computationaldesignlab.github.io/surrogate-methods/dim_reduction/dim_red_methods.html","timestamp":"2024-11-14T02:01:27Z","content_type":"text/html","content_length":"73141","record_id":"<urn:uuid:a17aea09-ad53-4011-af5f-f53e2784888f>","cc-path":"CC-MAIN-2024-46/segments/1730477028516.72/warc/CC-MAIN-20241113235151-20241114025151-00144.warc.gz"} |
How do you write an equation of a line going through (0,7), (3,5)? | Socratic
How do you write an equation of a line going through (0,7), (3,5)?
2 Answers
See the entire solution process below:
First, we need to determine the slope of the line going through the two points. The slope can be found by using the formula: $m = \frac{\textcolor{red}{{y}_{2}} - \textcolor{b l u e}{{y}_{1}}}{\
textcolor{red}{{x}_{2}} - \textcolor{b l u e}{{x}_{1}}}$
Where $m$ is the slope and ($\textcolor{b l u e}{{x}_{1} , {y}_{1}}$) and ($\textcolor{red}{{x}_{2} , {y}_{2}}$) are the two points on the line.
Substituting the values from the points in the problem gives:
$m = \frac{\textcolor{red}{5} - \textcolor{b l u e}{7}}{\textcolor{red}{3} - \textcolor{b l u e}{0}} = - \frac{2}{3}$
Now, we can use the point-slope formula to find an equation going through the two points. The point-slope formula states: $\left(y - \textcolor{red}{{y}_{1}}\right) = \textcolor{b l u e}{m} \left(x -
Where $\textcolor{b l u e}{m}$ is the slope and $\textcolor{red}{\left(\left({x}_{1} , {y}_{1}\right)\right)}$ is a point the line passes through.
Substituting the slope we calculated and the values from the first point gives:
$\left(y - \textcolor{red}{7}\right) = \textcolor{b l u e}{- \frac{2}{3}} \left(x - \textcolor{red}{0}\right)$
We can also substitute the slope we calculated and the values from the second point giving:
$\left(y - \textcolor{red}{5}\right) = \textcolor{b l u e}{- \frac{2}{3}} \left(x - \textcolor{red}{3}\right)$
We can also solve the first equation for $y$ to transform the equation to slope-intercept form. The slope-intercept form of a linear equation is: $y = \textcolor{red}{m} x + \textcolor{b l u e}{b}$
Where $\textcolor{red}{m}$ is the slope and $\textcolor{b l u e}{b}$ is the y-intercept value.
$y - \textcolor{red}{7} = \textcolor{b l u e}{- \frac{2}{3}} x$
$y - \textcolor{red}{7} + 7 = - \frac{2}{3} x + 7$
$y - 0 = - \frac{2}{3} x + 7$
$y = \textcolor{red}{- \frac{2}{3}} x + \textcolor{b l u e}{7}$
Since a point is given at $\left(0 , 7\right)$, you know already that the y - intercept has to be 7. Therefore, you know b in the following equation:
$y = m x + b$
Now, use the slope formula to find m, the slope.
$\frac{7 - 5}{0 - 3} = \frac{2}{- 3}$
So the equation must be: $y = - \frac{2}{3} x + 7$
Hope that helps!!
Impact of this question
14932 views around the world | {"url":"https://socratic.org/questions/how-do-you-write-an-equation-of-a-line-going-through-0-7-3-5","timestamp":"2024-11-14T17:51:02Z","content_type":"text/html","content_length":"39798","record_id":"<urn:uuid:229bc387-8ae6-4691-bd56-7ac10542065e>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00235.warc.gz"} |
Markov Bas
The Markov Bases Database
Independence model of 5 binary variables.
It is a hierarchical model of 5 variables. The dimension of the model is 5.The cardinality of the statespace is 32.
Properties of the Markov basis
Markov degree 2
minimal size 285
The model has the following properties:
• All variables are binary.
• It is a graph model and a graphical model.
• The semigroup is normal.
Markov basis: uni5-1_bin.mar (27.00 kb)
sufficient statistics matrix: uni5-1_bin.mat (656 b)
model description: uni5-1_bin.mod (37 b)
all in one tar.gz: uni5-1_bin.tar.gz (2.11 kb)
Wrong or missing information? Write us an email. | {"url":"https://markov-bases.de/show.php?name=uni5-1_bin","timestamp":"2024-11-09T07:49:09Z","content_type":"text/html","content_length":"4037","record_id":"<urn:uuid:93d07c70-0942-4957-ac36-16e7d67db482>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00008.warc.gz"} |
Logistics Network Design in context of Transportation Network Optimization
30 Aug 2024
Logistics Network Design: Optimizing Transportation Networks
In today’s fast-paced and competitive business environment, logistics network design has become a critical component of supply chain management. Effective logistics network design enables companies
to optimize their transportation networks, reduce costs, improve service levels, and increase customer satisfaction. In this article, we will delve into the concept of logistics network design, its
importance, and provide formulas to illustrate key concepts.
What is Logistics Network Design?
Logistics network design refers to the process of designing a network of warehouses, distribution centers, and transportation routes that efficiently move goods from suppliers to customers. It
involves analyzing the flow of goods, identifying bottlenecks, and optimizing the network to minimize costs, reduce transit times, and improve service levels.
Key Components of Logistics Network Design
1. Network Structure: The network structure refers to the physical layout of warehouses, distribution centers, and transportation routes.
2. Demand Forecasting: Accurate demand forecasting is crucial in logistics network design. It helps companies anticipate future demand and plan accordingly.
3. Supply Chain Constraints: Supply chain constraints include factors such as capacity limitations, lead times, and inventory levels that affect the flow of goods.
4. Transportation Modes: Transportation modes refer to the various ways goods are transported, including truckload, less-than-truckload, air freight, and ocean freight.
Formulas for Logistics Network Design
1. Total Cost Formula: The total cost formula helps companies calculate the overall cost of their logistics network:
TC = TC_W + TC_TC + TC_T
Where: TC = Total Cost TC_W = Warehouse costs (rent, utilities, labor) TC_TC = Transportation costs (fuel, tolls, labor) TC_T = Terminal costs (loading/unloading, storage)
1. Service Level Formula: The service level formula helps companies measure the effectiveness of their logistics network:
SL = (On-time delivery rate) x (Order fulfillment rate)
Where: SL = Service Level On-time delivery rate = Number of on-time deliveries / Total number of deliveries Order fulfillment rate = Number of orders fulfilled / Total number of orders
1. Capacity Formula: The capacity formula helps companies determine the maximum amount of goods that can be transported:
C = Q x V
Where: C = Capacity (tons or units) Q = Quantity of goods per shipment V = Vehicle capacity (tons or units)
Optimization Techniques
1. Linear Programming: Linear programming is a mathematical technique used to optimize logistics networks by minimizing costs and maximizing service levels.
2. Integer Programming: Integer programming is a variation of linear programming that involves solving problems with integer variables.
3. Dynamic Programming: Dynamic programming is a technique used to solve complex optimization problems by breaking them down into smaller sub-problems.
Best Practices for Logistics Network Design
1. Collaborate with Stakeholders: Collaborate with suppliers, customers, and other stakeholders to understand their needs and requirements.
2. Use Data Analytics: Use data analytics to analyze demand patterns, supply chain constraints, and transportation modes.
3. Test and Refine: Test different scenarios and refine the logistics network design based on results.
Logistics network design is a critical component of supply chain management that requires careful planning, analysis, and optimization. By understanding key concepts, formulas, and best practices,
companies can optimize their transportation networks, reduce costs, improve service levels, and increase customer satisfaction.
Related articles for ‘Transportation Network Optimization’ :
• Reading: Logistics Network Design in context of Transportation Network Optimization
Calculators for ‘Transportation Network Optimization’ | {"url":"https://blog.truegeometry.com/tutorials/education/6d70a545f49030780b9a88e3b7ece0d7/JSON_TO_ARTCL_Logistics_Network_Design_in_context_of_Transportation_Network_Opti.html","timestamp":"2024-11-11T19:53:52Z","content_type":"text/html","content_length":"20094","record_id":"<urn:uuid:e48ee67e-6eac-43d0-ad9c-2d8dac9568ce>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00875.warc.gz"} |
What is: Population Parameter
What is a Population Parameter?
A population parameter is a numerical value that summarizes a characteristic of a population. It is a fixed value that represents an entire group, such as the mean, median, mode, or standard
deviation of a dataset. Unlike sample statistics, which are derived from a subset of the population, population parameters provide a comprehensive view of the entire population’s attributes.
Understanding population parameters is crucial in statistics, as they serve as the foundation for various statistical analyses and hypothesis testing.
Importance of Population Parameters in Statistics
Population parameters play a vital role in statistical inference, which involves making predictions or generalizations about a population based on sample data. By estimating population parameters,
researchers can draw conclusions about the entire population without needing to collect data from every individual. This process is essential for efficient data analysis, allowing statisticians to
make informed decisions and recommendations based on limited information.
Common Types of Population Parameters
There are several common types of population parameters that statisticians frequently encounter. The most notable include the population mean, which indicates the average value of a dataset; the
population variance, which measures the dispersion of data points; and the population proportion, which reflects the fraction of the population that possesses a certain characteristic. Each of these
parameters provides valuable insights into the population’s structure and behavior, aiding in effective data analysis.
How Population Parameters Differ from Sample Statistics
While both population parameters and sample statistics aim to describe data, they differ significantly in their scope and application. Population parameters are fixed values that describe the entire
population, whereas sample statistics are estimates derived from a smaller subset of that population. This distinction is crucial, as sample statistics can vary from one sample to another, leading to
potential inaccuracies in representing the population. Understanding this difference is essential for accurate data interpretation and analysis.
Methods for Estimating Population Parameters
Estimating population parameters typically involves using sample data to calculate sample statistics, which serve as approximations of the true population values. Common methods for estimating these
parameters include point estimation, where a single value is provided as an estimate, and interval estimation, which offers a range of values within which the population parameter is likely to fall.
These estimation techniques are fundamental in statistics, allowing researchers to make educated guesses about population characteristics.
Challenges in Determining Population Parameters
Determining population parameters can present several challenges, particularly when dealing with large or heterogeneous populations. Issues such as sampling bias, non-response bias, and measurement
errors can significantly impact the accuracy of the estimated parameters. Additionally, in cases where the population is difficult to access or define, obtaining representative samples becomes
increasingly complex, complicating the estimation process and potentially leading to misleading conclusions.
Applications of Population Parameters in Data Science
In the field of data science, population parameters are utilized extensively for predictive modeling, machine learning, and data visualization. By understanding the underlying population
characteristics, data scientists can build more accurate models that reflect real-world scenarios. Furthermore, population parameters inform decision-making processes in various industries, from
healthcare to marketing, by providing insights into consumer behavior, trends, and preferences.
Population Parameters and Hypothesis Testing
Population parameters are integral to hypothesis testing, a statistical method used to determine the validity of a claim about a population. In hypothesis testing, researchers formulate null and
alternative hypotheses based on population parameters and use sample data to assess the likelihood of these hypotheses being true. This process helps in making data-driven decisions and validating
assumptions, thereby enhancing the reliability of research findings.
Conclusion: The Significance of Understanding Population Parameters
Grasping the concept of population parameters is essential for anyone involved in statistics, data analysis, or data science. These parameters not only provide a comprehensive understanding of
population characteristics but also serve as the basis for various statistical methodologies. By accurately estimating and interpreting population parameters, researchers and analysts can enhance
their data-driven decision-making processes, leading to more effective outcomes in their respective fields. | {"url":"https://statisticseasily.com/glossario/what-is-population-parameter-detailed-overview/","timestamp":"2024-11-05T06:06:57Z","content_type":"text/html","content_length":"138753","record_id":"<urn:uuid:337514b2-9a87-4a80-b2ac-1aac7f189d52>","cc-path":"CC-MAIN-2024-46/segments/1730477027871.46/warc/CC-MAIN-20241105052136-20241105082136-00143.warc.gz"} |
Calculating the Payment in an Ordinary Annuity (PMT) | AccountingCoach
Calculating the Payment in an Ordinary Annuity (PMT)
Present value calculations allow us to determine the amount of the recurring payments in an ordinary annuity if we know the other components: present value, interest rate, and the length of the
annuity. Exercises 5 and 6 will demonstrate how to solve for the payment amount.
Exercise #5
On June 1, 2023, Grandma deposited $1,733 into an account to help pay for Emily’s summer volleyball camp for four consecutive years. The first camp is scheduled for June 2024. The account earns 6%
interest per year, compounded annually. The interest earned on the account balance is deposited into the account on May 31 of each year. If Grandma wants the balance to be $0 at the end of the four
years, how much should she withdraw for Emily each June?
The following timeline helps us visualize the facts:
Calculation of Exercise #5 using the PVOA Table
Using the above information and factors from our PVOA Table, we can solve for the unknown payment amount (PMT) as follows:
We use simple algebra and the appropriate present value factor to determine that Grandma can withdraw $500 each June 1 beginning in 2024.
The following table shows the account activity, confirming that $500 can be withdrawn each year for four years:
Exercise #6
Your company plans to borrow $10,152 on January 1, 2024. You would like to repay the loan by making six semiannual loan payments beginning on June 30, 2024. The payments will be equal amounts and
will cover a portion of both the interest (10% per year compounded semiannually) and the principal repayment. The payments will be paid on each June 30 and December 31. What will be the amount of
each of the six payments?
Calculation of Exercise #6 using the PVOA Table
Our first step is to construct a timeline to organize the information:
Using the above information and factors from our PVOA Table, we can solve for the unknown payment amount (PMT) with the following equation:
We use simple algebra and the appropriate present value factor to determine that each of the six payments will be $2,000. The first payment will be made on June 30, 2024 and the final payment will
occur on December 31, 2026.
The following loan amortization schedule shows the amount of interest and principal contained in each loan payment and confirms that the loan will be paid by December 31, 2026.
Send Feedback
Please let us know how we can improve this explanation
No Thanks | {"url":"https://www.accountingcoach.com/present-value-of-an-ordinary-annuity/explanation/5","timestamp":"2024-11-12T02:26:58Z","content_type":"text/html","content_length":"110692","record_id":"<urn:uuid:d3f4d27d-90c1-4186-a6cf-bc7eab6d0af5>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00861.warc.gz"} |
Sequences and Series Revision - AQA Maths A-level - PMT
Sequences and Series
Hakan Y. ★
Queen Mary University of London - MSc Mathematics
Mathematics and physics are my passion in life
£125 / hour
• Qualified Teacher
• Graduate
January mocks on the horizon? Kick-start your revision with our online Mock Preparation Courses. 2-day Pure and 1-day Mechanics and Statistics courses running 22-23rd December and 2-3rd January.
Click here for more information and dates!
Questions by Topic || Worksheets
This topic is included in Papers 1 & 2 for AS-level AQA Maths and Papers 1, 2 & 3 for A-level AQA Maths.
Cheat Sheets
Year 1
Questions by Topic
Mark schemes and examiner reports are available at the end of each document.
Year 2
These are Solomon Press worksheets. They were written for the outgoing specification but we have carefully selected ones which are relevant to the new specification. | {"url":"https://www.physicsandmathstutor.com/maths-revision/a-level-aqa/sequences-and-series/","timestamp":"2024-11-11T00:51:26Z","content_type":"text/html","content_length":"99072","record_id":"<urn:uuid:f24528f2-9eb5-4c12-8312-c434c3c10ac9>","cc-path":"CC-MAIN-2024-46/segments/1730477028202.29/warc/CC-MAIN-20241110233206-20241111023206-00308.warc.gz"} |
Input Variable Selection Methods for Construction of Interpretable Regression Models
The doctoral dissertations of the former Helsinki University of Technology (TKK) and Aalto University Schools of Technology (CHEM, ELEC, ENG, SCI) published in electronic format are available in the
electronic publications archive of Aalto University - Aaltodoc.
Input Variable Selection Methods for Construction of Interpretable Regression Models
Jarkko Tikka
Dissertation for the degree of Doctor of Science in Technology to be presented with due permission of the Faculty of Information and Natural Sciences for public examination and debate in Auditorium
T1 at Helsinki University of Technology (Espoo, Finland) on the 12^th of December, 2008, at 12 noon.
Overview in PDF format (ISBN 978-951-22-9664-4) [1039 KB]
Dissertation is also available in print (ISBN 978-951-22-9663-7)
Large data sets are collected and analyzed in a variety of research problems. Modern computers allow to measure ever increasing numbers of samples and variables. Automated methods are required for
the analysis, since traditional manual approaches are impractical due to the growing amount of data. In the present thesis, numerous computational methods that are based on observed data with subject
to modelling assumptions are presented for producing useful knowledge from the data generating system.
Input variable selection methods in both linear and nonlinear function approximation problems are proposed. Variable selection has gained more and more attention in many applications, because it
assists in interpretation of the underlying phenomenon. The selected variables highlight the most relevant characteristics of the problem. In addition, the rejection of irrelevant inputs may reduce
the training time and improve the prediction accuracy of the model.
Linear models play an important role in data analysis, since they are computationally efficient and they form the basis for many more complicated models. In this work, the estimation of several
response variables simultaneously using the linear combinations of the same subset of inputs is especially considered. Input selection methods that are originally designed for a single response
variable are extended to the case of multiple responses. The assumption of linearity is not, however, adequate in all problems. Hence, artificial neural networks are applied in the modeling of
unknown nonlinear dependencies between the inputs and the response.
The first set of methods includes efficient stepwise selection strategies that assess usefulness of the inputs in the model. Alternatively, the problem of input selection is formulated as an
optimization problem. An objective function is minimized with respect to sparsity constraints that encourage selection of the inputs. The trade-off between the prediction accuracy and the number of
input variables is adjusted by continuous-valued sparsity parameters.
Results from extensive experiments on both simulated functions and real benchmark data sets are reported. In comparisons with existing variable selection strategies, the proposed methods typically
improve the results either by reducing the prediction error or decreasing the number of selected inputs or with respect to both of the previous criteria. The constructed sparse models are also found
to produce more accurate predictions than the models including all the input variables.
This thesis consists of an overview and of the following 7 publications:
1. Timo Similä and Jarkko Tikka. 2005. Multiresponse sparse regression with application to multidimensional scaling. In: Włodzisław Duch, Janusz Kacprzyk, Erkki Oja, and Sławomir Zadrożny (editors).
Proceedings of the 15th International Conference on Artificial Neural Networks: Formal Models and Their Applications (ICANN 2005). Part II. Warsaw, Poland. 11-15 September 2005. Springer-Verlag.
Lecture Notes in Computer Science, volume 3697, pages 97-102. © 2005 by authors and © 2005 Springer Science+Business Media. By permission.
2. Timo Similä and Jarkko Tikka. 2006. Common subset selection of inputs in multiresponse regression. In: Proceedings of the 2006 International Joint Conference on Neural Networks (IJCNN 2006).
Vancouver, BC, Canada. 16-21 July 2006, pages 1908-1915. © 2006 IEEE. By permission.
3. Timo Similä and Jarkko Tikka. 2007. Input selection and shrinkage in multiresponse linear regression. Computational Statistics & Data Analysis, volume 52, number 1, pages 406-422. © 2007 Elsevier
Science. By permission.
4. Jarkko Tikka and Jaakko Hollmén. 2008. Sequential input selection algorithm for long-term prediction of time series. Neurocomputing, volume 71, numbers 13-15, pages 2604-2615. © 2008 Elsevier
Science. By permission.
5. Jarkko Tikka and Jaakko Hollmén. 2008. Selection of important input variables for RBF network using partial derivatives. In: Michel Verleysen (editor). Proceedings of the 16th European Symposium
on Artificial Neural Networks - Advances in Computational Intelligence and Learning (ESANN 2008). Bruges, Belgium. 23-25 April 2008. d-side publications, pages 167-172.
6. Jarkko Tikka. 2007. Input selection for radial basis function networks by constrained optimization. In: Joaquim Marques de Sá, Luís A. Alexandre, Włodzisław Duch, and Danilo Mandic (editors).
Proceedings of the 17th International Conference on Artificial Neural Networks (ICANN 2007). Part I. Porto, Portugal. 9-13 September 2007. Springer-Verlag. Lecture Notes in Computer Science,
volume 4668, pages 239-248. © 2007 by author and © 2007 Springer Science+Business Media. By permission.
7. Jarkko Tikka. 2008. Simultaneous input variable and basis function selection for RBF networks. Neurocomputing, accepted for publication. © 2008 by author and © 2008 Elsevier Science. By
Keywords: data analysis, machine learning, function approximation, multiresponse linear regression, nonlinear regression, artificial neural networks, input variable selection, model selection,
constrained optimization
This publication is copyrighted. You may download, display and print it for Your own personal use. Commercial use is prohibited.
© 2008 Helsinki University of Technology
Last update 2011-05-26 | {"url":"http://lib.tkk.fi/Diss/2008/isbn9789512296644/","timestamp":"2024-11-07T10:25:11Z","content_type":"application/xhtml+xml","content_length":"14646","record_id":"<urn:uuid:08a9839d-f3f1-49c0-b930-e1e62e2b76c9>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00078.warc.gz"} |
Standart Form to Scientific Notation Converter | Decimal Converter
Decimal to Scientific Notation
Scientific Notation
Scientific Notation Converter
Decimal to Scientific Notation
Instructions for use:
• If the decimal is 123445;
• If the decimal is 1.43665;
• If the decimal is 0.12001;
• If the decimal is 2316.5567;
• If the decimal is 0.00114011478;
Decimal Number:
To transform a number into scientific notation, it is necessary to understand what are base powers 10. From the definition of power, we have to:
10^0 = 1
10^1 = 10
10^2 = 10 · 10 = 100
10^3 = 10 · 10 · 10 = 1.000
10^4 = 10 · 10· 10· 10 = 10.000
10^5 = 10· 10· 10· 10· 10 = 100.000
Note that as the exponent increases, so does the number of zeros in the answer. Also see that the number in the exponent is the number of zeros that we have on the right. This is equivalent to saying
that the number of decimal places moved to the right is equal to the power exponent. For example, 10^10 is equal to 10,000,000,000.
Another case that we must analyze is when the exponent is a negative number.
Having reviewed the idea of base 10 power, we will now understand how to transform a number into scientific notation. It is important to emphasize that, regardless of the number, to write it in the
form of scientific notation, we must always leave it with a significant figure.
Thus, to write a number in the form of scientific notation, the first step is to write it in the form of a product, so that a base 10 power (decimal form) appears. See the examples:
a) 0.0000034 = 3.4 x 0.000001 = 3.4 x 10^– 6
b) 134,000,000,000 = 134 x 1,000,000,000 = 134 x 10^9
We agree that this process is not at all practical, so, in order to facilitate it, note that when we “walk” with the comma to the right, the base 10 exponent decreases the number of decimal places
walked. Now, when we “walk” decimal places to the left, the base 10 exponent increases the number of walked places.
In summary, if the zeros are to the left of the number, the exponent is negative and coincides with the number of zeros; if the zeros appear to the right of the number, the exponent is positive and
also coincides with the number of zeros.
a) The distance between the planet Earth and the Sun is 149,600,000 km.
Observe the number and see that, to write it in scientific notation, it is necessary to “walk” with the decimal point eight decimal places to the left, so the base 10 exponent will be positive:
149,600,000 = 1.49610^8
b) The approximate age of planet Earth is 4,543,000,000 years.
Similarly, see that, to write the number in scientific notation, it is necessary to move 9 decimal places to the left, then:
4.543,000,000 = 4.54310^9
c) The diameter of an atom is on the order of 1 nanometer, that is, 0.0000000001.
To write this number using scientific notation, we must move 10 decimal places to the right, so:
0,0000000001 = 1 x 10^-10
This website was created to help students and curious people to solve problems with scientific notation. Addition, subtraction, division, multiplication and conversions.
Scientific-Notation 2020 © All Rights Reserved | {"url":"https://scientific-notation.com/standart-form-to-scientific-notation/","timestamp":"2024-11-03T06:53:37Z","content_type":"text/html","content_length":"94006","record_id":"<urn:uuid:dfb9b668-d330-4f36-8752-131f46ba08c5>","cc-path":"CC-MAIN-2024-46/segments/1730477027772.24/warc/CC-MAIN-20241103053019-20241103083019-00484.warc.gz"} |
Carbon Dating Graph
The function f(x)=1000*(1/2)^(x/6000) is the formula to find the half life of a sample of carbon on planet frisbee. The initial sample has 1000 atoms with 10% of decaying carbon 14 molecules. On
planet Frisbee molecules of carbon 14 has a half life of 6000 years. In this equation, the total amount of carbon in the atmosphere is labeled on the y-axis and time is labeled on the x-axis. In
order to discover how much time has past since the initial amount, set the equation equal to the new sample size. The amount of samples you have and the carbon dating left, is found by the
intersection of the c(x) function and line a, which gives you point A (time, sample). | {"url":"https://stage.geogebra.org/m/QYxFWxzQ","timestamp":"2024-11-03T00:03:37Z","content_type":"text/html","content_length":"90110","record_id":"<urn:uuid:4322884c-22e7-499b-9dd7-1229622d488f>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00471.warc.gz"} |
Abstract Algebra
Hw4 Problem 6
Return to Homework 4, Homework Problems, Glossary, Theorems
Problem 6
Let $G$ be a group and let $g \in G$ be fixed. Show that the map $\gamma_{g}: G \rightarrow G$ defined by $\gamma_{g}(x) = gxg'$ is an isomorphism of $G$ with itself.
Let $x,y\in G$. Now $\gamma_{g}(x)=gxg'$ and $\gamma_{g}(y)=gyg'$. Suppose
through cancellation. Hence $\gamma_{g}$ is one-to-one.
Let $y\in G$. Now, we want to find $x\in G$ such that
\gamma_{g}(x)=y\\ gxg'=y
By subsituting $x=g'yg$, we get
g(g'yg)g'=y\\ (gg')y(gg')=y\\ y=y
Hence, $\gamma_{g}$ is onto.
\gamma_{g}(x*y)=g(x*y)g'\\ =g*x*(g'*g)*y*g,
where $g'*g=e=1$,
Hence, $\gamma_{g}$ is a homomorphism.
Therefore, $\gamma_{g}:G\rightarrow G$ defined by $\gamma_{g}(x)=gxg'$ is an isomorphism of $G$ with itself. | {"url":"http://algebra2014.wikidot.com/hw4-problem-6","timestamp":"2024-11-09T22:54:07Z","content_type":"application/xhtml+xml","content_length":"29644","record_id":"<urn:uuid:1ea8ac42-61e3-4b1e-8cb2-af3ba5d89621>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.10/warc/CC-MAIN-20241109214337-20241110004337-00323.warc.gz"} |
Arithmetic By R S Agrawal Pdf Download
EPUB Arithmetic By R S Agrawal PDF Book is the book you are looking for, by download PDF Arithmetic By R S Agrawal book you are also motivated to search from other sources
Anshika Agrawal; Stuti Agrawal S0601CALIFORNIA STATE SCIENCE FAIR 2017 PROJECT SUMMARY Ap2/17 Name(s) Project Number Project Title Abstract Summary Statement Help Received Anshika Agrawal; Stuti
Agrawal The Most Effective Antacid S0601 Objectives/Goals Several Antacids Were Tested In This Experiment To Determine The PH Range In Which They Show Buffer Activity. 3th, 2024Arithmetic By R S
AgrawalRs Agarwal 9788121907415 Objective Arithmetic Abebooks R S Abebookscom Objective Arithmetic 9788121907415 By R S Aggarwal And A Great Selection Of Similar New' 'ARITHMETIC BY R S AGRAWAL
PDFSDOCUMENTS2 COM MAY 1ST, 2018 - ARITHMETIC BY R 1th, 2024STRAND C: Consumer Arithmetic Unit 9 Consumer ArithmeticMEP Jamaica: STRAND C UNIT 9 Consumer Arithmetic: Student Text 8 Exercises 1. Anna
Earns J$21 000 Per Week. She Is Given A 3% Pay Increase. How Much Does She Now Earn Per Week? 2. Mrs Ray Has A Job For Which The Basic Pay Is $5.60 Per Hour, And The Overtime Rate Of Pay Is $8.40 Per
Hour. D 3th, 2024.
ARITHMETIC MEAN AND THE N TERM OF AN ARITHMETIC …Arithmetic Sequence Finds The Nth Term Of An Arithmetic Sequence Lists Down The First Few Terms Of An Arithmetic Sequence Given The General Term And
Vice-versa Solves Word Problems Involving Arithmetic Mean Applies The Concepts Of Mean And The Nth Term Of An Arithmetic Sequence 3th, 2024History Of Arithmetic Coding Lecture 9: Arithmetic Coding
...Arithmetic Coding Provides A Practical Way Of Encoding A Source In A Very Nearly Optimal Way. Even Faster Arithmetic Coding Methods That Avoid Multiplies And Divides Have Been Devised. However:
It’s Not Necessarily The Best Solution To Every Problem. Sometimes Hu Man Coding Is Faster And Almost As Good. Other Codes May Also Be Useful. ... 3th, 2024Arithmetic Sequences Worksheet #2 1) For
The Arithmetic ...Arithmetic Sequences Worksheet #2 1) For The Arithmetic Sequence 42, 32, 22, 12… A. Find The 5 Th, 6th, And 7th Terms B. Find The Formula For The Nth Term. C. Find The 18th Term In
T 1th, 2024.
Engineering Mathematics 1 By Dc Agrawal | Hsm1.signorityEngineering Mathematics Is A Relatively Short, Orderly Text That Is Organized For Maximum Comprehension. The Text Opens With An Introduction To
Complex Variables Because They Offer Powerful Techniques For Understanding And Computing Fourier, Laplace And Z-transforms. This Book Contains A 1th, 2024Hotel Management System - Prof. Dipesh
AgrawalThe Hotel Management System Is A Tool For Booking The Rooms Of Hotel Through Online By The Customer. It Provides The Proper Management Tools And Easy Access To The Customer Information. 1.1
Purpose This Hotel Management System Software Requirement Specification (SRS) Main Objective Is To 2th, 2024Cute Love Story By Nidhi Agrawal [PDF]Cute Love Story By Nidhi Agrawal Media Publishing
EBook, EPub, Kindle PDF View ID 3321300fc Aug 17, 2020 By Danielle Steel Sensible And A Passionate Lover Neeraj Loves Aakriti Who Is Simple Innocent But Very Naughty They 1th, 2024.
Rs Agrawal Math Class 12 Solution - Thearmenianpalace.comRead Online Rs Agrawal Math Class 12 Solution Rs Agrawal Math Class 12 Solution|helveticabi Font Size 10 Format Eventually, You Will Extremely
Discover A New Experience And Deed By Spending More Cash. Still When? Realize You Agree To That You Require To Get Those Every Needs Like Having Significantly Cash? Why Don't You Try To Acquire
Something Basic In The Beginning? That's Something That ... 2th, 2024Rs Agrawal Math Solutions Of 8 ClassJune 20th, 2018 - Download The Solution Of RS Aggarwal Mathematics Class 10 11 And 12 Online
For Free RS Aggarwal Solution For Class 8 RS Aggarwal Solution For Class 7''RS Aggarwal Maths Class10 Solution Apps On Google Play June 12th, 2018 - RS Agrawal Class 10 Maths Solutions App Is
Specially Designed For The CBSE And 3th, 2024Dc Agrawal Engineering Maths - Stocksgazette.comArihant Objective Mathematics For Engineering Entrances By A M Aggarwal Sir | Book Review | EduTube By Edu
Tube 2 Months Ago 10 Minutes, 32 Seconds 544 Views Here Is A , Book , Review Of Arihant Objective , Mathematics , For , Engineering , Entrances By Amit M , Aggarwal , Sir. 1th, 2024.
R S Agrawal Book Verbal Nonverbal ReasoningTags: A Modern Approach To Verbal & Non Verbal Reasoning Analytical Reasoning By Mk Pandey Pdf Arihant Reasoning Book Arihant Verbal And Non Verbal
Reasoning Book Pdf Free Download Logical Reasoning Books Non Verbal Reasoning Tricks Pdf Rs Aggarwal Logical Reasoning Pdf Quora Verbal And Non Verbal Reasoning By R S Agarwal Pdf Quora 3th,
2024AGRAWAL Et Al: MODELLING COVID-19 & IMPACT OF ...1 AGRAWAL Et Al: MODELLING COVID-19 & IMPACT OF INTERVENTIONS Original Article Modelling The Spread Of The SARS-CoV-2 Pandemic - Impact Of
Lockdowns & Interventions Manindra Agrawal1, Madhuri Kanitkar3 & M. Vidyasagar2 1Department Of Computer Science & Engineering, IIT Kanpur, Kanpur, Uttar Pradesh, 2Department Of Artificial
Intelligence, IIT Hyderabad, Hyderabad, Telangana, India & 3Dy 1th, 2024Vivek Agrawal, MD - PatientPopArthroscopic Coracoclavicular Ligament Reconstruction Utilizing A Semitendinosis Graft And
Titanium Flip Button Tension Band Construct. Indiana Orthopaedic Journal. 2010. Volume 4: P79-83. Vivek Agrawal, MD Cover Image: Arthroscopic Repair Of Large Bony Bankart Lesion. Arthroscopy: The
Journal Of Ar 2th, 2024.
Jagdish Agrawal Professor Of Marketing/Dean College Of ..."Toward Understanding The Measurement Of Market Efficiency," Journal Of Public Policy And Marketing, 15(2), Fall, Pp. 167-184. (Winner Of The
Outstanding Journal Of Public Policy And Marketing Article Award, 1995-19 2th, 2024R S Agrawal Class 9th Math - Universitas SemarangRs Aggarwal Class 10 Solutions Free Pdf Download For Maths. R S
Aggarwal Mathematics Solutions Class 9 Pdf Download. Rs Aggarwal Class 9 Solutions Class 6 To 12 Byju S. Rs Aggarwal Maths Book Class 10 Pdf The Pyrex Kid. R S Aggarwal Class 9 Maths Zigb 1th, 2024Rs
Agrawal Math - Kidbridge.comRS Aggarwal 2020 Textbook Solutions For Class 10 Math Rs Aggarwal Mathematics Solution 2020 – 21. Here We Are Providing Latest Edition Of Rs Aggarwal Math Book Solution
From Class 1 To 09. 2020 – 21 Session R.S. Aggarwal Bo 3th, 2024.
Rs Agrawal Mathematics Class 12Rs Agrawal Mathematics Class 12 Author: Thepopculturecompany.com-2021-04-14T00:00:00+00:01 Subject: Rs Agrawal Mathematics Class 12 Keywords: Rs, Agrawal, Mathematics,
2th, 2024Aayushman Hospital & Res. Center Agrawal Nursing Home Dr ...Chawla Nursing Home Dr. R.S. Chawla Masiha Gunj, Sipri Bazar Jhansi Ph. 2443607 M. 9415113283 Kailash Jain Hospital Dr. Rajat Jain
Opp. M.L.B. Medical College Kanpur Road Jhansi M. 9415031714 Kamla Hospital Dr. Vinod Misurya Opp.gate No.1 Gate No.1, Kanpur Rd Jhansi Ph. 2321159, 1th, 2024Engineering Drawing By Basant
AgrawalEngineering Drawing - Basant Agrawal - Google Books About The Book: Engineering Drawing: 2nd Edition This Textbook On Engineering Drawing Is Designed As A Basic Textbook For All First-year
Engineering Students. It Aims At Simplifying The Study Of Engineering Drawing By Emphasizing On The Ba 2th, 2024.
JAGDISH SINGH KHEHAR CJI, R K AGRAWAL J, D Y …JAGDISH SINGH KHEHAR CJI, R K AGRAWAL J, D Y CHANDRACHUD ,and S ABDUL NAZEER J (read By Dr D Y CHANDRACHUD, J): This Judgment Has Been Divided Into
Sections To Facilitate Analysis. They Are : A The Reference B Decision In M P Sharma C Decision In Kharak Singh 2th, 2024Rs Agrawal Quantitative Aptitude[2020*Latest] RS Aggarwal Reasoning Book PDF
Free Download Rs Agarawal Quantitative Pdf Quantitative Aptitude Book Is Highly Recommended Book For All Exams In Which Quant Questions Are Asked. This Book Is Important For
MBA,CAT,Banking,NDA,CDS,SBI,RBI,Railways,HSSC, HPSC,MPSC, KPSC,MPPSC,RAS,H 2th, 2024Quantitative Aptitude R S AgrawalDownload RS Aggarwal Quantitative Aptitude Book PDF 2019 RS Aggarwal Is An Author
Who Has Authored More Than 75 Books From Nursery To M. Sc And Has Also Authored Books For Competitive Examinations Ranging From Clerical Exams To I. A. S Level Of Exams. Aptitude, Math Books - Dr.
R.S. Aggarwal 1th, 2024.
RAKESH AGRAWAL - Purdue University- Participant, U.S. Department Of Energy’s Workshop On Separations (2005) - Member, NSF Panel On Process Design And Control (2005) - Member, AIChE Energy Commission
(2005-2007) - Member, NRC Board On Energy And Environmental Sciences (2005-11) - Member, NRC Panel On DOE’s Inte 1th, 2024 | {"url":"https://isco-iss.faperta.unpad.ac.id/download/arithmetic-by-r-s-agrawal.html","timestamp":"2024-11-02T06:34:20Z","content_type":"text/html","content_length":"20968","record_id":"<urn:uuid:e4e70017-56df-41a1-ae0e-67600709f324>","cc-path":"CC-MAIN-2024-46/segments/1730477027677.11/warc/CC-MAIN-20241102040949-20241102070949-00264.warc.gz"} |
Valence can control the nonexponential viscoelastic relaxation of multivalent reversible gels
Many robotics problems are formulated as optimization problems. However, most optimization solvers in robotics are locally optimal and the performance depends a lot on the initial guess. For
challenging problems, the solver will often get stuck at poor loc ... | {"url":"https://graphsearch.epfl.ch/en/publication/311601","timestamp":"2024-11-07T20:35:59Z","content_type":"text/html","content_length":"106809","record_id":"<urn:uuid:81d2e3ca-fe61-440b-b189-48d045a8bd07>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00148.warc.gz"} |
Automatic Analysis of Series of Trials Save Options • Genstat v21
Use this to save results from an automatic analysis of series of trials in Genstat data structures. The individual trial or the meta analysis of all trials must be selected using the Model to save
results for dropdown list.
1. After selecting the appropriate boxes, type the identifiers of the data structures into the corresponding In: fields.
Model to save results for
The results are saved for the selected trial or the meta analysis. The meta analysis option will be available only if the Run a meta analysis to combine information across trials option has been
chosen in the Options dialog,
Fitted values Variate Fitted values from the analysis
Residuals Variate Residuals from the analysis
Residual deviance Scalar Residual deviance from fitting the full fixed model
Residual degrees of freedom Scalar Residual degrees of freedom after fitting the full fixed model
Predicted means Table Predicted means for the specified term
Variance matrix of means Symmetric Variance-covariance matrix for the predicted means
Standard error of difference Symmetric Standard errors of differences between the predicted means
between means matrix
Estimated effects Table Estimated regression coefficients for the specified term
Variance matrix of effects Symmetric Variance-covariance matrix for the estimated effects
Standard error of difference Symmetric Standard errors of differences between the estimated parameters of each term
between effects matrix
Akaike information Scalar Akaike information coefficient to assess the random model
coefficient (AIC)
Schwarz information Scalar Schwarz information coefficient to assess the random model
coefficient (SIC)
Unit variance-covariance Symmetric Matrix giving the fitted variance-covariance relationship between all the units. Note: this may be very large. This uses the random and residual terms, but
matrix matrix not spline terms. It cannot be formed if the model contains sparse inverse covariance matrices.
Display in spreadsheet
Select this to display the results in a new spreadsheet window.
Export to file
Save selected results into a Genstat or Excel file using the Save REML results in a spreadsheet dialog
Model terms for effects and means
Specifies the model term for which tables of estimated effects and predicted means are to be saved. The string ‘Constant’ may be used to save results for the constant term. If tables of effects or
means are required for more than one model term, this menu should be invoked once for each term, changing the specification of the model term each time.
Method for residuals
The list allows selection from type of residuals that can be calculated.
Combine all random terms Use the residuals combined from all random terms.
Final random term only Use the residuals from the final random term.
Standardized residuals from all random terms Uses standardized residuals after combining them from all random terms.
Standardized residuals from final random term only Uses standardized residuals from the final random term.
Combine all random terms, excluding spline terms Use the residuals combined from all random terms except spline terms.
See also | {"url":"https://genstat21.kb.vsni.co.uk/knowledge-base/automatic-series-save/","timestamp":"2024-11-05T08:48:16Z","content_type":"text/html","content_length":"43611","record_id":"<urn:uuid:ceaeb92d-742a-4d1b-af34-4a93387cb763>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00324.warc.gz"} |
2D Bin Packing Problem Solver
This online calculator tries to solve an offline 2D bin packing problem using Maximal Rectangles heuristic algorithm
This content is licensed under Creative Commons Attribution/Share-Alike License 3.0 (Unported). That means you may freely redistribute or modify this content under the same license conditions and
must attribute the original author by placing a hyperlink from your site to this work https://planetcalc.com/8449/. Also, please do not modify any references to the original work (if any) contained
in this content.
Articles that describe this calculator
The file is very large. Browser slowdown may occur during loading and creation.
The file is very large. Browser slowdown may occur during loading and creation.
The file is very large. Browser slowdown may occur during loading and creation.
Calculators used by this calculator
Similar calculators
PLANETCALC, 2D Bin Packing Problem Solver | {"url":"https://planetcalc.com/8449/?license=1","timestamp":"2024-11-04T07:06:49Z","content_type":"text/html","content_length":"48853","record_id":"<urn:uuid:d37a3850-c5a0-4164-a1fc-5d39d184eb15>","cc-path":"CC-MAIN-2024-46/segments/1730477027819.53/warc/CC-MAIN-20241104065437-20241104095437-00261.warc.gz"} |
Is the body mass index a good measure of health? - ASDAH
by Jon Robison, PhD, MS
The BMI is a measure of height and weight – specifically weight divided by height squared. It is the predominate measure by which health professionals and governments determine what is and is not a
“healthy weight” for a particular individual, thereby informing them if they are “at risk” for morbidity and premature mortality. In reality, however, BMI is not only not a good measure of health, it
is actually not a measure of health at all.
The formula itself was created around 1850 by the brilliant Belgian mathematician, astronomer and statistician Lambert Adolphe Jacques Quetelet – and appropriately named The Quetelet Index. Dr.
Quetelet was not a health professional and he was not interested in fat or health risk. He was fascinated by the idea of using statistics to draw conclusions about societies – and the “average man.”
Some of us will remember the 20^th century figure portraying the average family as having 2.4 children. Not only was his formula not health related, it was never meant to be used on individuals, only
on populations. As Stanford University mathematician Keith Devlin (the Math Guy on NPR’s Weekend Edition) recently commented, “the absurdity of using statistical formulas to make any claims about a
single individual is made clear by the old joke about the man who had his head in the refrigerator and his feet in the fire: on the average he felt fine!” A wonderful expose of the inherent
mathematical absurdities associated with the use of this formula can be found in Dr. Devlin’s article Do You Believe in Fairies, Unicorns or the BMI?
The Quetelet Index remained as such until 1972 when Dr. Ancel Keys appropriated it as a proxy for body fat percentage (renaming it the Body Mass Index) in an article in The Journal of Chronic
Diseases. The rest, as they say, is history.
So the formula is being used for something for which it was never intended and in a manner that is mathematically indefensible. Are there any other problems? We have been told that the BMI serves as
a measure of health because it is a good indicator of body fatness, and therefore a good predictor of health problems and premature mortality. Is this true or isn’t it?
Statistician Dr. Gregory Kline examined this question in an article in The Healthy Weight Journal in 2001. He found that while the BMI can give us a pretty accurate average body fat percentage for a
large group of people, on an individual level it is a poor predictor of body fat percentage. For example, Kline showed that in a sample of 1,000 people from Central Massachusetts, for a BMI of 35 the
average percent body fat was around 32. However, individuals with a BMI of 35 had a range of body fat percentages from 18 to 47! (Remember the guy with his head in the fridge?) Dr. Kline also
encountered the same problem when he used BMI to predict individual fitness or blood pressure concluding that:
“Using BMI to assess degree of adiposity and, more importantly, health risk for an individual is questionable and unwarranted due to the magnitude of error in prediction.”
But wait, there is more! Not only is BMI not a good predictor of body fat, fitness, or blood pressure, it is also not good at predicting mortality or morbidity. In 2006 a large systematic review of
the relationship between bodyweight, mortality and coronary artery disease in the esteemed British medical journal The Lancet concluded that BMI was a poor predictor of either. In an accompanying
editorial, another physician researcher wrote:
“BMI can definitely be left aside as a clinical and epidemiological measure of cardiovascular disease for both primary and secondary prevention.”
Two years later, back in the States in the Archives of Internal Medicine, Wildman et al. analyzed a representative sample of the US population and found that using BMI as a proxy for health resulted
in misdiagnosing 51% of the healthy people as unhealthy. Dr David Haslam, clinical director of Britain’s National Obesity Forum got it right when he said; “it’s now widely accepted that the BMI is
useless for assessing the healthy weight of individuals.”
So, there we have it. The measure we are using for the supposedly most serious health problem facing us today is mathematically bereft, lacks a theoretical foundation and is a poor indicator of
health. According to the Math Guy this realty should come as no surprise as:
“The BMI was formulated, by a mathematician, not a medical physician, to provide a simple, easy-to-apply mathematical formula to give a broad, society-level measure of weight issues. It has
absolutely no scientific or medical basis. It is based purely on a crude statistical analysis. It measures a general society trend, it does not predict.”
The sooner the health establishment gets its head out of the sand and owns up to this reality the better. I probably wouldn’t bet much on that happening anytime soon. For now, however, it is at least
somewhat comforting to know that the people who really know about these things are willing to lead the way – again quoting the Math Guy:
“Since the entire sorry saga of the BMI was started by a mathematician – one of us – I think the onus is on us, as the world’s experts on the formulation and application of mathematical formulas,
to start to eradicate this nonsense and demand the responsible use of our product.”
Come on health professionals – Now it’s our turn! | {"url":"https://asdah.org/is-the-body-mass-index-a-good-measure-of-health/","timestamp":"2024-11-10T00:13:04Z","content_type":"text/html","content_length":"111601","record_id":"<urn:uuid:65313339-0052-49fd-b18a-d104c65d043c>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.10/warc/CC-MAIN-20241109214337-20241110004337-00384.warc.gz"} |