Maxwell’s equations review (plus magnetic sources and currents)
January 28, 2015 ece1229 constitutive relations, continuity equation, divergence theorem, ece1229, Green's function, impulse response, linear time invariant, magnetic charge, magnetic source, Maxwell's equations, phasor, Stokes' theorem
These are notes for the UofT course ECE1229, Advanced Antenna Theory, taught by Prof. Eleftheriades, covering ch. 3 [1] content.
Unlike most of the other classes I have taken, I am not attempting to take comprehensive notes for this class. The class is taught from slides that match the textbook so closely that there is little value in taking notes that just replicate the text. Instead, I am annotating my copy of the textbook with small details. My notes collection for the class will contain musings on details that were unclear, or in some cases details that were provided in class but are not in the text (and are too long to pencil into my book).
Maxwell’s equation review
For reasons that are yet to be seen (and justified), we work with a generalization of Maxwell’s equations to include
electric AND magnetic charge densities.
\spacegrad \cross \boldsymbol{\mathcal{E}} = - \boldsymbol{\mathcal{M}} - \PD{t}{\boldsymbol{\mathcal{B}}}
\spacegrad \cross \boldsymbol{\mathcal{H}} = \boldsymbol{\mathcal{J}} + \PD{t}{\boldsymbol{\mathcal{D}}}
\spacegrad \cdot \boldsymbol{\mathcal{D}} = \rho
\spacegrad \cdot \boldsymbol{\mathcal{B}} = \rho_m.
Assuming phasor relationships of the form \( \boldsymbol{\mathcal{E}} =
\text{Real} \lr{ \BE(\Br) e^{j \omega t}} \) for the fields and the currents, these reduce to
\spacegrad \cross \BE = - \BM - j \omega \BB
\spacegrad \cross \BH = \BJ + j \omega \BD
\spacegrad \cdot \BD = \rho
\spacegrad \cdot \BB = \rho_m.
In engineering the fields
• \( \BE \) : Electric field intensity (V/m, Volts/meter).
• \( \BH \) : Magnetic field intensity (A/m, Amperes/meter).
are designated primary fields, whereas
• \( \BD \) : Electric flux density (or displacement vector) (C/m^2, Coulombs/square meter).
• \( \BB \) : Magnetic flux density (Wb/m^2, Webers/square meter).
are designated the induced fields. The currents and charges are
• \( \BJ \) : Electric current density (A/m^2).
• \( \BM \) : Magnetic current density (V/m^2).
• \( \rho \) : Electric charge density (C/m^3).
• \( \rho_m \) : Magnetic charge density (Wb/m^3).
Because \( \spacegrad \cdot \lr{ \spacegrad \cross \Bf } = 0 \) for any
(sufficiently continuous) vector \( \Bf \), divergence relations between the
currents and the charges follow from \ref{eqn:chapter3Notes:100}
0 = \spacegrad \cdot \lr{ \spacegrad \cross \BE }
= -\spacegrad \cdot \BM - j \omega \spacegrad \cdot \BB
= -\spacegrad \cdot \BM - j \omega \rho_m,
and
0 = \spacegrad \cdot \lr{ \spacegrad \cross \BH }
= \spacegrad \cdot \BJ + j \omega \spacegrad \cdot \BD
= \spacegrad \cdot \BJ + j \omega \rho.
These are the phasor forms of the continuity equations
\spacegrad \cdot \BM = - j \omega \rho_m
\spacegrad \cdot \BJ = - j \omega \rho.
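The vector identity behind these continuity equations can be spot-checked symbolically. The following sketch uses Python's sympy (not part of the original notes), with arbitrary hypothetical field components:

```python
import sympy as sp
from sympy.vector import CoordSys3D, curl, divergence

# Symbolic spot-check of the identity div(curl f) = 0 used above.
# The field components below are arbitrary (hypothetical) smooth choices.
N = CoordSys3D("N")
F = (N.x**2 * N.y) * N.i + sp.sin(N.y * N.z) * N.j + (sp.exp(N.x) * N.z) * N.k
div_curl_F = sp.simplify(divergence(curl(F)))  # simplifies to 0
```

Any sufficiently smooth field gives zero here, which is exactly why taking the divergence of the two curl equations yields relations between the currents and the charges.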
Integral forms
The integral forms of Maxwell’s equations follow from Stokes’ theorem and the divergence theorem. Stokes’ theorem relates the integral of the curl of a field over a surface (with outwards normal differential area element) to the line integral of the field around the boundary of that surface, and applies to any surface with that boundary
\iint_S d\BA \cdot \lr{\spacegrad \cross \Bf}
= \oint_{\partial S} \Bf \cdot d\Bl.
The divergence theorem, a special case of the general Stokes’ theorem, is
\iiint_{V} \spacegrad \cdot \Bf dV
= \iint_{\partial V} \Bf \cdot d\BA,
where the integral is over the surface of the volume, and the area element of the bounding integral has an outwards normal orientation.
See [5] for a derivation of this and various generalizations.
Applying these to Maxwell’s equations gives
\oint d\Bl \cdot \BE = - \iint d\BA \cdot \lr{ \BM + j \omega \BB }
\oint d\Bl \cdot \BH = \iint d\BA \cdot \lr{ \BJ + j \omega \BD }
\iint_{\partial V} d\BA \cdot \BD = \iiint \rho dV
\iint_{\partial V} d\BA \cdot \BB = \iiint \rho_m dV
Constitutive relations
For linear isotropic homogeneous materials, the following constitutive relations apply
• \( \BD = \epsilon \BE \)
• \( \BB = \mu \BH \)
• \( \BJ = \sigma \BE \), Ohm’s law.
• \( \epsilon = \epsilon_r \epsilon_0\), is the permittivity (F/m, Farads/meter).
• \( \mu = \mu_r \mu_0 \), is the permeability (H/m, Henries/meter), \( \mu_0 = 4 \pi \times 10^{-7} \).
• \( \sigma \), is the conductivity (\( \inv{\Omega m}\), where \( 1/\Omega \) is a Siemens.)
In AM radios you will see ferrite cores used with the inductors, which introduce a non-unit \( \mu_r \). This is done to increase the radiation resistance.
Boundary conditions
For a good electric conductor, \( \BE = 0 \).
For a good magnetic conductor, \( \BB = 0 \).
(more on class slides)
Linear time invariant
Linear time invariant means that the impulse response \( h(t,t') \) is a function of only the difference in times: \( h(t,t') = h(t-t') \).
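Time invariance is easy to illustrate numerically: when the kernel depends only on \( t - t' \), shifting the input shifts the output by the same amount. A small numpy sketch (the kernel values are arbitrary, hypothetical choices):

```python
import numpy as np

# Discrete illustration of time invariance: a convolution kernel that
# depends only on t - t' maps a shifted impulse to a shifted response.
h = np.array([1.0, 0.5, 0.25])   # hypothetical impulse response
u = np.zeros(10)
u[2] = 1.0                        # impulse at sample 2
y1 = np.convolve(u, h)[:10]

u_shifted = np.roll(u, 3)         # impulse at sample 5
y2 = np.convolve(u_shifted, h)[:10]
# np.roll(y1, 3) matches y2 sample for sample
```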
Green’s functions
For electromagnetic problems the impulse function sources \( \delta(\Br - \Br') \) also have a direction, and can yield any of \( E_x, E_y, E_z \). A tensor impulse response is required.
Some overview of an approach that uses such tensor Green’s functions is outlined on the slides. It gets really messy since we require four tensor Green’s functions to handle electric and magnetic
current and charges. Because of this complexity, we don’t go down this path, and use potentials instead.
In \S 3.5 [1] and the class notes, a verification of the spherical wave form for the Helmholtz Green’s function was developed. This was much simpler than the same verification I did in [4]. Part of the reason is that I worked in Cartesian coordinates, which made things much messier. The other part is that, when treating a neighbourhood of \( \Abs{\Br - \Br'} \sim 0 \), I verified the convolution, whereas Prof. Eleftheriades argues that a verification that \( \int \lr{\spacegrad^2 + k^2} G(\Br, \Br') dV' = 1\) is sufficient. Balanis, on the other hand, argues that the solution for \( k \ne 0 \) must just be the solution for \( k = 0 \) (i.e. the Poisson solution) multiplied by the \( e^{-j k r} \) factor.
Note that back when I did that derivation, I used a different sign convention for the Green’s function, and in QM we used a positive sign instead of the negative in \( e^{-j k r } \).
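As a quick symbolic aside (Python/sympy, not part of the course notes), the spherical wave form does satisfy the homogeneous Helmholtz equation away from the origin, which is the easy part of any of these verifications:

```python
import sympy as sp

# Check that G = exp(-j k r) / (4 pi r) satisfies (lap + k^2) G = 0 for
# r != 0, using the radial Laplacian (1/r) d^2(r G)/dr^2 for a field
# that depends only on r.
r, k = sp.symbols("r k", positive=True)
G = sp.exp(-sp.I * k * r) / (4 * sp.pi * r)
lap_G = sp.diff(r * G, r, 2) / r
helmholtz_residual = sp.simplify(lap_G + k**2 * G)  # 0 away from r = 0
```

The delta-function normalization at \( \Br = \Br' \) is the part that requires the convolution (or volume-integral) argument discussed above.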
• Phasor frequency terms are written as \( e^{j \omega t} \), not \( e^{-j \omega t} \), as done in physics. I didn’t recall that this was always the case in physics, and wouldn’t have assumed it.
This is the case in both [3] and [2]. The latter however, also uses \( \cos(\omega t – k r) \) for spherical waves possibly implying an alternate phasor sign convention in that content, so I’d be
wary about trusting any absolute “engineering” vs. physics sign convention without checking carefully.
• In Green’s functions \( G(\Br, \Br’) \), \( \Br \) is the point of observation, and \( \Br’ \) is the point in the convolution integration space.
• Both \( \BM \) and \( \BJ_m \) are used for magnetic current sources in the class notes.
[1] Constantine A Balanis. Antenna theory: analysis and design. John Wiley \& Sons, 3rd edition, 2005.
[2] David Jeffrey Griffiths and Reed College. Introduction to electrodynamics, chapter {Electromagnetic Waves}. Prentice hall Upper Saddle River, NJ, 3rd edition, 1999.
[3] JD Jackson. Classical Electrodynamics, chapter {Simple Radiating Systems, Scattering, and Diffraction}. John Wiley and Sons, 2nd edition, 1975.
[4] Peeter Joot. Quantum Mechanics II., chapter {Verifying the Helmholtz Green’s function.} peeterjoot.com, 2011. URL https://peeterjoot.com/archives/math2011/phy456.pdf. [Online; accessed
[5] Peeter Joot. Exploring physics with Geometric Algebra, chapter {Stokes theorem}. peeterjoot.com, 2014. URL https://peeterjoot.com/archives/math2009/gabook.pdf. [Online; accessed 28-January-2015].
{"url":"https://peeterjoot.com/2015/01/28/","timestamp":"2024-11-11T04:39:47Z","content_type":"text/html","content_length":"101094","record_id":"<urn:uuid:606838fc-3c5f-4368-be9b-23fa9955b162>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00483.warc.gz"}
st: RE: Re: st: Re: st: Re: st: RE: Truncated sample or Heckman selection
Notice: On April 23, 2014, Statalist moved from an email list to a forum, based at statalist.org.
st: RE: Re: st: Re: st: Re: st: RE: Truncated sample or Heckman selection
From "Millimet, Daniel" <[email protected]>
To "[email protected]" <[email protected]>
Subject st: RE: Re: st: Re: st: Re: st: RE: Truncated sample or Heckman selection
Date Fri, 5 Oct 2012 03:09:15 +0000
The same data-generating process and censoring logic apply even to variables that "cannot" be, say, less than zero. Suppose we assume labor supply is determined by
Y=xb+e, e~N(0,s2)
But, since labor supply cannot be negative, we call the Y in the above DGP the latent Y*, which can take on any number on the real number line. If we don't relabel Y as Y*, then you need to impose the bound at 0 some way in the assumed DGP. So, now we have
Y*=xb+e, e~N(0,s2)
But the observed Y = Y* if Y*>0 and Y=0 if Y*<=0.
Basically, the point is that prior to discussing an estimator, you need to be clear on what DGP you assume generates the data such that values below 0 are not feasible. The latent framework that corresponds to the tobit is one such DGP that models the mass at zero, and is consistent with the observed Y being strictly non-negative.
Daniel L. Millimet, Professor
Department of Economics
Box 0496
Dallas, TX 75275-0496
phone: 214.768.3269
fax: 214.768.1821
web: http://faculty.smu.edu/millimet
-----Original Message-----
From: [email protected] [mailto:[email protected]] On Behalf Of Joerg Luedicke
Sent: Thursday, October 04, 2012 9:52 PM
To: [email protected]
Subject: st: Re: st: Re: st: Re: st: RE: Truncated sample or Heckman selection
I find it difficult to understand why you would regard a variable as censored, when it actually isn't?
Let's assume the outcome variable (y) is income and the predictor variable (x) is years of education. We generate some data for which the expected income for people without education is 500 and, on average, persons earn 300 more per year of education:
set obs 1000
set seed 1234
gen x=rnormal(10,3)
gen e=rnormal(0,20)
gen y=500+300*x+e
Fitting a linear model to these data yields the expected parameters:
reg y x
Now suppose income was only measured exactly for amounts of 3,000 or more, so in this case y is censored from below at a value of 3,000:
gen cy=y
replace cy=3000 if y<3000
If we fit the simple linear model to these data now, the results are obviously bad:
reg cy x
However, if we use the Tobit model, we can again recover the correct parameters:
tobit cy x, ll(3000)
So the Tobit model makes a lot of sense here and seems useful in an otherwise possibly unpleasant situation, given the censored outcome.
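For readers without Stata, the same experiment can be reproduced with a hand-rolled tobit MLE in Python (numpy/scipy). This is a sketch of what `tobit cy x, ll(3000)` does, not Stata's implementation, and the random draws differ from the Stata session above:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Simulate the censored-income example, then compare OLS on the censored
# outcome against a tobit maximum-likelihood fit.
rng = np.random.default_rng(1234)
n = 1000
x = rng.normal(10, 3, n)
y = 500 + 300 * x + rng.normal(0, 20, n)
c = 3000.0
cy = np.maximum(y, c)          # observed, left-censored outcome
cens = y < c                   # censoring indicator

# OLS on the censored data: the slope is attenuated.
X = np.column_stack([np.ones(n), x])
b_ols = np.linalg.lstsq(X, cy, rcond=None)[0]

def negll(theta):
    # Tobit log-likelihood, left-censored at c: normal density for
    # uncensored points, normal CDF mass below c for censored ones.
    a, b, log_s = theta
    s = np.exp(log_s)
    mu = a + b * x
    ll = np.where(cens,
                  norm.logcdf((c - mu) / s),
                  norm.logpdf((cy - mu) / s) - np.log(s))
    return -ll.sum()

start = [b_ols[0], b_ols[1], np.log(cy.std())]
fit = minimize(negll, start, method="Nelder-Mead",
               options={"maxiter": 20000, "xatol": 1e-8, "fatol": 1e-8})
a_hat, b_hat = fit.x[0], fit.x[1]   # tobit recovers roughly (500, 300)
```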
However, if an outcome is simply bounded at zero, like for example expenditure data, then such variables are not censored: a zero is just a zero; not more and not less. So why would it be advisable to use a censored regression model when the outcome is not censored? For me, that would only make sense if, say, the model shares some other hidden qualities and generally does well when analyzing bounded data. But this does not even seem to be the case if we consider Austin Nichols'
(2010) simulation results for nonnegative skewed data.
Nichols, A., 2010. Regression for nonnegative skewed dependent variables, BOS10 Stata Conference 2, Stata Users Group.
URL: http://repec.org/bost10/nichols_boston2010.pdf
On Thu, Oct 4, 2012 at 8:08 PM, Millimet, Daniel <[email protected]> wrote:
> Yes, in my opinion, if you include the zeros, a fractional logit or tobit or censored LAD is appropriate (given the other assumptions implicit in these models). The only issue is whether some Xs are missing for the zeros. That you will have to confront yourself if you have Xs you want to include that are missing from some obs.
> ****************************************************
> Daniel L. Millimet, Professor
> Department of Economics
> Box 0496
> SMU
> Dallas, TX 75275-0496
> phone: 214.768.3269
> fax: 214.768.1821
> web: http://faculty.smu.edu/millimet
> ****************************************************
> -----Original Message-----
> From: [email protected]
> [mailto:[email protected]] On Behalf Of Ebru Ozturk
> Sent: Thursday, October 04, 2012 5:32 PM
> To: [email protected]
> Subject: RE: st: Re: st: Re: st: RE: Truncated sample or Heckman
> selection
> Thank you. It will be quite complicated for me to understand this e-mail.
> Yes, in my data there is a mass at zero and I include all of them. So you are saying that it is a censoring problem and tobit regression is applicable or a fractional logit model?
> The other issue is about Xs. The Xs that I am interested in have not been observed for non-innovator firms, but the other Xs that I use as control variables have been observed for all firms in the sample.
> Ebru
> ----------------------------------------
>> From: [email protected]
>> To: [email protected]
>> Subject: RE: st: Re: st: Re: st: RE: Truncated sample or Heckman
>> selection
>> Date: Thu, 4 Oct 2012 22:16:57 +0000
>> If you include all firms in a model, with a mass at zero, then it is the standard censoring problem. Labor supply models are the classic example. Labor supply has a "natural" lower bound at zero, but one does not use OLS. Typically, tobit models are used, or semiparametric alternatives like censored LAD or symmetric trimmed least squares. See, for example, Wilhelm (OBES, 2008, "Practical Considerations for Choosing Between Tobit and SCLS or CLAD Estimators for Censored Regression Models with an Application to Charitable Giving"). For percentages, even though these variables are by definition between 0 and 1 (or 100), a fractional logit is the most common model, I believe, if there is a mass at either boundary point.
>> So, in your case, if you include the zeros, yes it is a censoring problem.
>> The next issue is what Xs you observe for different observations. If all Xs were observed for all obs (0 and positive values), then a fractional logit is the answer (or a tobit or one of the above alternatives). If SOME of the Xs are missing for the obs at zero, then you can (i) drop the zeros and estimate a selection-corrected OLS model - if you ignore the upper limit of 100 - or you can combine the selection correction with a fractional logit/probit model, as long as you are sure the control function term for the correction is correct (this is what some empirical trade papers do when they drop country pairs with zero trade; although it is not recommended), or (ii) include the zeros, but you need two different equations for the zeros and the non-zeros since it sounded like not all Xs are available for the obs at zero. So, something like a hurdle (zero-inflated) model tailored to your example.
>> **********************************************
>> Daniel L. Millimet, Professor
>> Department of Economics
>> Box 0496
>> SMU
>> Dallas, TX 75275-0496
>> phone: 214.768.3269
>> fax: 214.768.1821
>> web: http://faculty.smu.edu/millimet
>> **********************************************
>> ________________________________________
>> From: [email protected]
>> [[email protected]] on behalf of Ebru Ozturk
>> [[email protected]]
>> Sent: Thursday, October 04, 2012 4:53 PM
>> To: [email protected]
>> Subject: RE: st: Re: st: Re: st: RE: Truncated sample or Heckman
>> selection
>> Innovation success is heavily left-censored - many firms do not have any market novelties and thus no sales from this type of innovation (Grimpe & Kaiser, 2010).
>> Is that wrong then?
>> I'm really confused now.
>> Ebru
>> ----------------------------------------
>> > Date: Thu, 4 Oct 2012 16:45:59 -0500
>> > Subject: st: Re: st: Re: st: RE: Truncated sample or Heckman
>> > selection
>> > From: [email protected]
>> > To: [email protected]
>> >
>> > On Thu, Oct 4, 2012 at 4:34 PM, Ebru Ozturk <[email protected]> wrote:
>> > > For Tobit regression, the dependent variable is the percent of total firm sales revenues that derived from the sales of new products. Therefore, it is censored as sales of new products can only be zero or positive.
>> > >
>> > This just isn't a censoring problem. Consider having a look at:
>> >
>> > http://en.wikipedia.org/wiki/Censoring_%28statistics%29
>> >
>> > Joerg
>> > *
>> > * For searches and help try:
>> > * http://www.stata.com/help.cgi?search
>> > * http://www.stata.com/support/faqs/resources/statalist-faq/
>> > * http://www.ats.ucla.edu/stat/stata/
{"url":"https://www.stata.com/statalist/archive/2012-10/msg00220.html","timestamp":"2024-11-13T12:10:56Z","content_type":"text/html","content_length":"28146","record_id":"<urn:uuid:095d9a76-c74c-4587-be87-1b6494e288ff>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00353.warc.gz"}
Generating and visualizing regression residuals
This tutorial builds on the previous Linear Regression tutorial. It is recommended that you complete that tutorial prior to this tutorial. This tutorial demonstrates how to predict outcomes and
generate residuals using the parameter estimates from a linear model. After this tutorial you should be able to predict outcomes, compute residuals, and create a residual plot with a zero reference line.
Finding residuals
One convenient method for testing our model is to compare predicted outcomes to the observed outcomes. This is commonly done using the regression residuals.
Finding predicted outcomes
Predicted y values are found using the observed x values and the estimated parameters. In our previous tutorial we found that our estimated $ \alpha = 1.2795 $ and the estimated $ \beta = 5.7218 $.
This means our predicted y values are given by
$$ \hat{y} = \hat{\alpha} + \hat{\beta}x $$ $$ \hat{y} = 1.2795 + 5.7218x $$
// Predicted Y
alpha_hat = 1.2795;
beta_hat = 5.7218;
y_hat = alpha_hat + beta_hat*x;
Computing residuals
The residuals from our regression are found by taking the difference between the observed dependent variable and the predicted dependent variable
$$ e = y - \hat{y} $$
// Residual
e = y - y_hat;
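The same two steps translate directly to other languages; here is a hypothetical numpy version, where the `x` values and noise are fabricated for illustration since the tutorial's dataset is not shown:

```python
import numpy as np

# Stand-in data for the tutorial's observed x and y series.
alpha_hat, beta_hat = 1.2795, 5.7218
x = np.array([0.0, 1.0, 2.0, 3.0])
noise = np.array([0.5, -0.25, 0.1, -0.35])
y = alpha_hat + beta_hat * x + noise   # "observed" outcomes

y_hat = alpha_hat + beta_hat * x       # predicted outcomes
e = y - y_hat                          # residuals: observed minus predicted
```

With fabricated data like this, the residuals simply recover the injected noise.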
Plotting residuals
A well performing model will have residuals that center around zero with random fluctuations. While there are many statistical methods for testing residuals, one quick and easy way to examine the
behavior of residuals is to plot them.
// Plot residuals
// Declare plotControl structure and fill
// with default scatter settings
struct plotControl myPlot;
myPlot = plotGetDefaults("scatter");
// Add title to graph
plotSetTitle(&myPlot,"Residual Plot", "Arial", 16);
plotSetYLabel(&myPlot, "Residuals");
// Draw graph
plotScatter(myPlot, x, e);
After running the above code, you should get a graph that looks similar to the image below:
The graph above was created using the plotScatter procedure. For more detailed information on plotting data and customizing graphs, see our graph basics tutorial.
Adding zero line
It may help us see the size of the residuals more clearly if our graph has a line at $ y = 0 $. This is easy to do in GAUSS using the plotAddXY command.
// Set up x values for line
x_zero = -4 | 4;
// Add zero line
// Construct vector of zeros
y_zero = zeros(2, 1);
// Fill myPlot with default settings for xy graphs
myPlot = plotGetDefaults("xy");
// Change line color to black
plotSetLineColor(&myPlot, "black");
// Make line 1 pixel thick
plotSetLineThickness(&myPlot, 1);
// Add line to plot
plotAddXY(myPlot, x_zero, y_zero);
After running the above code, your residual plot should now have a black zero line as we see below.
Congratulations! You have:
• Predicted outcomes and calculated residuals.
• Created a scatter plot of residuals.
• Added a zero line to your plot.
The next tutorial examines methods for testing error term normality.
For convenience, the full program text from this tutorial is reproduced below.
// Predicted Y
alpha_hat = 1.2795;
beta_hat = 5.7218;
y_hat = alpha_hat + beta_hat*x;
// Residual
e = y - y_hat;
// Plot residuals
// Declare plotControl structure and fill
// with default scatter settings
struct plotControl myPlot;
myPlot = plotGetDefaults("scatter");
// Add title to graph
plotSetTitle(&myPlot,"Residual Plot", "Arial", 16);
plotSetYLabel(&myPlot, "Residuals");
// Draw graph
plotScatter(myPlot, x, e);
// Add zero line
// Set up x_min and x_max on axis
x_zero = -4 | 4;
// Construct vector of zeros
y_zero = zeros(2, 1);
// Fill myPlot with default settings for xy graphs
myPlot = plotGetDefaults("xy");
// Change line color to black
plotSetLineColor(&myPlot, "black");
// Make line 1 pixel thick
plotSetLineThickness(&myPlot, 1);
// Add line to plot
plotAddXY(myPlot, x_zero, y_zero);
{"url":"https://www.aptech.com/resources/tutorials/econometrics/predicting-outcomes/","timestamp":"2024-11-03T19:59:22Z","content_type":"text/html","content_length":"92798","record_id":"<urn:uuid:e77d29ba-d52e-41ca-bd34-9218ed0cb34a>","cc-path":"CC-MAIN-2024-46/segments/1730477027782.40/warc/CC-MAIN-20241103181023-20241103211023-00268.warc.gz"}
Drag Coefficient
Any object moving through a fluid experiences drag - the net force in the direction of flow due to pressure and shear stress forces on the surface of the object.
The drag force can be expressed as:
F[d] = c[d] 1/2 ρ v^2 A (1)
F[d] = drag force (N)
c[d] = drag coefficient
ρ = density of fluid (1.2 kg/m^3 for air at NTP)
v = flow velocity (m/s)
A = characteristic frontal area of the body (m^2)
The drag coefficient is a function of several parameters like the shape of the body, the Reynolds number for the flow, the Froude number, the Mach number and the roughness of the surface.
The characteristic frontal area - A - depends on the body.
Objects drag coefficients are mostly results of experiments. The drag coefficients for some common bodies are indicated below:
Drag Coefficients vs. Type of Objects
Type of Object Drag Coefficient Frontal Area
- c[d] -
Laminar flat plate (Re=10^6) 0.001
Dolphin 0.0036 wetted area
Turbulent flat plate (Re=10^6) 0.005
Subsonic Transport Aircraft 0.012
Supersonic Fighter,M=2.5 0.016
Streamlined body 0.04 π / 4 d^2
Airplane wing, normal position 0.05
Streamlined half-body 0.09
Long stream-lined body 0.1
Bicycle - Streamlined Velomobile 0.12 5 ft^2 (0.47 m^2)
Airplane wing, stalled 0.15
Modern car like a Tesla model 3 or model Y 0.23
Toyota Prius, Tesla model S 0.24 frontal area
Tesla model X
Sports car, sloping rear 0.2 - 0.3 frontal area
Common car like Opel Vectra (class C) 0.29 frontal area
Hollow semi-sphere facing stream 0.38
Bird 0.4 frontal area
Solid Hemisphere 0.42 π / 4 d^2
Sphere 0.5
Saloon Car, stepped rear 0.4 - 0.5 frontal area
Bike - Drafting behind another cyclist 0.5 3.9 ft^2 (0.36 m^2)
Convertible, open top 0.6 - 0.7 frontal area
Bus 0.6 - 0.8 frontal area
Old Car like a T-ford 0.7 - 0.9 frontal area
Cube 0.8 s^2
Bike - Racing 0.88 3.9 ft^2 (0.36 m^2)
Bicycle 0.9
Tractor Trailed Truck 0.96 frontal area
Truck 0.8 - 1.0 frontal area
Person standing 1.0 – 1.3
Bike - Upright Commuter 1.1 5.5 ft^2 (0.51 m^2)
Thin Disk 1.1 π / 4 d^2
Solid Hemisphere flow normal to flat side 1.17 π / 4 d^2
Squared flat plate at 90 deg 1.17
Wires and cables 1.0 - 1.3
Person (upright position) 1.0 - 1.3
Hollow semi-cylinder opposite stream 1.2
Ski jumper 1.2 - 1.3
Hollow semi-sphere opposite stream 1.42
Passenger Train 1.8 frontal area
Motorcycle and rider 1.8 frontal area
Long flat plate at 90 deg 1.98
Rectangular box 2.1
Example - Air Resistance Force acting on a Normal Car
The force required to overcome air resistance for a normal family car with drag coefficient 0.29 and frontal area 2 m^2 in 90 km/h can be calculated as:
F[d] = 0.29 1/2 (1.2 kg/m^3) ((90 km/h) (1000 m/km) / (3600 s/h))^2 (2 m^2)
= 217.5 N
The work done to overcome the air resistance in one hour driving (90 km) can be calculated as
W[d] = (217.5 N) (90 km) (1000 m/km)
= 19575000 (Nm, J)
The power required to overcome the air resistance when driving 90 km/h can be calculated as
P[d] = (217.5 N) (90 km/h) (1000 m/km) (1/3600 h/s)
= 5437.5 (Nm/s, J/s, W)
= 5.4 (kW)
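The worked example can be checked with a few lines of plain Python (not from the page itself):

```python
# Numeric check of the drag force, work, and power in the example above.
rho = 1.2                  # air density, kg/m^3
cd = 0.29                  # drag coefficient
A = 2.0                    # frontal area, m^2
v = 90 * 1000 / 3600       # 90 km/h expressed in m/s (= 25 m/s)

Fd = cd * 0.5 * rho * v**2 * A   # drag force, N (217.5 N)
Wd = Fd * 90 * 1000              # work over 90 km, J (about 1.96e7 J)
Pd = Fd * v                      # power at 90 km/h, W (5437.5 W, ~5.4 kW)
```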
{"url":"https://www.engineeringtoolbox.com/amp/drag-coefficient-d_627.html","timestamp":"2024-11-11T10:10:28Z","content_type":"text/html","content_length":"28136","record_id":"<urn:uuid:7b71deb3-06d7-41bf-b3ae-fdf9b750c1f6>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00114.warc.gz"}
Find help from our team of math teachers, who are here to support students struggling with their math skills. Find on-demand videos for every second-grade skill that show students how best to tackle math
problems. Teachers break down the concepts in an easily understandable format for younger students who need help with their math skills.
• Find video tutorials for second-grade math skills in geometry, time, fractions, and others.
• Students learn to use the available tools to best answer the math questions.
• Videos pop-up automatically when a student is having difficulty answering the questions.
{"url":"https://in.mathgames.com/video/ii","timestamp":"2024-11-06T21:18:28Z","content_type":"text/html","content_length":"481551","record_id":"<urn:uuid:97581720-ef56-4e63-a1cf-b0f1842a5f8e>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00612.warc.gz"}
pyecsca.sca.trace.process module
Provides functions for sample-wise processing of single traces.
absolute(trace)[source]
Apply absolute value to samples of trace.
trace (Trace)
Return type:
invert(trace)[source]
Invert (negate) the samples of trace.
trace (Trace)
Return type:
threshold(trace, value)[source]
Map samples of the trace to 1 if they are above value, and to 0 otherwise.
Return type:
rolling_mean(trace, window)[source]
Compute the rolling mean of trace using window.
Shortens the trace by window - 1.
• trace (Trace)
• window (int)
Return type:
offset(trace, offset)[source]
Offset samples of trace by offset, sample-wise.
Adds offset to all samples.
Return type:
Subtract the root mean square of the trace from its samples, sample-wise.
trace (Trace)
Return type:
Normalize a trace by subtracting its mean and dividing by its standard deviation.
trace (Trace)
Return type:
Normalize a trace by subtracting its mean and dividing by a multiple (= len(trace)) of its standard deviation.
trace (Trace)
Return type:
transform(trace, min_value=0, max_value=1)[source]
Scale a trace so that its minimum is at min_value and its maximum is at max_value.
• trace (Trace)
• min_value (Any)
• max_value (Any)
Return type:
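The window shortening documented for rolling_mean above can be reproduced with a plain numpy sketch. This illustrates the documented behavior only; it is not pyecsca's actual implementation, which operates on Trace objects:

```python
import numpy as np

def rolling_mean(samples, window):
    # Moving average; 'valid' mode yields len(samples) - window + 1 points,
    # i.e. the output is shorter by window - 1, as documented above.
    kernel = np.ones(window) / window
    return np.convolve(samples, kernel, mode="valid")

s = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
m = rolling_mean(s, 3)   # averages of (1,2,3), (2,3,4), (3,4,5)
```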
{"url":"https://pyecsca.org/api/pyecsca.sca.trace.process.html","timestamp":"2024-11-05T13:04:18Z","content_type":"text/html","content_length":"30160","record_id":"<urn:uuid:909a1cc3-f45b-4dcd-b068-637ac9cf7068>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00208.warc.gz"}
Construction Permits (Monthly) - Construction - Lebanon - BRITE
Source: Economena Analytics, Order of Engineers and Architects of Beirut - Frequency: Monthly - In "Construction Permits: Order of Engineers of Beirut"
Start Date: Jan 2011 - End Date: Aug 2024
Source: Economena Analytics, Order of Engineers and Architects of Beirut - Frequency: Monthly - In "Construction Permits Area: Order of Engineers of Beirut"
Start Date: Jan 2011 - End Date: Aug 2024
|
{"url":"https://brite.blominvestbank.com/category/Construction-Permits-Monthly-2187/cat=1486/","timestamp":"2024-11-04T17:48:55Z","content_type":"text/html","content_length":"45666","record_id":"<urn:uuid:b6df95ef-860d-4701-872e-57a5eebde539>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00821.warc.gz"}
|
4.4 Light Emission
The atoms of an object with temperature above absolute zero are moving. In turn, as described by Maxwell’s equations, the motion of atomic particles that hold electrical charges causes objects to
emit electromagnetic radiation over a range of wavelengths. As we will see shortly, at room temperature most of the emission is at infrared frequencies; objects need to be much warmer to emit
meaningful amounts of electromagnetic radiation at visible frequencies.
Many different types of light sources have been invented to convert energy into emitted electromagnetic radiation. An object that emits light is called a lamp or an illuminant, though we avoid the
latter terminology since we generally use “illuminant” to refer to a spectral distribution of emission (Section 4.4.2). A lamp is housed in a luminaire, which consists of all the objects that hold
and protect the light as well as any objects like reflectors or diffusers that shape the distribution of light.
Understanding some of the physical processes involved in emission is helpful for accurately modeling light sources for rendering. A number of corresponding types of lamps are in wide use today:
• Incandescent (tungsten) lamps have a small tungsten filament. The flow of electricity through the filament heats it, which in turn causes it to emit electromagnetic radiation with a distribution
of wavelengths that depends on the filament’s temperature. A frosted glass enclosure is often present to diffuse the emission over a larger area than just the filament and to absorb some of the
wavelengths generated in order to achieve a desired distribution of emission by wavelength. With an incandescent light, much of the emitted power is in the infrared bands, which in turn means
that much of the energy consumed by the light is turned into heat rather than light.
• Halogen lamps also have a tungsten filament, but the enclosure around them is filled with halogen gas. Over time, part of the filament in an incandescent light evaporates when it is heated; the
halogen gas causes this evaporated tungsten to return to the filament, which lengthens the life of the light. Because it returns to the filament, the evaporated tungsten does not adhere to the
bulb surface (as it does with regular incandescent bulbs), which also prevents the bulb from darkening.
• Gas-discharge lamps pass electrical current through hydrogen, neon, argon, or vaporized metal gas, which causes light to be emitted at specific wavelengths that depend on the particular atom in
the gas. (Atoms that emit relatively little of their electromagnetic radiation in the not-useful infrared frequencies are selected for the gas.) Because a broader spectrum of wavelengths is
generally more visually desirable than wavelengths that the chosen atoms generate directly, a fluorescent coating on the bulb’s interior is often used to transform the emitted wavelengths to a
broader range. (The fluorescent coating also improves efficiency by converting ultraviolet wavelengths to visible wavelengths.)
• LED lights are based on electroluminescence: they use materials that emit photons due to electrical current passing through them.
For all of these sources, the underlying physical process is electrons colliding with atoms, which pushes their outer electrons to a higher energy level. When such an electron returns to a lower
energy level, a photon is emitted. There are many other interesting processes that create light, including chemoluminescence (as seen in light sticks) and bioluminescence—a form of chemoluminescence
seen in fireflies. Though interesting in their own right, we will not consider their mechanisms further here.
Luminous efficacy measures how effectively a light source converts power to visible illumination, accounting for the fact that for human observers, emission in non-visible wavelengths is of little value. Interestingly enough, it is the ratio of a photometric quantity (the emitted luminous flux) to a radiometric quantity (either the total power the source uses or the total power that it emits over all wavelengths, measured in radiant flux):

efficacy = 683 ∫ Φe(λ) V(λ) dλ / ∫ Φe(λ) dλ,

where V(λ) is the spectral response curve that was introduced in Section 4.1.4.
Luminous efficacy has units of lumens per watt. If the power consumed by the light source (rather than the emitted power) is used in the denominator, then luminous efficacy also incorporates a measure of how effectively the light source converts power to electromagnetic radiation. Luminous efficacy can also be defined as a ratio of luminous exitance (the photometric equivalent of radiant exitance) to irradiance at a point on a surface, or as the ratio of exitant luminance to radiance at a point on a surface in a particular direction.
A typical value of luminous efficacy for an incandescent tungsten lightbulb is around 15 lm/W. The highest value it can possibly have is 683, for a perfectly efficient light source that emits all of its light at 555 nm, the peak of the V(λ) function. (While such a light would have high efficacy, it would not necessarily be a pleasant one as far as human observers are concerned.)
4.4.1 Blackbody Emitters
A blackbody is a perfect emitter: it converts power to electromagnetic radiation as efficiently as physically possible. While true blackbodies are not physically realizable, some emitters exhibit
near-blackbody behavior. Blackbodies also have a useful closed-form expression for their emission by wavelength as a function of temperature that is useful for modeling non-blackbody emitters.
Blackbodies are so-named because they absorb absolutely all incident power, reflecting none of it. Intuitively, the reason that perfect absorbers are also perfect emitters stems from the fact that absorption is the reverse operation of emission: if time were reversed, all the perfectly absorbed power would be perfectly efficiently re-emitted.
Planck’s law gives the radiance emitted by a blackbody as a function of wavelength λ and temperature T measured in kelvins:

Le(λ, T) = (2 h c²) / (λ⁵ (e^(h c / (λ kB T)) − 1)),   (4.17)

where c is the speed of light in the medium (299,792,458 m/s in a vacuum), h is Planck’s constant, 6.62606957 × 10⁻³⁴ J·s, and kB is the Boltzmann constant, 1.3806488 × 10⁻²³ J/K, where kelvin (K) is the unit of temperature. Blackbody emitters are perfectly diffuse; they emit radiance equally in all directions.
Figure 4.12 plots the emitted radiance distributions of a blackbody for a number of temperatures.
Figure 4.12: Plots of emitted radiance as a function of wavelength for blackbody emitters at a few temperatures, as given by Equation (4.17). Note that as temperature increases, more of the emitted
light is in the visible frequencies (roughly 380 nm–780 nm) and that the spectral distribution shifts from reddish colors to bluish colors. The total amount of emitted energy grows quickly as
temperature increases, as described by the Stefan–Boltzmann law in Equation (4.19).
The Blackbody() function computes emitted radiance at the given temperature T in kelvin for the given wavelength lambda.

Float Blackbody(Float lambda, Float T) {
    if (T <= 0) return 0;
    const Float c = 299792458.f;
    const Float h = 6.62606957e-34f;
    const Float kb = 1.3806488e-23f;
    <<Return emitted radiance for blackbody at wavelength lambda>>
}

The wavelength passed to Blackbody() is in nm, but the constants for Equation (4.17) are in terms of meters. Therefore, it is necessary to first convert the wavelength to meters by scaling it by 10⁻⁹.

<<Return emitted radiance for blackbody at wavelength lambda>>=
Float l = lambda * 1e-9f;
Float Le = (2 * h * c * c) /
           (Pow<5>(l) * (FastExp((h * c) / (l * kb * T)) - 1));
return Le;
The emission of non-blackbodies is described by Kirchhoff’s law, which says that the emitted radiance distribution at any frequency is equal to the emission of a perfect blackbody at that frequency
times the fraction of incident radiance at that frequency that is absorbed by the object. (This relationship follows from the object being assumed to be in thermal equilibrium.) The fraction of
radiance absorbed is equal to 1 minus the amount reflected, and so the emitted radiance is

L′e(T, λ, ω) = Le(T, λ) (1 − ρhd(ω)),

where Le is the emitted radiance given by Planck’s law, Equation (4.17), and ρhd is the hemispherical-directional reflectance from Equation (4.12).
The Stefan–Boltzmann law gives the radiant exitance (recall that this is the outgoing irradiance) at a point p for a blackbody emitter:

M(p) = σ T⁴,   (4.19)

where σ is the Stefan–Boltzmann constant, 5.6703 × 10⁻⁸ W m⁻² K⁻⁴. Note that the total emission over all frequencies grows very rapidly—at the rate T⁴. Thus, doubling the temperature of a blackbody emitter increases the total
energy emitted by a factor of 16.
The blackbody emission distribution provides a useful metric for describing the emission characteristics of non-blackbody emitters through the notion of color temperature. If the shape of the emitted
spectral distribution of an emitter is similar to the blackbody distribution at some temperature, then we can say that the emitter has the corresponding color temperature. One approach to find color
temperature is to take the wavelength where the light’s emission is highest and find the corresponding temperature using Wien’s displacement law, which gives the wavelength where emission of a
blackbody is maximum given its temperature:
λmax = b / T,

where b is Wien’s displacement constant, 2.8977721 × 10⁻³ m·K.
Incandescent tungsten lamps are generally around 2700 K color temperature, and tungsten halogen lamps are around 3000 K. Fluorescent lights may range all the way from 2700 K to 6500 K. Generally
speaking, color temperatures over 5000 K are described as “cool,” while 2700–3000 K is described as “warm.”
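As a quick sanity check of these relations, the following Python sketch (not part of pbrt; the constants c, h, and kb are taken from the Blackbody() listing, while σ and Wien’s constant b are standard values supplied here as assumptions) evaluates Planck’s law at a blackbody temperature of 6500 K, confirms the peak sits at the wavelength predicted by Wien’s displacement law, and illustrates the T⁴ growth of radiant exitance:

```python
import math

# Constants from the Blackbody() listing; sigma and b are standard values.
c  = 299792458.0        # speed of light, m/s
h  = 6.62606957e-34     # Planck's constant, J*s
kb = 1.3806488e-23      # Boltzmann constant, J/K
sigma = 5.670373e-8     # Stefan-Boltzmann constant, W m^-2 K^-4
b  = 2.8977721e-3       # Wien's displacement constant, m*K

def planck(l, T):
    """Planck's law, Equation (4.17): emitted radiance at wavelength l (meters)."""
    return (2 * h * c * c) / (l**5 * (math.exp(h * c / (l * kb * T)) - 1))

T = 6500.0
lam_max = b / T                     # Wien's law: wavelength of peak emission
print(lam_max * 1e9)                # ~445.8 nm, in the visible range

# The Planck distribution is indeed lower on either side of the Wien peak.
assert planck(lam_max, T) > planck(0.9 * lam_max, T)
assert planck(lam_max, T) > planck(1.1 * lam_max, T)

# Stefan-Boltzmann law: radiant exitance sigma*T^4 grows 16x when T doubles.
M = sigma * T**4
```

This mirrors the quantities plotted in Figure 4.12: a 6500 K blackbody peaks in the bluish part of the visible spectrum.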
4.4.2 Standard Illuminants
Another useful way of categorizing light emission distributions is a number of “standard illuminants” that have been defined by Commission Internationale de l’Éclairage (CIE).
The Standard Illuminant A was introduced in 1931 and was intended to represent average incandescent light. It corresponds to a blackbody radiator of about 2856 K. (It was originally defined as a blackbody at 2848 K, but the accuracy of the constants used in Planck’s law subsequently improved. Therefore, the specification was updated to be in terms of the 1931 constants, so that the illuminant was unchanged.) Figure 4.13 shows a plot of the spectral distribution of the A illuminant.
(The B and C illuminants were intended to model daylight at two times of day and were generated with an A illuminant in combination with specific filters. They are no longer used. The E illuminant is
defined as having a constant spectral distribution and is used only for comparisons to other illuminants.)
The D illuminant describes various phases of daylight. It was defined based on characteristic vector analysis of a variety of daylight spectra, which made it possible to express daylight in terms of
a linear combination of three terms (one fixed and two weighted), with one weight essentially corresponding to yellow-blue color change due to cloudiness and the other corresponding to pink-green due
to water in the atmosphere (from haze, etc.). D65 is roughly 6504 K color temperature (not 6500 K—again due to changes in the values used for the constants in Planck’s law) and is intended to correspond to mid-day sunlight in Europe. (See Figure 4.14.) The CIE recommends that this illuminant be used for daylight unless there is a specific reason not to.
Figure 4.14: Plot of the CIE Standard D65 Illuminant Spectral Distribution as a Function of Wavelength in nm. This illuminant represents noontime daylight at European latitudes and is commonly used
to define the whitepoint of color spaces (Section 4.6.3).
Finally, the F series of illuminants describes fluorescents; it is based on measurements of a number of actual fluorescent lights. Figure 4.15 shows the spectral distributions of two of them.
Figure 4.15: Plots of the F4 and F9 Standard Illuminants as a Function of Wavelength in nm. These represent two fluorescent lights. Note that the distributions are quite different. Spikes in the two
distributions correspond to the wavelengths directly emitted by atoms in the gas, while the other wavelengths are generated by the bulb’s fluorescent coating. The F9 illuminant is a “broadband”
emitter that uses multiple phosphors to achieve a more uniform spectral distribution.
|
{"url":"https://www.pbr-book.org/4ed/Radiometry,_Spectra,_and_Color/Light_Emission","timestamp":"2024-11-02T09:37:23Z","content_type":"text/html","content_length":"139438","record_id":"<urn:uuid:009c5d5e-ea76-4568-a859-3c2b567c5555>","cc-path":"CC-MAIN-2024-46/segments/1730477027709.8/warc/CC-MAIN-20241102071948-20241102101948-00516.warc.gz"}
|
A Handbook for Beginners to Simplifying Proportional Concepts
04 Jan A Handbook for Beginners to Simplifying Proportional Concepts
Posted at 12:31h
Maths Tuition
A key idea that forms the cornerstone of the relationship between quantities is proportions, intricately woven throughout the complex fabric of mathematics. These proportions reveal equilibrium of
ratios, presenting a world of balance and consistency amidst fluctuating magnitudes. As we navigate the depths of these proportional relationships, we embark on a journey into the enigmatic realms of
third, fourth, and mean proportional, uncovering the profound intricacies nestled within these mathematical phenomena. Exploring these concepts is made accessible and enriching with the guidance of
our Maths tuition, providing a tailored approach to understanding these fundamental principles.
Proportion: What Is It?
Proportion lies at the heart of comparing quantities, establishing harmony between different values. It embodies the relationship where two ratios maintain an equilibrium, signifying that their
relative sizes remain constant even as they change in magnitude. Essentially, it’s the balanced interplay between quantities, revealing how they relate and interact with each other in a consistent manner.
How should a proportion be represented?
Expressing a proportion involves illustrating the relationship between two sets of quantities or ratios. This equality can be depicted in multiple formats:
• a : b :: c : d
• a/b = c/d
• a is to b as c is to d
Here, the ratio of ‘a’ to ‘b’ is equal to the ratio of ‘c’ to ‘d’.
In these representations, the ratios on both sides exhibit an equal relationship, showcasing the proportional balance between the quantities involved.
Formula & Examples of Proportional:
Proportions revolve around the equality of ratios. Imagine two ratios, a/b = c/d. They form a proportion if they’re equivalent. Mathematically, this is expressed as:
• If a/b = c/d, then a is to b as c is to d
Here are a couple of examples illustrating this principle:
• If 2/4 = 6/12, it forms a proportion since both ratios reduce to 1/2.
• Similarly, 3/5 and 6/10 form a proportion as they both reduce to 3/5.
In essence, when the ratios of two sets of numbers are equal, they establish a proportional relationship, demonstrating a consistent ratio or balance between the quantities involved.
Types of Proportional:
Proportions manifest in diverse ways, showcasing the relationships between quantities. They primarily divide into two fundamental categories: direct and inverse proportion.
Direct Proportion:
In a direct proportion, as one variable increases, the other also increases in a consistent manner. Mathematically, this is expressed as:
If a : b = c : d, then ‘b’ is directly proportional to ‘a’ if b = k x a, where ‘k’ is a constant.
For instance, if the cost of ‘b’ items is directly proportional to the number of items ‘a’ at a constant price ‘k’, then b = k x a.
Inverse Proportion:
Contrarily, an inverse proportion describes a relationship where an increase in one variable results in a decrease in the other, maintaining a constant ratio. Mathematically, this is articulated as:
If a : b = c : d, then ‘b’ is inversely proportional to ‘a’ if b = k/a, where ‘k’ is a constant.
For example, if the time taken ‘b’ to complete a task is inversely proportional to the number of workers ‘a’ allocated to it at a constant efficiency ‘k’, then b = k/a.
Third, Fourth & Mean Proportional:
In the realm of proportions, third, fourth, and mean proportional play distinctive roles, showcasing the interrelatedness of quantities within mathematical equations. Let’s explore these proportional
relationships in-depth.
Third Proportional:
The third proportional in a proportion a : b :: b : x is ‘x’, illustrating a relationship where ‘x’ is the third term proportional to ‘a’ and ‘b’. For instance, if 2 : 3 :: 3 : x, ‘x’ is the third
proportional to 2 and 3, thereby making ‘x’ equal to 4.5. Mathematically, it can be expressed as:
Given a : b :: b : x, then
⇒ a/b = b/x
⇒ x = b² / a
Fourth Proportional:
The fourth proportional in a proportion a : b :: c : x is ‘x’, demonstrating a relationship where ‘x’ is the fourth term proportional to ‘a’, ‘b’, and ‘c’. For example, if 2 : 3 :: 4 : x, ‘x’ is the
fourth proportional to 2, 3, and 4, making ‘x’ equal to 6. The mathematical representation is:
Given a : b :: c : x, then
⇒ a/b = c/x
⇒ x = (b × c) / a
Mean Proportional:
The mean proportional between two numbers ‘a’ and ‘b’ is represented as ‘x’, where ‘x’ is the square root of their product. For instance, if ‘a’ and ‘b’ are 4 and 9 respectively, the mean proportional is √(4 × 9) = √36 = 6.
In mathematical terms, it can be expressed as:
Given a: x :: x : b, then
⇒ a/x = x/b
⇒ x² = a × b, or x = √(a × b)
Understanding these proportional relationships not only aids in grasping fundamental mathematical concepts but also finds applications in various scientific and practical scenarios.
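As a quick illustration, the three formulas above can be written as small Python helpers (the function names here are my own, chosen for clarity, not part of the article):

```python
def third_proportional(a, b):
    """x such that a : b :: b : x, i.e. x = b^2 / a."""
    return b * b / a

def fourth_proportional(a, b, c):
    """x such that a : b :: c : x, i.e. x = (b * c) / a."""
    return b * c / a

def mean_proportional(a, b):
    """x such that a : x :: x : b, i.e. x = sqrt(a * b)."""
    return (a * b) ** 0.5

print(third_proportional(2, 3))      # 4.5, matching the example above
print(fourth_proportional(2, 3, 4))  # 6.0
print(mean_proportional(4, 9))       # 6.0
```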
Properties of Proportion:
• Addendo: In a : b = c : d, sum each pair of terms: a + c : b + d.
• Subtrahendo: In a : b = c : d, find differences: a – c : b – d.
• Dividendo: In a : b = c : d, take differences: a – b : b = c – d : d.
• Componendo: In a : b = c : d, sum and divide: a + b : b = c + d : d.
• Alternendo: In a : b = c : d, ratios are preserved: a : c = b : d.
• Invertendo: In a : b = c : d, swap and maintain ratios: b : a = d : c.
• Componendo and Dividendo: In a : b = c : d, sum and difference of ratios: a + b : a – b = c + d : c – d.
These properties enable various manipulations and comparisons within proportional relationships, highlighting the consistency and balance among the involved quantities.
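These properties can also be spot-checked numerically; the short sketch below verifies each one for the particular proportion 2 : 3 = 4 : 6 (a single example, not a proof):

```python
# Spot-check of the proportion properties listed above for a:b = c:d
# with (a, b, c, d) = (2, 3, 4, 6), i.e. 2/3 == 4/6.
a, b, c, d = 2, 3, 4, 6
assert a / b == c / d                          # the given proportion
assert (a + c) / (b + d) == a / b              # addendo
assert (a - c) / (b - d) == a / b              # subtrahendo
assert (a + b) / b == (c + d) / d              # componendo
assert (a - b) / b == (c - d) / d              # dividendo
assert a / c == b / d                          # alternendo
assert b / a == d / c                          # invertendo
assert (a + b) / (a - b) == (c + d) / (c - d)  # componendo and dividendo
print("all properties hold")
```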
How Our Maths Tutors Enhance Your Understanding?
Our expert Math tutors at Miracle Learning Centre, renowned for providing the best Maths Tuition in Singapore, significantly elevate understanding of proportional concepts. Through personalized
guidance and tailored teaching methods, our Math tutors decode the complexities of proportions with ease. They employ interactive sessions, relatable examples, and diverse learning approaches;
ensuring students comprehend the nuances of third, fourth, and mean proportional effectively.
With our specialized Math Tuition, students grasp fundamental principles, master problem-solving techniques, and gain confidence in applying these concepts across various disciplines. At Miracle
Learning Centre, our dedicated Math tutors empower students to excel, fostering a deeper understanding of proportional mathematics and paving the way for academic success.
In conclusion, proportions form vital mathematical relationships. Third, fourth, and mean proportional deepen this understanding, aiding problem-solving and paving the way for advanced concepts. With
adept tutors in Maths tuition in Singapore, exploring proportional becomes an enriching journey, fostering invaluable analytical skills for diverse disciplines.
|
{"url":"https://miraclelearningcentre.com/a-handbook-for-beginners-to-simplifying-proportional-concepts/","timestamp":"2024-11-14T01:56:52Z","content_type":"text/html","content_length":"211257","record_id":"<urn:uuid:6e761dff-fd39-4ced-a697-c2043a94ee84>","cc-path":"CC-MAIN-2024-46/segments/1730477028516.72/warc/CC-MAIN-20241113235151-20241114025151-00391.warc.gz"}
|
James Graham-Eagle | Mathematics & Statistics | Kennedy College of Sciences
Applied mathematics, combustion theory
Research Interests
Applied mathematics, combustion theory.
• Ph D: Applied Mathematics, (1984), Oxford University - UK
• MS: Applied Mathematics, (1981), Victoria University of Wellington - New Zealand
• BS: Mathematics, (1979), Victoria University of Wellington - New Zealand
Selected Publications
• Christodoulou, D., Katatbeh, Q.D., Graham-Eagle, J. (2016). A criterion for oscillations in the solutions of the polytropic Lane-Emden equations. Journal of Inequalities and Applications, 2016(1)
• Christodoulou, D., Graham-Eagle, J., Katatbeh, Q.D. (2016). A program for predicting the intervals of oscillations in the solutions of ordinary second-order linear homogeneous differential
equations. Advances in Difference Equations, 2016(1) 48.
• Christodoulou, D., Graham-Eagle, J., Katatbeh, Q. (2016). The intervals of oscillations in the solutions of the Legendre differential equations. Advances in Difference Equations, 2016(1) 1 - 10.
• Katatbeh, Q.D., Christodoulou, D., Graham-Eagle, J. (2016). The intervals of oscillations in the solutions of the radial Schrödinger differential equation. Advances in Difference Equations, 2016
(1) 47.
• Joseph, P., Graham-Eagle, J. (2015). Analytical solution of a dynamical systems soil model. Analytical Methods in Petroleum Upstream Applications, 219.
• Christodoulou, D., Graham-Eagle, J., Katatbeh, Q.D. (2014). A Program for Predicting the Intervals of Oscillations in the Solutions of Ordinary Second-Order Linear Homogeneous Differential
• Christodoulou, D., Graham-Eagle, J., Katatbeh, Q.D. (2014). The Intervals of Oscillations in the Solutions of the Legendre Differential Equations.
• Katatbeh, Q.D., Christodoulou, D., Graham-Eagle, J. (2014). The Intervals of Oscillations in the Solutions of the Radial Schrödinger Differential Equation.
• Joseph, P.G., Graham-Eagle, J. (2013). Strain-rate effects in shear highlighted by a dynamical systems model. International Journal of Geomechanics.
• Graham-Eagle, J. (2009). The Draining Cylinder. The College Mathematics Journal, 40(5) 337–344.
• Graham-Eagle, J. (2008). Halting combustion waves with a fire break. Journal of Mathematical Analysis and Applications, 348(1) 116–121.
• Rossi, L., Inyang, H., Graham-Eagle, J., Pennell, S.A. (2005). Closure to “A Model of Coupled Heat and Moisture Transport in an Annular Clay Barrier” by L. Rossi, HI Inyang, J. Graham-Eagle, and
S. Pennell. Journal of Environmental Engineering, 131(11) 1615–1616.
• Rossi, L., Inyang, H., Graham-Eagle, J., Pennell, S.A. (2004). A model of coupled heat and moisture transport in an annular clay barrier. Journal of Environmental Engineering, 130(8) 855–862.
• Beuscher, U., Bayram, S., Broadbridge, P., Driscoll, T., Edwards, D.A., Fehribach, J., Graham-Eagle, J., Haskett, R., Heryudono, A., Huang, H., others, . (2004). Multi-Phase Flow in a Thin Porous
• Graham-Eagle, J., Rossi, L.F. (2002). On the existence of two-dimensional, localized, rotating, self-similar vortical structures. SIAM Journal on Applied Mathematics, 62(6) 2114–2128.
• Byrne, C., Graham-Eagle, J. (2001). A short proof of the convergence of the SART algorithm. Dept. Math. Sci., Univ. Massachusetts, Lowell, Tech. Rep.
• Graham-Eagle, J., Schult, D. (2001). Combustion waves with reactant depletion. The ANZIAM Journal, 43(01) 119–135.
• Kreminski, R., Graham-Eagle, J. (2001). Simpson’s rule for estimating n!(and proving Stirling’s formula, almost). International Journal of Mathematical Education in Science and Technology, 32(3)
• Graham-Eagle, J., Schult, D. (2001). The effect of wind on combustion waves with reactant depletion (457:2014 pp. 2397–2417).
• Graham-Eagle, J., Pennell, S.A. (2000). Contact angle calculations from the contact/maximum diameter of sessile drops. International journal for numerical methods in fluids, 32(7) 851–861.
• Amirfazli, A., Graham-Eagle, J., Pennell, S.A., Neumann, A. (2000). Implementation and examination of a new drop shape analysis algorithm to measure contact angle and surface tension from the
diameters of two sessile drops. Colloids and Surfaces A: Physicochemical and Engineering Aspects, 161(1) 63–74.
• Noushin, A., Fiddy, M., Graham-Eagle, J. (1999). Some new findings on the zeros of band-limited functions. JOSA A, 16(7) 1857–1863.
• Pennell, S.A., Graham-Eagle, J. (1996). Sessile Drops on Slightly Uneven Hydrophilic Surfaces (1:).
• Graham-Eagle, J. (1995). A nonlocal equation arising in the theory of wet combustion. IMA journal of applied mathematics, 54(1) 1–8.
• Gonzalezvelasco, E., Graham-Eagle, J. (1994). Directed Distances with Derivatives. Journal of Mathematical Analysis and Applications, 184(2) 243-255.
• Graham-Eagle, J. (1993). On the Relation between a Nonlinear Elliptic Equation and Its Uniform Approximation. Journal of Mathematical Analysis and Applications, 177(1) 254-262.
• Byrne, C., Graham-Eagle, J. (1992). Convergence properties of the algebraic reconstruction technique (ART) (pp. 1240–1242).
• Graham-Eagle, J. (1990). A variational approach to upper and lower solutions. IMA Journal of Applied Mathematics, 44(2) 181–184.
• Graham-Eagle, J. (1989). Monotone methods for semilinear elliptic equations in unbounded domains. Journal of Mathematical Analysis and Applications, 137(1) 122–131.
• BENJAMIN, T.B., Graham-Eagle, J. (1985). Long Gravity—Capillary Waves with Edge Constraints. IMA journal of applied mathematics, 35(1) 91–114.
• Graham-Eagle, J. (1983). A new method for calculating eigenvalues with applications to gravity-capillary waves with edge constraints (94:03 pp. 553–564).
Selected Presentations
• Extinguishing combustion waves with a fire break - ANZIAM Meeting, January 2010 - New Zealand
• Building a fire only to break it, January 2010 - University of Delaware, DE
• - National Center for Academic Transformation Redesign Alliance Conference, March 2008 - Orlando, FL
• Extinguishing fires with a break, April 2002 - Fitchburg College, MA
• - BRC conference on Math Teaching, February 2002 - Boston, Massachusetts
• - SIAM conference on Geosciences, June 2001 - Boulder, Colorado
• The effect of wind on combustion waves, December 1999 - University of Massachusetts Lowell, MA
• Combustion waves with reactant depletion - Combustion Conference, February 1999 - Sydney, Australia
• Combustion waves, October 1997 - University of Massachusetts Lowell, MA
• Calculating surface tension from drop profiles, March 1996 - University of Massachusetts Lowell, MA
• Computing surface tension from sessile drop profiles - Australia and New Zealand Industrial and Applied Mathematics Conference, February 1996 - Masterton, New Zealand
• - Conference on the Preparation and Professional Development of Mathematics Graduate Teaching Assistants, February 1996 - University of New Hampshire
• Degree theory and nonlocal PDEs, February 1995 - University of Massachusetts Lowell, MA
• - Workshop on Calculus for At-Risk Students, August 1994 - Austin, TX
• Teaching secondary school mathematics to secondary school mathematics teachers - Mathematics Education Seminar, April 1994 - Suffolk University, Boston, MA.
• Congruence arithmetic, December 1993 - University of Massachusetts Lowell, MA
• - IEEE Conference on Medical Imaging, October 1992 - Orlando, FL
• Nonlocal heat equation, November 1991 - University of Massachusetts at Amherst, MA
• Reaction-diffusion with non-local forcing term - Applied Mathematics Seminar, September 1991 - University of Massachusetts Amherst
• - Partial Differential Equations Conference in honor of Lawrence Payne, October 1990 - Cornell, NY
• - Wavelets Conference, June 1990 - Lowell, Massachusetts
• Thermal runaway, October 1989 - University of Massachusetts Lowell, MA
• The Riemann hypothesis, May 1989 - University of Delaware, DE
• A fractional order diffusion equation, April 1989 - University of Massachusetts Lowell, MA
• A free boundary problem arising in the theory of heat conduction, March 1989 - Washington and Lee University, VA
• Automorphic numbers, March 1989 - Muhlenberg College, PA
• Heat conduction with fractional order heat generation, March 1989 - University of Alabama at Birmingham, AL
• Upper and lower solutions for nonlinear elliptic partial differential equations, February 1989 - University of Delaware, DE
• A variational approach to upper and lower solutions. An international conference on differential equations. - Conference on Differential Equations in Honor of Alex McNabb, June 1988 - New Zealand
• Degree Theory, 1987 - University of Delaware, DE
• Monotone methods for the nonlinear partial differential equations of heat conduction, November 1987 - University of Auckland, NZ
• Nonstandard Analysis, June 1987 - University of Delaware, DE
• - Joint American Mathematical Society/Mathematics Association America meeting, January 1987 - San Antonio, TX
• - International Congress of Mathematicians, July 1986 - Berkeley, CA
• The relation between degree theory and monotone methods in nonlinear elliptic partial differential equations, January 1986 - Irvine, CA
• The Morse inequalities for compact manifolds, April 1985 - Victoria University, New Zealand
• Gravity-capillary waves with edge constraints and the propagation of capillary waves in a channel, June 1984 - Oxford University, UK
• - Mathematics in Industry Conference, July 1983 - Oxford, UK
• A new method for calculating eigenvalues with applications to gravity-capillary waves with edge constraints - Applied Mathematics Conference, April 1983 - Oxford, UK
• Gravity-capillary waves with edge constraints, April 1983 - Universite de Paris Sud & L'Ecole Normale, France
• - NATO/London Mathematical Society Conference, July 1982 - Oxford, UK
• Approximating real numbers by rationals, October 1980 - Victoria University, New Zealand
|
{"url":"https://www.uml.edu/sciences/mathematics/people/graham-eagle-james.aspx","timestamp":"2024-11-13T02:12:27Z","content_type":"text/html","content_length":"37914","record_id":"<urn:uuid:46d36d70-a437-4ee7-80b6-4ea03f214a79>","cc-path":"CC-MAIN-2024-46/segments/1730477028303.91/warc/CC-MAIN-20241113004258-20241113034258-00315.warc.gz"}
|
IQ MacKay Etf Naive Prediction | MMIN- Macroaxis
MMIN Etf USD 24.09 0.05 0.21%
The Naive Prediction forecasted value of IQ MacKay Municipal on the next trading day is expected to be 24.16 with a mean absolute deviation of 0.06 and the sum of the absolute errors of 3.52. MMIN
Etf Forecast is based on your current time horizon.
A naive forecasting model for IQ MacKay is a special case of the moving average forecasting where the number of periods used for smoothing is one. Therefore, the forecast of IQ MacKay Municipal value
for a given trading day is simply the observed value for the previous period. Due to the simplistic nature of the naive forecasting model, it can only be used to forecast up to one period.
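A naive forecast and the error statistics quoted on this page (MAD, MAPE, SAE) can be sketched in a few lines of Python; this is an illustration with made-up prices, not Macroaxis's actual model or data:

```python
def naive_forecast_errors(prices):
    """Naive (random-walk) forecast: the prediction for day t is simply
    the observed price at day t-1. Returns the one-step-ahead forecast
    and the in-sample error metrics MAD, MAPE, and SAE."""
    preds = prices[:-1]              # forecast for day t is price at t-1
    actuals = prices[1:]
    errors = [a - p for a, p in zip(actuals, preds)]
    n = len(errors)
    mad = sum(abs(e) for e in errors) / n                     # mean absolute deviation
    mape = sum(abs(e) / a for e, a in zip(errors, actuals)) / n
    sae = sum(abs(e) for e in errors)                         # sum of absolute errors
    next_forecast = prices[-1]       # tomorrow's forecast is today's close
    return next_forecast, mad, mape, sae

prices = [24.00, 24.10, 24.05, 24.20, 24.09]   # illustrative closes only
fcast, mad, mape, sae = naive_forecast_errors(prices)
print(fcast)   # 24.09
```

Because the forecast is just the last observation, the model can only be carried one period ahead, as noted above.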
IQ MacKay Naive Prediction Price Forecast For the 15th of November 2024
Given a 90 days horizon, the Naive Prediction forecasted value of IQ MacKay Municipal on the next trading day is expected to be 24.16, with a mean absolute deviation of 0.06, a mean absolute percentage error of 0.0024, and a sum of the absolute errors of 3.52.
Please note that although there have been many attempts to predict MMIN Etf prices using its time series forecasting, we generally do not recommend using it to place bets in the real market. The most
commonly used models for forecasting predictions are the autoregressive models, which specify that IQ MacKay's next future price depends linearly on its previous prices and some stochastic term
(i.e., imperfectly predictable multiplier).
IQ MacKay Etf Forecast Pattern
IQ MacKay Forecasted Value
In the context of forecasting IQ MacKay's Etf value on the next trading day, we examine the predictive performance of the model to find good statistically significant boundaries of downside and upside scenarios. IQ MacKay's downside and upside margins for the forecasting period are 23.82 and 24.49, respectively. We have considered IQ MacKay's daily market price to evaluate the above model's predictive performance. Remember, however, there is no scientific proof or empirical evidence that traditional linear or nonlinear forecasting models consistently outperform artificial intelligence and frequency-domain models in providing accurate forecasts.
Model Predictive Factors
The table below displays some essential indicators generated by the model, showing the relative quality of the Naive Prediction forecasting method and estimations of the prediction error of the IQ MacKay etf data series used in forecasting. Note that when a statistical model is used to represent IQ MacKay etf, the representation will rarely be exact, so some information will be lost by using the model to explain the process. AIC estimates the relative amount of information lost by a given model: the less information a model loses, the higher its quality.
AIC Akaike Information Criteria 112.858
Bias Arithmetic mean of the errors None
MAD Mean absolute deviation 0.0576
MAPE Mean absolute percentage error 0.0024
SAE Sum of the absolute errors 3.5158
This model is not at all useful as a medium- to long-range forecasting tool for IQ MacKay Municipal. It is simplistic and is included partly for completeness and partly because of its simplicity. It is unlikely that you'll want to use this model directly to predict IQ MacKay. Instead, consider using either the moving average model or the more general weighted moving average model with a higher (i.e., greater than 1) number of periods, and possibly a different set of weights.
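The weighted moving-average alternative suggested above can be sketched as follows. The window length, weights, and prices here are arbitrary illustration choices, not Macroaxis parameters.

```python
# Weighted moving-average forecast: the next value is a weighted mean of
# the last k observations. With a single weight (k = 1) this degenerates
# to the naive model described earlier.

def wma_forecast(prices, weights):
    """Forecast the next value from the trailing len(weights) observations.
    Weights are listed oldest-first and must have a positive sum."""
    window = prices[-len(weights):]
    return sum(w * p for w, p in zip(weights, window)) / sum(weights)

prices = [23.90, 24.00, 24.10, 24.05, 24.09]   # synthetic data
print(wma_forecast(prices, [1]))               # naive special case -> 24.09
print(wma_forecast(prices, [1, 2, 3]))         # recent days weighted more heavily
```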
Predictive Modules for IQ MacKay
There are currently many different techniques concerning forecasting the market as a whole, as well as predicting future values of individual securities such as IQ MacKay Municipal. Regardless of
method or technology, however, to accurately forecast the etf market is more a matter of luck rather than a particular technique. Nevertheless, trying to predict the etf market accurately is still an
essential part of the overall investment decision process. Using different forecasting techniques and comparing the results might improve your chances of accuracy even though unexpected events may
often change the market sentiment and impact your forecasting results.
HypePrediction: Low 23.75, Estimated 24.09, High 24.43
IntrinsicValuation: Low 22.20, Real 22.54, High 26.50
BollingerBand Projection: Low 23.85, Middle 24.04, High 24.22
Other Forecasting Options for IQ MacKay
For every potential investor in MMIN, whether a beginner or expert, IQ MacKay's price movement is the inherent factor that determines whether it is viable to invest in it or better to hold it. MMIN Etf price charts are filled with 'noise,' which can hugely alter the decision one makes regarding investing in MMIN. Basic forecasting techniques help filter out the noise by identifying IQ MacKay's price trends.
IQ MacKay Related Equities
One of the popular trading techniques among algorithmic traders is to use market-neutral strategies where every trade hedges away some risk. Because there are two separate transactions required, even
if one position performs unexpectedly, the other equity can make up some of the losses. Below are some of the equities that can be combined with IQ MacKay etf to make a market-neutral strategy. Peer
analysis of IQ MacKay could also be used in its relative valuation, which is a method of valuing IQ MacKay by comparing valuation metrics with similar companies.
Risk & Return Correlation
IQ MacKay Municipal Technical and Predictive Analytics
The etf market is financially volatile. Despite the volatility, there exist limitless possibilities of gaining profits and building passive income portfolios. With the complexity of IQ MacKay's price
movements, a comprehensive understanding of forecasting methods that an investor can rely on to make the right move is invaluable. These methods predict trends that assist an investor in predicting
the movement of IQ MacKay's current price.
IQ MacKay Market Strength Events
Market strength indicators help investors to evaluate how IQ MacKay etf reacts to ongoing and evolving market conditions. Investors can use them to make informed decisions about market timing, and to determine when trading IQ MacKay shares will generate the highest return on investment. By understanding and applying IQ MacKay etf market strength indicators, traders can identify IQ MacKay Municipal entry and exit signals to maximize returns.
IQ MacKay Risk Indicators
The analysis of IQ MacKay's basic risk indicators is one of the essential steps in accurately forecasting its future price. The process involves identifying the amount of risk involved in IQ MacKay's
investment and either accepting that risk or mitigating it. Along with some essential techniques for forecasting mmin etf prices, we also provide a set of basic risk indicators that can assist in the
individual investment decision or help in hedging the risk of your existing portfolios.
Mean Deviation 0.2218
Standard Deviation 0.3286
Variance 0.108
Please note, the risk measures we provide can be used independently or collectively to perform a risk assessment. When comparing two potential investments, we recommend comparing similar equities
with homogenous growth potential and valuation from related markets to determine which investment holds the most risk.
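The three indicators tabulated above (mean deviation, standard deviation, variance) are standard dispersion statistics and can be computed directly. A minimal sketch over a synthetic price series (not actual MMIN data; the function name is hypothetical):

```python
import math

# Basic dispersion-based risk indicators, as in the table above:
# mean absolute deviation, population variance, and standard deviation
# (the square root of the variance).

def risk_indicators(values):
    n = len(values)
    mean = sum(values) / n
    mean_dev = sum(abs(v - mean) for v in values) / n
    variance = sum((v - mean) ** 2 for v in values) / n   # population variance
    return {"mean_dev": mean_dev, "std": math.sqrt(variance), "var": variance}

print(risk_indicators([24.0, 24.3, 23.8, 24.5, 23.9]))   # synthetic data
```

Note the consistency check visible in the table itself: the reported variance (0.108) is the square of the reported standard deviation (0.3286).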
Pair Trading with IQ MacKay
One of the main advantages of trading using pair correlations is that every trade hedges away some risk. Because two separate transactions are required, even if the IQ MacKay position performs unexpectedly, the other equity can make up some of the losses. Pair trading also minimizes risk from directional movements in the market. For example, if an entire industry or sector drops because of unexpected headlines, the short position in IQ MacKay will appreciate, offsetting losses from the drop in the long position's value.
0.96 TFI SPDR Nuveen Bloomberg PairCorr
0.97 PZA Invesco National AMT PairCorr
0.96 MLN VanEck Long Muni PairCorr
0.94 RVNU Xtrackers Municipal PairCorr
0.59 YCS ProShares UltraShort Yen PairCorr
0.59 TBT ProShares UltraShort PairCorr
0.59 TMV Direxion Daily 20 PairCorr
0.47 USD ProShares Ultra Semi PairCorr
0.33 SGG Barclays Capital PairCorr
The ability to find closely correlated positions to IQ MacKay could be a great tool in your tax-loss harvesting strategies, allowing investors a quick way to find a similar-enough asset to replace IQ
MacKay when you sell it. If you don't do this, your portfolio allocation will be skewed against your target asset allocation. So, investors can't just sell and buy back IQ MacKay - that would be a
violation of the tax code under the "wash sale" rule, and this is why you need to find a similar enough asset and use the proceeds from selling IQ MacKay Municipal to buy it.
The correlation of IQ MacKay is a statistical measure of how it moves in relation to other instruments. This measure is expressed in what is known as the correlation coefficient, which ranges between
-1 and +1. A perfect positive correlation (i.e., a correlation coefficient of +1) implies that as IQ MacKay moves, either up or down, the other security will move in the same direction.
Alternatively, perfect negative correlation means that if IQ MacKay Municipal moves in either direction, the perfectly negatively correlated security will move in the opposite direction. If the
correlation is 0, the equities are not correlated; they are entirely random. A correlation greater than 0.8 is generally described as strong, whereas a correlation less than 0.5 is generally
considered weak.
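The correlation coefficient described above is the Pearson coefficient, which can be computed from two return series as follows. The series here are synthetic and the function name is an illustration, not site code.

```python
import math

# Pearson correlation coefficient between two return series, ranging from
# -1 (perfectly negatively correlated) to +1 (perfectly positively
# correlated), as described in the text above.

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

a = [0.01, -0.02, 0.015, 0.00, 0.01]        # synthetic daily returns
print(pearson(a, a))                        # identical series: +1
print(pearson(a, [-x for x in a]))          # mirrored series: -1
```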
Correlation analysis and pair trading evaluation for IQ MacKay can also be used as hedging techniques within a particular sector or industry, or even over random equities, to generate a better risk-adjusted return on your portfolios.
When determining whether IQ MacKay Municipal is a good investment, qualitative aspects like company culture, corporate governance, and ethical practices play a significant role. A comparison with peer companies also provides context and helps to understand if MMIN Etf is undervalued or overvalued. This multi-faceted approach, blending both quantitative and qualitative analysis, forms a solid foundation for making an informed investment decision about Iq Mackay Municipal Etf.
The market value of IQ MacKay Municipal
is measured differently than its book value, which is the value of MMIN that is recorded on the company's balance sheet. Investors also form their own opinion of IQ MacKay's value that differs from
its market value or its book value, called intrinsic value, which is IQ MacKay's true underlying value. Investors use various methods to calculate intrinsic value and buy a stock when its market
value falls below its intrinsic value. Because IQ MacKay's market value can be influenced by many factors that don't directly affect IQ MacKay's underlying business (such as a pandemic or basic
market pessimism), market value can vary widely from intrinsic value.
Please note, there is a significant difference between IQ MacKay's value and its price as these two are different measures arrived at by different means. Investors typically determine if IQ MacKay is
a good investment by looking at such factors as earnings, sales, fundamental and technical indicators, competition as well as analyst projections. However, IQ MacKay's price is the amount at which it
trades on the open market and represents the number that a seller and buyer find agreeable to each party.
|
{"url":"https://www.macroaxis.com/forecast/MMIN","timestamp":"2024-11-15T03:19:49Z","content_type":"text/html","content_length":"335570","record_id":"<urn:uuid:50adb12d-690f-429d-88b9-1a55b65d58e4>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00151.warc.gz"}
|
Quantum Hall transitions: An exact theory based on conformal restriction
We revisit the problem of the plateau transition in the integer quantum Hall effect. Here we develop an analytical approach for this transition, and for other two-dimensional disordered systems,
based on the theory of "conformal restriction." This is a mathematical theory that was recently developed within the context of the Schramm-Loewner evolution which describes the "stochastic geometry"
of fractal curves and other stochastic geometrical fractal objects in two-dimensional space. Observables elucidating the connection with the plateau transition include the so-called point-contact
conductances (PCCs) between points on the boundary of the sample, described within the language of the Chalker-Coddington network model for the transition. We show that the disorder-averaged PCCs are
characterized by a classical probability distribution for certain geometric objects in the plane (which we call pictures), occurring with positive statistical weights, that satisfy the crucial
so-called restriction property with respect to changes in the shape of the sample with absorbing boundaries; physically, these are boundaries connected to ideal leads. At the transition point, these
geometrical objects (pictures) become fractals. Upon combining this restriction property with the expected conformal invariance at the transition point, we employ the mathematical theory of
"conformal restriction measures" to relate the disorder-averaged PCCs to correlation functions of (Virasoro) primary operators in a conformal field theory (of central charge c=0). We show how this
can be used to calculate these functions in a number of geometries with various boundary conditions. Since our results employ only the conformal restriction property, they are equally applicable to a
number of other critical disordered electronic systems in two spatial dimensions, including for example the spin quantum Hall effect, the thermal metal phase in symmetry class D, and classical
diffusion in two dimensions in a perpendicular magnetic field. For most of these systems, we also predict exact values of critical exponents related to the spatial behavior of various
disorder-averaged PCCs.
All Science Journal Classification (ASJC) codes
• Electronic, Optical and Magnetic Materials
• Condensed Matter Physics
|
{"url":"https://cris.iucc.ac.il/en/publications/quantum-hall-transitions-an-exact-theory-based-on-conformal-restr","timestamp":"2024-11-06T07:34:15Z","content_type":"text/html","content_length":"54014","record_id":"<urn:uuid:c1c3e60e-df53-4176-8d4c-180661427c8e>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00494.warc.gz"}
|
Constructive set theory
Axiomatic constructive set theory is an approach to axiomatic set theory. The same language with "${\displaystyle =}$" and "${\displaystyle \in }$" of classical set theory is usually used, so this is not to be confused with a constructive types approach. On the other hand, some constructive theories are indeed motivated by their interpretability in type theories.
In addition to rejecting the principle of excluded middle (${\displaystyle {\mathrm {PEM} }}$), constructive set theories often require some logical quantifiers in their axioms to be set bounded. The latter is motivated by results tied to impredicativity.
Constructive outlook
Preliminary on the use of intuitionistic logic
The logic of the set theories discussed here is constructive in that it rejects the principle of excluded middle (${\displaystyle {\mathrm {PEM} }}$), i.e. that the disjunction ${\displaystyle \phi \lor \neg \phi }$ automatically holds for all propositions ${\displaystyle \phi }$. This is also often called the law of excluded middle (${\displaystyle {\mathrm {LEM} }}$) in contexts where it is assumed. Constructively, as a rule, to prove the excluded middle for a proposition ${\displaystyle P}$, i.e. to prove the particular disjunction ${\displaystyle P\lor \neg P}$, either ${\displaystyle P}$ or ${\displaystyle \neg P}$ needs to be explicitly proven. When either such proof is established, one says the proposition is decidable, and this then logically implies the disjunction holds. Similarly and more commonly, a predicate ${\displaystyle Q(x)}$ for ${\displaystyle x}$ in a domain ${\displaystyle X}$ is said to be decidable when the more intricate statement ${\displaystyle \forall (x\in X).{\big (}Q(x)\lor \neg Q(x){\big )}}$ is provable. Non-constructive axioms may enable proofs that formally claim decidability of such ${\displaystyle P}$ (resp. ${\displaystyle Q}$) in the sense that they prove excluded middle for ${\displaystyle P}$ (resp. the statement using the quantifier above) without demonstrating the truth of either side of the disjunction(s). This is often the case in classical logic. In contrast, axiomatic theories deemed constructive tend to not permit many classical proofs of statements involving properties that are provenly computationally undecidable.
The law of noncontradiction is a special case of the propositional form of modus ponens. Using the former with any negated statement ${\displaystyle \neg P}$, one valid De Morgan's law thus implies ${\displaystyle \neg \neg (P\lor \neg P)}$ already in the more conservative minimal logic. In words, intuitionistic logic still posits: It is impossible to rule out a proposition and rule out its negation both at once, and thus the rejection of any instantiated excluded middle statement for an individual proposition is inconsistent. Here the double-negation captures that the disjunction statement now provenly can never be ruled out or rejected, even in cases where the disjunction may not be provable (for example, by demonstrating one of the disjuncts, thus deciding ${\displaystyle P}$) from the assumed axioms.
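The double-negated excluded middle just mentioned is intuitionistically provable. As a sketch, in Lean syntax (using only constructive rules, no classical axioms):

```lean
-- ¬¬(P ∨ ¬P) holds intuitionistically: given a refutation h of the
-- disjunction, any proof p of P would yield the left disjunct and
-- contradict h; hence ¬P, giving the right disjunct, contradicting h.
theorem not_not_em (P : Prop) : ¬¬(P ∨ ¬P) :=
  fun h => h (Or.inr (fun p => h (Or.inl p)))
```

The proof term itself exhibits the reasoning in the paragraph above: rejecting the disjunction is self-defeating, even though neither disjunct need be provable.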
More generally, constructive mathematical theories tend to prove classically equivalent reformulations of classical theorems. For example, in constructive analysis, one cannot prove the intermediate
value theorem in its textbook formulation, but one can prove theorems with algorithmic content that, as soon as double negation elimination and its consequences are assumed legal, are at once
classically equivalent to the classical statement. The difference is that the constructive proofs are harder to find.
The intuitionistic logic underlying the set theories discussed here, unlike minimal logic, still permits double negation elimination for individual propositions ${\displaystyle P}$ for which excluded middle holds. In turn, the theorem formulations regarding finite objects tend to not differ from their classical counterparts. Given a model of all natural numbers, the equivalent for predicates, namely Markov's principle, does not automatically hold, but may be considered as an additional principle.
In an inhabited domain and using explosion, the disjunction ${\displaystyle P\lor \exists (x\in X).\neg Q(x)}$ implies the existence claim ${\displaystyle \exists (x\in X).(Q(x)\to P)}$, which in turn implies ${\displaystyle {\big (}\forall (x\in X).Q(x){\big )}\to P}$. Classically, these implications are always reversible. If one of the former is classically valid, it can be worth trying to establish it in the latter form. For the special case where ${\displaystyle P}$ is rejected, one deals with a counter-example existence claim ${\displaystyle \exists (x\in X).\neg Q(x)}$, which is generally constructively stronger than a rejection claim ${\displaystyle \neg \forall (x\in X).Q(x)}$: Exemplifying a ${\displaystyle t}$ such that ${\displaystyle Q(t)}$ is contradictory of course means it is not the case that ${\displaystyle Q}$ holds for all possible ${\displaystyle x}$. But one may also demonstrate that ${\displaystyle Q}$ holding for all ${\displaystyle x}$ would logically lead to a contradiction without the aid of a specific counter-example, and even while not being able to construct one. In the latter case, constructively, one does not stipulate an existence claim.
Imposed restrictions on a set theory
Compared to the classical counterpart, one is generally less likely to prove the existence of relations that cannot be realized. A restriction to the constructive reading of existence a priori leads to stricter requirements regarding which characterizations of a set ${\displaystyle f\subset X\times Y}$ involving unbounded collections constitute a (mathematical, and so always meaning total) function. The Axiom of Choice is such a non-constructive principle that implies ${\displaystyle {\mathrm {PEM} }}$ for the formulas permitted in one's adopted Separation schema, by Diaconescu's theorem, and a constructive development must therefore adopt a weaker substitute. So a genuinely intuitionistic development of set theory requires the rewording of some standard axioms to classically equivalent ones. Apart from demands for computability and reservations regarding impredicativity, the technical question of which non-logical axioms effectively extend the underlying logic of a theory is also a research subject in its own right.
With computably undecidable propositions already arising in Robinson arithmetic, even just Predicative separation lets one define elusive subsets easily. In stark contrast to the classical framework, constructive set theories may be closed under the rule that any property that is decidable for all sets is already equivalent to one of the two trivial ones, ${\displaystyle \top }$ or ${\displaystyle \bot }$. Undecidability of disjunctions also affects claims about total orders such as that of the ordinal numbers, expressed by the provability and rejection of the clauses in the order defining disjunction ${\displaystyle (\alpha \in \beta )\lor (\alpha =\beta )\lor (\beta \in \alpha )}$. This determines whether the relation is trichotomous. A weakened theory of ordinals in turn affects the proof theoretic strength defined in ordinal analysis.
In exchange, constructive set theories can exhibit attractive disjunction and existence properties, as is familiar from the study of constructive arithmetic theories. These are features of a fixed
theory which metalogically relate judgements of propositions provable in the theory. Particularly well-studied are those such features that can be expressed in Heyting arithmetic, with quantifiers
over numbers and which can often be realized by numbers, as formalized in proof theory. In particular, those are the numerical existence property and the closely related disjunctive property, as well
as being closed under Church's rule, witnessing any given function to be computable.^[2]
A set theory does not only express theorems about numbers, and so one may consider a more general so-called strong existence property that is harder to come by, as will be discussed. A theory has this property if the following can be established: For any property ${\displaystyle \phi }$, if the theory proves that a set exists that has that property, i.e. if the theory claims the existence statement, then there is also a property ${\displaystyle \psi }$ that uniquely describes such a set instance. More formally, for any predicate ${\displaystyle \phi }$ there is a predicate ${\displaystyle \psi }$ so that
${\displaystyle {\mathsf {T}}\vdash \exists x.\phi (x)\implies {\mathsf {T}}\vdash \exists !x.\phi (x)\land \psi (x)}$
The role analogous to that of realized numbers in arithmetic is played here by defined sets proven to exist by (or according to) the theory. Questions concerning the axiomatic set theory's strength and its relation to term construction are subtle. While many theories discussed tend to have all the various numerical properties, the existence property can easily be spoiled, as will be discussed. Weaker forms of existence properties have been formulated.
Some theories with a classical reading of existence can in fact also be constrained so as to exhibit the strong existence property. In Zermelo–Fraenkel set theory with sets all taken to be
ordinal-definable, a theory denoted ${\displaystyle {\mathsf {ZF}}+({\mathrm {V} }={\mathrm {HOD} })}$, no sets without such definability exist. The property is also enforced via the constructible
universe postulate in ${\displaystyle {\mathsf {ZF}}+({\mathrm {V} }={\mathrm {L} })}$. For contrast, consider the theory ${\displaystyle {\mathsf {ZFC}}}$ given by ${\displaystyle {\mathsf {ZF}}}$
plus the full axiom of choice existence postulate: Recall that this collection of axioms proves the well-ordering theorem, implying well-orderings exists for any set. In particular, this means that
relations ${\displaystyle W\subset {\mathbb {R} }\times {\mathbb {R} }}$ formally exist that establish the well-ordering of ${\displaystyle {\mathbb {R} }}$ (i.e. the theory claims the existence of a
least element for all subsets of ${\displaystyle {\mathbb {R} }}$ with respect to those relations). This is despite the fact that definability of such an ordering is known to be independent of ${\
displaystyle {\mathsf {ZFC}}}$. The latter implies that for no particular formula ${\displaystyle \psi }$ in the language of the theory does the theory prove that the corresponding set is a
well-ordering relation of the reals. So ${\displaystyle {\mathsf {ZFC}}}$ formally proves the existence of a subset ${\displaystyle W\subset {\mathbb {R} }\times {\mathbb {R} }}$ with the property of
being a well-ordering relation, but at the same time no particular set ${\displaystyle W}$ for which the property could be validated can possibly be defined.
Anti-classical principles
As mentioned above, a constructive theory ${\displaystyle {\mathsf {T}}}$ may exhibit the numerical existence property, ${\displaystyle {\mathsf {T}}\vdash \exists e.\psi (e)\implies {\mathsf {T}}\vdash \psi ({\underline {\mathrm {e} }})}$, for some number ${\displaystyle {\mathrm {e} }}$, where ${\displaystyle {\underline {\mathrm {e} }}}$ denotes the corresponding numeral in the formal theory. Here one must carefully distinguish between provable implications between two propositions, ${\displaystyle {\mathsf {T}}\vdash P\to Q}$, and a theory's properties of the form ${\displaystyle {\mathsf {T}}\vdash P\implies {\mathsf {T}}\vdash Q}$. When adopting a metalogically established schema of the latter type as an inference rule of one's proof calculus and nothing new can be proven, one says the theory ${\displaystyle {\mathsf {T}}}$ is closed under that rule.
One may instead consider adjoining the rule corresponding to the meta-theoretical property as an implication (in the sense of "${\displaystyle \to }$") to ${\displaystyle {\mathsf {T}}}$, as an axiom schema or in quantified form. A situation commonly studied is that of a fixed ${\displaystyle {\mathsf {T}}}$ exhibiting the meta-theoretical property of the following type: For an instance from some collection of formulas of a particular form, here captured via ${\displaystyle \phi }$ and ${\displaystyle \psi }$, one establishes the existence of a number ${\displaystyle {\mathrm {e} }}$ so that ${\displaystyle {\mathsf {T}}\vdash \phi \implies {\mathsf {T}}\vdash \psi ({\underline {\mathrm {e} }})}$. Here one may then postulate ${\displaystyle \phi \to \exists (e\in {\mathbb {N} }).\psi (e)}$, where the bound ${\displaystyle e}$ is a number variable in the language of the theory. For example, Church's rule is an admissible rule in first-order Heyting arithmetic ${\displaystyle {\mathsf {HA}}}$ and, furthermore, the corresponding Church's thesis principle ${\displaystyle {\mathrm {CT} }_{0}}$ may consistently be adopted as an axiom. The new theory with the principle added is anti-classical, in that it may not be consistent anymore to also adopt ${\displaystyle {\mathrm {PEM} }}$. Similarly, adjoining the excluded middle principle ${\displaystyle {\mathrm {PEM} }}$ to some theory ${\displaystyle {\mathsf {T}}}$, the theory thus obtained may prove new, strictly classical statements, and this may spoil some of the meta-theoretical properties that were previously established for ${\displaystyle {\mathsf {T}}}$. In such a fashion, ${\displaystyle {\mathrm {CT} }_{0}}$ may not be adopted in ${\displaystyle {\mathsf {HA}}+{\mathrm {PEM} }}$, also known as Peano arithmetic ${\displaystyle {\mathsf {PA}}}$.
The focus in this subsection shall be on set theories with quantification over a fully formal notion of an infinite sequences space, i.e. function space, as it will be introduced further below. A
translation of Church's rule into the language of the theory itself may here read
${\displaystyle \forall (f\in {\mathbb {N} }^{\mathbb {N} }).\exists (e\in {\mathbb {N} }).{\Big (}\forall (n\in {\mathbb {N} }).\exists (w\in {\mathbb {N} }).T(e,n,w)\land U(w,f(n)){\Big )}}$
Kleene's T predicate together with the result extraction expresses that any input number ${\displaystyle n}$ being mapped to the number ${\displaystyle f(n)}$ is, through ${\displaystyle w}$,
witnessed to be a computable mapping. Here ${\displaystyle {\mathbb {N} }}$ now denotes a set theory model of the standard natural numbers and ${\displaystyle e}$ is an index with respect to a fixed
program enumeration. Stronger variants have been used, which extend this principle to functions ${\displaystyle f\in {\mathbb {N} }^{X}}$ defined on domains ${\displaystyle X\subset {\mathbb {N} }}$
of low complexity. The principle rejects decidability for the predicate ${\displaystyle Q(e)}$ defined as ${\displaystyle \exists (w\in {\mathbb {N} }).T(e,e,w)}$, expressing that ${\displaystyle e}$
is the index of a computable function halting on its own index. Weaker, double negated forms of the principle may be considered too, which do not require the existence of a recursive implementation
for every ${\displaystyle f}$, but which still make principles inconsistent that claim the existence of functions which provenly have no recursive realization. Some forms of a Church's thesis as
principle are even consistent with the classical, weak so-called second-order arithmetic theory ${\displaystyle {\mathsf {RCA}}_{0}}$, a subsystem of the two-sorted first-order theory ${\displaystyle {\mathsf {Z}}_{2}}$.
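The predicate ${\displaystyle Q(e)}$ whose decidability is rejected here is the self-halting problem. The classical diagonal argument behind its undecidability can be sketched as follows; this is an informal illustration in which Python functions stand in for program indices, not part of the formal theory.

```python
# Diagonal argument behind the undecidability of Q(e) = "program e halts
# on its own index". A purported total halting decider is refuted by
# constructing a program that does the opposite of what the decider says.

def make_diag(halts):
    """Given a claimed total decider halts(program) -> bool, build the
    diagonal program: it halts exactly when the decider says it doesn't."""
    def diag():
        if halts(diag):
            while True:      # run forever, refuting halts(diag) == True
                pass
        return "halted"      # halt immediately, refuting halts(diag) == False
    return diag

# Any decider that answers False on its own diagonal is immediately wrong:
diag = make_diag(lambda program: False)
print(diag())  # the program halts, so the decider's False answer was incorrect
```

By contrast, the bounded predicate ${\displaystyle T(e,n,w)}$ ("program ${\displaystyle e}$ on input ${\displaystyle n}$ halts within witness ${\displaystyle w}$ steps") is decidable by direct simulation; it is only the unbounded existential over ${\displaystyle w}$ that is not.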
The collection of computable functions is classically subcountable, which classically is the same as being countable. But classical set theories will generally claim that ${\displaystyle {\mathbb {N} }^{\mathbb {N} }}$ holds also other functions than the computable ones. For example, there is a proof in ${\displaystyle {\mathsf {ZF}}}$ that total functions (in the set theory sense) do exist that cannot be captured by a Turing machine. Taking the computable world seriously as ontology, a prime example of an anti-classical conception related to the Markovian school is the permitted subcountability of various uncountable collections. When adopting the subcountability of the collection of all unending sequences of natural numbers (${\displaystyle {\mathbb {N} }^{\mathbb {N} }}$) as an axiom in a constructive theory, the "smallness" (in classical terms) of this collection, in some set theoretical realizations, is then already captured by the theory itself. A constructive theory may also adopt neither classical nor anti-classical axioms and so stay agnostic towards either possibility.
Constructive principles already prove ${\displaystyle \forall (x\in X).\neg \neg {\big (}Q(x)\lor \neg Q(x){\big )}}$ for any ${\displaystyle Q}$. And so for any given element ${\displaystyle x}$ of ${\displaystyle X}$, the corresponding excluded middle statement for the proposition cannot be negated. Indeed, for any given ${\displaystyle x}$, by noncontradiction it is impossible to rule out ${\displaystyle Q(x)}$ and rule out its negation both at once, and the relevant De Morgan's rule applies as above. But a theory may in some instances also permit the rejection claim ${\displaystyle \neg \forall (x\in X).{\big (}Q(x)\lor \neg Q(x){\big )}}$. Adopting this does not necessitate providing a particular ${\displaystyle t\in X}$ witnessing the failure of excluded middle for the particular proposition ${\displaystyle Q(t)}$, i.e. witnessing the inconsistent ${\displaystyle \neg {\big (}Q(t)\lor \neg Q(t){\big )}}$. Predicates ${\displaystyle Q(x)}$ on an infinite domain ${\displaystyle X}$ correspond to decision problems, and one may reject the possibility of decidability of a predicate without also making any existence claim in ${\displaystyle X}$. As another example, such a situation is enforced in intuitionistic analysis, in a case where the quantifier ranges over infinitely many unending binary sequences and ${\displaystyle Q(x)}$ states that a sequence ${\displaystyle x}$ is everywhere zero. Concerning this property, of being conclusively identified as the sequence which is forever constant, adopting Brouwer's continuity principle strictly rules out that this could be proven decidable for all the sequences.
So in a constructive context with a so-called non-classical logic as used here, one may consistently adopt axioms that contradict quantified forms of excluded middle and that are also non-constructive in the computable sense, or as gauged by the meta-logical existence properties discussed previously. In that way, a constructive set theory can also provide the framework to study non-classical theories, say rings modeling smooth infinitesimal analysis.
History and overview
Historically, the subject of constructive set theory (often also "${\displaystyle {\mathsf {CST}}}$") began with John Myhill's work on the theories also called ${\displaystyle {\mathsf {IZF}}}$ and ${\displaystyle {\mathsf {CST}}}$.^[3]^[4]^[5] In 1973, he had proposed the former as a first-order set theory based on intuitionistic logic, taking the most common foundation ${\displaystyle {\mathsf {ZFC}}}$ and throwing out the Axiom of choice as well as the principle of the excluded middle, initially leaving everything else as is. However, different forms of some of the ${\displaystyle {\mathsf {ZFC}}}$ axioms which are equivalent in the classical setting are inequivalent in the constructive setting, and some forms imply ${\displaystyle {\mathrm {PEM} }}$, as will be demonstrated. In those cases, the intuitionistically weaker formulations were consequently adopted. The far more conservative system ${\displaystyle {\mathsf {CST}}}$ is also a first-order theory, but with several sorts and bounded quantification, aiming to provide a formal foundation for Errett Bishop's program of constructive mathematics.
The main discussion presents a sequence of theories in the same language as ${\displaystyle {\mathsf {ZF}}}$, leading up to Peter Aczel's well studied ${\displaystyle {\mathsf {CZF}}}$,^[6] and beyond. Many modern results trace back to Rathjen and his students. ${\displaystyle {\mathsf {CZF}}}$ is also characterized by the two features present also in Myhill's theory: On the one hand, it uses Predicative Separation instead of the full, unbounded Separation schema. On the other hand, the impredicative Powerset axiom is discarded, generally in favor of related but weaker axioms. The strong form is very casually used in classical general topology. Adding ${\displaystyle {\mathrm {PEM} }}$ to a theory even weaker than ${\displaystyle {\mathsf {CZF}}}$ recovers ${\displaystyle {\mathsf {ZF}}}$, as detailed below.
The system which has come to be known as Intuitionistic Zermelo–Fraenkel set theory (${\displaystyle {\mathsf {IZF}}}$) is a strong set theory without ${\displaystyle {\mathrm {PEM} }}$. It is similar to ${\displaystyle {\mathsf {CZF}}}$, but less conservative or predicative. The theory denoted ${\displaystyle {\mathsf {IKP}}}$ is the constructive version of ${\displaystyle {\mathsf {KP}}}$, the classical Kripke–Platek set theory without a form of Powerset and where even the Axiom of Collection is bounded.
Many theories studied in constructive set theory are mere restrictions of Zermelo–Fraenkel set theory (${\displaystyle {\mathsf {ZF}}}$) with respect to their axioms as well as their underlying logic. Such theories can then also be interpreted in any model of ${\displaystyle {\mathsf {ZF}}}$. Peano arithmetic ${\displaystyle {\mathsf {PA}}}$ is bi-interpretable with the theory given by ${\displaystyle {\mathsf {ZF}}}$ minus Infinity and without infinite sets, plus the existence of all transitive closures. (The latter is also implied after promoting Regularity to the Set Induction schema, which is discussed below.) Likewise, constructive arithmetic can also be taken as an apology for most axioms adopted in ${\displaystyle {\mathsf {CZF}}}$:
Heyting arithmetic ${\displaystyle {\mathsf {HA}}}$ is bi-interpretable with a weak constructive set theory, as also described in the article on ${\displaystyle {\mathsf {HA}}}$. One may arithmetically characterize a membership relation "${\displaystyle \in }$" and with it prove - instead of the existence of a set of natural numbers ${\displaystyle \omega }$ - that all sets in its theory are in bijection with a (finite) von Neumann natural, a principle denoted ${\displaystyle {\mathrm {V} }={\mathrm {Fin} }}$. This context further validates Extensionality, Pairing, Union, Binary Intersection (which is related to the Axiom schema of predicative separation) and the Set Induction schema. Taken as axioms, the aforementioned principles constitute a set theory that is already identical with the theory given by ${\displaystyle {\mathsf {CZF}}}$ minus the existence of ${\displaystyle \omega }$ but plus ${\displaystyle {\mathrm {V} }={\mathrm {Fin} }}$ as axiom. All those axioms are discussed in detail below. Relatedly, ${\displaystyle {\mathsf {CZF}}}$ also proves that the hereditarily finite sets fulfill all the previous axioms. This is a result which persists when passing on to ${\displaystyle {\mathsf {PA}}}$ and ${\displaystyle {\mathsf {ZF}}}$ minus Infinity.
As far as constructive realizations go, there is a relevant realizability theory. Relatedly, Aczel's theory constructive Zermelo-Fraenkel ${\displaystyle {\mathsf {CZF}}}$ has been interpreted in Martin-Löf type theories, as sketched in the section on ${\displaystyle {\mathsf {CZF}}}$. In this way, theorems provable in this and weaker set theories are candidates for a computer realization.
Presheaf models for constructive set theories have also been introduced. These are analogous to presheaf models for intuitionistic set theory developed by Dana Scott in the 1980s.^[10]^[11]
Realizability models of ${\displaystyle {\mathsf {CZF}}}$ within the effective topos have been identified, which, say, at once validate full Separation, relativized dependent choice ${\displaystyle
{\mathrm {RDC} }}$, independence of premise ${\displaystyle {\mathrm {IP} }}$ for sets, but also the subcountability of all sets, Markov's principle ${\displaystyle {\mathrm {MP} }}$ and Church's
thesis ${\displaystyle {\mathrm {CT} _{0}}}$ in the formulation for all predicates.^[12]
In an axiomatic set theory, sets are the entities exhibiting properties. But there is then a more intricate relation between the set concept and logic. For example, the property of being a natural number smaller than 100 may be reformulated as being a member of the set of numbers with that property. The set theory axioms govern set existence and thus govern which predicates can be materialized as an entity in itself, in this sense. Specification is also directly governed by the axioms, as discussed below. For a practical consideration, consider the property of being a sequence of coin flip outcomes that overall show more heads than tails. This property may be used to separate out a corresponding subset of any set of finite sequences of coin flips. Relatedly, the measure-theoretic formalization of a probabilistic event is explicitly based around sets and provides many more examples.
This section introduces the object language and auxiliary notions used to formalize this materialization.
The propositional connective symbols used to form syntactic formulas are standard. The axioms of set theory give a means to prove equality "${\displaystyle =}$" of sets and that symbol may, by abuse of notation, be used for classes. A set in which the equality predicate is decidable is also called discrete. Negation "${\displaystyle \neg }$" of equality is sometimes called the denial of equality, and is commonly written "${\displaystyle \neq }$". However, in a context with apartness relations, for example when dealing with sequences, the latter symbol is also sometimes used for something different.
The common treatment, as also adopted here, formally only extends the underlying logic by one primitive binary predicate of set theory, "${\displaystyle \in }$". As with equality, negation of elementhood "${\displaystyle \in }$" is often written "${\displaystyle \notin }$".
Below the Greek ${\displaystyle \phi }$ denotes a proposition or predicate variable in axiom schemas and ${\displaystyle P}$ or ${\displaystyle Q}$ is used for particular such predicates. The word
"predicate" is sometimes used interchangeably with "formulas" as well, even in the unary case.
Quantifiers only ever range over sets and those are denoted by lower case letters. As is common, one may use argument brackets to express predicates, for the sake of highlighting particular free
variables in their syntactic expression, as in "${\displaystyle Q(z)}$". Unique existence ${\displaystyle \exists !x.Q(x)}$ here means ${\displaystyle \exists x.\forall y.{\big (}y=x\leftrightarrow Q
(y){\big )}}$.
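As a sanity check on this convention, the formulation of unique existence just given is constructively equivalent to the perhaps more familiar ${\displaystyle \exists x.{\big (}Q(x)\land \forall y.Q(y)\to y=x{\big )}}$. A minimal Lean 4 sketch (theorem names are illustrative):

```lean
-- The two common renderings of "there exists a unique x with Q x"
-- are interderivable by purely intuitionistic reasoning.
theorem uniqueExists_iff {α : Type} (Q : α → Prop) :
    (∃ x, ∀ y, y = x ↔ Q y) ↔ (∃ x, Q x ∧ ∀ y, Q y → y = x) :=
  ⟨fun ⟨x, hx⟩ => ⟨x, (hx x).mp rfl, fun y hy => (hx y).mpr hy⟩,
   fun ⟨x, hQ, hu⟩ => ⟨x, fun y => ⟨fun he => he.symm ▸ hQ, hu y⟩⟩⟩
```

The forward direction extracts the witness and its characterizing property; the backward direction rebuilds the biconditional from the uniqueness clause.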
As is also common, one makes use of set-builder notation for classes, which, in most contexts, are not part of the object language but are used for concise discussion. In particular, one may introduce notation declarations of the corresponding class via "${\displaystyle A=\{z\mid Q(z)\}}$", for the purpose of expressing any ${\displaystyle Q(a)}$ as ${\displaystyle a\in A}$. Logically equivalent predicates can be used to introduce the same class. One also writes ${\displaystyle \{z\in B\mid Q(z)\}}$ as shorthand for ${\displaystyle \{z\mid z\in B\land Q(z)\}}$. For example, one may consider ${\displaystyle \{z\in B\mid z\notin C\}}$ and this is also denoted ${\displaystyle B\setminus C}$.
One abbreviates ${\displaystyle \forall z.{\big (}z\in A\to Q(z){\big )}}$ by ${\displaystyle \forall (z\in A).Q(z)}$ and ${\displaystyle \exists z.{\big (}z\in A\land Q(z){\big )}}$ by ${\
displaystyle \exists (z\in A).Q(z)}$. The syntactic notion of bounded quantification in this sense can play a role in the formulation of axiom schemas, as seen in the discussion of axioms below.
Express the subclass claim ${\displaystyle \forall (z\in A).z\in B}$, i.e. ${\displaystyle \forall z.(z\in A\to z\in B)}$, by ${\displaystyle A\subset B}$. For a predicate ${\displaystyle Q}$,
trivially ${\displaystyle \forall z.{\big (}(z\in B\land Q(z))\to z\in B{\big )}}$. And so follows that ${\displaystyle \{z\in B\mid Q(z)\}\subset B}$. The notion of subset-bounded quantifiers, as in
${\displaystyle \forall (z\subset A).z\in B}$, has been used in set theoretical investigation as well, but will not be further highlighted here.
If there provenly exists a set inside a class, meaning ${\displaystyle \exists z.(z\in A)}$, then one calls it inhabited. One may also use quantification in ${\displaystyle A}$ to express this as ${\
displaystyle \exists (z\in A).(z=z)}$. The class ${\displaystyle A}$ is then provenly not the empty set, introduced below. While classically equivalent, constructively non-empty is a weaker notion
with two negations and ought to be called not uninhabited. Unfortunately, the word for the more useful notion of 'inhabited' is rarely used in classical mathematics.
Two ways to express that classes are disjoint capture many of the intuitionistically valid negation rules: ${\displaystyle {\big (}\forall (x\in A).x\notin B{\big )}\leftrightarrow \neg \exists (x\in A).x\in B}$. Using the above notation, this is a purely logical equivalence and in this article the proposition will furthermore be expressible as ${\displaystyle A\cap B=\{\}}$.
A subclass ${\displaystyle A\subset B}$ is called detachable from ${\displaystyle B}$ if the relativized membership predicate is decidable, i.e. if ${\displaystyle \forall (x\in B).x\in A\lor x\notin A}$ holds. It is also called decidable if the superclass is clear from the context - often this is the set of natural numbers.
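Modeling the superclass as a type and the subclass as a predicate, detachability can be stated and exemplified in Lean 4. The sketch below (names are illustrative) shows that the evens are detachable from the naturals, since the defining equation is decidable.

```lean
-- A subclass A of B is detachable when membership is decidable
-- for every element of B.
def Detachable {B : Type} (A : B → Prop) : Prop :=
  ∀ x : B, A x ∨ ¬ A x

-- Equality of naturals is decidable, so "n is even" is decided by
-- inspecting whether n % 2 = 0; no classical axiom is needed.
theorem even_detachable : Detachable (fun n : Nat => n % 2 = 0) :=
  fun n =>
    match Nat.decEq (n % 2) 0 with
    | .isTrue h  => Or.inl h
    | .isFalse h => Or.inr h
```

For a predicate without such a decision procedure, the disjunction ${\displaystyle A(x)\lor \neg A(x)}$ simply may not be provable, which is the situation the text describes.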
Extensional equivalence
Denote by ${\displaystyle A\simeq B}$ the statement expressing that two classes have exactly the same elements, i.e. ${\displaystyle \forall z.(z\in A\leftrightarrow z\in B)}$, or equivalently ${\
displaystyle (A\subset B)\land (B\subset A)}$. This is not to be conflated with the concept of equinumerosity also used below.
With ${\displaystyle A}$ standing for ${\displaystyle \{z\mid Q(z)\}}$ and the convenient notational relation between ${\displaystyle x\in A}$ and ${\displaystyle Q(x)}$, axioms of the form ${\displaystyle \exists a.\forall z.{\big (}z\in a\leftrightarrow Q(z){\big )}}$ postulate that the class of all sets for which ${\displaystyle Q}$ holds actually forms a set. Less formally, this may be expressed as ${\displaystyle \exists a.a\simeq A}$. Likewise, the proposition ${\displaystyle \forall a.(a\simeq A)\to P(a)}$ conveys "${\displaystyle P(A)}$ when ${\displaystyle A}$ is among the theory's sets." For the case where ${\displaystyle P}$ is the trivially false predicate, the proposition is equivalent to the negation of the former existence claim, expressing the non-existence of ${\displaystyle A}$ as a set.
Further extensions of class comprehension notation as above are in common use in set theory, giving meaning to statements such as "${\displaystyle \{f(z)\mid Q(z)\}\simeq \{\langle x,y,z\rangle \mid T(x,y,z)\}}$", and so on.
Syntactically more general, a set ${\displaystyle w}$ may also be characterized using another 2-ary predicate ${\displaystyle R}$ through ${\displaystyle \forall x.x\in w\leftrightarrow R(x,w)}$, where the right hand side may depend on the actual variable ${\displaystyle w}$, and possibly even on membership in ${\displaystyle w}$ itself.
Subtheories of ZF
Here a series of familiar axioms is presented, or the relevant slight reformulations thereof. It is emphasized how the absence of ${\displaystyle {\mathrm {PEM} }}$ in the logic affects what is
provable and it is highlighted which non-classical axioms are, in turn, consistent.
Using the notation introduced above, the following axiom gives a means to prove equality "${\displaystyle =}$" of two sets, so that through substitution, any predicate about ${\displaystyle x}$
translates to one of ${\displaystyle y}$. By the logical properties of equality, the converse direction of the postulated implication holds automatically.
│Extensionality │
│ │
│${\displaystyle \forall x.\forall y.\ \ x\simeq y\to x=y}$│
In a constructive interpretation, the elements of a subclass ${\displaystyle A=\{z\in B\mid Q(z)\lor \neg Q(z)\}}$ of ${\displaystyle B}$ may come equipped with more information than those of ${\displaystyle B}$, in the sense that being able to judge ${\displaystyle b\in A}$ is being able to judge ${\displaystyle Q(b)\lor \neg Q(b)}$. And (unless the whole disjunction follows from axioms) in the Brouwer–Heyting–Kolmogorov interpretation, this means to have proven ${\displaystyle Q(b)}$ or having rejected it. As ${\displaystyle \{z\in B\mid Q(z)\}}$ may not be detachable from ${\displaystyle B}$, i.e. as ${\displaystyle Q}$ may be not decidable for all elements in ${\displaystyle B}$, the two classes ${\displaystyle A}$ and ${\displaystyle B}$ must a priori be distinguished.
Consider a predicate ${\displaystyle Q}$ that provenly holds for all elements of a set ${\displaystyle y}$, so that ${\displaystyle y\simeq \{z\in y\mid Q(z)\}}$, and assume that the class on the
right hand side is established to be a set. Note that, even if this set on the right informally also ties to proof-relevant information about the validity of ${\displaystyle Q}$ for all the elements,
the Extensionality axiom postulates that, in our set theory, the set on the right hand side is judged equal to the one on the left hand side. This above analysis also shows that a statement of the
form ${\displaystyle \forall (x\in w).Q(x)}$, which in informal class notation may be expressed as ${\displaystyle w\subset \{x\mid Q(x)\}}$, is then equivalently expressed as ${\displaystyle \{x\in
w\mid Q(x)\}=w}$. This means that establishing such ${\displaystyle \forall }$-theorems (e.g. the ones provable from full mathematical induction) enables substituting the subclass of ${\displaystyle
w}$ on the left hand side of the equality for just ${\displaystyle w}$, in any formula.
Note that adopting "${\displaystyle =}$" as a symbol in a predicate logic theory makes equality of two terms a quantifier-free expression.
Alternative approaches
While often adopted, this axiom has been criticized in constructive thought, as it effectively collapses differently defined properties, or at least the sets viewed as the extension of these properties, a Fregean notion.
Modern type theories may instead aim at defining the demanded equivalence "${\displaystyle \simeq }$" in terms of functions, see e.g. type equivalence. The related concept of function extensionality
is often not adopted in type theory.
Other frameworks for constructive mathematics might instead demand that a particular rule for equality or apartness come with the elements ${\displaystyle z\in x}$ of each and every set ${\displaystyle x}$ discussed. But also in an approach to sets emphasizing apartness, the above definition in terms of subsets may be used to characterize a notion of equality "${\displaystyle \simeq }$" of those subsets. Relatedly, a loose notion of complementation of two subsets ${\displaystyle u\subset x}$ and ${\displaystyle v\subset x}$ is given when any two members ${\displaystyle s\in u}$ and ${\displaystyle t\in v}$ are provably apart from each other. The collection of complementing pairs ${\displaystyle \langle u,v\rangle }$ is algebraically well behaved.
Merging sets
Define class notation for the pairing of a few given elements via disjunctions. E.g. ${\displaystyle z\in \{a,b\}}$ is the quantifier-free statement ${\displaystyle (z=a)\lor (z=b)}$, and likewise $
{\displaystyle z\in \{a,b,c\}}$ says ${\displaystyle (z=a)\lor (z=b)\lor (z=c)}$, and so on.
Two other basic existence postulates given some other sets are as follows. Firstly,
│Pairing │
│ │
│${\displaystyle \forall x.\forall y.\ \ \exists p.\{x,y\}\subset p}$│
Given the definitions above, ${\displaystyle \{x,y\}\subset p}$ expands to ${\displaystyle \forall z.(z=x\lor z=y)\to z\in p}$, so this is making use of equality and a disjunction. The axiom says that for any two sets ${\displaystyle x}$ and ${\displaystyle y}$, there is at least one set ${\displaystyle p}$ which holds at least those two sets.
With bounded Separation below, also the class ${\displaystyle \{x,y\}}$ exists as a set. Denote by ${\displaystyle \langle x,y\rangle }$ the standard ordered pair model ${\displaystyle \{\{x\},\{x,y
\}\}}$, so that e.g. ${\displaystyle q=\langle x,y\rangle }$ denotes another bounded formula in the formal language of the theory.
And then, using existential quantification and a conjunction,
│Union │
│ │
│${\displaystyle \forall x.\ \ \exists u.\forall z.{\Big (}{\big (}\exists (y\in x).z\in y{\big )}\to z\in u{\Big )}}$│
saying that for any set ${\displaystyle x}$, there is at least one set ${\displaystyle u}$ which holds all the members ${\displaystyle z}$ of ${\displaystyle x}$'s members ${\displaystyle y}$. The minimal such set is the union.
The two axioms are commonly formulated stronger, in terms of "${\displaystyle \leftrightarrow }$" instead of just "${\displaystyle \to }$", although this is technically redundant in the context of $
{\displaystyle {\mathsf {BCST}}}$: As the Separation axiom below is formulated with "${\displaystyle \leftrightarrow }$", for statements ${\displaystyle \exists t.\forall z.\phi (z)\to z\in t}$ the
equivalence can be derived, given the theory allows for separation using ${\displaystyle \phi }$. In cases where ${\displaystyle \phi }$ is an existential statement, like here in the union axiom,
there is also another formulation using a universal quantifier.
Also using bounded Separation, the two axioms just stated together imply the existence of a binary union of two classes ${\displaystyle a}$ and ${\displaystyle b}$, when they have been established to
be sets, denoted by ${\displaystyle \bigcup \{a,b\}}$ or ${\displaystyle a\cup b}$. For a fixed set ${\displaystyle z}$, to validate membership ${\displaystyle z\in a\cup b}$ in the union of two
given sets ${\displaystyle y=a}$ and ${\displaystyle y=b}$, one needs to validate the ${\displaystyle z\in y}$ part of the axiom, which can be done by validating the disjunction of the predicates
defining the sets ${\displaystyle a}$ and ${\displaystyle b}$, for ${\displaystyle z}$. In terms of the associated sets, it is done by validating the disjunction ${\displaystyle z\in a\lor z\in b}$.
The union and other set forming notations are also used for classes. For instance, the proposition ${\displaystyle z\in A\land z\notin C}$ is written ${\displaystyle z\in A\setminus C}$. Let now ${\displaystyle B\subset A}$. Given ${\displaystyle z\in A}$, the decidability of membership in ${\displaystyle B}$, i.e. the potentially independent statement ${\displaystyle z\in B\lor z\notin B}$, can also be expressed as ${\displaystyle z\in B\cup (A\setminus B)}$. But, as for any excluded middle statement, the double-negation of the latter holds: That union isn't not inhabited by ${\displaystyle z}$. This goes to show that partitioning is also a more involved notion, constructively.
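The double negation claimed here is again purely logical: given ${\displaystyle z\in A}$, one cannot consistently refute ${\displaystyle z\in B\cup (A\setminus B)}$. A minimal Lean 4 sketch (names are illustrative), with classes modeled as predicates:

```lean
-- Given z ∈ A, the membership z ∈ B ∪ (A \ B) may be undecided, but its
-- double negation is provable: a refutation h yields ¬(B z) via its left
-- branch, and then the right branch produces the contradiction.
theorem nn_mem_partition {α : Type} (A B : α → Prop) (z : α)
    (hA : A z) : ¬¬(B z ∨ (A z ∧ ¬ B z)) :=
  fun h => h (Or.inr ⟨hA, fun hb => h (Or.inl hb)⟩)
```

This is just the instance of ${\displaystyle \neg \neg (P\lor \neg P)}$ relativized to the partition, matching the remark that the union "isn't not inhabited" by ${\displaystyle z}$.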
Set existence
The property that is false for any set corresponds to the empty class, which is denoted by ${\displaystyle \{\}}$ or zero, ${\displaystyle 0}$. That the empty class is a set readily follows from other existence axioms, such as the Axiom of Infinity below. But if, e.g., one is explicitly interested in excluding infinite sets in one's study, one may at this point adopt the Axiom of empty set, ${\displaystyle \exists x.\forall y.y\notin x}$. Introduction of the symbol ${\displaystyle \{\}}$ (as abbreviating notation for expressions involving characterizing properties) is justified as uniqueness for this set can be proven. As ${\displaystyle y\in \{\}}$ is false for any ${\displaystyle y}$, the axiom then reads ${\displaystyle \exists x.x\simeq \{\}}$.
Write ${\displaystyle 1}$ for ${\displaystyle S0}$, which equals ${\displaystyle \{\{\}\}}$, i.e. ${\displaystyle \{0\}}$. Likewise, write ${\displaystyle 2}$ for ${\displaystyle S1}$, which equals ${\displaystyle \{\{\},\{\{\}\}\}}$, i.e. ${\displaystyle \{0,1\}}$. A simple and provenly false proposition then is, for example, ${\displaystyle \{\}\in \{\}}$, corresponding to ${\displaystyle 0<0}$ in the standard arithmetic model. Again, here symbols such as ${\displaystyle \{\}}$ are treated as convenient notation and any proposition really translates to an expression using only "${\displaystyle \in }$" and logical symbols, including quantifiers. Accompanied by a metamathematical analysis that the capabilities of the new theories are equivalent in an effective manner, formal extensions by symbols such as ${\displaystyle 0}$ may also be considered.
More generally, for a set ${\displaystyle x}$, define the successor set ${\displaystyle Sx}$ as ${\displaystyle x\cup \{x\}}$. The interplay of the successor operation with the membership relation
has a recursive clause, in the sense that ${\displaystyle (y\in Sx)\leftrightarrow (y\in x\lor y=x)}$. By reflexivity of equality, ${\displaystyle x\in Sx}$, and in particular ${\displaystyle Sx}$ is
always inhabited.
The following makes use of axiom schemas, i.e. axioms for some collection of predicates. Some of the stated axiom schemas shall allow for any collection of set parameters as well (meaning any
particular named variables ${\displaystyle v_{0},v_{1},\dots ,v_{n}}$). That is, instantiations of the schema are permitted in which the predicate (some particular ${\displaystyle \phi }$) also
depends on a number of further set variables and the statement of the axiom is understood with corresponding extra outer universal closures (as in ${\displaystyle \forall v_{0}.\forall v_{1}.\cdots \
forall v_{n}.}$).
Basic constructive set theory ${\displaystyle {\mathsf {BCST}}}$ consists of several axioms also part of standard set theory, except the so called "full" Separation axiom is weakened. Beyond the four
axioms above, it postulates Predicative Separation as well as the Replacement schema.
│Axiom schema of predicative separation: For any bounded predicate ${\displaystyle \phi }$, with parameters and with set variable ${\displaystyle y}$ not free in it,│
│ │
│${\displaystyle \forall y.\,\exists s.\forall x.{\big (}x\in s\,\leftrightarrow \,(x\in y\land \phi (x)){\big )}}$ │
This axiom amounts to postulating the existence of a set ${\displaystyle s}$ obtained by the intersection of any set ${\displaystyle y}$ and any predicatively described class ${\displaystyle \{x\mid
\phi (x)\}}$. For any ${\displaystyle z}$ proven to be a set, when the predicate is taken as ${\displaystyle \phi (x):=x\in z}$, one obtains the binary intersection of sets and writes ${\displaystyle
s=y\cap z}$. Intersection corresponds to conjunction in an analog way to how union corresponds to disjunction.
When the predicate is taken as the negation ${\displaystyle \phi (x):=x\notin z}$, one obtains the difference principle, granting existence of any set ${\displaystyle y\setminus z}$. Note that sets like ${\displaystyle y\setminus y}$ or ${\displaystyle \{x\in y\mid \neg (x=x)\}}$ are always empty. So, as noted, from Separation and the existence of at least one set (e.g. Infinity below) will follow the existence of the empty set ${\displaystyle \{\}}$ (also denoted ${\displaystyle 0}$). Within this conservative context of ${\displaystyle {\mathsf {BCST}}}$, the Predicative Separation schema is actually equivalent to Empty Set plus the existence of the binary intersection for any two sets. The latter variant of axiomatization does not make use of a formula schema.
Predicative Separation is a schema that takes into account syntactic aspects of set defining predicates, up to provable equivalence. The permitted formulas are denoted by ${\displaystyle \Delta _{0}}$, the lowest level in the set theoretical Lévy hierarchy.^[13] General predicates in set theory are never syntactically restricted in such a way and so, in praxis, generic subclasses of sets are still part of the mathematical language. As the scope of subclasses that are provably sets is sensitive to what sets already exist, this scope is expanded when further set existence postulates are added.
For a proposition ${\displaystyle P}$, a recurring trope in the constructive analysis of set theory is to view the predicate ${\displaystyle x=0\land P}$ as the subclass ${\displaystyle B:=\{x\in 1\mid P\}}$ of the second ordinal ${\displaystyle 1:=S0=\{0\}}$. If it is provable that ${\displaystyle P}$ holds, or ${\displaystyle \neg P}$, or ${\displaystyle \neg \neg P}$, then ${\displaystyle B}$ is inhabited, or empty (uninhabited), or non-empty (not uninhabited), respectively. Clearly, ${\displaystyle P}$ is equivalent to both the proposition ${\displaystyle 0\in B}$, and also ${\displaystyle B=1}$. Likewise, ${\displaystyle \neg P}$ is equivalent to ${\displaystyle B=0}$ and, equivalently, also ${\displaystyle \neg (0\in B)}$. So, here, ${\displaystyle B}$ being detachable from ${\displaystyle 1}$ exactly means ${\displaystyle P\lor \neg P}$. In the model of the naturals, if ${\displaystyle B}$ is a number, ${\displaystyle 0\in B}$ also expresses that ${\displaystyle 0}$ is smaller than ${\displaystyle B}$. The union that is part of the successor operation definition above may be used to express the excluded middle statement as ${\displaystyle 0\in SB}$. In words, ${\displaystyle P}$ is decidable if and only if the successor of ${\displaystyle B}$ is larger than the smallest ordinal ${\displaystyle 0}$. The proposition ${\displaystyle P}$ is decided either way through establishing how ${\displaystyle 0}$ is smaller: By ${\displaystyle 0}$ already being smaller than ${\displaystyle B}$, or by ${\displaystyle 0}$ being ${\displaystyle SB}$'s direct predecessor. Yet another way to express excluded middle for ${\displaystyle P}$ is as the existence of a least number member of the inhabited class ${\displaystyle b:=B\cup \{1\}}$.
If one's separation axiom allows for separation with ${\displaystyle P}$, then ${\displaystyle B}$ is a subset, which may be called the truth value associated with ${\displaystyle P}$. Two truth values can be proven equal, as sets, by proving an equivalence. In terms of this terminology, the collection of truth values can a priori be understood to be rich. Unsurprisingly, decidable propositions have one of a binary set of truth values. The excluded middle disjunction for that ${\displaystyle P}$ is then also implied by the global statement ${\displaystyle \forall b.(0\in b)\lor (0\notin b)}$.
No universal set
When using the informal class terminology, any set is also considered a class. At the same time, there do arise so called proper classes that can have no extension as a set. When in a theory there is a proof of ${\displaystyle \neg \exists x.A\subset x}$, then ${\displaystyle A}$ must be proper. (When taking up the perspective of ${\displaystyle {\mathsf {ZF}}}$ on sets, a theory which has full Separation, proper classes are generally thought of as those that are "too big" to be a set. More technically, they are subclasses of the cumulative hierarchy that extend beyond any ordinal bound.)
By a remark in the section on merging sets, a set cannot consistently be ruled out to be a member of a class of the form ${\displaystyle A\cup \{x\mid x\notin A\}}$. A constructive proof that it is in that class contains information. Now if ${\displaystyle A}$ is a set, then the class ${\displaystyle \{x\mid x\notin A\}}$ is provably proper. The following demonstrates this in the special case when ${\displaystyle A}$ is empty, i.e. when the right side is the universal class. Being negative results, it reads as in the classical theory.
The following holds for any relation ${\displaystyle E}$. It gives a purely logical condition such that two terms ${\displaystyle s}$ and ${\displaystyle y}$ cannot be ${\displaystyle E}$-related to
one another.
${\displaystyle {\big (}\forall x.xEs\leftrightarrow (xEy\land \neg xEx){\big )}\to \neg (yEs\lor sEs\lor sEy)}$
Most important here is the rejection of the final disjunct, ${\displaystyle \neg sEy}$. The expression ${\displaystyle \neg (x\in x)}$ does not involve unbounded quantification and is thus allowed in Separation. Russell's construction in turn shows that ${\displaystyle \{x\in y\mid x\notin x\}\notin y}$. So for any set ${\displaystyle y}$, Predicative Separation alone implies that there exists a set which is not a member of ${\displaystyle y}$. In particular, no universal set can exist in this theory.
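The purely logical lemma displayed above, together with the rejection of each of the three disjuncts, can be verified in Lean 4 without any set theoretic axioms (a sketch; names are illustrative).

```lean
-- For an arbitrary binary relation E: if s relates exactly those x that
-- are E-related to y but not to themselves, then none of yEs, sEs, sEy
-- can hold. Each rejection is derived in turn, then combined.
theorem russell_lemma {α : Type} (E : α → α → Prop) (s y : α)
    (h : ∀ x, E x s ↔ (E x y ∧ ¬ E x x)) :
    ¬(E y s ∨ E s s ∨ E s y) :=
  -- sEs is impossible: it would certify its own non-self-membership.
  have nss : ¬ E s s := fun hss => ((h s).mp hss).2 hss
  -- sEy is impossible: with nss it would put s into s, contradicting nss.
  have nsy : ¬ E s y := fun hsy => nss ((h s).mpr ⟨hsy, nss⟩)
  -- yEs is impossible: it would make yEy both hold and fail.
  have nys : ¬ E y s := fun hys => ((h y).mp hys).2 ((h y).mp hys).1
  fun hd => hd.elim nys (fun hd' => hd'.elim nss nsy)
```

Instantiating ${\displaystyle E}$ with set membership and ${\displaystyle s}$ with the separated set then yields the non-existence claims in the text.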
In a theory further adopting the axiom of regularity, like ${\displaystyle {\mathsf {ZF}}}$, provenly ${\displaystyle x\in x}$ is false for any set ${\displaystyle x}$. There, this then means that the subset ${\displaystyle \{x\in y\mid x\notin x\}}$ is equal to ${\displaystyle y}$ itself, and that the class ${\displaystyle \{x\mid x\in x\}}$ is the empty set.
For any ${\displaystyle E}$ and ${\displaystyle y}$, the special case ${\displaystyle s=y}$ in the formula above gives
${\displaystyle \neg {\big (}\forall x.xEy\leftrightarrow \neg xEx{\big )}}$
This already implies that no set ${\displaystyle y}$ equals the subclass ${\displaystyle \{x\mid x\notin x\}}$ of the universal class, i.e. that subclass is a proper one as well. But even in ${\displaystyle {\mathsf {ZF}}}$ without Regularity it is consistent for there to be a proper class of singletons which each contain exactly themselves.
As an aside, in a theory with stratification like Intuitionistic New Foundations, the syntactic expression ${\displaystyle x\in x}$ may be disallowed in Separation. In turn, the above proof of
negation of the existence of a universal set cannot be performed, in that theory.
The axiom schema of Predicative Separation is also called ${\displaystyle \Delta _{0}}$-Separation or Bounded Separation, as in Separation for set-bounded quantifiers only. (Warning note: The Lévy
hierarchy nomenclature is in analogy to ${\displaystyle \Delta _{0}^{0}}$ in the arithmetical hierarchy, albeit comparison can be subtle: The arithmetic classification is sometimes expressed not
syntactically but in terms of subclasses of the naturals. Also, the bottom level of the arithmetical hierarchy has several common definitions, some not allowing the use of some total functions. A
similar distinction is not relevant on the level ${\displaystyle \Sigma _{1}^{0}}$ or higher. Finally note that a ${\displaystyle \Delta _{0}}$ classification of a formula may be expressed up to
equivalence in the theory.)
The schema is also the way in which Mac Lane weakens a system close to Zermelo set theory ${\displaystyle {\mathsf {Z}}}$, for mathematical foundations related to topos theory. Bounded Separation likewise features in Kripke-Platek set theory.
The restriction in the axiom is also gatekeeping impredicative definitions: Existence should at best not be claimed for objects that are not explicitly describable, or whose definition involves
themselves or reference to a proper class, such as when a property to be checked involves a universal quantifier. So in a constructive theory without Axiom of power set, when ${\displaystyle R}$
denotes some 2-ary predicate, one should not generally expect a subclass ${\displaystyle s}$ of ${\displaystyle y}$ to be a set, in case that it is defined, for example, as in
${\displaystyle \{x\in y\mid \forall t.{\big (}(t\subset y)\to R(x,t){\big )}\}}$,
or via similar definitions involving any quantification over the sets ${\displaystyle t\subset y}$. Note that if this subclass ${\displaystyle s}$ of ${\displaystyle y}$ is provenly a set, then
this subset itself is also in the unbounded scope of set variable ${\displaystyle t}$. In other words, as the subclass property ${\displaystyle s\subset y}$ is fulfilled, this exact set ${\
displaystyle s}$, defined using the expression ${\displaystyle R(x,s)}$, would play a role in its own characterization.
While predicative Separation leads to fewer given class definitions being sets, it may be emphasized that many class definitions that are classically equivalent are not so when restricting oneself to
the weaker logic. Due to the potential undecidability of general predicates, the notion of subset and subclass is automatically more elaborate in constructive set theories than in classical ones. So
in this way one has obtained a broader theory. This remains true if full Separation is adopted, such as in the theory ${\displaystyle {\mathsf {IZF}}}$, which however spoils the
existence property
as well as the standard type theoretical interpretations, and in this way spoils a bottom-up view of constructive sets. As an aside, as subtyping is not a necessary feature of constructive type theory, constructive set theory can be said to quite differ from that framework.
Next consider the
│Axiom schema of Replacement: For any predicate ${\displaystyle \phi }$ with set variable ${\displaystyle r}$ not free in it, │
│ │
│${\displaystyle \forall d.\ \ \forall (x\in d).\exists !y.\phi (x,y)\to \exists r.\forall y.{\big (}y\in r\leftrightarrow \exists (x\in d).\phi (x,y){\big )}}$│
It is granting existence, as sets, of the range of function-like predicates, obtained via their domains. In the above formulation, the predicate is not restricted akin to the Separation schema, but
this axiom already involves an existential quantifier in the antecedent. Of course, weaker schemas could be considered as well.
Via Replacement, the existence of any pair ${\displaystyle \{x,y\}}$ also follows from that of any other particular pair, such as ${\displaystyle \{0,1\}=2=SS0}$. But as the binary union used in ${\
displaystyle S}$ already made use of the Pairing axiom, this approach then necessitates postulating the existence of ${\displaystyle 2}$ over that of ${\displaystyle 0}$. In a theory with the
impredicative Powerset axiom, the existence of ${\displaystyle 2\subset {\mathcal {P}}{\mathcal {P}}0}$ can also be demonstrated using Separation.
With the Replacement schema, the theory outlined thus far proves that the Cartesian product, holding all pairs of elements of two sets, is a set. In turn, for any fixed number (in the metatheory), the corresponding product expression, say
${\displaystyle x\times x\times x\times x}$
, can be constructed as a set. The axiomatic requirements for sets recursively defined in the language are discussed further below. A set
${\displaystyle x}$
is discrete, i.e. equality of elements inside a set
${\displaystyle x}$
is decidable, if the corresponding relation as a subset of
${\displaystyle x\times x}$
is decidable.
Replacement is relevant for function comprehension and can be seen as a form of comprehension more generally. Only when assuming ${\displaystyle {\mathrm {PEM} }}$ does Replacement already imply full
Separation. In ${\displaystyle {\mathsf {ZF}}}$, Replacement is mostly important to prove the existence of sets of high rank, namely via instances of the axiom schema where ${\displaystyle \phi
(x,y)}$ relates relatively small set ${\displaystyle x}$ to bigger ones, ${\displaystyle y}$.
Constructive set theories commonly have Axiom schema of Replacement, sometimes restricted to bounded formulas. However, when other axioms are dropped, this schema is actually often strengthened - not
beyond ${\displaystyle {\mathsf {ZF}}}$, but instead merely to gain back some provability strength. Such stronger axioms exist that do not spoil the strong
existence properties
of a theory, as discussed further below.
If ${\displaystyle i_{X}}$ is provenly a function on ${\displaystyle X}$ and it is equipped with a codomain ${\displaystyle Y}$ (all discussed in detail below), then the image of ${\displaystyle i_
{X}}$ is a subset of ${\displaystyle Y}$. In other approaches to the set concept, the notion of subsets is defined in terms of "operations", in this fashion.
Hereditarily finite sets
Pendants of the elements of the class of
hereditarily finite sets
${\displaystyle H_{\aleph _{0}}}$
can be implemented in any common programming language. The axioms discussed above abstract from common operations on the
set data type
: Pairing and Union are related to nesting and flattening, or taken together concatenation. Replacement is related to a map and Separation is then related to the often simpler filter. Replacement together with Set Induction (introduced below) suffices to axiomatize
${\displaystyle H_{\aleph _{0}}}$
constructively and that theory is also studied without Infinity.
A sort of blend between pairing and union, an axiom more readily related to the successor is the Axiom of adjunction.^[14]^[15] Such principles are relevant for the standard modeling of individual von Neumann ordinals. Axiom formulations also exist that pair Union and Replacement in one. While postulating Replacement is not a necessity in the design of a weak constructive set theory that is
bi-interpretable with Heyting arithmetic ${\displaystyle {\mathsf {HA}}}$, some form of induction is. For comparison, consider the very weak classical theory called General set theory that interprets
the class of natural numbers and their arithmetic via just Extensionality, Adjunction and full Separation.
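These correspondences to programming can be sketched with nested frozensets standing in for hereditarily finite sets. The following is a minimal illustration; the helper names (pair, union_set, separation, replacement, adjoin) are our own, not standard terminology.

```python
def pair(x, y):
    """Pairing: the set {x, y}."""
    return frozenset({x, y})

def union_set(x):
    """Union: members of members of x, i.e. flattening one level of nesting."""
    return frozenset(z for y in x for z in y)

def separation(x, phi):
    """Bounded Separation: {z in x | phi(z)} -- a filter."""
    return frozenset(z for z in x if phi(z))

def replacement(x, f):
    """Replacement: the image of x under f -- a map."""
    return frozenset(f(z) for z in x)

def adjoin(x, y):
    """Adjunction: x together with {y}, i.e. x with y added as an element."""
    return union_set(pair(x, frozenset({y})))

empty = frozenset()
one = adjoin(empty, empty)   # the set {0}, built from the empty set alone
```

Since frozensets are unordered, extensionality holds automatically: two nested frozensets compare equal exactly when they have the same members.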
The discussion now proceeds with axioms granting existence of objects which, in different but related form, are also found in dependent type theories, namely function spaces and the collection of natural numbers as a completed set. Infinite sets are particularly handy to reason about operations applied to sequences defined on unbounded index domains, say the formal differentiation of a generating function or the addition of two Cauchy sequences.
For some fixed predicate ${\displaystyle I}$ and a set ${\displaystyle a}$, the statement ${\displaystyle I(a)\land {\big (}\forall y.I(y)\to a\subset y{\big )}}$ expresses that ${\displaystyle a}$
is the smallest (in the sense of "${\displaystyle \subset }$") among all sets ${\displaystyle y}$ for which ${\displaystyle I(y)}$ holds true, and that it is always a subset of such ${\displaystyle
y}$. The aim of the axiom of infinity is to eventually obtain the unique smallest inductive set.
In the context of common set theory axioms, one way to state infinitude is that a class is inhabited and also includes a chain of membership (or alternatively a chain of supersets). That is,
${\displaystyle {\big (}\exists z.z\in A{\big )}\land \forall (x\in A).\exists (s\in A).x\in s}$.
More concretely, denote by ${\displaystyle \mathrm {Ind} _{A}}$ the inductive property,
${\displaystyle (0\in A)\land \forall (x\in A).Sx\in A}$.
In terms of a predicate ${\displaystyle Q}$ underlying the class so that ${\displaystyle \forall x.(x\in A)\leftrightarrow Q(x)}$, the latter translates to ${\displaystyle Q(0)\land \forall x.{\big
(}Q(x)\to Q(Sx){\big )}}$.
Write ${\displaystyle \bigcap B}$ for the general intersection ${\displaystyle \{x\mid \forall (y\in B).x\in y\}}$. (A variant of this definition may be considered which requires ${\displaystyle \cap
B\subset \cup B}$, but we only use this notion for the following auxiliary definition.)
One commonly defines a class ${\displaystyle \omega =\bigcap \{y\mid \mathrm {Ind} _{y}\}}$, the intersection of all inductive sets. (Variants of this treatment may work in terms of a formula that
depends on a set parameter ${\displaystyle w}$ so that ${\displaystyle \omega \subset w}$.) The class ${\displaystyle \omega }$ exactly holds all ${\displaystyle x}$ fulfilling the unbounded property
${\displaystyle \forall y.\mathrm {Ind} _{y}\to x\in y}$. The intention is that if inductive sets exist at all, then the class ${\displaystyle \omega }$ shares each common natural number with them,
and then the proposition ${\displaystyle \omega \subset A}$, by definition of "${\displaystyle \subset }$", implies that ${\displaystyle Q}$ holds for each of these naturals. While bounded separation
does not suffice to prove ${\displaystyle \omega }$ to be the desired set, the language here forms the basis for the following axiom, granting natural number induction for predicates that constitute
a set.
The elementary constructive Set Theory ${\displaystyle {\mathsf {ECST}}}$ has the axioms of ${\displaystyle {\mathsf {BCST}}}$ as well as the postulate
│Strong Infinity │
│ │
│${\displaystyle \exists w.{\Big (}\mathrm {Ind} _{w}\,\land \,{\big (}\forall y.\mathrm {Ind} _{y}\to w\subset y{\big )}{\Big )}}$│
Going on, one takes the symbol ${\displaystyle \omega }$ to denote the now unique smallest inductive set, an unbounded
von Neumann ordinal
. It contains the empty set and, for each set in
${\displaystyle \omega }$
, another set in
${\displaystyle \omega }$
that contains one element more.
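For illustration, the von Neumann construction of ${\displaystyle \omega }$'s members can be sketched with frozensets; S and numeral below are hypothetical helper names for this example.

```python
def S(x):
    """Successor: Sx = x together with {x}."""
    return x | frozenset({x})

def numeral(n):
    """The n-th von Neumann numeral: 0 is {}, and n+1 is S applied to n."""
    x = frozenset()
    for _ in range(n):
        x = S(x)
    return x

zero, three, five = numeral(0), numeral(3), numeral(5)
assert three in five       # membership encodes the order: 3 < 5
assert S(three) != zero    # no successor equals zero, since S(x) contains x
assert five == frozenset(numeral(k) for k in range(5))  # 5 = {0, 1, 2, 3, 4}
```

Each numeral is literally the set of all smaller numerals, so "one element more" per step is visible in the representation itself.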
Symbols called zero and successor are in the signature of the theory of Peano arithmetic. In
${\displaystyle {\mathsf {BCST}}}$
, the above defined successor of any number also being in the class
${\displaystyle \omega }$
follows directly from the characterization of the natural numbers by our von Neumann model. Since the successor of such a set contains itself, one also finds that no successor equals zero. So two of the Peano axioms regarding the symbol zero and the one regarding closedness of ${\displaystyle S}$ come easily. Fourthly, in
${\displaystyle {\mathsf {ECST}}}$
, where
${\displaystyle \omega }$
is a set,
${\displaystyle S}$ on ${\displaystyle \omega }$ can be proven to be an injective operation.
For some predicate of sets ${\displaystyle P}$, the statement ${\displaystyle \forall S.(S\subset \omega \to P(S))}$ claims ${\displaystyle P}$ holds for all subsets of the set of naturals. And the
axiom now proves such sets do exist. Such quantification is also possible in second-order arithmetic.
The pairwise order "${\displaystyle <}$" on the naturals is captured by their membership relation "${\displaystyle \in }$". The theory proves the order as well as the equality relation on this set to
be decidable. Not only is no number smaller than ${\displaystyle 0}$, but induction implies that among subsets of ${\displaystyle \omega }$, it is exactly the empty set which has no least member. The
contrapositive of this proves the double-negated
least number existence
for all non-empty subsets of
${\displaystyle \omega }$
. Another valid principle also classically equivalent to it is least number existence for all inhabited detachable subsets. That said, the bare existence claim of a least member for the inhabited subset ${\displaystyle b:=\{z\in 1\mid P\}\cup \{1\}}$ of ${\displaystyle \omega }$ is equivalent to excluded middle for ${\displaystyle P}$, and a constructive theory will therefore not prove ${\displaystyle \omega }$ to be well-ordered.
Weaker formulations of infinity
Should it need motivation, the handiness of postulating an unbounded set of numbers in relation to other inductive properties becomes clear in the discussion of arithmetic in set theory further
below. But as is familiar from classical set theory, also weak forms of Infinity can be formulated. For example, one may just postulate the existence of some inductive set, ${\displaystyle \exists y.
\mathrm {Ind} _{y}}$ - such an existence postulate suffices when full Separation may then be used to carve out the inductive subset ${\displaystyle w}$ of natural numbers, the shared subset of all
inductive classes. Alternatively, more specific mere existence postulates may be adopted. Either way, the inductive set then fulfills the following ${\displaystyle \Delta _{0}}$ predecessor
existence property in the sense of the von Neumann model:
${\displaystyle \forall m.(m\in w)\leftrightarrow {\big (}m=0\lor \exists (p\in w).Sp=m{\big )}}$
Without making use of the previously defined successor notation, the extensional equality to a successor ${\displaystyle Sp=m}$ is captured by ${\displaystyle \forall n.(n\in m)\leftrightarrow (n=p\lor n\in p)}$. This expresses that all elements ${\displaystyle m}$ are either equal to ${\displaystyle 0}$ or themselves hold a predecessor set ${\displaystyle p\in w}$ which shares all other members with ${\displaystyle m}$.
Observe that through the expression "${\displaystyle \exists (p\in w)}$" on the right hand side, the property characterizing ${\displaystyle w}$ by its members ${\displaystyle m}$ here syntactically
again contains the symbol ${\displaystyle w}$ itself. Due to the bottom-up nature of the natural numbers, this is tame here. Assuming ${\displaystyle \Delta _{0}}$-set induction on top of ${\
displaystyle {\mathsf {ECST}}}$, no two different sets have this property. Also note that there are also longer formulations of this property, avoiding "${\displaystyle \exists (p\in w)}$" in favor of unbounded quantifiers.
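On a finite initial segment of the von Neumann model (a stand-in, since ${\displaystyle \omega }$ itself is infinite), this predecessor existence property can be checked mechanically; S and numeral are our own helper names.

```python
def S(x):
    """Successor: Sx = x together with {x}."""
    return x | frozenset({x})

def numeral(n):
    """The n-th von Neumann numeral."""
    x = frozenset()
    for _ in range(n):
        x = S(x)
    return x

w_segment = [numeral(n) for n in range(8)]  # finite stand-in for w

# Every member m is either 0 or extensionally equal to Sp for some p in the segment.
for m in w_segment:
    assert m == frozenset() or any(S(p) == m for p in w_segment)
```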
Number bounds
Adopting an Axiom of Infinity, the set-bounded quantification legal in predicates used in ${\displaystyle \Delta _{0}}$-Separation then explicitly permits numerically unbounded quantifiers - the two
meanings of "bounded" should not be confused. With ${\displaystyle \omega }$ at hand, call a class of numbers ${\displaystyle I\subset \omega }$ bounded if the following existence statement holds
${\displaystyle \exists (m\in \omega ).\forall (n\in \omega ).(n\in I\to n<m)}$
This is a statement of finiteness, also equivalently formulated via ${\displaystyle m\leq n\to n\notin I}$. Similarly, to reflect more closely the discussion of functions below, consider the above condition in the form ${\displaystyle \exists (m\in \omega ).\forall (n\in I).(n<m)}$. For decidable properties, these are ${\displaystyle \Sigma _{2}^{0}}$-statements in arithmetic, but with the Axiom of Infinity, the two quantifiers are set-bound.
For a class ${\displaystyle C}$, the logically positive unboundedness statement
${\displaystyle \forall (k\in \omega ).\exists (j\in \omega ).(k\leq j\land j\in C)}$
is now also one of infinitude. It is ${\displaystyle \Pi _{2}^{0}}$ in the decidable arithmetic case. To validate infinitude of a set, this property even works if the set holds other elements besides infinitely many members of ${\displaystyle \omega }$.
Moderate induction in ECST
In the following, an initial segment of the natural numbers, i.e. ${\displaystyle \{n\in \omega \mid n<m\}}$ for any ${\displaystyle m\in \omega }$ and including the empty set, is denoted by ${\
displaystyle \{0,1,\dots ,m-1\}}$. This set equals ${\displaystyle m}$ and so at this point "${\displaystyle m-1}$" is mere notation for its predecessor (i.e. not involving subtraction function).
It is instructive to recall the way in which a theory with set comprehension and extensionality ends up encoding predicate logic. Like any class in set theory, a set can be read as corresponding to
predicates on sets. For example, an integer is even if it is a member of the set of even integers, or a natural number has a successor if it is a member of the set of natural numbers that have a
successor. For a less primitive example, fix some set ${\displaystyle y}$ and let ${\displaystyle Q(n)}$ denote the existential statement that the function space on the finite ordinal into ${\displaystyle y}$ exists. The predicate will be denoted ${\displaystyle \exists h.h\simeq y^{\{0,1,\dots ,n-1\}}}$ below, and here the existential quantifier is not merely one over natural numbers, nor
is it bounded by any other set. Now a proposition like the finite exponentiation principle ${\displaystyle \forall (n\in \omega ).Q(n)}$ and, less formally, the equality ${\displaystyle \omega =\{n\
in \omega \mid Q(n)\}}$ are just two ways of formulating the same desired statement, namely an ${\displaystyle n}$-indexed conjunction of existential propositions where ${\displaystyle n}$ ranges
over the set of all naturals. Via extensional identification, the second form expresses the claim using notation for subclass comprehension and the bracketed object on the right hand side may not
even constitute a set. If that subclass is not provably a set, it may not actually be used in many set theory principles in proofs, and establishing the universal closure ${\displaystyle \forall (n\
in \omega ).Q(n)}$ as a theorem may not be possible. The set theory can thus be strengthened by more set existence axioms, to be used with predicative bounded Separation, but also by just postulating
stronger ${\displaystyle \forall }$-statements.
The second universally quantified conjunct in the strong axiom of Infinity expresses mathematical induction for all ${\displaystyle y}$ in the universe of discourse, i.e. for sets. This is because
the consequent of this clause, ${\displaystyle \omega \subset y}$, states that all ${\displaystyle n\in \omega }$ fulfill the associated predicate. Being able to use predicative separation to define
subsets of ${\displaystyle \omega }$, the theory proves induction for all predicates ${\displaystyle \phi (n)}$ involving only set-bounded quantifiers. This role of set-bounded quantifiers also means
that more set existence axioms impact the strength of this induction principle, further motivating the function space and collection axioms that will be a focus of the rest of the article. Notably, $
{\displaystyle {\mathsf {ECST}}}$ already validates induction with quantifiers over the naturals, and hence induction as in the first-order arithmetic theory ${\displaystyle {\mathsf {HA}}}$. The so
called axiom of full mathematical induction for any predicate (i.e. class) expressed through set theory language is far stronger than the bounded induction principle valid in ${\displaystyle {\mathsf
{ECST}}}$. The former induction principle could be directly adopted, closer mirroring second-order arithmetic. In set theory it also follows from full (i.e. unbounded) Separation, which says that all predicates on ${\displaystyle \omega }$ are sets. Mathematical induction is also superseded by the (full) Set induction axiom.
Warning note: In naming induction statements, one must take care not to conflate terminology with arithmetic theories. The first-order induction schema of natural number arithmetic theory claims
induction for all predicates definable in the language of
first-order arithmetic
, namely predicates of just numbers. So to interpret the axiom schema of
${\displaystyle {\mathsf {HA}}}$
, one interprets these arithmetical formulas. In that context, the
bounded quantification
specifically means quantification over a finite range of numbers. One may also speak about the induction in the first-order but two-sorted theory of so-called second-order arithmetic
${\displaystyle {\mathsf {Z}}_{2}}$
, in a form explicitly expressed for subsets of the naturals. That class of subsets can be taken to correspond to a richer collection of formulas than the first-order arithmetic definable ones. In
the program of
reverse mathematics
, all mathematical objects discussed are encoded as naturals or subsets of naturals. Subsystems of
${\displaystyle {\mathsf {Z}}_{2}}$
with very low
comprehension studied in that framework have a language that does not merely express
arithmetical sets
, while all sets of naturals that particular such theories prove to exist are just
computable sets
. Theorems therein can be a relevant reference point for weak set theories with a set of naturals, predicative separation and only some further restricted form of induction. Constructive reverse
mathematics exists as a field but is less developed than its classical counterpart.
^[16] ${\displaystyle {\mathsf {Z}}_{2}}$
shall moreover not be confused with the second-order formulation of Peano arithmetic
${\displaystyle {\mathsf {PA}}_{2}}$
. Typical set theories like the one discussed here are also first-order, but those theories are not arithmetics and so formulas may also quantify over the subsets of the naturals. When discussing the
strength of axioms concerning numbers, it is also important to keep in mind that the arithmetical and the set theoretical framework do not share a common signature. Likewise, care must always be taken with insights about totality of functions. In
computability theory
, the
μ operator
enables all partial
general recursive functions
(or programs, in the sense that they are Turing computable), including ones that are e.g. not primitive recursive but
${\displaystyle {\mathsf {PA}}}$
-total, such as the
Ackermann function
. The definition of the operator involves predicates over the naturals and so the theoretical analysis of functions and their totality depends on the formal framework and proof calculus at hand.
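The μ operator itself is just unbounded search, which can be sketched directly; mu is our own name here, and the while-loop diverges when no witness exists, mirroring partiality.

```python
def mu(p):
    """Least n with p(n); an unbounded search, so possibly non-terminating."""
    n = 0
    while not p(n):
        n += 1
    return n

# Example: the least n whose square reaches 20 is 5.
least = mu(lambda n: n * n >= 20)
```

Totality of a function defined via mu is exactly the claim that the search terminates on every input, which is in general a statement about the predicate, not something visible in the code.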
General note on programs and functions
Naturally, the meaning of existence claims is a topic of interest in constructivism, be it for a theory of sets or any other framework. Let ${\displaystyle R}$ express a property such that a
mathematical framework validates what amounts to the statement
${\displaystyle \forall (a\in A).\exists (c\in C).R(a,c)}$
A constructive proof calculus may validate such a judgement in terms of programs on represented domains and some object representing a concrete assignment ${\displaystyle a\mapsto c_{a}}$, providing
a particular choice of value in ${\displaystyle C}$ (a unique one), for each input from ${\displaystyle A}$. Expressed through the rewriting ${\displaystyle \forall (a\in A).R(a,c_{a})}$, this
function object may be understood as witnessing the proposition. Consider for example the notions of proof in realizability theory or function terms in a type theory with a notion of quantifiers. The latter captures proofs of logical propositions through programs via the Curry–Howard correspondence.
Depending on the context, the word "function" may be used in association with a particular model of computation, and may then also cover partial functions, not just "total functions". The scare quotes are used for clarity here, as in a set theory context there is technically no need to speak of total functions, because this requirement is part of the definition of a set theoretical function and partial function spaces can be modeled via unions. At the same time, when combined with a formal arithmetic, partial function programs provide one particularly sharp notion of totality for functions. By
Kleene's normal form theorem
, each partial recursive function on the naturals computes, for the values where it terminates, the same as
${\displaystyle a\mapsto U(\mu w.T_{1}(e,a,w))}$
, for some partial function program index
${\displaystyle e\in {\mathbb {N} }}$
, and any index will constitute some partial function. A program can be associated with an index ${\displaystyle e}$ and may be said to be
${\displaystyle T_{1}}$
-total whenever a theory proves
${\displaystyle \forall a.\exists w.T_{1}(e,a,w)}$
, where
${\displaystyle T_{1}}$
amounts to a primitive recursive program and
${\displaystyle w}$ is related to the execution of ${\displaystyle e}$. It has been proven that the class of partial recursive functions proven ${\displaystyle T_{1}}$-total by ${\displaystyle {\mathsf {HA}}}$ is not enriched when ${\displaystyle {\mathrm {PEM} }}$ is added.
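The normal form can be mimicked in a toy model where a "program" is a generator that yields once per computation step. This encoding, and the names run, T1, U, mu, and evaluate, are our own illustrative constructions, not Kleene's actual primitive recursive predicate.

```python
def run(e, a, w):
    """Drive e(a) for at most w+1 steps; return ('halt', value) or ('running',)."""
    g = e(a)
    for _ in range(w + 1):
        try:
            next(g)
        except StopIteration as stop:
            return ('halt', stop.value)   # generator returned: terminated run
    return ('running',)

def T1(e, a, w):
    """T1(e, a, w): program e on input a halts within the step budget w."""
    return run(e, a, w)[0] == 'halt'

def U(result):
    """Extract the output from a terminated run."""
    return result[1]

def mu(p):
    """Unbounded search for the least witness w with p(w)."""
    w = 0
    while not p(w):
        w += 1
    return w

def evaluate(e, a):
    """Kleene-normal-form style evaluation: U applied at mu w. T1(e, a, w)."""
    return U(run(e, a, mu(lambda w: T1(e, a, w))))

def double(a):
    """A sample total program: computes 2*a, one yield per loop step."""
    total = 0
    for _ in range(a):
        total += 2
        yield
    return total

assert evaluate(double, 7) == 14
```

Here T1 is decidable (a bounded simulation), while the mu search over w is where all potential non-termination is concentrated, matching the structure of the normal form theorem.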
As a predicate in
${\displaystyle e}$
, this totality constitutes an undecidable subset of indices, highlighting that the recursive world of functions between the naturals is already captured by a set dominated by ${\displaystyle {\mathbb {N} }}$. As a third warning, note that this notion is really about programs and several indices will in fact constitute the same function, in the extensional sense.
A theory in first-order logic, such as the axiomatic set theories discussed here, comes with a joint notion of total and functional for a binary predicate ${\displaystyle R}$, namely ${\displaystyle
\forall a.\exists !c.R(a,c)}$. Such theories relate to programs only indirectly. If ${\displaystyle S}$ denotes the successor operation in a formal language of a theory being studied, then any
number, e.g. ${\displaystyle {\mathrm {SSS0} }}$ (the number three), may metalogically be related to the standard numeral, e.g. ${\displaystyle {\underline {\mathrm {SSS0} }}=SSS0}$. Similarly,
programs in the partial recursive sense may be unrolled to predicates and weak assumptions suffice so that such a translation respects equality of their return values. Among finitely axiomatizable
sub-theories of ${\displaystyle {\mathsf {PA}}}$, classical Robinson arithmetic ${\displaystyle {\mathsf {Q}}}$ exactly fulfills this. Its existence claims are intended to only concern natural
numbers and instead of using the full mathematical induction schema for arithmetic formulas, the theories' axioms postulate that every number is either zero or that there exists a predecessor number
to it. Focusing on ${\displaystyle T_{1}}$-total recursive functions here, it is a meta-theorem that the language of arithmetic expresses them by ${\displaystyle \Sigma _{1}}$-predicates ${\
displaystyle G}$ encoding their graph such that ${\displaystyle {\mathsf {Q}}}$ represents them, in the sense that it correctly proves or rejects ${\displaystyle G({\underline {\mathrm {a} }},{\
underline {\mathrm {c} }})}$ for any input-output pair of numbers ${\displaystyle \mathrm {a} }$ and ${\displaystyle \mathrm {c} }$ in the meta-theory. Now given a correctly representing ${\
displaystyle G}$, the predicate ${\displaystyle G_{\mathrm {least} }(a,c)}$ defined by ${\displaystyle G(a,c)\land \forall (n<c).\neg G(a,n)}$ represents the recursive function just as well, and as
this explicitly only validates the smallest return value, the theory also proves functionality for all inputs ${\displaystyle {\mathrm {a} }}$ in the sense of ${\displaystyle {\mathsf {Q}}\vdash \
exists !c.G_{\mathrm {least} }({\underline {\mathrm {a} }},c)}$. Given a representing predicate, then at the cost of making use of ${\displaystyle {\mathrm {PEM} }}$, one can always also
systematically (i.e. with a ${\displaystyle \forall a.}$) prove the graph to be total functional.^[18]
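The ${\displaystyle G_{\mathrm {least} }}$ idea can be played out on a small, invented graph relation; all names here are illustrative.

```python
# G encodes a possibly multivalued graph relation on naturals.
G = {(0, 3), (0, 5), (1, 2), (2, 4), (2, 0)}

def G_least(a, c):
    """G(a, c) together with: no smaller return value exists.
    This refinement forces functionality of the relation."""
    return (a, c) in G and all((a, n) not in G for n in range(c))

# For each input, at most one c satisfies G_least, even where G is multivalued:
for a in range(3):
    values = [c for c in range(10) if G_least(a, c)]
    assert len(values) == 1
```

The multivalued pairs (0, 3) and (0, 5) both belong to G, but only the smaller one survives the G_least filter, which is exactly what makes the refined predicate provably functional.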
Which predicates are provably functional for various inputs, or even total functional on their domain, generally depends on the adopted axioms of a theory and proof calculus. For example, for the
diagonal halting problem, which cannot have a ${\displaystyle T_{1}}$-total index, it is ${\displaystyle {\mathsf {HA}}}$-independent whether the corresponding graph predicate on ${\displaystyle {\
mathbb {N} }\times \{0,1\}}$ (a decision problem) is total functional, but ${\displaystyle {\mathrm {PEM} }}$ implies that it is. Proof theoretical function hierarchies provide examples of predicates
proven total functional in systems going beyond ${\displaystyle {\mathsf {PA}}}$. Which sets proven to exist do constitute a total function, in the sense introduced next, also always depends on the
axioms and the proof calculus. Finally, note that the soundness of halting claims is a metalogical property beyond consistency, i.e. a theory may be consistent and from it one may prove that some
program will eventually halt, despite this never actually occurring when said program is run. More formally, assuming consistency of a theory does not imply it is also arithmetically ${\displaystyle
\Sigma _{1}}$-sound.
Total functional relations
In set theory language here, speak of a function class when ${\displaystyle f\subset A\times C}$ and provenly
${\displaystyle \forall (a\in A).\,\exists !(c\in C).\langle a,c\rangle \in f}$.
Notably, this definition involves a quantifier explicitly asking for existence - an aspect which is particularly important in the constructive context. In words: For every ${\displaystyle a}$, it
demands the unique existence of a ${\displaystyle c}$ so that ${\displaystyle \langle a,c\rangle \in f}$. In the case that this holds one may use function application bracket notation and write ${\
displaystyle f(a)=c}$. The above property may then be stated as ${\displaystyle \forall (a\in A).\,\exists !(c\in C).f(a)=c}$. This notation may be extended to equality of function values. Some
notational conveniences involving function application will only work when a set has indeed been established to be a function. Let ${\displaystyle C^{A}}$ (also written ${\displaystyle ^{A}C}$)
denote the class of sets that fulfill the function property. This is the class of functions from ${\displaystyle A}$ to ${\displaystyle C}$ in a pure set theory. Below the notation ${\displaystyle x\
to y}$ is also used for ${\displaystyle y^{x}}$, for the sake of distinguishing it from ordinal exponentiation. When functions are understood as just function graphs as here, the membership
proposition ${\displaystyle f\in C^{A}}$ is also written ${\displaystyle f\colon A\to C}$. The Boolean-valued functions ${\displaystyle \chi _{B}\colon A\to \{0,1\}}$ are among the classes discussed in the
next section.
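For finite sets, the defining predicate becomes a bounded, mechanically checkable condition; is_function below is a hypothetical helper, not standard notation.

```python
def is_function(f, A, C):
    """The set-theoretic function predicate: f is a set of pairs inside A x C,
    and every a in A is paired with exactly one c (totality plus uniqueness)."""
    subset = all(a in A and c in C for (a, c) in f)
    functional = all(sum(1 for (x, _) in f if x == a) == 1 for a in A)
    return subset and functional

A, C = {0, 1, 2}, {'x', 'y'}
f = {(0, 'x'), (1, 'x'), (2, 'y')}
assert is_function(f, A, C)
assert not is_function(f - {(1, 'x')}, A, C)   # dropping a pair breaks totality
assert not is_function(f | {(1, 'y')}, A, C)   # adding a pair breaks uniqueness
```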
By construction, any such function respects equality in the sense that ${\displaystyle (x=y)\to f(x)=f(y)}$, for any inputs from ${\displaystyle A}$. This is worth mentioning since broader concepts of "assignment routines" or "operations" exist in the mathematical literature, which may not in general respect this. Variants of the functional predicate definition using apartness relations have been defined as well. A subset of a function is still a function and the function predicate may also be proven for enlarged chosen codomain sets. As noted, care must be taken with nomenclature
"function", a word which sees use in most mathematical frameworks. When a function set itself is not tied to a particular codomain, then this set of pairs is also member of a function space with
larger codomain. This does not happen when by the word one denotes the subset of pairs paired with a codomain set, i.e. a formalization in terms of
${\displaystyle (A\times C)\times \{C\}}$
. This is mostly a matter of bookkeeping, but affects how other predicates are defined, and questions of size. This choice is also just enforced by some mathematical frameworks. Similar considerations
apply to any treatment of
partial functions
and their domains.
If both the domain ${\displaystyle A}$ and considered codomain ${\displaystyle C}$ are sets, then the above function predicate only involves bounded quantifiers. Common notions such as injectivity and surjectivity can be expressed in a bounded fashion as well, and thus so is bijectivity. Both of these tie in to notions of size. Importantly, injection existence between any two sets provides a preorder. A power class does not inject into its underlying set and the latter does not map onto the former. Surjectivity is formally a more complex definition. Note that injectivity shall be defined positively, not by its contrapositive, which is common practice in classical mathematics. The version without negations is sometimes called weakly injective. The existence of value collisions is a strong notion of non-injectivity. And regarding surjectivity, similar considerations exist for outlier values in the codomain.
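For finite, decidable examples, the bounded quantifiers in these definitions become finite loops; the following sketch (helper names our own) checks injectivity and surjectivity for a function given as a dictionary.

```python
def is_injective(h, dom):
    """Injectivity as an implication: h(x) = h(y) implies x = y."""
    return all(h[x] != h[y] or x == y for x in dom for y in dom)

def is_surjective(h, dom, cod):
    """Every element of the codomain is hit -- a bounded search over finite sets."""
    return all(any(h[a] == c for a in dom) for c in cod)

dom, cod = {0, 1, 2}, {'p', 'q', 'r'}
h = {0: 'p', 1: 'q', 2: 'r'}
assert is_injective(h, dom) and is_surjective(h, dom, cod)
assert not is_injective({0: 'p', 1: 'p', 2: 'r'}, dom)   # a value collision
```

For infinite or non-decidable domains no such exhaustive check exists, which is the point of the constructive distinctions drawn above.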
Whether a subclass (or predicate for that matter) can be judged to be a function set, or even total functional to begin with, will depend on the strength of the theory, which is to say the axioms one
adopts. And notably, a general class could also fulfill the above defining predicate without being a subclass of the product ${\displaystyle A\times C}$, i.e. the property is expressing not more or
less than functionality w.r.t. the inputs from ${\displaystyle A}$. Now if the domain is a set, the function comprehension principle, also called axiom of unique choice or non-choice, says that a
function as a set, with some codomain, exists as well. (And this principle is valid in a theory like ${\displaystyle {\mathsf {CZF}}}$. Also compare with the
Replacement axiom
.) That is, the mapping information exists as set and it has a pair for each element in the domain. Of course, for any set from some class, one may always associate the unique element of the singleton
${\displaystyle 1}$
, which shows that merely a chosen range being a set does not suffice to be granted a function set. It is a metatheorem for theories containing
${\displaystyle {\mathsf {BCST}}}$
that adding a function symbol for a provenly total class function is a conservative extension, despite this formally changing the scope of
bounded Separation
. In summary, in the set theory context the focus is on capturing particular
total relations
that are functional. To delineate the notion of
in the theories of the previous subsection (a 2-ary logical predicate defined to express a functions graph, together with a proposition that it is total and functional) from the "material" set
theoretical notion here, one may explicitly call the latter
graph of a function
set function
. The axiom schema of Replacement can also be formulated in terms of the ranges of such set functions.
One defines three distinct notions involving surjections. For a general set to be (
, and only those, while it may not be decidable whether repetition occurred. Thirdly, call a set
if it is the subset of a finite set, which thus injects into that finite set. Here, a for-loop will access all of the set's members, but also possibly others. For another combined notion, one weaker
than finitely indexed, to be
subfinitely indexed
means to be in the surjective image of a subfinite set, and in
${\displaystyle {\mathsf {ETCS}}}$
this just means to be the subset of a finitely indexed set, meaning the subset can also be taken on the image side instead of the domain side. A set exhibiting either of those notions can be
understood to be majorized by a finite set, but in the second case the relation between the sets members is not necessarily fully understood. In the third case, validating membership in the set is
generally more difficult, and not even membership of its members with respect to some superset of the set is necessarily fully understood. The claim that being finite is equivalent to being subfinite,
for all sets, is equivalent to
${\displaystyle {\mathrm {PEM} }}$
. More finiteness properties for a set
${\displaystyle X}$
can be defined, e.g. expressing the existence of some large enough natural such that a certain class of functions on the naturals always fail to map to distinct elements in
${\displaystyle X}$
. One definition considers some notion of non-injectivity into
${\displaystyle X}$
. Other definitions consider functions to a fixed superset of
${\displaystyle X}$
with more elements.
Terminology for conditions of finiteness and infinitude may vary. Notably, subfinitely indexed sets (a notion necessarily involving surjections) are sometimes called subfinite (which can be defined
without functions). The property of being finitely indexed could also be denoted "finitely countable", to fit the naming logic, but is by some authors also called finitely enumerable (which might be
confusing as this suggests an injection in the other direction). Relatedly, when the existence of a bijection with a finite set has not been established, one may say a set is not finite, but this use of
language is then weaker than claiming the set to be non-finite. The same issue applies to countable sets (not proven countable vs. proven non-countable), et cetera. A surjective map may also be
called an enumeration.
The set ${\displaystyle \omega }$ itself is clearly unbounded. In fact, for any surjection from a finite range onto ${\displaystyle \omega }$, one may construct an element that is different from any
element in the functions range. Where needed, this notion of infinitude can also be expressed in terms of an apartness relation on the set in question. Being not Kuratowski-finite implies being
non-finite and indeed the naturals shall not be finite in any sense. Commonly, the word infinite is used for the negative notion of being non-finite. Further, observe that ${\displaystyle \omega }$,
unlike any of its members, can be put in bijection with some of its proper unbounded subsets, e.g. those of the form ${\displaystyle w_{m}:=\{k\in \omega \mid k>m\}}$ for any ${\displaystyle m\in \
omega }$. This validates the formulations of
. So more generally than the property of infinitude in the previous section on number bounds, one may call a set infinite in the logically positive sense if one can inject ${\displaystyle \omega }$
into it. A set that is even in bijection with ${\displaystyle \omega }$ may be called countably infinite. A set is Tarski-infinite if there is a chain of ${\displaystyle \subset }$-increasing subsets
of it. Here each set has new elements compared to its predecessor and the definition does not speak of sets growing rank. There are indeed plenty of properties characterizing infinitude even in
classical ${\displaystyle {\mathsf {ZF}}}$ and that theory does not prove all non-finite sets to be infinite in the injection existence sense, albeit it holds there when further assuming countable
choice. ${\displaystyle {\mathsf {ZF}}}$ without any choice even permits cardinals aside the
aleph numbers
, and there can then be sets that negate both of the above properties, i.e. they are both non-Dedekind-infinite and non-finite (also called Dedekind-finite infinite sets).
Call an inhabited set countable if there exists a surjection from ${\displaystyle \omega }$ onto it, and subcountable
if this can be done from some subset of ${\displaystyle \omega }$. Call a set enumerable if there exists an injection to ${\displaystyle \omega }$, which renders the set discrete. Notably, all of
these are function existence claims. The empty set is not inhabited but generally deemed countable too, and note that the successor set of any countable set is countable. The set ${\displaystyle \
omega }$ is trivially infinite, countable and enumerable, as witnessed by the identity function. Also here, in strong classical theories many of these notions coincide in general and, as a result,
the naming conventions in the literature are inconsistent. An infinite, countable set is equinumerous to ${\displaystyle \omega }$.
There are also various ways to characterize a logically negative notion. The notion of uncountability, in the sense of being not countable, is also discussed in conjunction with the exponentiation
axiom further below. Another notion of uncountability of ${\displaystyle X}$ is given when one can produce a member in the complement of any of ${\displaystyle X}$'s countable subsets. More
properties of finiteness may be defined as negations of such properties, et cetera.
Characteristic functions
Separation lets us cut out subsets of products ${\displaystyle A\times C}$, at least when they are described in a bounded fashion. Given any ${\displaystyle B\subset A}$, one is now led to reason
about classes such as
${\displaystyle X_{B}:={\big \{}\langle x,y\rangle \in A\times \{0,1\}\mid (x\in B\land y=1)\lor (x\notin B\land y=0){\big \}}.}$
Since ${\displaystyle \neg (0=1)}$, one has
${\displaystyle {\big (}a\in B\ \leftrightarrow \,\langle a,1\rangle \in X_{B}{\big )}\,\land \,{\big (}a\notin B\ \leftrightarrow \,\langle a,0\rangle \in X_{B}{\big )}}$
and so
${\displaystyle {\big (}a\in B\lor a\notin B{\big )}\ \leftrightarrow \ \exists !(y\in \{0,1\}).\langle a,y\rangle \in X_{B}}$.
But be aware that in the absence of any non-constructive axioms ${\displaystyle a\in B}$ may in general not be decidable, since one requires an explicit proof of either disjunct. Constructively, when $
{\displaystyle \exists (y\in \{0,1\}).\langle x,y\rangle \in X_{B}}$ cannot be witnessed for all the ${\displaystyle x\in A}$, or uniqueness of the terms ${\displaystyle y}$ associated with each ${\
displaystyle x}$ cannot be proven, then one cannot judge the comprehended collection to be total functional. Case in point: The classical derivation of Schröder–Bernstein relies on case analysis -
but to constitute a function, particular cases shall actually be specifiable, given any input from the domain. It has been established that Schröder–Bernstein cannot have a proof on the base of ${\
displaystyle {\mathsf {IZF}}}$ plus constructive principles.^[19] So to the extent that intuitionistic inference does not go beyond what is formalized here, there is no generic construction of a
bijection from two injections in opposing directions.
But being compatible with ${\displaystyle {\mathsf {ZF}}}$, the development in this section still always permits "function on ${\displaystyle \omega }$" to be interpreted as a completed object that
is also not necessarily given as a lawlike sequence. Applications may be found in the common models for claims about probability, e.g. statements involving the notion of "being given" an unending
random sequence of coin flips, even if many predictions can also be expressed in terms of spreads.
If indeed one is given a function ${\displaystyle \chi _{B}\colon A\to \{0,1\}}$, it is the characteristic function actually deciding membership in some detachable subset ${\displaystyle B\subset A}$
${\displaystyle B=\{n\in \omega \mid \chi _{B}(n)=1\}.}$
Per convention, the detachable subset ${\displaystyle B}$, ${\displaystyle \chi _{B}}$ as well as any equivalent of the formulas ${\displaystyle n\in B}$ and ${\displaystyle \chi _{B}(n)=1}$ (with $
{\displaystyle n}$ free) may be referred to as a decidable property or set on ${\displaystyle A}$.
One may call a collection ${\displaystyle A}$ searchable for ${\displaystyle \chi _{B}}$ if existence is actually decidable,
${\displaystyle \exists (x\in A).\chi _{B}(x)=1\ \lor \ \forall (x\in A).\chi _{B}(x)=0.}$
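As a computational sketch (the names `chi_B` and `search` are illustrative, not from the text): a characteristic function decides membership by evaluation, and a finite collection is searchable for it by a bounded loop that settles the displayed disjunction.

```python
# Hypothetical characteristic function chi_B: omega -> {0, 1}; membership
# in the detachable subset B is decided simply by evaluating it.
def chi_B(n: int) -> int:
    return 1 if n % 3 == 0 else 0

# A finite collection A is searchable for chi_B: a bounded loop settles the
# disjunction "some x in A has chi_B(x) = 1, or every x in A has chi_B(x) = 0".
def search(A):
    for x in A:
        if chi_B(x) == 1:
            return ("exists", x)
    return ("forall_zero", None)

print(search([1, 2, 4, 5]))  # ('forall_zero', None)
print(search([1, 2, 3, 4]))  # ('exists', 3)
```

For an unbounded collection such as ${\displaystyle \omega }$ itself, no such terminating loop exists in general, which is why searchability there is a genuine assumption.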
Now consider the case ${\displaystyle A=\omega }$. If ${\displaystyle \chi _{B}(0)=0}$, say, then the range ${\displaystyle \{0\}\subset R\subset \{0,1\}}$ of ${\displaystyle \chi _{B}}$ is an
inhabited, counted set, by Replacement. However, the ${\displaystyle R}$ need not be again a decidable set itself, since the claim ${\displaystyle R=\{0\}}$ is equivalent to the rather strong ${\
displaystyle \forall n.\chi _{B}(n)=0}$. Moreover, ${\displaystyle R=\{0\}}$ is also equivalent to ${\displaystyle B=\{\}}$ and so one can state undecidable propositions about ${\displaystyle B}$
also when membership in ${\displaystyle B}$ is decidable. This also plays out like this classically in the sense that statements about ${\displaystyle B}$ may be independent, but any classical theory
then nonetheless claims the joint proposition ${\displaystyle B=\{\}\lor \neg (B=\{\})}$. Consider the set ${\displaystyle B}$ of all indices of proofs of an inconsistency of the theory at hand, in
which case the universally closed statement ${\displaystyle B=\{\}}$ is a consistency claim. In terms of arithmetic principles, assuming decidability of this would be ${\displaystyle \Pi _{1}^{0}}$-$
{\displaystyle {\mathrm {PEM} }}$ or arithmetic ${\displaystyle \forall }$-${\displaystyle {\mathrm {PEM} }}$. This and the stronger related ${\displaystyle {\mathrm {LPO} }}$, or arithmetic ${\
displaystyle \exists }$-${\displaystyle {\mathrm {PEM} }}$, is discussed below.
Witness of apartness
The identity of indiscernibles, which in the first-order context is a higher order principle, holds that the equality ${\displaystyle x=y}$ of two terms ${\displaystyle x}$ and ${\displaystyle y}$
necessitates that all predicates ${\displaystyle P}$ agree on them. And so if there exists a predicate ${\displaystyle P}$ that distinguishes two terms ${\displaystyle x}$ and ${\displaystyle y}$ in
the sense that ${\displaystyle P(x)\land \neg P(y)}$, then the principle implies that the two terms do not coincide. A form of this may be expressed set theoretically: ${\displaystyle x,y\in A}$ may be
deemed apart if there exists a subset ${\displaystyle B\subset A}$ such that one is a member and the other is not. Restricted to detachable subsets, this may also be formulated concisely using
characteristic functions ${\displaystyle \chi _{B}\in \{0,1\}^{A}}$. Indeed, the latter does not actually depend on the codomain being a binary set: Equality is rejected, i.e. ${\displaystyle x\neq y}$
is proven, as soon it is established that not all functions ${\displaystyle f}$ on ${\displaystyle A}$ validate ${\displaystyle f(x)=f(y)}$, a logically negative condition.
One may on any set ${\displaystyle A}$ define the logically positive apartness relation
${\displaystyle x\,\#_{A}\,y\,:=\,\exists (f\in {\mathbb {N} }^{A}).f(x)\neq f(y)}$
As the naturals are discrete, for these functions the negative condition is equivalent to the (weaker) double-negation of this relation. Again in words, equality of ${\displaystyle x}$ and ${\
displaystyle y}$ implies that no coloring ${\displaystyle f\in {\mathbb {N} }^{A}}$ can distinguish them - and so to rule out the former, i.e. to prove ${\displaystyle x\neq y}$, one must merely rule
out the latter, i.e. merely prove ${\displaystyle \neg \neg (x\,\#_{A}\,y)}$.
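A finite sketch of the witnessed relation (the helper `apart` and the sample functions are ours): two points are apart as soon as some listed function to the naturals separates them.

```python
def apart(x, y, fns):
    """Witness search for x #_A y: return (True, f) for the first function
    f in `fns` with f(x) != f(y), else (False, None)."""
    for f in fns:
        if f(x) != f(y):
            return (True, f)
    return (False, None)

# Distinguishing pairs of naturals by their coordinate projections.
fns = [lambda p: p[0], lambda p: p[1]]
print(apart((1, 2), (1, 3), fns)[0])  # True: the second projection differs
print(apart((1, 2), (1, 2), fns)[0])  # False: no listed function separates them
```

Over an infinite function space no such exhaustive search is available, so the existential quantifier in the definition is a genuinely positive witness requirement.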
Computable sets
Going back to more generality, given a general predicate ${\displaystyle Q}$ on the numbers (say one defined from Kleene's T predicate), let again
${\displaystyle B:=\{n\in \omega \mid Q(n)\}.}$
Given any natural ${\displaystyle n\in \omega }$, then
${\displaystyle {\big (}Q(n)\lor \neg Q(n){\big )}\leftrightarrow {\big (}n\in B\lor n\notin B{\big )}.}$
In classical set theory, ${\displaystyle \forall (n\in \omega ).Q(n)\lor \neg Q(n)}$ by ${\displaystyle {\mathrm {PEM} }}$ and so excluded middle also holds for subclass membership. If the class ${\
displaystyle B}$ has no numerical bound, then successively going through the natural numbers ${\displaystyle n}$, and thus "listing" all numbers in ${\displaystyle B}$ by simply skipping those with $
{\displaystyle n\notin B}$, classically always constitutes an increasing surjective sequence ${\displaystyle b\colon \omega \twoheadrightarrow B}$. There, one can obtain a bijective function. In this
way, the class of functions in typical classical set theories is provenly rich, as it also contains objects that are beyond what we know to be
effectively computable
, or programmatically listable in praxis.
In computability theory, the computable sets are ranges of non-decreasing total functions in the recursive sense, at the level ${\displaystyle \Sigma _{1}^{0}\cap \Pi _{1}^{0}=\Delta _{1}^{0}}$ of
the arithmetical hierarchy, and not higher. Deciding a predicate at that level amounts to solving the task of eventually finding a certificate that either validates or rejects membership. As not
every predicate ${\displaystyle Q}$ is computably decidable, also the theory ${\displaystyle {\mathsf {CZF}}}$ alone will not claim (prove) that all unbounded ${\displaystyle B\subset \omega }$ are
the range of some bijective function with domain ${\displaystyle \omega }$. See also Kripke's schema. Note that bounded Separation nonetheless proves the more complicated arithmetical predicates to
still constitute sets, the next level being the computably enumerable ones at ${\displaystyle \Sigma _{1}^{0}}$.
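The classical "listing by skipping" construction is effective exactly when the predicate is computably decidable; a sketch (names are ours):

```python
def list_members(decide, count):
    """Enumerate, in increasing order, the first `count` members of the
    decidable set B = {n in omega | decide(n)}; for unbounded B this is
    an initial segment of the increasing bijection omega -> B."""
    out, n = [], 0
    while len(out) < count:
        if decide(n):       # decidable membership test; otherwise skip n
            out.append(n)
        n += 1
    return out

# The even numbers form a decidable, unbounded subset of omega.
print(list_members(lambda n: n % 2 == 0, 5))  # [0, 2, 4, 6, 8]
```

For a set that is merely computably enumerable, at level ${\displaystyle \Sigma _{1}^{0}}$, no total `decide` procedure exists, matching the remark that constructively not every unbounded subset is provably the range of such a bijective sequence.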
There is a large corpus of computability theory notions regarding how general subsets of naturals relate to one another. For example, one way to establish a bijection of two such sets is by relating
them through a computable isomorphism, which is a computable permutation of all the naturals. The latter may in turn be established by a pair of particular injections in opposing directions.
Boundedness criteria
Any subset ${\displaystyle B\subset \omega }$ injects into ${\displaystyle \omega }$. If ${\displaystyle B}$ is decidable and inhabited with ${\displaystyle y_{0}\in B}$, the sequence
${\displaystyle q:={\big \{}\langle x,y\rangle \in \omega \times B\mid (x\in B\land y=x)\lor (x\notin B\land y=y_{0}){\big \}}}$
${\displaystyle q(x):={\begin{cases}x&x\in B\\y_{0}&x\notin B\\\end{cases}}}$
is surjective onto ${\displaystyle B}$, making it a counted set. That function also has the property ${\displaystyle \forall (x\in B).q(x)=x}$.
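The counting surjection ${\displaystyle q}$ above can be written out directly (a sketch; ${\displaystyle B}$ is represented by a membership test together with its witness ${\displaystyle y_{0}}$):

```python
def make_q(in_B, y0):
    """Build the counting map q: omega -> B for a decidable set B
    inhabited by y0: q(x) = x when in_B(x) holds, else the fallback y0."""
    def q(x):
        return x if in_B(x) else y0
    return q

# B = multiples of 5, inhabited by y0 = 0; q fixes B pointwise.
q = make_q(lambda x: x % 5 == 0, 0)
print([q(x) for x in range(7)])  # [0, 0, 0, 0, 0, 5, 0]
```

Both claimed properties are visible: every member of ${\displaystyle B}$ is its own preimage, so ${\displaystyle q}$ is surjective onto ${\displaystyle B}$ and satisfies ${\displaystyle q(x)=x}$ there.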
Now consider a countable set ${\displaystyle R\subset \omega }$ that is bounded in the sense defined previously. Any sequence taking values in ${\displaystyle R}$ is then numerically capped as well,
and in particular eventually does not exceed the identity function on its input indices. Formally,
${\displaystyle \forall (r\colon \omega \to R).\exists (m\in \omega ).\forall (k\in \omega ).k>m\to r(k)<k}$
A set ${\displaystyle I}$ such that this loose bounding statement holds for all sequences taking values in ${\displaystyle I}$ (or an equivalent formulation of this property) is called pseudo-bounded
. The intention of this property would be to still capture that ${\displaystyle I\subset \omega }$ is eventually exhausted, albeit now this is expressed in terms of the function space ${\displaystyle
I^{\omega }}$ (which is bigger than ${\displaystyle I}$ in the sense that ${\displaystyle I}$ always injects into ${\displaystyle I^{\omega }}$). The related notion familiar from topological vector
space theory is formulated in terms of ratios going to zero for all sequences (${\displaystyle {\tfrac {r(k)}{k}}}$ in the above notation). For a decidable, inhabited set, validity of
pseudo-boundedness, together with the counting sequence defined above, grants a bound for all the elements of ${\displaystyle I}$.
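For a set with an explicit numerical bound ${\displaystyle M}$, the displayed bounding statement is immediate, since past index ${\displaystyle M}$ the identity dominates any sequence into the set. A finite sanity check (the bound and the sample sequence are illustrative):

```python
M = 10                              # assumed numerical bound for I
r = lambda k: (7 * k) % (M + 1)     # some sequence taking values in I = {0, ..., M}

# Past the bound, r(k) <= M < k, so the identity dominates the sequence.
assert all(r(k) < k for k in range(M + 1, 200))
print("r(k) < k for every checked k > M")
```

Pseudo-boundedness turns this around: it takes the conclusion, quantified over all sequences into the set, as the defining property, without assuming the bound ${\displaystyle M}$ up front.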
The principle that any inhabited, pseudo-bounded subset of ${\displaystyle \omega }$ that is just countable (but not necessarily decidable) is always also bounded is called ${\displaystyle \mathrm
{BD} }$-${\displaystyle {\mathbb {N} }}$. The principle also holds generally in many constructive frameworks, such as the Markovian base theory ${\displaystyle {\mathsf {HA}}+{\mathrm {ECT} }_{0}+{\
mathrm {MP} }}$, which is a theory postulating exclusively lawlike sequences with nice number search termination properties. However, ${\displaystyle \mathrm {BD} }$-${\displaystyle {\mathbb {N} }}$
is independent of ${\displaystyle {\mathsf {IZF}}}$.
Choice functions
Not even classical ${\displaystyle {\mathsf {ZF}}}$ proves each union of a countable set of two-element sets to be countable again. Indeed, models of ${\displaystyle {\mathsf {ZF}}}$ have been
constructed in which such a union fails to be countable; adopting
countable choice
rules out any such model as an interpretation of the resulting theory. This principle is still independent of
${\displaystyle {\mathsf {ZF}}}$
- a naive proof strategy for that statement fails at the accounting of infinitely many
existential instantiations.
A choice principle postulates that certain selections can always be made in a joint fashion in the sense that they are also manifested as a single set function in the theory. As with any independent
axiom, this raises the proving capabilities while restricting the scope of possible (model-theoretic) interpretations of the (syntactic) theory. A function existence claim can often be translated to
the existence of inverses, orderings, and so on. Choice moreover implies statements about cardinalities of different sets, e.g. they imply or rule out countability of sets. Adding full choice to ${\
displaystyle {\mathsf {ZF}}}$ does not prove any new ${\displaystyle \Pi _{4}^{1}}$-theorems, but it is strictly non-constructive, as shown below. The development here proceeds in a fashion agnostic
to any of the variants described next.^[20]
• Axiom of countable choice ${\displaystyle {\mathrm {AC} _{\omega }}}$ (or ${\displaystyle {\mathrm {CC} }}$): If ${\displaystyle g\colon \omega \to z}$, one can form the one-to-many relation-set
${\displaystyle \{\langle n,u\rangle \mid n\in \omega \land u\in g(n)\}}$. The axiom of countable choice would grant that whenever ${\displaystyle \forall (n\in \omega ).\exists u.u\in g(n)}$,
one can form a function mapping each number to a unique value. The existence of such sequences is not generally provable on the base of ${\displaystyle {\mathsf {ZF}}}$ and countable choice is
not ${\displaystyle \Sigma _{4}^{1}}$-conservative over that theory. Countable choice into general sets can also be weakened further. One common consideration is to restrict the possible
cardinalities of the range of ${\displaystyle g}$, giving the weak countable choice into countable, finite or even just binary sets (${\displaystyle {\mathrm {AC} _{\omega ,2}}}$). One may
consider the version of countable choice for functions into ${\displaystyle \omega }$ (called ${\displaystyle {\mathrm {AC} _{\omega ,\omega }}}$ or ${\displaystyle {\mathrm {AC} _{00}}}$), as is
implied by the constructive Church's thesis principle, i.e. by postulating that all total arithmetical relations are recursive. ${\displaystyle {\mathrm {CT} _{0}}}$ in arithmetic may be
understood as a form of choice axiom. Another means of weakening countable choice is by restricting the involved definitions w.r.t. their place in the syntactic hierarchies (say ${\displaystyle \
Pi _{1}^{0}}$-${\displaystyle {\mathrm {AC} _{\omega ,2}}}$). The weak Kőnig's lemma ${\displaystyle {\mathrm {WKL} }}$, which breaks strictly recursive mathematics as further discussed below, is
stronger than ${\displaystyle \Pi _{1}^{0}}$-${\displaystyle {\mathrm {AC} _{\omega ,2}}}$ and is itself sometimes viewed as capturing a form of countable choice. In the presence of a weak form
of countable choice, the lemma becomes equivalent to the non-constructive principle of more logical flavor, ${\displaystyle {\mathrm {LLPO} }}$. Constructively, a weak form of choice is required
for well-behaved Cauchy reals. Countable choice is not valid in the internal logic of a general topos, which can be seen as models of constructive set theories.
• Axiom of dependent choice ${\displaystyle {\mathrm {DC} }}$: Countable choice is implied by the more general axiom of dependent choice, extracting a sequence in an inhabited ${\displaystyle z}$,
given any entire relation ${\displaystyle R\subset z\times z}$. In set theory, this sequence is again an infinite set of pairs, a subset of ${\displaystyle \omega \times z}$. So one is granted to
pass from several existence statements to function existence, itself granting unique-existence statements, for every natural. An appropriate formulation of dependent choice is adopted in several
constructive frameworks, e.g., by some schools that understand unending sequences as ongoing constructions instead of completed objects. At least those cases seem benign where, for any ${\
displaystyle x\in z}$, next value existence ${\displaystyle \exists (y\in z).xRy}$ can be validated in a computable fashion. The corresponding recursive function ${\displaystyle \omega \to z}$,
if it exists, is then conceptualized as being able to return a value at infinitely many potential inputs ${\displaystyle n\in \omega }$, but these do not have to be evaluated all together at
once. It also holds in many realizability models. In the condition of the formally similar recursion theorem, one is already given a unique choice at each step, and that theorem lets one combine
them to a function on ${\displaystyle \omega }$. So also with ${\displaystyle {\mathrm {DC} }}$ one may consider forms of the axiom with restrictions on ${\displaystyle R}$. Via the bounded
separation axiom in ${\displaystyle {\mathsf {ECST}}}$, the principle also is equivalent to a schema in two bounded predicate variables: Keeping all quantifiers ranging over ${\displaystyle z}$,
one may further narrow this set domain using a unary ${\displaystyle \Delta _{0}}$-predicate variable, while also using any 2-ary ${\displaystyle \Delta _{0}}$-predicate instead of the relation
set ${\displaystyle R}$.
• Relativized dependent choice ${\displaystyle {\mathrm {RDC} }}$: This is the schema just using two general classes, instead of requiring ${\displaystyle z}$ and ${\displaystyle R}$ be sets. The
domain of the choice function granted to exist is still just ${\displaystyle \omega }$. Over ${\displaystyle {\mathsf {ECST}}}$, it implies full mathematical induction, which, in turn allows for
function definition on ${\displaystyle \omega }$ through the recursion schema. When ${\displaystyle {\mathrm {RDC} }}$ is restricted to ${\displaystyle \Delta _{0}}$-definitions, it still implies
mathematical induction for ${\displaystyle \Sigma _{1}}$-predicates (with an existential quantifier over sets) as well as ${\displaystyle {\mathrm {DC} }}$. In ${\displaystyle {\mathsf {ZF}}}$,
the schema ${\displaystyle {\mathrm {RDC} }}$ is equivalent to ${\displaystyle {\mathrm {DC} }}$.
• ${\displaystyle \Pi \Sigma }$-${\displaystyle \mathrm {AC} }$: A family of sets is better controllable if it comes indexed by a function. A set ${\displaystyle b}$ is a base if all indexed
families of sets ${\displaystyle i_{s}\colon b\to s}$ over it have a choice function ${\displaystyle f_{s}}$, i.e. ${\displaystyle \forall (x\in b).f_{s}(x)\in i_{s}(x)}$. A collection of sets
holding ${\displaystyle \omega }$ and its elements and which is closed by taking indexed sums and products (see dependent type) is called ${\displaystyle \Pi \Sigma }$-closed. While the axiom
that all sets in the smallest ${\displaystyle \Pi \Sigma }$-closed class are a base does need some work to formulate, it is the strongest choice principle over ${\displaystyle {\mathsf {CZF}}}$
that holds in the type theoretical interpretation ${\displaystyle {\mathsf {ML_{1}V}}}$.
• Axiom of choice ${\displaystyle {\mathrm {AC} }}$: This is the "full" choice function postulate concerning domains that are general sets ${\displaystyle \{z,\dots \}}$ containing inhabited sets,
with the codomain given as their general union. Given a collection of sets such that the logic allows to make a choice in each, the axiom grants that there exists a set function that jointly
captures a choice in all. It is typically formulated for all sets but has also been studied in classical formulations for sets only up to any particular cardinality. A standard example is choice
in all inhabited subsets of the reals, which classically equals the domain ${\displaystyle {\mathcal {P}}_{\mathbb {R} }\setminus 1}$. For this collection there can be no uniform element
selection prescription that provably constitutes a choice function on the base of ${\displaystyle {\mathsf {ZF}}}$. Also, when restricted to the
Borel algebra
of the reals, ${\displaystyle {\mathsf {ZF}}}$ alone does not prove the existence of a function selecting a member from each non-empty such Lebesgue-measurable subset. (The set ${\displaystyle {\
mathcal {B}}({\mathbb {R} })}$ is the σ-algebra generated by the intervals ${\displaystyle I:=\{(x,y\,]\mid x,y\in {\mathbb {R} }\}}$. It strictly includes those intervals, in the sense of ${\
displaystyle I\subsetneq {\mathcal {B}}({\mathbb {R} })\subsetneq {\mathcal {P}}_{\mathbb {R} }}$, but in ${\displaystyle {\mathsf {ZF}}}$ also only has the cardinality of the reals itself.)
Striking existence claims implied by the axiom abound. ${\displaystyle {\mathsf {ECST}}}$ proves ${\displaystyle \omega }$ exists and then the axiom of choice also implies dependent choice.
Critically in the present context, it moreover also implies instances of ${\displaystyle {\mathrm {PEM} }}$ via Diaconescu's theorem. For ${\displaystyle {\mathsf {ECST}}}$ or theories extending
it, this means full choice at the very least proves ${\displaystyle {\mathrm {PEM} }}$ for all ${\displaystyle \Delta _{0}}$-formulas, a non-constructive consequence not acceptable, for example,
from a computability standpoint. Note that constructively, Zorn's lemma does not imply choice: When membership in function domains fails to be decidable, the extremal function granted by that
principle is not provably always a choice function on the whole domain.
Diaconescu's theorem
To highlight the strength of full Choice and its relation to matters of intentionality, one should consider the classes
${\displaystyle a=\{u\in \{0,1\}\mid (u=0)\lor P\}}$
${\displaystyle b=\{u\in \{0,1\}\mid (u=1)\lor P\}}$
from the proof of Diaconescu's theorem. They are as contingent as the proposition ${\displaystyle P}$ involved in their definition and they are not proven finite. Nonetheless, the setup entails
several consequences. Referring back to the introductory elaboration on the meaning of such convenient class notation, as well as to the principle of distributivity, ${\displaystyle t\in a\
leftrightarrow {\big (}t=0\lor (t=1\land P){\big )}}$. So unconditionally, ${\displaystyle 0\in a}$ as well as ${\displaystyle 1\in b}$, and in particular they are inhabited. As ${\displaystyle \neg (0
=1)}$ in any model of Heyting arithmetic, using the disjunctive syllogism both ${\displaystyle 0\in b}$ and ${\displaystyle 1\in a}$ each imply ${\displaystyle P}$. The two statements are indeed
equivalent to the proposition, as clearly ${\displaystyle P\to (a=\{0,1\}\land b=\{0,1\})}$. The latter also says that validity of ${\displaystyle P}$ means ${\displaystyle a}$ and ${\displaystyle b}
$ share all members, and there are two of these. As ${\displaystyle a}$ and ${\displaystyle b}$ are then sets, also ${\displaystyle P\to (a=b\land \{a,b\}=\{a\})}$ by extensionality. Conversely,
assuming they are equal means ${\displaystyle x\in a\leftrightarrow x\in b}$ for any ${\displaystyle x}$, validating all membership statements. So both the membership statements as well as the
equalities are found to be equivalent to ${\displaystyle P}$. Using the
results in the weaker equivalence of disjuncts
${\displaystyle (P\lor \neg P)\leftrightarrow (a=b\lor \neg (a=b))}$
. Of course, explicitly
${\displaystyle \neg P\to (a=\{0\}\land b=\{1\})}$
and so one actually finds in which way the sets can end up being different. As functions preserve equality by definition,
${\displaystyle \neg {\big (}g(a)=g(b){\big )}\to \neg P}$
indeed holds for any
${\displaystyle g}$
with domain
${\displaystyle \{a,b\}}$.
In the following assume a context in which ${\displaystyle a,b}$ are indeed established to be sets, and thus subfinite sets. The general axiom of choice claims existence of a function ${\displaystyle
f\colon \{a,b\}\to a\cup b}$ with ${\displaystyle f(z)\in z}$. It is important that the elements ${\displaystyle a,b}$ of the function's domain are different than the natural numbers ${\displaystyle
0,1}$ in the sense that a priori less is known about the former. When forming the union of the two classes, ${\displaystyle u=0\lor u=1}$ is a necessary but then also sufficient condition. Thus ${\
displaystyle a\cup b=\{0,1\}}$ and one is dealing with functions ${\displaystyle f}$ into a set of two distinguishable values. With choice comes the conjunction ${\displaystyle f(a)\in a\land f(b)\in
b}$ in the codomain of the function, but the possible function return values are known to be just ${\displaystyle 0}$ or ${\displaystyle 1}$. Using the distributivity, there arises a list of
conditions, another disjunction. Expanding what is then established, one finds that either both ${\displaystyle P}$ as well as the sets' equality holds, or that the return values are different and ${\
displaystyle P}$ can be rejected. The conclusion is that the choice postulate actually implies ${\displaystyle P\lor \neg P}$ whenever a Separation axiom allows for set comprehension using undecidable
proposition ${\displaystyle P}$.
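The combinatorics of the argument can be mimicked computationally when the truth value of ${\displaystyle P}$ is supplied as a Boolean from the outside - which is precisely what constructive logic refuses to grant. The helper below is a hypothetical illustration, not part of the theorem:

```python
def diaconescu(P: bool):
    """Form a and b as in the proof, apply a concrete choice function
    (least element), and read off P or its negation."""
    a = {u for u in (0, 1) if u == 0 or P}
    b = {u for u in (0, 1) if u == 1 or P}
    fa, fb = min(a), min(b)          # choice: f(a) in a and f(b) in b
    if fa != fb:                     # differing choice values refute P
        return "not P"
    return "P holds"                 # f(a) = f(b) forces 0 in b, hence P

print(diaconescu(False))  # not P   (a = {0}, b = {1}, choices 0 and 1)
print(diaconescu(True))   # P holds (a = b = {0, 1}, both choices 0)
```

The point of the theorem is that no such Boolean for ${\displaystyle P}$ is available constructively; the choice function is posited abstractly, yet merely comparing its two values already decides ${\displaystyle P}$.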
Analysis of Diaconescu's theorem
So full choice is non-constructive in set theory as defined here. The issue is that when propositions are part of set comprehension (like when ${\displaystyle P}$ is used to separate, and thereby
define, the classes ${\displaystyle a}$ and ${\displaystyle b}$ from ${\displaystyle \{0,1\}}$), the notion of their truth values are ramified into set terms of the theory. Equality defined by the
set theoretical axiom of extensionality, which itself is not related to functions, in turn couples knowledge about the proposition to information about function values. To recapitulate the final step
in terms of function values: On the one hand, witnessing ${\displaystyle f(a)=1}$ implies ${\displaystyle P}$ and ${\displaystyle a=b}$, and this conclusion independently also applies to witnessing ${\
displaystyle f(b)=0}$. On the other hand, witnessing ${\displaystyle f(a)=0\land f(b)=1}$
What’s the Market Going to Do?
Humans are captivated by stories, but largely oblivious to data. In addition, we really want certainty and conclusions when generally all that is available is uncertainty and probabilities.
For example, people frequently want a prediction of what the market will do this year, and I think there are two reasonable answers based on history:
1. Most likely between a 29% loss and a 53% gain, but there is about a 1-in-20 chance it could be outside that range. (The average 12-month return from 1926-2018 for U.S. stocks was 12.05% with a
standard deviation of 20.90%. 95% would be within 1.96 standard deviations so 12.05% +/- 40.96% is a range of -28.91% to +53.02%.)
2. Most likely between a 20% loss and a 45% gain, but there is about a 1-in-20 chance it could be outside that range. (If you assume that the world is safer or different now so post-WWII numbers are a better estimate of the future, the average 12-month return from 1946-2018 for U.S. stocks was 12.21% with a standard deviation of 16.52%. 95% would be within 1.96 standard deviations so 12.21% +/- 32.37% is a range of -20.16% to +44.58%.)
You could also argue that equity returns will be lower by some amount – maybe 1% lower because of lower inflation and another 2-3% lower from a lower ERP (Equity Risk Premium) going forward so the
whole distribution is shifted down by that amount. If so you can adjust the ranges down by 3-4%. I also do think that starting post-WWII is too aggressive, but I can understand the logic of someone
using it and I wouldn’t say they are wrong. I would point out though, if that is the correct distribution then 2008 was a huge outlier. If we use from 1926 it was fairly normal. (The worst 12-months
in that debacle was March 2008 to February 2009, which had a 42.48% loss – a rare but reasonable 2.53 standard deviation event (1 in 175) if we use from 1926 to the month prior to that period, but an
improbable 3.26 standard deviations (1 in 1795) if we start in 1946.) So, my best answer would be: “Most likely between a 33% loss and a 50% gain, but there is about a 1-in-20 chance it could be
outside that range.”
Also, if you want to know the 100-year-flood number that would be 2.58 standard deviations. 12.05% minus a 3.5% adjustment for lower returns in the future is 8.55% minus 2.58*20.90% = -45.36%. (Of course, there is also a 1-in-100 chance of a positive 62.47%.) Keep in mind, the worst-case scenario that has ever happened (in any area, not just market returns) was not the worst-case just prior to it happening. Think about that for a while.
I am anticipating some questions, here are the answers:
1. You undoubtedly think those answers are wrong – you just really don’t think the range is that high. I feel the same way, but I know I’m wrong…
2. Clients must be profoundly unhappy with an answer like that. I know, but it is what it is. If I could improve on those figures I would be running a hedge fund engaged in market timing.
3. I used the CRSP 1-10 figures, not the S&P 500 because the question is “what do you think the market will do?” not “what do you think the S&P 500 will do?” Most people think it is the same thing,
and substantially they really are, the correlation is above 99%, the difference in geometric returns has been 25 basis points (advantage S&P500) and average annualized difference in standard
deviation was 34 basis points (advantage CRSP 1-10). So, I wouldn’t really quibble if someone used the S&P500 to do these calculations, but I didn’t.
4. I rounded off to a reasonable number of decimal places as I typed this up, but all the calculations used all the decimal points I had available – just in case you are following my math and find
something slightly off.
6. The correct returns to use for this exercise are arithmetic, not geometric. If you want to convert, the rough estimate (but it’s pretty good) is given by squaring the standard deviation (to get
the variance), then subtracting half of that from the return. For example, I said, “The average 12-month return from 1926-2018 for U.S. stocks was 12.05% with a standard deviation of 20.90%.”
0.2090^2= 0.0437 That divided by 2 equals 0.0218. 12.05% minus 2.18% is 9.87% geometric return, which is the figure you are more accustomed to seeing. For more on this topic you can see my
calculator here.
7. I used 12-month periods, the maximum drawdown to expect is higher because it can go on for longer than 12-months. For example, from October 2007 through February 2009 was a 50.19% decline, but
2008 was just 36.71%, and as mentioned above, March 2008 to February 2009 had a 42.48% loss.
8. I used a normal distribution rather than a log-normal one because for a one-year period they are trivially different. There was already more than enough math here to make most people’s heads hurt
without introducing that complication.
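For readers who want to check the arithmetic, the confidence ranges and the arithmetic-to-geometric conversion from point 6 can be reproduced in a few lines of Python (a sketch; the 12.05% mean and 20.90% standard deviation are the 1926-2018 figures quoted above):

```python
# Normal-distribution return ranges and the rough geometric-return estimate.
mean, sd = 0.1205, 0.2090   # 1926-2018 average 12-month return and std. dev.

lo95, hi95 = mean - 1.96 * sd, mean + 1.96 * sd   # ~95% of outcomes
lo99, hi99 = mean - 2.58 * sd, mean + 2.58 * sd   # ~99% (the "100-year flood")

# Point 6: geometric return ~= arithmetic return minus half the variance.
geometric = mean - sd**2 / 2

print(f"95% range: {lo95:+.2%} to {hi95:+.2%}")     # -28.91% to +53.01%
print(f"approx geometric return: {geometric:.2%}")  # 9.87%
```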
Venn Diagrams
A Venn diagram is a schematic diagram (a special case of an Euler diagram) which depicts a finite collection of sets and all their possible mathematical and logical relations, their unions and intersections. Venn diagrams are constructed on the plane as a set of overlapping simple closed curves, most commonly circles. Points that lie inside different areas of the diagram represent the elements of the respective sets, which are usually denoted with capital letters of the Latin alphabet. The circles are placed in specific positions relative to each other; in most cases there is an intersection. The overlapping areas are visually highlighted on the chart, usually using a hatch pattern or a bright color.
Venn Diagrams are actively used to illustrate simple set relationships in set theory and probability theory, logic and statistics, mathematics and computer science, linguistics, sociology, and
marketing. Venn Diagrams are also often used to visually summarize the status and future viability of a project.
The Venn Diagrams solution extends ConceptDraw DIAGRAM with Venn diagram maker capabilities, templates, Venn diagram examples, samples, and a library with a variety of predesigned Venn diagram vector shapes offered with different quantities of sets and in various color palettes.
There is 1 library containing 12 vector objects in the Venn Diagrams solution.
There are a few samples that you see on this page which were created in the ConceptDraw DIAGRAM application by using the Venn Diagrams solution. Some of the solution's capabilities as well as the
professional results which you can achieve are all demonstrated here on this page.
All source documents are vector graphic documents which are always available for modifying, reviewing and/or converting to many different formats, such as MS PowerPoint, PDF file, MS Visio, and many
other graphic ones from the ConceptDraw Solution Park or ConceptDraw STORE. The Venn Diagrams solution is available to all ConceptDraw DIAGRAM users to get installed and used while working in the
ConceptDraw DIAGRAM diagramming and drawing software.
This diagram was created in ConceptDraw DIAGRAM using the Venn Diagrams Library from the Venn Diagrams Solution. An experienced user spent 5 minutes creating this sample.
This Venn Diagram shows the names of the professions that have important skill intersections for internet marketing. Venn Diagrams are widely used because they are a convenient tool to visualize all possible logical intersections between several sets.
This diagram was created in ConceptDraw DIAGRAM using the Venn Diagrams Library from the Venn Diagrams Solution. An experienced user spent 10 minutes creating this sample.
This Venn Diagram shows the prerequisites for sustainable development, as it is influenced by economic, social and environmental factors. For clarity the circles have different colors. The
intersection of three circles is brightly highlighted with the lightest color.
This sample shows a Venn Diagram, which displays the components that make up relationship marketing. It’s incredibly convenient, simple, and quick to draw a Venn Diagram using ConceptDraw DIAGRAM
with its predesigned objects.
This Venn Diagram visually shows the attributes and components of a Photooxygenation reaction. This diagram is bright, attractive, and professional looking, so it can be used in presentations,
reportages, reports, and scientific reviews.
This sample shows a Venn Diagram that visualizes the intersection between two sets. The sets are equal in color value, but the intersection is visually highlighted to attract attention.
After ConceptDraw DIAGRAM is installed, the Venn Diagrams solution can be purchased either from the Diagrams area of ConceptDraw STORE itself or from our online store. Thus, you will be able to use
the Venn Diagrams solution straight after.
First of all, make sure that both ConceptDraw STORE and ConceptDraw DIAGRAM applications are downloaded and installed on your computer. Next, install the Venn Diagrams solution from the ConceptDraw
STORE to use it in the ConceptDraw DIAGRAM application.
Start using the Venn Diagrams solution to make professional-looking diagrams by adding the design elements taken from the stencil libraries and editing the pre-made examples that can be found there.
A Venn diagram, sometimes referred to as a set diagram, is a diagramming style used to show all the possible logical relations between a finite amount of sets. In mathematical terms, a set is a
collection of distinct objects gathered together into a group, which can then itself be termed as a single object. Venn diagrams represent these objects on a page as circles or ellipses, and their
placement in relation to each other describes the relationships between them.
Simple Venn diagram showing two sets and their intersection made with ConceptDraw DIAGRAM
Commonly a Venn diagram will compare two sets with each other. In such a case, two circles will be used to represent the two sets, and they are placed on the page in such a way as that there is an
overlap between them. This overlap, known as the intersection, represents the connection between sets — if for example the sets are 'mammals' and 'sea life', then the intersection will be 'marine
mammals', e.g. dolphins or whales. Each set is taken to contain every instance possible of its class; everything outside the 'union' of sets (union is the term for the combined scope of all sets and
intersections) is implicitly not any of those things — not a mammal, doesn't live underwater, etc.
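These relationships map directly onto set operations in code; here is a quick sketch in Python (the example members are invented purely for illustration):

```python
mammals = {"dolphin", "whale", "dog", "bat"}
sea_life = {"dolphin", "whale", "shark", "coral"}

# The overlap of the two circles: the intersection.
marine_mammals = mammals & sea_life

# The combined scope of both sets and their intersection: the union.
union = mammals | sea_life

# Anything outside the union is implicitly in neither set.
print(sorted(marine_mammals))  # ['dolphin', 'whale']
```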
The structure of this humble diagram was formally developed by the mathematician John Venn, but its roots go back as far as the 13th century, and include many stages of evolution dictated by a number of noted logicians and philosophers. The earliest indications of similar diagram theory came from the writer Ramon Llull, whose initial work would later inspire the German polymath Leibniz. Leibniz was exploring early ideas regarding computational sciences and diagrammatic reasoning, using a style of diagram that would eventually be formalized by another famous mathematician. This was Leonhard Euler, the creator of the Euler diagram.
Euler diagrams are similar to Venn diagrams, in that both compare distinct sets using logical connections. Where they differ is that a Venn diagram is bound to show every possible intersection
between sets, whether objects fall into that class or not; a Euler diagram only shows actually possible intersections within the given context. Sets can exist entirely within another, termed as a
subset, or as a separate circle on the page without any connections — this is known as a disjoint. Furthering the example outlined previously, if a new set was introduced — 'dogs' — this would be shown as a circle entirely within the confines of the 'mammals' set (but not overlapping 'sea life'). A fourth set of 'trees' would be a disjoint — a circle without any connections or intersections.
Logician John Venn developed the Venn diagram in complement to Euler's concept. His diagram rules were more rigid than Euler's — each set must show its connection with all other sets within the
union, even if no objects fall into this category. This is why Venn diagrams often only contain 2 or 3 sets, any more and the diagram can lose its symmetry and become overly complex. Venn made
allowances for this by trading circles for ellipses and arcs, ensuring all connections are accounted for whilst maintaining the aesthetic of the diagram.
Usage for Venn diagrams has evolved somewhat since their inception. Both Euler and Venn diagrams were used to logically and visually frame a philosophical concept, taking phrases such as some of x is
y, all of y is z and condensing that information into a diagram that can be summarized at a glance. They are used in, and indeed were formed as an extension of, set theory — a branch of mathematical
logic that can describe object's relations through algebraic equation.
Now the Venn diagram is so ubiquitous and well-ingrained a concept that you can see its use far outside mathematical confines. The form is so recognizable that it can be shown through mediums such as advertising or news broadcasts and the meaning will immediately be understood. Venn diagrams are used extensively in teaching environments — their generic functionality can apply to any subject and focus on any facet of it. Whether creating a business presentation, collating marketing data, or just visualizing a strategic concept, the Venn diagram is a quick, functional, and effective way of exploring logical relationships within a context.
As ever with any common (and not-so-common) diagramming technique, CS Odessa want to provide you with the tools needed to produce highly effective, professional diagrams, in the style that suits you.
The Venn Diagrams solution for ConceptDraw DIAGRAM contains a library of 12 pre-built Venn diagrams, with options for 2, 3, 4 or 5 set variants, displayed using a range of color palettes. Just pick
your style, fill in your text, and your work is almost done. From there you can choose to export your diagram into a presentation, open using other diagramming software, or use in conjunction with
other services from the ConceptDraw Office suite. Help is on hand if you need it, from How-To guides, to video tutorials, not forgetting our huge online help desk.
Problem: Find and return the indices of the two numbers in an array that sum up to the target number.
target = 9
arr = [2, 7, 11, 13]
Brute Force
This involves iterating over each item in the array, finding the sum of the item and other individual items in the array and comparing it to the target.
This approach involves a nested for loop, which means a time complexity of O(n^2).
This is how it looks in Python:
my_list = [3, 2, 9, 8, 13, 7]
target = 9

# Brute Force Method
def two_sum(my_list, target):
    for i in range(len(my_list)):
        for j in range(i+1, len(my_list)):
            result = my_list[i] + my_list[j]
            if result == target:
                return [i, j]

two_sum(my_list, target)  # Result: [1, 5]
Using a Hashmap
This method involves passing each item through a hash function and storing it in a hashmap. Here's a step by step of how I solved this:
1. Create a hashmap
2. Find the difference of each item from the target value. The difference is the second value we need.
3. Check if the difference is already in the hashmap. If it isn't, add the number(not the difference) to the table, and move on to the next number in the list.
4. If the difference exists in the hash table, return its index and the index of the number in an array.
Now, let's see how it looks in code:
my_list = [3, 2, 9, 8, 13, 7]
target = 9

# Using a Hash Table
def two_sum(my_list, target):
    prevMap = {}  # Create a Hashmap: value -> index
    for i, n in enumerate(my_list):
        diff = target - n
        if diff in prevMap:
            return [prevMap[diff], i]
        prevMap[n] = i

two_sum(my_list, target)  # Result: [1, 5]
This method has a time complexity of O(n), at the cost of O(n) extra space for the hashmap.
Re: Adaptive FunctionInterpolation[] ?
• To: mathgroup at smc.vnet.net
• Subject: [mg33261] Re: Adaptive FunctionInterpolation[] ?
• From: "Carl K. Woll" <carlw at u.washington.edu>
• Date: Tue, 12 Mar 2002 05:09:10 -0500 (EST)
• Organization: University of Washington
• References: <a6ci4l$e3a$1@smc.vnet.net>
• Sender: owner-wri-mathgroup at wolfram.com
Hi Frank,
If you're interested, I can give you the workaround that I used when I came
across this problem. My solution used the fact that NIntegrate must
adaptively choose its integration points in order to accurately determine
the numerical integral. Hence, if I use NIntegrate, and remember the data
points which NIntegrate uses in its routines, I can use these data points to
interpolate the desired function. I have no idea how good this approach is
compared to other adaptive routines, but it required very little work from
me to develop (that is, I didn't need to figure out an adaptive routine).
Putting the above idea into a function yields the following:
AdaptiveFunctionInterpolation[expr_, x__, opts___?OptionQ] :=
Module[{acc, g, gridpts, iarrays, niacc, pts, vars, vol},
acc = AccuracyGoal /. {opts} /. AccuracyGoal -> 6;
vol = Times @@ ({x}[[All,3]] - {x}[[All,2]]);
niacc = -Log[10, 10^-acc vol];
vars = {x}[[All,1]];
pts = {};
With[{v = vars}, Evaluate[g @@ (Pattern[#, _] & ) /@ vars] := Module[{}, pts
= {v, pts}; expr]];
NIntegrate[g @@ vars, x, AccuracyGoal -> niacc];
pts = Partition[Flatten[pts], Length[vars]];
gridpts = (Union[pts[[All,#1]], {x}[[1,{2, 3}]]] & ) /@ Range[Length[vars]];
iarrays = expr /. Outer[Thread[vars -> {##1}] & , Sequence @@ gridpts];
ListInterpolation[iarrays, gridpts]
]
Some comments are probably in order.
1. I decided that an AccuracyGoal was more appropriate for function
interpolation. If the desired accuracy goal of the approximate function was
acc, then the total error in integrating this approximation would be 10^-acc
volume. Hence, we want to use NIntegrate with an AccuracyGoal of niacc,
where 10^-niacc equals 10^-acc volume. This explains where niacc comes from.
2. The With statement creates a new function g, which remembers the points
where the desired function expr is evaluated.
3. Once the desired grid spacings are determined, I use ListInterpolation to
create the InterpolatingFunction.
4. The only supported option is AccuracyGoal.
5. In the case where there is only a single argument, the above function
should work, but it makes much more sense to use FunctionInterpolation
instead, as FunctionInterpolation does use an adaptive routine in the case
of a function of a single argument.
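Readers outside Mathematica can reproduce the core idea, letting an adaptive routine choose the sample points and then interpolating through them, with a short pure-Python sketch. This is my own illustration, not Carl's code, and it refines intervals by midpoint subdivision rather than reusing NIntegrate's rules:

```python
def adaptive_points(f, a, b, tol=1e-4, depth=0, max_depth=12):
    """Sample f on [a, b], subdividing wherever a straight line through the
    interval endpoints fails to predict the midpoint value to within tol."""
    m = (a + b) / 2.0
    fa, fm, fb = f(a), f(m), f(b)
    if depth < max_depth and abs(fm - (fa + fb) / 2.0) > tol:
        left = adaptive_points(f, a, m, tol, depth + 1, max_depth)
        right = adaptive_points(f, m, b, tol, depth + 1, max_depth)
        return left[:-1] + right   # drop the duplicated midpoint sample
    return [(a, fa), (m, fm), (b, fb)]

def make_interpolant(pts):
    """Piecewise-linear interpolant through the recorded (x, f(x)) pairs."""
    from bisect import bisect_right
    xs = [x for x, _ in pts]
    ys = [y for _, y in pts]
    def g(x):
        i = min(max(bisect_right(xs, x) - 1, 0), len(xs) - 2)
        t = (x - xs[i]) / (xs[i + 1] - xs[i])
        return (1 - t) * ys[i] + t * ys[i + 1]
    return g
```

As in the Mathematica version, the sample points cluster where the function changes rapidly, so the resulting grid is dense only where it needs to be.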
Below, I give an example of AdaptiveFunctionInterpolation in use:
f[x_,y_]:=E^(-x-10y) Sin[x+20y]
{1.047 Second, Null}
{1.89 Second, Null}
{4.641 Second, Null}
{39.891 Second, Null}
{4.437 Second, Null}
The functions a# are interpolating functions with the accuracy specified by the
the number. The function i uses the standard FunctionInterpolation function
with a uniform grid spacing containing 200 points on each axis.
-1.03315 × 10^(…)
-4.87857 × 10^(…)
-4.8787 × 10^(…)
-5.4758 × 10^(…)
2.93463 × 10^(…)
-5.43843 × 10^(…)
It's a bit hard to decipher, but the a# converge to the correct answer, with
accuracies as expected. The function i yields an answer which is worse than
Again, the functions a# produce answers with the accuracies expected, with
a6 already better than the function i.
I hope you find the above useful.
Carl Woll
Physics Dept
U of Washington
"Frank J. Iannarilli, Jr." <franki at aerodyne.com> wrote in message
news:a6ci4l$e3a$1 at smc.vnet.net...
> Hi,
> I too have experienced Carl Woll's finding (referenced below) that
> FunctionInterpolation[] does NOT adaptively sample the underlying
> (exact) function in determining its sampling grid.
> Various related newsgroup postings have suggested manually combining
> sets of InterpolationFunctions, each computed for a differing region
> of the domain at appropriate (fixed) grid density. But of course,
> this is a pain, and not at all a general solution. MathSource
> apparently does not contain any adaptive FunctionInterpolation[]
> package.
> So this is a cry to WRI or anyone who may already have the goods to
> come forward with this capability...it's really a good-to-have in
> furthering Mathematica's appeal. I know one could craft it based
> around ListInterpolation[], but I'm looking to save this effort (if
> possible).
> Regards,
> Frank J. Iannarilli
> +++++++++++++
> Back in 1999, Carl Woll wrote:
> I am trying to use FunctionInterpolation to approximate a function
> <snip>
> (given the default option InterpolationPoints -> 11 ...) In
> investigating
> the behavior of FunctionInterpolation, I discovered that it calculates
> points on an 11x11 grid only. When I increase the number of
> interpolation points, FunctionInterpolation calculates points on a
> finer
> grid. However, there is only one region of space where the answer is
> poor, so I only want to increase the number of points used in the
> region
> where the answer is poor, not everywhere. How can I do this?
> Since FunctionInterpolation has a MaxRecursion option, I figured that
> FunctionInterpolation used an adaptive procedure to select points.
> This
> is apparently not so. Is this really true, or is there a way to force
> FunctionInterpolation to select points adaptively. If not, what does
> the
> option MaxRecursion do?
> --
> Carl Woll
> Dept of Physics
> U of Washington
Year 5 Daily Maths Review Set Two — Teachie Tings
Make reviewing Australian Curriculum Year 5 Maths fun with these engaging review and warm-up activities! This set (SET 2) includes 50 Year 5 daily maths practice slides perfect for reviewing the concepts covered in Year Five maths.
You’ll get ten weeks of maths review slides covering grade 5 number, algebra, measurement, space, statistics, and probability. Each day is a mix of grade 5 computation practice and other skills.
Super easy to set up and use, these slides are perfect for daily practice and revision, maths warm-ups, maths groups or substitute days, too!
Here’s what you’ll get:
DAILY MATHS WARM-UP / REVIEW SLIDES – YEAR FIVE, SET TWO.
• PowerPoint presentation, ready to use.
• 50 daily maths slides (TEN WEEKS!) – full colour and ready to use – no entering your own numbers, just display and go!
• Tips on how to implement these in your classroom.
Your students will love the easy format and predictability of your new maths warm-up/review routine. Each day begins with number and place value, reinforcing the basics and then moving into the
application of those concepts and other strands of maths.
You will cover the concepts covered in the Year FIVE Australian Maths Curriculum including:
1. Number and Algebra
Place value, identifying multiples, factors, rounding to 10, 100 & 1000, addition and subtraction round and estimate, adding on number line, split strategy, partitioning, compensation, vertical
addition, subtraction and multiplication, multiply by 10, 100, 1000, short division, order fractions, decimals and percentages on a number line, short division with remainders, compare and order
fractions, add fractions with the same denominator, subtract fractions with the same denominator, represent percentages, add decimals, number patterns, finding unknown quantities, financial
2. Measurement and Geometry
Perimeter of rectangles, area of regular and composite shapes, convert between common units of measurement, 12 and 24-hour time, lines of symmetry, rotational symmetry, 3D objects, grid reference
systems, describing routes using landmarks and directional language, translations, reflections and rotations, enlargement of 2D shapes, angles, calculate volume using unit cubes, calculate volume
using a formula, capacity, and mass.
3. Statistics and Probability
Represent probability of outcomes, recognise that probability ranges from 0-1, represent probabilities as fractions, collect numerical data, describe and interpret data
This is SET TWO
The quasi-equilibrium framework revisited: analyzing long-term CO2 enrichment responses in plant–soil models
Articles | Volume 12, issue 5
© Author(s) 2019. This work is distributed under the Creative Commons Attribution 4.0 License.
Elevated carbon dioxide (CO[2]) can increase plant growth, but the magnitude of this CO[2] fertilization effect is modified by soil nutrient availability. Predicting how nutrient availability affects
plant responses to elevated CO[2] is a key consideration for ecosystem models, and many modeling groups have moved to, or are moving towards, incorporating nutrient limitation in their models. The
choice of assumptions to represent nutrient cycling processes has a major impact on model predictions, but it can be difficult to attribute outcomes to specific assumptions in complex ecosystem
simulation models. Here we revisit the quasi-equilibrium analytical framework introduced by Comins and McMurtrie (1993) and explore the consequences of specific model assumptions for ecosystem net
primary productivity (NPP). We review the literature applying this framework to plant–soil models and then analyze the effect of several new assumptions on predicted plant responses to elevated CO[2]
. Examination of alternative assumptions for plant nitrogen uptake showed that a linear function of the mineral nitrogen pool or a linear function of the mineral nitrogen pool with an additional
saturating function of root biomass yield similar CO[2] responses at longer timescales (>5 years), suggesting that the added complexity may not be needed when these are the timescales of interest. In
contrast, a saturating function of the mineral nitrogen pool with linear dependency on root biomass yields no soil nutrient feedback on the very-long-term (>500 years), near-equilibrium timescale,
meaning that one should expect the model to predict a full CO[2] fertilization effect on production. Secondly, we show that incorporating a priming effect on slow soil organic matter decomposition
attenuates the nutrient feedback effect on production, leading to a strong medium-term (5–50 years) CO[2] response. Models incorporating this priming effect should thus predict a strong and
persistent CO[2] fertilization effect over time. Thirdly, we demonstrate that using a “potential NPP” approach to represent nutrient limitation of growth yields a relatively small CO[2] fertilization
effect across all timescales. Overall, our results highlight the fact that the quasi-equilibrium analytical framework is effective for evaluating both the consequences and mechanisms through which
different model assumptions affect predictions. To help constrain predictions of the future terrestrial carbon sink, we recommend the use of this framework to analyze likely outcomes of new model
assumptions before introducing them to complex model structures.
Received: 18 Nov 2018 – Discussion started: 07 Dec 2018 – Revised: 01 May 2019 – Accepted: 14 May 2019 – Published: 28 May 2019
Predicting how plants respond to atmospheric carbon dioxide (CO[2]) enrichment (eCO[2]) under nutrient limitation is fundamental for an accurate estimate of the global terrestrial carbon (C) budget
in response to climate change. There is now ample evidence that the response of terrestrial vegetation to eCO[2] is modified by soil nutrient availability (Fernández-Martínez et al., 2014; Norby et
al., 2010; Reich and Hobbie, 2012; Sigurdsson et al., 2013). Over the past decade, land surface models have developed from C-only models to carbon–nitrogen (CN) models (Gerber et al., 2010; Zaehle
and Friend, 2010). The inclusion of CN biogeochemistry has been shown to be essential to capture the reduction in the CO[2] fertilization effect with declining nutrient availability and therefore its
implications for climate change (Zaehle et al., 2015). However, it has also been shown that models incorporating different assumptions predict very different vegetation responses to eCO[2]
(Lovenduski and Bonan, 2017; Medlyn et al., 2015). Careful examination of model outputs has provided insight into the reasons for the different model predictions (De Kauwe et al., 2014; Medlyn et
al., 2016; Walker et al., 2014, 2015; Zaehle et al., 2014), but it is generally difficult to attribute outcomes to specific assumptions in these plant–soil models that differ in structural complexity
and process feedbacks (Lovenduski and Bonan, 2017; Medlyn et al., 2015; Thomas et al., 2015).
Understanding the mechanisms underlying predictions of ecosystem carbon cycle processes is fundamental for the validity of prediction across space and time. Comins and McMurtrie (1993) developed an
analytical framework, the “quasi-equilibrium” approach, to make model predictions traceable to their underlying mechanisms. The approach is based on the two-timing approximation method (Ludwig et
al., 1978) and makes use of the fact that ecosystem models typically represent a series of pools with different equilibration times. The method involves the following: (1) choosing a time interval (τ
) such that the model variables can be divided into “fast” pools (which approach effective equilibrium at time τ) and “slow” pools (which change only slightly at time τ); (2) holding the slow pools
constant and calculating the equilibria of the fast pools (an effective equilibrium as this is not a true equilibrium of the entire system); and (3) substituting the fast pool effective equilibria
into the original differential equations to give simplified differential equations for the slow pools at time τ.
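The three steps can be made concrete with a deliberately minimal two-pool sketch: a fast pool F and a slow pool S. All parameter values here are invented for illustration and are not from Comins and McMurtrie (1993):

```python
def simulate(u=1.0, k_fast=10.0, k_slow=0.01, a=0.5, dt=0.001, t_end=50.0):
    """Integrate a fast/slow pool pair and its quasi-equilibrium reduction.

    Full system:  dF/dt = u - k_fast * F,   dS/dt = a * F - k_slow * S
    Reduction:    F is held at its effective equilibrium F* = u / k_fast,
                  leaving one slow equation dS/dt = a * F* - k_slow * S.
    """
    F = S = S_qe = 0.0
    F_star = u / k_fast                            # step 2: fast-pool equilibrium
    for _ in range(int(t_end / dt)):
        F += dt * (u - k_fast * F)                 # fast pool (explicit Euler)
        S += dt * (a * F - k_slow * S)             # slow pool, full system
        S_qe += dt * (a * F_star - k_slow * S_qe)  # step 3: reduced slow equation
    return S, S_qe
```

Because the fast pool equilibrates on a timescale of 1/k_fast while the slow pool evolves on 1/k_slow, the reduced equation tracks the full system closely after a brief initial transient, which is exactly the separation of timescales the framework exploits.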
In a CN model, plant net primary production (NPP) can be estimated from two constraints based on equilibration of the C balance (the “photosynthetic constraint”) and the N balance (the “nitrogen
recycling constraint”) (Comins and McMurtrie, 1993). Both constraints link NPP with leaf chemistry (i.e., N:C ratio) (derivation in Sect. 3.1). The simulated production occurs at the intersection
of these two constraint curves (shown graphically in Fig. 1). To understand behavior on medium and long timescales (e.g., wood and slow and passive soil organic pools in Fig. 2; 20–200 years), one
can assume that plant pools with shorter equilibration times in the model (e.g., foliage, fine-root, or active soil organic pools in Fig. 2) have reached quasi-equilibrium, and model dynamics are
thus driven by the behavior of the longer-timescale pools.
The recent era of model development has seen some significant advances in representing complex plant–soil interactions, but models still diverge in future projections of CO[2] fertilization effects
on NPP (Friend et al., 2014; Koven et al., 2015; Walker et al., 2015). A recent series of multi-model intercomparison studies has demonstrated the importance of understanding underlying response
mechanisms in determining model response to future climate change (Medlyn et al., 2015), but this can be difficult to achieve in complex global models. The quasi-equilibrium framework is a relatively
simple but quantitative method to examine the effect of different assumptions on model predictions. As such, it complements more computationally expensive sensitivity analyses and can be used as an
effective tool to provide a priori evaluation of both the consequence and mechanism through which different new model implementations affect model predictions.
Here, by constructing a quasi-equilibrium framework based on the structure of the Generic Decomposition And Yield (G'DAY) model (Comins and McMurtrie, 1993), we evaluate the effects on plant
responses to eCO[2] of some recently developed model assumptions incorporated into ecosystem models, for example the Community Land Model (CLM) (Oleson et al., 2004), the Community
Atmosphere–Biosphere Land Exchange (CABLE) model (Kowalczyk et al., 2006), the Lund–Potsdam–Jena (LPJ) model (Smith et al., 2001), the JSBACH model (Goll et al., 2017b), and the O-CN model (Zaehle et
al., 2010). Specifically, we test how different functions affecting plant N uptake influence NPP responses to eCO[2] at various quasi-equilibrium time steps. The present study is a continuation of
the series of quasi-equilibrium studies reviewed in Sect. 2, with a general aim of helping researchers to understand the similarities and differences of predictions made by different process-based
models, as demonstrated in Sect. 3.
Many of the assumptions currently being incorporated into CN models have previously been explored using the quasi-equilibrium framework; here we provide a brief literature review describing the
outcomes of this work (Table 1). Firstly, the flexibility of plant and soil stoichiometry has recently been highlighted as a key assumption (Stocker et al., 2016; Zaehle et al., 2014). A key finding
from early papers applying the quasi-equilibrium framework was that model assumptions about the flexibility of the plant wood N:C ratio (Comins, 1994; Comins and McMurtrie, 1993; Dewar and
McMurtrie, 1996; Kirschbaum et al., 1994, 1998; McMurtrie and Comins, 1996; Medlyn and Dewar, 1996) and soil N:C ratio (McMurtrie and Comins, 1996; McMurtrie et al., 2001; Medlyn et al., 2000) were
critical determinants of the magnitude of the transient (10 to >100 years) plant response to eCO[2] (Fig. 1). Unlike foliar N:C ratio flexibility, which affects photosynthesis instantaneously, wood N:C ratio flexibility controls how much nutrient can be stored per unit biomass accumulated in this slow-turnover pool. Therefore, a constant wood N:C
ratio, such as was assumed in CLM4 (Thornton et al., 2007; Yang et al., 2009), means that effectively a fixed amount of N is locked away from the active processes such as photosynthesis on the
timescale of the life span of the woody tissue. In contrast, a flexible wood N:C ratio, such as was tested in O-CN (Meyerholt and Zaehle, 2015), allows variable N storage in the woody tissue and
consequently more nutrients available for C uptake on the transient timescale. Similarly, flexibility in the soil N:C ratio determines the degree of the soil N cycle feedback (e.g., N
immobilization and mineralization) and therefore its effect on plant response to eCO[2]. A large response to eCO[2] occurs when the soil N:C ratio is allowed to vary, whereas there could be little
or no response if the soil N:C ratio is assumed to be inflexible (McMurtrie and Comins, 1996).
Changes in plant allocation with eCO[2] are also a source of disagreement among current models (De Kauwe et al., 2014). The quasi-equilibrium framework has been used to investigate a number of
different plant C allocation schemes (Comins and McMurtrie, 1993; Kirschbaum et al., 1994; Medlyn and Dewar, 1996). For example, Medlyn and Dewar (1996) suggested that plant long-term growth
responses to eCO[2] depend strongly on the extent to which stem and foliage allocations are coupled. With no coupling (i.e., fixed allocation of C and N to stemwood), plant growth was not responsive
to eCO[2]; with linear coupling (i.e., allocation to stemwood proportional to foliage allocation), a significant long-term increase in total growth following eCO[2] was found (Fig. S1 in the
Supplement). The reason for this is similar to the argument behind wood N:C ratio flexibility: decreasing C allocation to wood decreases the rate of N removal per unit of C invested in growth. In
contrast, Kirschbaum et al. (1994) found that changes in allocation between different parts of a plant only marginally changed the CO[2] sensitivity of production at different timescales. The
fundamental difference between the two allocation schemes was that Kirschbaum et al. (1994) assumed that the root allocation coefficient was determined by a negative relationship with the foliar
N:C ratio, meaning that the increase in foliar N:C ratio would lead to a decreased root allocation and increased wood and foliage allocation, whereas Medlyn and Dewar (1996) investigated
stem–foliage allocation coupling without introducing a feedback via the foliar N:C ratio. The comparison of the two allocation schemes is indicative of the underlying causes of model prediction
divergence in recent inter-model comparisons (De Kauwe et al., 2014; Walker et al., 2015).
Another hypothesis currently being explored in models is the idea that increased belowground allocation can enhance nutrient availability under elevated CO[2] (Dybzinski et al., 2014; Guenet et al.,
2016). Comins (1994) argued that the N deficit induced by CO[2] fertilization could be eliminated by the stimulation of N fixation. This argument was explored in more detail by McMurtrie et al.
(2000), who assumed that eCO[2] led to a shift in allocation from wood to root exudation, which resulted in enhanced N fixation. They showed that, although the increase in N fixation could induce a
large eCO[2] response in NPP over the long term, a slight decrease in NPP was predicted over the medium term. This decrease occurred because increased exudation at eCO[2] increased soil C input,
causing increased soil N sequestration and lowering the N available for plant uptake. Over the long term, however, both NPP and C storage were greatly enhanced because the sustained small increase in
N input led to a significant build-up in total ecosystem N on this timescale.
The interaction between rising CO[2] and warming under nutrient limitation is of key importance for future simulations. Medlyn et al. (2000) demonstrated that short-term plant responses to warming,
such as physiological acclimation, are overridden by the positive effects of warming on soil nutrient availability in the medium to long term. Similarly, McMurtrie et al. (2001) investigated how the
flexibility of the soil N:C ratio affects predictions of the future C sink under elevated temperature and CO[2]. They showed that assuming an inflexible soil N:C ratio with elevated temperature
would mean that nitrogen is released as decomposition is enhanced, leading to a large plant uptake of N that enhances growth. In contrast, a flexible soil N:C ratio would mean that the extra N mineralized under elevated temperature is largely immobilized in the soil, and hence there is a smaller increase in C storage. This effect of soil N:C stoichiometry on the response to warming is
opposite to the effect on eCO[2] described above. Therefore, under a scenario in which both temperature and CO[2] increase, the C sink strength is relatively insensitive to soil N:C variability,
but the relative contributions of temperature and CO[2] to this sink differ under different soil N:C ratio assumptions (McMurtrie et al., 2001). This outcome may explain the results observed by
Bonan and Levis (2010) when comparing coupled carbon cycle–climate simulations. The Terrestrial Ecosystem Model (TEM; Sokolov et al., 2008) and CLM (Thornton et al., 2009), which assumed inflexible
stoichiometry, had a large climate–carbon feedback but a small CO[2] concentration–carbon feedback, contrasting with the O-CN model (Zaehle et al., 2010), which assumed flexible stoichiometry and had
a small climate–carbon feedback and a large CO[2] concentration–carbon feedback. Variations among models in this stoichiometric flexibility assumption could also potentially explain the trade-off
between CO[2] and temperature sensitivities observed by Huntzinger et al. (2017).
This section combines methods and results because equation derivation is fundamental to the analytical and graphic interpretation of model performance within the quasi-equilibrium
framework. Below we first describe the baseline simulation model and derivation of the quasi-equilibrium constraints (Sect. 3.1); we then follow with analytical evaluations of new model assumptions
using the quasi-equilibrium framework (Sect. 3.2). Within each subsection (Sect. 3.2.1 to 3.2.3), we first provide key equations for each assumption and the derivation of the quasi-equilibrium
constraints with these new assumptions; we then provide our graphic interpretations and analyses to understand the effect of the model assumption on plant NPP responses to eCO[2].
More specifically, we tested alternative model assumptions for three processes that affect plant carbon–nitrogen cycling: (1) Sect. 3.2.1 evaluates different ways of representing plant N uptake,
namely plant N uptake as a fixed fraction of mineral N pools, as a saturating function of the mineral N pool linearly depending on root biomass (Zaehle and Friend, 2010), or as a saturating function
of root biomass linearly depending on the mineral N pool (McMurtrie et al., 2012); (2) Sect. 3.2.2 tests the effect of the potential NPP approach, which downregulates potential NPP to represent N
limitation (Oleson et al., 2004); and (3) Sect. 3.2.3 evaluates root exudation and its effect on the soil organic matter decomposition rate (i.e., priming effect). The first two assumptions have been
incorporated into some existing land surface model structures (e.g., CLM, CABLE, O-CN, LPJ), whereas the third is a framework proposed following the observation that models did not simulate some key
characteristic observations of the DukeFACE experiment (Walker et al., 2015; Zaehle et al., 2014) and therefore could be of importance in addressing some model limitations in representing soil
processes (van Groenigen et al., 2014; Zaehle et al., 2014). It is our purpose to demonstrate how one can use this analytical framework to provide an a priori and generalizable understanding of the
likely impact of new model assumptions on model behavior without having to run a complex simulation model. Here we do not target specific ecosystems to parameterize the model but anticipate the
analytical interpretation of the quasi-equilibrium framework to be of general applicability for woody-dominated ecosystems. One could potentially adopt the quasi-equilibrium approach to provide
case-specific evaluations of model behavior against observations (e.g., constraining the likely range of wood N:C ratio flexibility).
3.1Baseline model and derivation of the quasi-equilibrium constraints
Our baseline simulation model is similar in structure to G'DAY (Generic Decomposition And Yield; Comins and McMurtrie, 1993), a generic ecosystem model that simulates biogeochemical processes (C, N,
and H[2]O) at daily or sub-daily time steps. A simplified G'DAY model version that simulates plant–soil C–N interactions at a weekly time step was developed for this study (Fig. 2). In G'DAY, plants
are represented by three stoichiometrically flexible pools: foliage, wood, and roots. Each pool turns over at a fixed rate. Litter enters one of four litter pools (metabolic and structural
aboveground and belowground) and decomposes at a rate dependent on the litter N:C ratio, soil moisture, and temperature. Soil organic matter (SOM) is represented as active, slow, and passive pools,
which decay according to first-order decay functions with different rate constants. Plants access nutrients from the mineral N pool, which is an explicit pool supplied by SOM decomposition and an
external input, which is assumed to be constant, as a simplified representation of fixation and atmospheric deposition.
The baseline simulation model further assumes the following: (1) gross primary production (GPP) is a function of light-use efficiency (LUE), which depends on the foliar N:C ratio (n[f]) and
atmospheric CO[2] concentration (C[a]) (Appendix A1); (2) carbon use efficiency (the ratio NPP:GPP) is constant; (3) allocation of newly fixed carbon among foliage (a[f]), wood (a[w]), and root (a
[r]) pools is constant; (4) foliage (n[f]), wood (n[w]), and root N:C (n[r]) ratios are flexible; (5) wood and root N:C ratios are proportional to the foliar N:C ratio, with constants of
proportionality r[w] and r[r], respectively; (6) a constant proportion (t[f]) of foliage N is retranslocated before leaves senesce; (7) active, slow, and passive SOM pools have fixed N:C ratios;
and (8) an N uptake constant determines the plant N uptake rate. Definitions of the parameters and forcing variables are summarized in Table 2. For all simulations, the ambient CO[2] concentration (
aCO[2]) was set at 400 ppm and eCO[2] at 800 ppm.
We now summarize the key derivation of the two quasi-equilibrium constraints, the photosynthetic constraint, and the nutrient cycling constraint from our baseline simulation model (details provided
in Appendix A1 and A2). The derivation follows Comins and McMurtrie (1993), which is further elaborated in work by McMurtrie et al. (2000) and Medlyn and Dewar (1996) and evaluated by Comins (1994).
First, the photosynthetic constraint is derived by assuming that the foliage C pool (C[f]) has equilibrated. Following the GPP and CUE assumptions (see above) and the detailed derivations made in
Appendix A1, there is an implicit relationship between NPP and n[f]:
$$\mathrm{NPP} = \mathrm{LUE}(n_\mathrm{f}, C_\mathrm{a}) \cdot I_0 \cdot \left(1 - e^{-k \sigma a_\mathrm{f} \mathrm{NPP}/s_\mathrm{f}}\right) \cdot \mathrm{CUE}, \qquad (1)$$
where I[0] is the incident radiation, k is the canopy light extinction coefficient, and σ is the specific leaf area. This equation is the photosynthetic constraint, which relates NPP to n[f].
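Because Eq. (1) contains NPP on both sides, it must be solved numerically for a given n_f. A minimal sketch follows (invented parameter values; LUE held constant here rather than computed from n_f and C_a as in Appendix A1):

```python
import math

# Minimal numerical sketch of Eq. (1), which is implicit in NPP. Parameter
# values are invented, and LUE is held constant rather than computed from
# n_f and C_a as in the paper's Appendix A1.
LUE, I0, CUE = 0.4, 100.0, 0.5
k, sigma, a_f, s_f = 0.5, 5.0, 0.2, 5.0

def photosynthetic_constraint(npp):
    """Right-hand side of Eq. (1): NPP expressed as a function of itself."""
    return LUE * I0 * (1.0 - math.exp(-k * sigma * a_f * npp / s_f)) * CUE

# Solve the implicit equation by fixed-point iteration NPP <- f(NPP).
npp = 1.0
for _ in range(200):
    npp = photosynthetic_constraint(npp)

print(round(npp, 2))  # converges to the self-consistent NPP (about 15.9 here)
```

The iteration converges because the right-hand side is a saturating function of NPP with slope below one near the solution.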
Secondly, the nitrogen cycling constraint is derived by assuming that nitrogen inputs to and outputs from the equilibrated pools are equal. Based on the assumed residence times of the passive SOM (
∼400 years), slow SOM (15 years), and woody biomass (50 years) pools, we can calculate the nutrient recycling constraint at three different timescales (conceptualized in Fig. 3): very long (VL, >500
years, all pools equilibrated), long (L, 100–500 years, all pools equilibrated except the passive pool), or medium (M, 5–50 years, all pools equilibrated except slow, passive, and wood pools). In
the VL term, we have
$$N_\mathrm{in} = N_\mathrm{loss}, \qquad (2)$$
where N[in] is the total N input into the system, and N[loss] is the total N lost from the system via leaching and volatilization. Analytically, with some assumptions about plant N uptake
(Appendix A2), we can transform Eq. (2) into a relationship between NPP and n[f], expressed as
$$\mathrm{NPP} = \frac{N_\mathrm{in}(1 - l_\mathrm{n})}{l_\mathrm{n}\left(a_\mathrm{f} n_\mathrm{fl} + a_\mathrm{w} n_\mathrm{w} + a_\mathrm{r} n_\mathrm{r}\right)}, \qquad (3)$$
where l[n] is the fraction of N mineralization that is lost; a[f], a[w], and a[r] are the allocation coefficients for foliage, wood, and roots, respectively; and n[fl], n[w], and n[r] are the N:C
ratios for foliage litter, wood, and roots, respectively. Since n[w] and n[r] are assumed proportional to n[f] (Table 2), the nutrient recycling constraint also links NPP and n[f]. The intersection
with the photosynthetic constraint yields the very-long-term equilibria of both NPP and n[f]. Similarly, we can write the nitrogen recycling constraint in the L term and M term as a function between
NPP and n[f] (details explained in Appendix A2). Their respective intersections with the photosynthetic constraint yield the L-term and M-term equilibrium points of both NPP and n[f] (Figs. 1 and 3).
Essentially, at each timescale, there are two unknowns (NPP and n[f]) to be resolved via both the nitrogen recycling constraint and the photosynthetic constraint equations. Based on this set of
analytical equations, one can evaluate how different assumptions affect the behavior of the model quantitatively. Below, we describe how different new model assumptions affect the predicted plant
response to a doubling of the CO[2] concentration at various timescales.
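The intersection procedure can be sketched numerically. Everything below is illustrative: the parameter values are invented and the LUE(n_f, C_a) form is a placeholder saturating function, not the form from Appendix A1:

```python
import math

# Illustrative sketch only: parameter values are invented, and the LUE(n_f, C_a)
# form below is a placeholder saturating function, not the one in Appendix A1.
I0, k, sigma, a_f, s_f, CUE = 100.0, 0.5, 5.0, 0.2, 5.0, 0.5
N_in, l_n = 0.2, 0.5
a_w, a_r = 0.5, 0.3
r_w, r_r, t_f = 0.2, 0.7, 0.5   # n_w = r_w*n_f, n_r = r_r*n_f; t_f = retranslocation

def npp_photo(n_f, C_a):
    """Photosynthetic constraint (Eq. 1), solved for NPP by fixed-point iteration."""
    lue = 0.6 * n_f / (n_f + 0.01) * C_a / (C_a + 350.0)  # placeholder LUE(n_f, C_a)
    npp = 1.0
    for _ in range(500):
        npp = lue * I0 * (1.0 - math.exp(-k * sigma * a_f * npp / s_f)) * CUE
    return npp

def npp_nutrient(n_f):
    """VL-term nutrient recycling constraint (Eq. 3)."""
    n_fl = (1.0 - t_f) * n_f   # foliage litter N:C after retranslocation
    demand = a_f * n_fl + a_w * r_w * n_f + a_r * r_r * n_f
    return N_in * (1.0 - l_n) / (l_n * demand)

def solve_intersection(C_a, lo=0.005, hi=0.2):
    """Bisect on n_f: the photo constraint rises with n_f, the nutrient one falls."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if npp_photo(mid, C_a) > npp_nutrient(mid):
            hi = mid
        else:
            lo = mid
    return mid, npp_nutrient(mid)

n_a, npp_a = solve_intersection(400.0)   # ambient CO2
n_e, npp_e = solve_intersection(800.0)   # elevated CO2
print(npp_e / npp_a)  # VL-term eCO2:aCO2 NPP ratio (>1)
```

Consistent with Fig. 1, raising C_a shifts the photosynthetic constraint upward, so the intersection moves to a lower n_f and a higher NPP along the (unchanged) nutrient recycling constraint.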
3.2Evaluations of new model assumptions based on the quasi-equilibrium framework
3.2.1Explicit plant N uptake
We now move to considering new model assumptions. We first consider different representations of plant N uptake. In the baseline model, the mineral N pool (N[min]) is implicit, as we assumed that all
mineralized N in the soil is either taken up by plants (N[U]) or lost from the system (N[loss]). Here, we evaluate three alternative model representations in which plant N uptake depends on an
explicit N[min] pool and their effects on plant responses to eCO[2]. We consider plant N uptake as (1) a fixed coefficient of the mineral N pool, (2) a saturating function of root biomass and a
linear function of the mineral N pool (McMurtrie et al., 2012), and (3) a saturating function of the mineral N pool and a linear function of root biomass. The last function has been incorporated into
some land surface models, for example, O-CN (Zaehle and Friend, 2010) and CLM (Ghimire et al., 2016), while the first two have been incorporated into G'DAY (Corbeels et al., 2005).
A mineral N pool was made explicit by specifying a constant coefficient (u) to regulate the plant N uptake rate (i.e., $N_\mathrm{U} = u\,N_\mathrm{min}$). N lost from the system is a function of the mineral N pool (N[min]) regulated by a loss rate (l[n,rate], yr^−1). For the VL-term equilibrium, we have N[in]=N[loss], which means $N_\mathrm{min} = N_\mathrm{in}/l_\mathrm{n,rate}$, and hence
$$N_\mathrm{loss} = \frac{l_\mathrm{n,rate}}{u}\cdot \mathrm{NPP}\cdot\left(a_\mathrm{f} n_\mathrm{fl} + a_\mathrm{w} n_\mathrm{w} + a_\mathrm{r} n_\mathrm{r}\right), \qquad (4)$$
where n[fl] is the foliage litter N:C ratio, which is proportional to n[f] (Table 2). At the VL equilibrium, we can rearrange the above equation to relate NPP to n[f]:
$$\mathrm{NPP} = \frac{u\,N_\mathrm{in}}{l_\mathrm{n,rate}\left(a_\mathrm{f} n_\mathrm{fl} + a_\mathrm{w} n_\mathrm{w} + a_\mathrm{r} n_\mathrm{r}\right)}, \qquad (5)$$
which indicates that the N cycling constraint for NPP is inversely dependent on n[f].
The second function represents plant N uptake as a saturating function of root biomass (C[r]) and a linear function of the mineral N pool (McMurtrie et al., 2012), expressed as
$$N_\mathrm{U} = \frac{C_\mathrm{r}}{C_\mathrm{r} + K_\mathrm{r}}\cdot N_\mathrm{min}, \qquad (6)$$
where K[r] is a constant. At the VL equilibrium, we have $N_\mathrm{in} = N_\mathrm{loss} = l_\mathrm{n,rate} N_\mathrm{min}$ and $C_\mathrm{r} = \mathrm{NPP}\cdot a_\mathrm{r}/s_\mathrm{r}$, where s[r] is the root turnover rate (the inverse of root lifetime). Substituting for C[r] in Eq. (6), we relate N[U] to NPP:
$$N_\mathrm{U} = \frac{\mathrm{NPP}\cdot a_\mathrm{r}}{\mathrm{NPP}\cdot a_\mathrm{r} + K_\mathrm{r}\, s_\mathrm{r}}\cdot \frac{N_\mathrm{in}}{l_\mathrm{n,rate}}. \qquad (7)$$
Since N[U] is also a function of NPP, we can rearrange and get
$$\mathrm{NPP} = \frac{N_\mathrm{in}}{l_\mathrm{n,rate}\left(a_\mathrm{f} n_\mathrm{fl} + a_\mathrm{w} n_\mathrm{w} + a_\mathrm{r} n_\mathrm{r}\right)} - \frac{K_\mathrm{r}\, s_\mathrm{r}}{a_\mathrm{r}}. \qquad (8)$$
Comparing with Eq. (5), here NPP is also inversely dependent on n[f] but with an additional negative offset of $K_\mathrm{r} s_\mathrm{r}/a_\mathrm{r}$. The third approach (e.g., in O-CN and CLM) expresses N uptake as a saturating function of mineral N that also depends linearly on root biomass (Zaehle and Friend, 2010), according to
$$N_\mathrm{U} = \frac{N_\mathrm{min}}{N_\mathrm{min} + K}\cdot C_\mathrm{r}\cdot V_\mathrm{max}, \qquad (9)$$
where K is a constant coefficient, and V[max] is the maximum root N uptake capacity, simplified as a constant here. Equating N[U] to the plant N demand (NPP multiplied by the N required per unit C) and substituting $C_\mathrm{r} = \mathrm{NPP}\, a_\mathrm{r}/s_\mathrm{r}$, NPP cancels and we get
$$N_\mathrm{min} = \frac{K\left(a_\mathrm{f} n_\mathrm{fl} + a_\mathrm{w} n_\mathrm{w} + a_\mathrm{r} n_\mathrm{r}\right)}{V_\mathrm{max}\dfrac{a_\mathrm{r}}{s_\mathrm{r}} - \left(a_\mathrm{f} n_\mathrm{fl} + a_\mathrm{w} n_\mathrm{w} + a_\mathrm{r} n_\mathrm{r}\right)}. \qquad (10)$$
This equation sets a limit to possible values of n[f]: in equilibrium, for N[min] to be nonzero, we need $\left(a_\mathrm{f} n_\mathrm{fl} + a_\mathrm{w} n_\mathrm{w} + a_\mathrm{r} n_\mathrm{r}\right) < V_\mathrm{max}\, a_\mathrm{r}/s_\mathrm{r}$. The N loss rate is still proportional to the mineral N pool, so N[loss] is given by
$$N_\mathrm{loss} = l_\mathrm{n,rate}\cdot \frac{K\left(a_\mathrm{f} n_\mathrm{fl} + a_\mathrm{w} n_\mathrm{w} + a_\mathrm{r} n_\mathrm{r}\right)}{V_\mathrm{max}\dfrac{a_\mathrm{r}}{s_\mathrm{r}} - \left(a_\mathrm{f} n_\mathrm{fl} + a_\mathrm{w} n_\mathrm{w} + a_\mathrm{r} n_\mathrm{r}\right)}. \qquad (11)$$
The above equation provides an N[loss] term that no longer depends on NPP but only on n[f]. If the N leaching loss is the only system N loss, the VL-term nutrient constraint no longer involves NPP,
implying that the full photosynthetic CO[2] fertilization effect is realized. The L- and M-term nutrient recycling constraints, however, are still NPP dependent due to feedbacks from the slowly
recycling wood and SOM pools (e.g., Eq. A11–A15).
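The NPP independence of the VL-term N loss under this uptake form can be checked directly (illustrative parameter values):

```python
# Illustrative check of Eqs. (10)-(11): with uptake saturating in mineral N
# (Eq. 9), the VL-term mineral N pool -- and hence N loss -- depends on n_f
# only, not on NPP. All parameter values are invented for illustration.
a_f, a_w, a_r = 0.2, 0.5, 0.3
r_w, r_r, t_f = 0.2, 0.7, 0.5
K, V_max, s_r = 0.05, 2.0, 1.5
l_n_rate = 0.4                      # loss rate of the mineral N pool, yr^-1

def n_min_vl(n_f):
    """Eq. (10): equilibrium mineral N as a function of n_f alone."""
    n_fl = (1.0 - t_f) * n_f
    demand = a_f * n_fl + a_w * r_w * n_f + a_r * r_r * n_f  # N demand per unit NPP
    cap = V_max * a_r / s_r                                  # max uptake per unit NPP
    assert demand < cap, "n_f too large: no equilibrium exists (Eq. 10)"
    return K * demand / (cap - demand)

def n_loss_vl(n_f):
    """Eq. (11): N_loss = l_n,rate * N_min -- NPP does not appear."""
    return l_n_rate * n_min_vl(n_f)

print(n_loss_vl(0.03), n_loss_vl(0.06))  # rises with n_f, independent of NPP
```

The assert inside `n_min_vl` encodes the feasibility limit on n_f noted after Eq. (10).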
The impacts of these alternative representations of N uptake are shown in Fig. 4. First, the explicit consideration of the mineral N pool with a fixed uptake constant (u) of 1yr^−1 has little impact
on the transient response to eCO[2] when compared to the baseline model (Figs. 4a, 1a, Table 3). Varying u does not strongly (<5%) affect plant responses to CO[2] fertilization at different time
steps (Fig. S2). This is because u is only a scaling factor of NPP, meaning it affects NPP but not its response to eCO[2] (Table 4), as depicted by Eq. (5).
Moreover, the approach that assumes N uptake as a saturating function of root biomass linearly depending on the mineral N pool (McMurtrie et al., 2012) has comparable eCO[2] effects on production to
the baseline and the fixed uptake coefficient models (Fig. 4b, Table 3). Essentially, if $K_\mathrm{r} s_\mathrm{r}/a_\mathrm{r}$ is small, we can approximate NPP by $N_\mathrm{in}/\left[l_\mathrm{n,rate}\left(a_\mathrm{f} n_\mathrm{fl} + a_\mathrm{w} n_\mathrm{w} + a_\mathrm{r} n_\mathrm{r}\right)\right]$, which shares a similar structure to the
baseline and fixed uptake coefficient models (Eqs. 8, 5, and A10). Furthermore, Eq. (8) also depicts the fact that an increase in a[r] should lead to higher NPP, and an increase in s[r] or K[r]
should lead to decreased NPP. However, these predictions depend on assumptions of l[n,rate] and n[f]. If l[n,rate] or n[f] is small, NPP would be relatively less sensitive to a[r], K[r], or s[r].
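The comparison between Eqs. (5) and (8) can be made concrete (illustrative parameter values):

```python
# Illustrative comparison of Eq. (5) (fixed uptake coefficient u) and Eq. (8)
# (uptake saturating in root biomass): Eq. (8) is Eq. (5) with u = 1 minus the
# constant offset K_r*s_r/a_r. Parameter values are invented for illustration.
N_in, l_n_rate, u = 0.2, 0.5, 1.0
a_f, a_w, a_r = 0.2, 0.5, 0.3
r_w, r_r, t_f = 0.2, 0.7, 0.5
K_r, s_r = 0.3, 1.5

def demand(n_f):
    """N required per unit NPP: a_f*n_fl + a_w*n_w + a_r*n_r."""
    n_fl = (1.0 - t_f) * n_f
    return a_f * n_fl + a_w * r_w * n_f + a_r * r_r * n_f

def npp_eq5(n_f):
    """Fixed uptake coefficient: u rescales the curve but not its shape in n_f."""
    return u * N_in / (l_n_rate * demand(n_f))

def npp_eq8(n_f):
    """Root-biomass-saturating uptake: same shape, shifted down by K_r*s_r/a_r."""
    return N_in / (l_n_rate * demand(n_f)) - K_r * s_r / a_r

offset = K_r * s_r / a_r
print(npp_eq5(0.04), npp_eq8(0.04), offset)  # offset = 1.5 here
```

With these numbers the offset is small relative to NPP, which is why the two uptake forms yield comparable eCO[2] responses in Fig. 4b.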
By comparison, when N uptake is represented as a saturating function of mineral N depending linearly on root biomass (Ghimire et al., 2016; Zaehle and Friend, 2010), production is no longer constrained by the VL-term nutrient recycling constraint (Fig. 4c), as predicted by Eq. (11). Actual VL-term NPP is determined only by n[f] along with the photosynthetic constraint, meaning that the full CO[2]
fertilization effect on production is realized with the increase in CO[2]. The magnitudes of the CO[2] fertilization effect at other time steps are comparable to those of the baseline model (Table 3)
because the N[loss] term is smaller than the N[w], N[Sp], or N[Ss] terms, meaning it has a relatively smaller effect on NPP at equilibrium. However, steeper nutrient recycling constraint curves are
observed (Fig. 4c), indicating a stronger sensitivity of the NPP response to changes in n[f].
3.2.2Potential NPP
In several vegetation models, including CLM-CN, CABLE, and JSBACH, potential (non-nutrient-limited) NPP is calculated from light, temperature, and water limitations. Actual NPP is then calculated by
downregulating the potential NPP to match nutrient supply. Here we term this the potential NPP approach. We examine this assumption in the quasi-equilibrium framework following the implementation of
this approach adopted in CLM-CN (Bonan and Levis, 2010; Thornton et al., 2007). The potential NPP is reduced if mineral N availability cannot match the demand from plant growth:
$$P_\mathrm{dem} = \mathrm{NPP}_\mathrm{pot}\left(a_\mathrm{f} n_\mathrm{f} + a_\mathrm{w} n_\mathrm{w} + a_\mathrm{r} n_\mathrm{r}\right), \qquad (12)$$
where P[dem] is the plant N demand, and NPP[pot] is the potential NPP of the plant. Writing $\left({a}_{\mathrm{f}}{n}_{\mathrm{f}}+{a}_{\mathrm{w}}{n}_{\mathrm{w}}+{a}_{\mathrm{r}}{n}_{\mathrm{r}}\
right)$ as n[plant], the whole-plant N:C ratio, and the whole-soil N:C ratio as n[soil], we can calculate the immobilization N demand as
$$I_\mathrm{dem} = f\, C_\mathrm{lit}\, s_\mathrm{t}\left(n_\mathrm{soil} - n_\mathrm{plant}\right), \qquad (13)$$
where f is the fraction of litter C that becomes soil C, C[lit] is the total litter C pool, and s[t] is the turnover time of the litter pool. Actual plant N uptake is expressed as
$$P_\mathrm{act} = \min\left(\frac{N_\mathrm{min}\, P_\mathrm{dem}}{I_\mathrm{dem} + P_\mathrm{dem}},\; P_\mathrm{dem}\right). \qquad (14)$$
Actual NPP is expressed as
$$\mathrm{NPP}_\mathrm{act} = \mathrm{NPP}_\mathrm{pot}\,\frac{P_\mathrm{act}}{P_\mathrm{dem}}. \qquad (15)$$
For the VL constraint, we have N[in]=N[loss]. We can calculate NPP[pot] as
$$\mathrm{NPP}_\mathrm{pot} = \frac{N_\mathrm{in}(1 - l_\mathrm{n})}{l_\mathrm{n}\, n_\mathrm{plant}}. \qquad (16)$$
For an actual NPP, we need to consider the immobilization demand. Rearranging the above, we get
$$\mathrm{NPP}_\mathrm{act} = \frac{N_\mathrm{in}(1 - l_\mathrm{n})}{l_\mathrm{n}\left[n_\mathrm{plant} + f\left(n_\mathrm{soil} - n_\mathrm{plant}\right)\right]}. \qquad (17)$$
This equation removes the NPP[act] dependence on NPP[pot]. It can be shown that the fraction $P_\mathrm{dem}/\left(I_\mathrm{dem} + P_\mathrm{dem}\right)$ depends only on the N:C ratios and
f, not on NPP[pot]. This means that there will be no eCO[2] effect on NPP[act].
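The independence argument can be verified in a few lines (illustrative values; the litter C flux is taken to scale one-to-one with NPP_pot at equilibrium):

```python
# Illustrative check of the independence claim: at equilibrium the litter C
# flux scales with NPP_pot, so P_dem/(I_dem + P_dem) depends only on the N:C
# ratios and f, and the VL-term actual NPP (Eq. 17) is set by N inputs.
# All parameter values are invented for illustration.
n_plant, n_soil, f = 0.02, 0.08, 0.3
N_in, l_n = 0.2, 0.5

def plant_share(npp_pot):
    """P_dem / (I_dem + P_dem), with the litter C flux taken equal to NPP_pot."""
    p_dem = npp_pot * n_plant                        # Eq. (12)
    i_dem = f * npp_pot * (n_soil - n_plant)         # Eq. (13), C_lit*s_t = NPP_pot
    return p_dem / (i_dem + p_dem)

# The share is the same whatever NPP_pot is ...
assert abs(plant_share(10.0) - plant_share(200.0)) < 1e-12

# ... so raising NPP_pot (e.g., via eCO2) leaves the VL-term actual NPP unchanged:
npp_act = N_in * (1.0 - l_n) / (l_n * (n_plant + f * (n_soil - n_plant)))  # Eq. (17)
print(plant_share(10.0), npp_act)
```

The NPP_pot terms cancel inside `plant_share`, which is the algebraic core of the flat nutrient recycling constraint curves in Fig. 5a.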
As shown in Fig. 5a, the potential NPP approach results in relatively flat nutrient recycling constraint curves, suggesting that the CO[2] fertilization effect is only weakly influenced by soil N
availability. Despite a sharp instantaneous NPP response, CO[2] fertilization effects on NPP[act] are small on the M-, L-, and VL-term timescales (Table 3). This outcome can be understood from the
governing equation for the nutrient recycling constraint, which removes the NPP[act] dependence on NPP[pot] (Eq. 17). Although the plant can initially increase its production, over time the
litter pool increases in size proportionally to NPP[pot], meaning that immobilization demand increases to match the increased plant demand, which leads to no overall change in the relative demands
from the plant and the litter. This pattern is similar under alternative wood N:C ratio assumptions (Fig. 5b, Table 3).
3.2.3Root exudation to prime N mineralization
The priming effect is described as the stimulation of the decomposition of native soil organic matter caused by larger soil carbon input under eCO[2] (van Groenigen et al., 2014). Experimental
studies suggest that this phenomenon is widespread and persistent (Dijkstra and Cheng, 2007), but this process has not been incorporated into most land surface models (Walker et al., 2015). Here we
introduce a novel framework to induce the priming effect on soil decomposition and test its effect on plant production response to eCO[2] within the quasi-equilibrium framework.
To account for the effect of priming on decomposition of SOM, we first introduce a coefficient to determine the fraction of root growth allocated to exudates, a[rhizo]. Here we assumed that the N:C
ratio of rhizodeposition is the same as the root N:C ratio. The coefficient a[rhizo] is estimated by a function dependent on foliar N:C:
$$a_\mathrm{rhizo} = a_0 + a_1\cdot \frac{1/n_\mathrm{f} - 1/n_\mathrm{ref}}{1/n_\mathrm{ref}}, \qquad (18)$$
where n[ref] is a reference foliar N:C ratio to induce plant N stress (0.04), and a[0] and a[1] are tuning coefficients (0.01 and 1, respectively). Within the quasi-equilibrium framework, for the
VL soil constraint we now have
$$\mathrm{NPP} = \frac{N_\mathrm{in}(1 - l_\mathrm{n})}{l_\mathrm{n}\left[a_\mathrm{f} n_\mathrm{fl} + a_\mathrm{w} n_\mathrm{w} + a_\mathrm{r} a_\mathrm{rhizo} n_\mathrm{r} + a_\mathrm{r}\left(1 - a_\mathrm{rhizo}\right) n_\mathrm{r}\right]}. \qquad (19)$$
To introduce an effect of root exudation on the turnover rate of the slow SOM pool, rhizodeposition is transferred into the active SOM pool according to a microbial use efficiency parameter ($f_\mathrm{cue,rhizo} = 0.3$). The extra allocation of NPP into the active SOM pool is therefore
$$C_\mathrm{rhizo} = \mathrm{NPP}\cdot a_\mathrm{r}\cdot a_\mathrm{rhizo}\cdot f_\mathrm{cue,rhizo}. \qquad (20)$$
The increased N demand of the active SOM pool is assumed to accelerate the degradation rate of the slow SOM pool, expressed as
$$k_\mathrm{slow,new} = k_\mathrm{slow}\cdot\left(1 + k_\mathrm{m}\right)\cdot \frac{C_\mathrm{rhizo}}{C_\mathrm{rhizo} + k_\mathrm{m}}, \qquad (21)$$
where k[slow] is the original decomposition rate of the slow SOM pool, and k[m] is a sensitivity parameter. The decomposition rate of the slow SOM pool affects N[Rs], the amount of N released from the slow SOM pool, as
$$N_\mathrm{Rs} = k_\mathrm{slow,new}\, C_\mathrm{s}\left[n_\mathrm{s}\left(1 - \Omega_\mathrm{ss}\right) - n_\mathrm{p}\,\Omega_\mathrm{ps}\right], \qquad (22)$$
where C[s] is the slow SOM pool, and Ω[ss] and Ω[ps] represent the proportions of the C released by decomposition of the slow SOM pool that subsequently re-enter the slow pool and enter the passive pool, respectively.
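Equations (18) and (20) can be sketched directly, using the parameter values quoted in the text for a_0, a_1, n_ref, and f_cue,rhizo (a_r and NPP are illustrative):

```python
# Sketch of the exudation fraction (Eq. 18) and the extra C routed to the
# active SOM pool (Eq. 20). a0, a1, n_ref, and f_cue,rhizo take the values
# given in the text; a_r and NPP are illustrative.
a0, a1, n_ref = 0.01, 1.0, 0.04
f_cue_rhizo = 0.3
a_r = 0.3

def a_rhizo(n_f):
    """Eq. (18): exudation fraction grows as foliar N:C falls below n_ref."""
    return a0 + a1 * (1.0 / n_f - 1.0 / n_ref) / (1.0 / n_ref)

def c_rhizo(npp, n_f):
    """Eq. (20): exudate C entering the active SOM pool."""
    return npp * a_r * a_rhizo(n_f) * f_cue_rhizo

# Unstressed plant (n_f = n_ref): the fraction is just a0. N stress raises it.
print(a_rhizo(0.04), a_rhizo(0.03), c_rhizo(10.0, 0.03))
```

Lowering n_f from 0.04 to 0.03 raises the exudation fraction from a_0 to roughly a_0 + 1/3, which is the N-stress feedback that drives the priming response examined below.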
Root exudation and the associated priming effect result in a strong M-term plant response to eCO[2] when compared to the baseline model (Fig. 6a in comparison to Fig. 4a). In fact, the magnitude of
the priming effect on the M-term NPP response to eCO[2] is comparable to its L- and VL-term NPP responses, indicating a persistent eCO[2] effect over time (Table 3). A faster decomposition rate and
therefore a smaller pool size of the slow SOM pool are observed (Table 5). With a fixed wood N:C ratio assumption, the NPP response to eCO[2] is drastically reduced in the M term compared to the
model with a variable wood N:C assumption (Fig. 6b), but it is comparable to its corresponding baseline fixed wood N:C model (Table 3). Varying parameter coefficients (a[0], a[1], f[cue,rhizo],
and k[m]) affects the decomposition rates of the slow soil organic pool and hence could lead to variation of the priming effect on M-term CO[2] response (Fig. S3). Further experimental studies are
needed to better constrain these parameters. Adding root exudation without influencing the slow SOM pool decomposition rate (Eq. 21) leads to a smaller predicted M-term CO[2] response than the model
with the direct effect on the slow SOM pool. However, it also leads to a higher predicted M-term CO[2] response than the baseline model (Fig. 7) because a[r] and n[r] affect the reburial fraction of
the slow SOM pool, as shown in McMurtrie et al. (2000). Finally, the model with a variable wood N:C assumption indicates that there is no increase in NUE (Table 2) in the M term compared to its L-
and VL-term responses (Fig. 6c). In comparison, the fixed wood N:C ratio assumption means that there is a decreased wood “quality” (reflected via a decreased N:C ratio), and therefore faster
decomposition of the slow SOM pool does not release much extra N to support the M-term CO[2] response, leading to a significant rise of NUE in the M term (Fig. 6d).
4.1 Influence of alternative N uptake assumptions on predicted CO[2] fertilization
The quasi-equilibrium analysis of the time-varying plant response to eCO[2] provides a quantitative framework to understand the relative contributions of different model assumptions governing the
supply of N to plants in determining the magnitude of the CO[2] fertilization effect. Here, we evaluated how plant responses to eCO[2] are affected by widely used model assumptions relating to plant
N uptake, soil decomposition, and immobilization demand under alternative wood N–C coupling strategies (variable and fixed wood N:C ratios). These assumptions have been adopted in land surface
models such as O-CN (Zaehle and Friend, 2010), CABLE (Wang et al., 2007), LPJ-GUESS N (Wårlind et al., 2014), JSBACH-CNP (Goll et al., 2012), ORCHIDEE-CNP (Goll et al., 2017a), and CLM4 (Thornton et
al., 2007). In line with previous findings (Comins and McMurtrie, 1993; Dewar and McMurtrie, 1996; Kirschbaum et al., 1998; McMurtrie and Comins, 1996; Medlyn and Dewar, 1996), our results show that
assumptions related to wood stoichiometry have a very large impact on estimates of plant responses to eCO[2]. More specifically, models incorporating a fixed wood N:C ratio consistently predicted
smaller CO[2] fertilization effects on production than models using a variable N:C ratio assumption (Table 3). Examples of models assuming constant (Thornton et al., 2007; Weng and Luo, 2008) and
variable (Zaehle and Friend, 2010) plant tissue stoichiometry are both evident in the literature, and therefore, assuming that all other model structures and assumptions are similar, prediction
differences could potentially be attributed to the tissue stoichiometric assumption incorporated into these models, as suggested in some previous simulation studies (Medlyn et al., 2016, 2015;
Meyerholt and Zaehle, 2015; Zaehle et al., 2014). Together with a more appropriate representation of the trade-offs governing tissue C–N coupling (Medlyn et al., 2015), further tissue biochemistry
data are necessary to constrain this fundamental aspect of ecosystem model uncertainty (Thomas et al., 2015).
C–N coupled simulation models generally predict that the CO[2] fertilization effect on plant production is progressively constrained by soil N availability over time: the progressive nitrogen
limitation hypothesis (Luo et al., 2004; Norby et al., 2010; Zaehle et al., 2014). Here we showed similar temporal patterns in a model with different plant N uptake assumptions (Fig. 4) and the
potential NPP assumption (Fig. 5). In particular, the progressive N limitation effect on NPP is shown as a downregulated M-term CO[2] response after the sharp instantaneous CO[2] fertilization effect
on production is realized. However, the model incorporating a priming effect of C on soil N availability with a flexible wood N:C ratio assumption induced a strong M-term CO[2] response (13%
increase in NPP), thereby introducing a persistent CO[2] effect over time (Fig. 6a). This strong M-term CO[2] response is due to an enhanced decomposition rate of soil organic matter, consistent with
a series of recent observations and modeling studies (Finzi et al., 2015; Guenet et al., 2018; Sulman et al., 2014; van Groenigen et al., 2014). However, as a previous quasi-equilibrium study showed,
a significant increase in the M-term CO[2] response can occur via changes in litter quality into the slow SOM pool or increased N input into the system (McMurtrie et al., 2000). Our study differs
from McMurtrie et al. (2000) in that we introduced an explicit effect of C priming on k[slow] – the decomposition rate of the slow SOM pool – via extra rhizodeposition (Eq. 21). As such, a faster
decomposition rate of slow SOM is observed (Table 5), equivalent to adding extra N for mineralization to support the M-term CO[2] response (Fig. 6c). More complex models for N uptake, incorporating a
carbon cost for nitrogen acquisition, are being proposed (Fisher et al., 2010; Ghimire et al., 2016; M. Shi et al., 2015); we suggest that the likely effects of introducing these complex sets of
assumptions into large-scale models could usefully be explored with the quasi-equilibrium framework.
Processes regulating progressive nitrogen limitation under eCO[2] were evaluated by Liang et al. (2016) based on a meta-analysis, which bridged the gap between theory and observations. It was shown
that the expected diminished CO[2] fertilization effect on plant growth was not apparent at the ecosystem scale due to extra N supply through increased biological N fixation and decreased leaching
under eCO[2]. Here, our baseline assumption assumed fixed N input into the system, and therefore plant-available N is progressively depleted through increased plant N sequestration under eCO[2], as
depicted by the progressive N limitation hypothesis (Luo et al., 2004). A function that allows the N fixation parameter to vary could provide further assessment of the tightness of the ecosystem N
cycle process and its impact on plant response to eCO[2]. Furthermore, given the significant role the wood N:C ratio plays in plant N sequestration, matching the modeled range of wood tissue
stoichiometry with observations can provide an additional level of evaluation of model performance. Our study provides a generalizable evaluation based on the assumption that the wood N:C ratio,
when allowed to vary in a model, is proportional to the leaf N:C ratio. Case-specific, more realistic evaluations can be performed based on the quasi-equilibrium framework to bridge models with data.
A strong M-term response and a persistent CO[2] fertilization effect over time were also found by some models in Walker et al. (2015), but without introducing a priming effect. In models such as CLM, N losses
from the system are concentration dependent, and plant N uptake is a function of both N supply and plant demand. Increased plant N demand in models in which N uptake is a function of plant N demand
reduces the soil solution N concentration and therefore system N losses. This means that over time N can accumulate in the system in response to eCO[2] and sustain an eCO[2] response. Here, our
quasi-equilibrium framework considers N lost as a fixed rate that depends linearly on the mineral N pool, and the mineral N pool changes at different equilibrium time points. For example, as shown in
Table S1, the M-term N loss rate is significantly reduced under eCO[2] compared to the VL-term N loss rate under aCO[2]. This suggests a positive relationship between N loss and NPP, as embedded in
Eq. (4).
We also showed that the magnitude of the CO[2] fertilization effect is significantly reduced at all timescales when models incorporate the potential NPP approach (Fig. 5). Among all model assumptions
tested, the potential NPP approach induced the smallest M- to VL-term responses (Table 3). It can be shown from the equation derivation (Eq. 17) that the fraction $P_{\mathrm{dem}}/\left(P_{\mathrm{dem}}+I_{\mathrm{dem}}\right)$ depends only on the N:C ratios and f (the fraction of litter C that becomes soil C), implying that models incorporating the potential NPP assumption should show no response of NPP to CO[2]. Both our study and simulation-based studies showed small CO[2] responses (Walker et al., 2015; Zaehle et al., 2014), possibly because the timing of P[dem] and I[dem] differs due to the
fluctuating nature of GPP and N mineralization at daily to seasonal time steps such that N is limiting at certain times of the year but not at others. Additionally, models such as CLM have
volatilization losses (not leaching) that are reduced under eCO[2], which may lead to production not limited by N availability, meaning that a full CO[2] fertilization effect may be realized.
Finally, leaching is simplified here and treated as a fixed fraction of the mineral N pool. In models such as CLM or JSBACH, it is a function of the soil-soluble N concentration, implying a
dependency on litter quality (Zaehle et al., 2014).
4.2 Implications for probing model behaviors
Model–data intercomparisons have been shown as a viable means to investigate how and why models differ in their predicted response to eCO[2] (De Kauwe et al., 2014; Walker et al., 2015; Zaehle et
al., 2014). Models make different predictions because they have different model structures (Lombardozzi et al., 2015; Meyerholt et al., 2016; Shi et al., 2018; Xia et al., 2013; Zhou et al., 2018),
parameter uncertainties (Dietze et al., 2014; Wang et al., 2011), response mechanisms (Medlyn et al., 2015), and numerical implementations (Rogers et al., 2016). It is increasingly difficult to
diagnose model behaviors from the multitude of model assumptions incorporated into the model. Furthermore, while it is true that the models can be tuned to match observations within the domain of
calibration, models may make correct predictions but based on incorrect or simplified assumptions (Medlyn et al., 2005, 2015; Walker et al., 2015). As such, diagnosing model behaviors can be a
challenging task in complex plant–soil models. In this study, we showed that the effect of a model assumption on plant response to eCO[2] can be analytically predicted by solving the photosynthetic
and nutrient recycling constraints together. This provides a constrained model framework to evaluate the effect of individual model assumptions without having to run a full set of sensitivity
analyses, thereby providing an a priori understanding of the underlying response mechanisms through which the effect is realized. We suggest that before implementing a new function into the full
structure of a plant–soil model, one could use the quasi-equilibrium framework as a test bed to examine the effect of the new assumption.
The quasi-equilibrium framework requires that additional model assumptions be analytically solvable, which is increasingly not the case for complex modeling structures. However, as we demonstrate
here, studying the behavior of a reduced-complexity model can nonetheless provide real insight into model behavior. In some cases, the quasi-equilibrium framework can highlight where additional
complexity is not valuable. For example, here we showed that adding complexity in the representation of plant N uptake did not result in significantly different predictions of plant response to eCO
[2]. Where the quasi-equilibrium framework indicates little effect of more complex assumptions, there is a strong case for keeping simpler assumptions in the model. However, we do acknowledge that
the quasi-equilibrium framework operates on timescales of >5 years; where fine-scale temporal responses are important, the additional complexity may be warranted.
The multiple-element limitation framework developed by Rastetter and Shaver (1992) analytically evaluates the relationship between short-term and long-term plant responses to eCO[2] and nutrient
availability under different model assumptions. It was shown that there could be a marked difference in the short-term and long-term ecosystem responses to eCO[2] (Rastetter et al., 1997; Rastetter
and Shaver, 1992). More specifically, Rastetter et al. (1997) showed that the ecosystem NPP response to eCO[2] appeared on several characteristic timescales: (1) there was an instantaneous increase
in NPP, which results in an increased vegetation C:N ratio; (2) on a timescale of a few years, the vegetation responded to eCO[2] by increasing uptake effort for available N through increased
allocation to fine roots; (3) on a timescale of decades, there was a net movement of N from soil organic matter to vegetation, which enables vegetation biomass to accumulate; and (4) on the timescale
of centuries, ecosystem responses were dominated by increases in total ecosystem N, which enable organic matter to accumulate in both vegetation and soils. Both the multiple-element limitation
framework and the quasi-equilibrium framework provide information about equilibrium responses. These approaches also provide information about the degree to which the ecosystem relies on internally
recycled N vs. exchanges with external sources and sinks. The multiple-element limitation framework also offers insight into the C–N interaction that influences transient dynamics. These analytical
frameworks are both useful tools for making quantitative assessments of model assumptions.
A related model assumption evaluation tool is the traceability framework, which decomposes complex models into various simplified component variables, such as ecosystem C storage capacity or
residence time, and hence helps to identify structures and parameters that are uncertain among models (Z. Shi et al., 2015; Xia et al., 2013, 2012). Both the traceability and quasi-equilibrium
frameworks provide analytical solutions to describe how and why model predictions diverge. The traceability framework decomposes complex simulations into a common set of component variables,
explaining differences due to these variables. In contrast, quasi-equilibrium analysis investigates the impacts and behavior of a specific model assumption, which is more indicative of mechanisms and
processes. Subsequently, one can relate the effect of a model assumption more mechanistically to the processes that govern the relationship between the plant N:C ratio and NPP, as depicted in
Fig. 1, thereby facilitating efforts to reduce model uncertainties.
Models diverge in future projections of plant responses to increases in CO[2] because of the different assumptions that they make. Applying model evaluation frameworks, such as the quasi-equilibrium
framework, to attribute these differences will not necessarily reduce multi-model prediction spread in the short term (Lovenduski and Bonan, 2017). Many model assumptions are still empirically
derived, and there is a lack of mechanistic and observational constraints on the effect size, meaning that it is important to apply models incorporating diverse process representations. However, use
of the quasi-equilibrium framework can provide crucial insights into why model predictions differ and thus help identify the critical measurements that would allow us to discriminate among
alternative models. As such, it is an invaluable tool for model intercomparison and benchmarking analysis. We recommend the use of this framework to analyze likely outcomes of new model assumptions
before introducing them to complex model structures.
Appendix A: Baseline quasi-equilibrium model derivation
Here we show how the baseline quasi-equilibrium framework is derived. Specifically, there are two analytical constraints that form the foundation of the quasi-equilibrium framework, namely the
photosynthetic constraint and the nitrogen cycling constraint. The derivation follows Comins and McMurtrie (1993), which is further elaborated in work by McMurtrie et al. (2000) and Medlyn and Dewar (1996) and evaluated by Comins (1994).
A1 Photosynthetic constraint
Firstly, gross primary production (GPP) in the simulation mode is calculated using a light-use efficiency approach named MATE (Model Any Terrestrial Ecosystem) (McMurtrie et al., 2008; Medlyn et al.,
2011; Sands, 1995), in which absorbed photosynthetically active radiation is estimated from leaf area index (L) using Beer's law and is then multiplied by a light-use efficiency (LUE), which depends
on the foliar N:C ratio (n[f]) and atmospheric CO[2] concentration (C[a]):
$\text{(A1)}\quad \mathrm{GPP}=\mathrm{LUE}\left(n_{\mathrm{f}},C_{\mathrm{a}}\right)\cdot I_{0}\cdot\left(1-e^{-kL}\right),$
where I[0] is the incident radiation, k is the canopy light extinction coefficient, and L is leaf area index. The derivation of LUE for the MATE is described in full by McMurtrie et al. (2008); our
version differs only in that the key parameters determining the photosynthetic rate follow the empirical relationship with the foliar N:C ratio given by Walker et al. (2014), and the expression for
stomatal conductance follows Medlyn et al. (2011).
In the quasi-equilibrium framework, the photosynthetic constraint is derived by assuming that the foliage C pool (C[f]) has equilibrated. That is, the new foliage C production equals turnover, which
is assumed to be a constant fraction (s[f]) of the pool:
$\text{(A2)}\quad a_{\mathrm{f}}\,\mathrm{NPP}=s_{\mathrm{f}}\,C_{\mathrm{f}},$
where a[f] is the allocation coefficient for foliage. From Eq. (A1), net primary production is a function of the foliar N:C ratio and the foliage C pool:
$\text{(A3)}\quad \mathrm{NPP}=\mathrm{LUE}\left(n_{\mathrm{f}},C_{\mathrm{a}}\right)\cdot I_{0}\cdot\left(1-e^{-k\sigma C_{\mathrm{f}}}\right)\cdot\mathrm{CUE},$
where σ is the specific leaf area. Combining the two equations above leads to an implicit relationship between NPP and n[f],
$\text{(A4)}\quad \mathrm{NPP}=\mathrm{LUE}\left(n_{\mathrm{f}},C_{\mathrm{a}}\right)\cdot I_{0}\cdot\left(1-e^{-k\sigma a_{\mathrm{f}}\mathrm{NPP}/s_{\mathrm{f}}}\right)\cdot\mathrm{CUE},$
which is the photosynthetic constraint.
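Because NPP appears on both sides, Eq. (A4) is implicit and must be solved iteratively for a given foliar N:C ratio. A minimal fixed-point sketch follows; all parameter values are invented for illustration, and the LUE value stands in for the full MATE formulation:

```python
import math

def solve_photosynthetic_constraint(lue, i0, k, sigma, a_f, s_f, cue,
                                    tol=1e-10, max_iter=1000):
    """Fixed-point iteration on Eq. (A4):
    NPP = LUE * I0 * (1 - exp(-k*sigma*a_f*NPP/s_f)) * CUE."""
    npp = lue * i0 * cue  # start from the closed-canopy limit
    for _ in range(max_iter):
        npp_next = (lue * i0
                    * (1.0 - math.exp(-k * sigma * a_f * npp / s_f))
                    * cue)
        if abs(npp_next - npp) < tol:
            return npp_next
        npp = npp_next
    return npp

# Illustrative parameter values (assumed, not the paper's calibration):
npp = solve_photosynthetic_constraint(lue=1.4, i0=3000.0, k=0.5,
                                      sigma=0.004, a_f=0.25, s_f=0.5,
                                      cue=0.5)
```

Starting from the closed-canopy limit, the iteration is a contraction near the root and converges monotonically for these values.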
A2 Nutrient recycling constraint
The nitrogen cycling constraint is derived by assuming that nitrogen inputs to and outputs from the equilibrated pools are equal. Based on the assumed residence times of the passive SOM (∼400 years),
slow SOM (15 years), and woody biomass (50 years) pools, we can calculate the nutrient recycling constraint at three different timescales: very long (VL, >500 years, all pools equilibrated), long (L,
100–500 years, all pools equilibrated except the passive pool), or medium (M, 5–50 years, all pools equilibrated except slow, passive and wood pools).
In the VL term, we have
$\text{(A5)}\quad N_{\mathrm{in}}=N_{\mathrm{loss}},$
where N[in] is the total N input into the system, and N[loss] is the total N lost from the system via leaching and volatilization. Following Comins and McMurtrie (1993), the flux N[in] is assumed to
be a constant. The total N loss term is proportional to the rate of N mineralization (N[m]), following
$\text{(A6)}\quad N_{\mathrm{loss}}=l_{\mathrm{n}}\cdot N_{\mathrm{m}},$
where l[n] is the fraction of N mineralization that is lost. It is assumed that mineralized N that is not lost is taken up by plants (N[U]):
$\text{(A7)}\quad N_{\mathrm{U}}=N_{\mathrm{m}}-N_{\mathrm{loss}}.$
Combining with Eq. (A6), we have
$\text{(A8)}\quad N_{\mathrm{loss}}=\frac{l_{\mathrm{n}}}{1-l_{\mathrm{n}}}\,N_{\mathrm{U}}.$
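Eq. (A8) follows from Eqs. (A6) and (A7) by eliminating $N_{\mathrm{m}}$:

```latex
N_{\mathrm{U}} = N_{\mathrm{m}} - N_{\mathrm{loss}}
             = \frac{N_{\mathrm{loss}}}{l_{\mathrm{n}}} - N_{\mathrm{loss}}
             = N_{\mathrm{loss}}\,\frac{1 - l_{\mathrm{n}}}{l_{\mathrm{n}}}
\quad\Longrightarrow\quad
N_{\mathrm{loss}} = \frac{l_{\mathrm{n}}}{1 - l_{\mathrm{n}}}\,N_{\mathrm{U}}.
```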
The plant N uptake rate depends on production (NPP) and plant N:C ratios, according to
$\text{(A9)}\quad N_{\mathrm{U}}=\mathrm{NPP}\cdot\left(a_{\mathrm{f}}n_{\mathrm{fl}}+a_{\mathrm{w}}n_{\mathrm{w}}+a_{\mathrm{r}}n_{\mathrm{r}}\right),$
where a[f], a[w], and a[r] are the allocation coefficients for foliage, wood, and roots, respectively, and n[fl], n[w], and n[r] are the N:C ratios for foliage litter, wood, and roots,
respectively. The foliage litter N:C ratio (n[fl]) is proportional to n[f], according to Table 2. Combining Eq. (A9) with Eqs. (A5) and (A8), we obtain a function of NPP that can be related to
total N input, which is the nutrient recycling constraint in the VL term, expressed as
$\text{(A10)}\quad \mathrm{NPP}=\frac{N_{\mathrm{in}}\left(1-l_{\mathrm{n}}\right)}{l_{\mathrm{n}}\left(a_{\mathrm{f}}n_{\mathrm{fl}}+a_{\mathrm{w}}n_{\mathrm{w}}+a_{\mathrm{r}}n_{\mathrm{r}}\right)}.$
Since n[w] and n[r] are assumed proportional to n[f], the nutrient recycling constraint also links NPP and n[f]. The intersection with the photosynthetic constraint yields the very-long-term
equilibria of both NPP and n[f].
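Once the tissue N:C ratios are specified, Eq. (A10) is an explicit expression, e.g. (illustrative parameter values only):

```python
def npp_vl_constraint(n_in, l_n, a_f, a_w, a_r, n_fl, n_w, n_r):
    """Eq. (A10): VL-term NPP implied by the N budget for a given set
    of tissue N:C ratios (all pools equilibrated)."""
    n_uptake_per_c = a_f * n_fl + a_w * n_w + a_r * n_r
    return n_in * (1.0 - l_n) / (l_n * n_uptake_per_c)

# Example with assumed (not calibrated) values:
npp1 = npp_vl_constraint(n_in=1.0, l_n=0.05, a_f=0.25, a_w=0.45,
                         a_r=0.30, n_fl=0.02, n_w=0.003, n_r=0.01)
npp2 = npp_vl_constraint(n_in=2.0, l_n=0.05, a_f=0.25, a_w=0.45,
                         a_r=0.30, n_fl=0.02, n_w=0.003, n_r=0.01)
```

Because NPP scales linearly with N[in] in Eq. (A10), doubling N input doubles the VL-term equilibrium NPP at fixed stoichiometry.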
In the L term, we now have to consider N flows leaving and entering the passive SOM pool, which is no longer equilibrated:
$\text{(A11)}\quad N_{\mathrm{in}}+N_{\mathrm{R_p}}=N_{\mathrm{loss}}+N_{\mathrm{S_p}},$
where $N_{\mathrm{R_p}}$ and $N_{\mathrm{S_p}}$ are the release and sequestration of the passive SOM N pool, respectively. The release flux, $N_{\mathrm{R_p}}$, can be assumed to be constant on the L-term timescale. The sequestration flux, $N_{\mathrm{S_p}}$, can be calculated as a function of NPP. In G'DAY, as with most
carbon–nitrogen coupled ecosystem models, carbon flows out of the soil pools are directly related to the pool size. As demonstrated by Comins and McMurtrie (1993), such soil models have the
mathematical property of linearity, meaning that carbon flows out of the soil pools are proportional to the production input to the soil pool, or NPP. Furthermore, the litter input into the soil
pools is assumed proportional to the foliar N:C ratio, with the consequence that N sequestered in the passive SOM is also related to the foliar N:C ratio. The sequestration flux into the passive soil pool ($N_{\mathrm{S_p}}$) can thus be written as
$\text{(A12)}\quad N_{\mathrm{S_p}}=\mathrm{NPP}\,n_{\mathrm{p}}\left(\Omega_{\mathrm{pf}}\,a_{\mathrm{f}}+\Omega_{\mathrm{pw}}\,a_{\mathrm{w}}+\Omega_{\mathrm{pr}}\,a_{\mathrm{r}}\right),$
where n[p] is the N:C ratio of the passive SOM pool, and $\Omega_{\mathrm{pf}}$, $\Omega_{\mathrm{pw}}$, and $\Omega_{\mathrm{pr}}$ are the burial coefficients for foliage, wood, and roots (the proportion of plant carbon production that is ultimately buried in the passive pool), respectively. These burial coefficients depend on the N:C ratios of foliage, wood, and root litter (detailed derivation in Comins and McMurtrie, 1993). Combining and rearranging, we obtain the nutrient recycling constraint in the L term as
$\text{(A13)}\quad \mathrm{NPP}=\frac{N_{\mathrm{in}}+N_{\mathrm{R_p}}}{n_{\mathrm{p}}\left(\Omega_{\mathrm{pr}}a_{\mathrm{r}}+\Omega_{\mathrm{pf}}a_{\mathrm{f}}+\Omega_{\mathrm{pw}}a_{\mathrm{w}}\right)+\frac{l_{\mathrm{n}}}{1-l_{\mathrm{n}}}\left(a_{\mathrm{f}}n_{\mathrm{fl}}+a_{\mathrm{w}}n_{\mathrm{w}}+a_{\mathrm{r}}n_{\mathrm{r}}\right)}.$
Similarly, in the M term, we have
$\text{(A14)}\quad N_{\mathrm{in}}+N_{\mathrm{R_p}}+N_{\mathrm{R_s}}+N_{\mathrm{R_w}}=N_{\mathrm{loss}}+N_{\mathrm{S_p}}+N_{\mathrm{S_s}}+N_{\mathrm{S_w}},$
where $N_{\mathrm{R_s}}$ and $N_{\mathrm{R_w}}$ are the N released from the slow SOM and wood pools, respectively, and $N_{\mathrm{S_s}}$ and $N_{\mathrm{S_w}}$ are the N stored in the slow SOM and wood pools, respectively (Medlyn et al., 2000). The nutrient recycling constraint in the M term can thus be derived as
$\text{(A15)}\quad \mathrm{NPP}=\frac{N_{\mathrm{in}}+N_{\mathrm{R_p}}+N_{\mathrm{R_s}}+N_{\mathrm{R_w}}}{a_{\mathrm{f}}\left(\Omega_{\mathrm{sf}}n_{\mathrm{s}}+\Omega_{\mathrm{pf}}n_{\mathrm{p}}\right)+a_{\mathrm{r}}\left(\Omega_{\mathrm{sr}}n_{\mathrm{s}}+\Omega_{\mathrm{pr}}n_{\mathrm{p}}\right)+\frac{l_{\mathrm{n}}}{1-l_{\mathrm{n}}}\left(a_{\mathrm{f}}n_{\mathrm{fl}}+a_{\mathrm{w}}n_{\mathrm{w}}+a_{\mathrm{r}}n_{\mathrm{r}}\right)+a_{\mathrm{w}}n_{\mathrm{w}}},$
where n[s] is the N:C ratio of the slow SOM pool, and $\Omega_{\mathrm{sf}}$ and $\Omega_{\mathrm{sr}}$ are the foliage and root C sequestration rates into the slow SOM pool, respectively (Medlyn et al., 2000). The intersection between the nitrogen recycling constraint and the photosynthetic constraint provides an analytical solution for both NPP and n[f] at different timescales, and we can then interpret how changing model assumptions affects the predicted plant responses to elevated CO[2].
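The intersection of the two constraints can also be computed numerically. The sketch below pairs the photosynthetic constraint (Eq. A4) with the VL-term nutrient recycling constraint (Eq. A10) using a toy saturating LUE(n[f], C[a]); the LUE form and the proportionality coefficients linking n[fl], n[w], and n[r] to n[f] are invented placeholders, so the numbers are qualitative only:

```python
import math

def npp_photo(n_f, c_a, i0=3000.0, k=0.5, sigma=0.004, a_f=0.25,
              s_f=0.5, cue=0.5):
    """Photosynthetic constraint, Eq. (A4), solved by fixed-point
    iteration; LUE(n_f, C_a) is a toy saturating placeholder."""
    lue = 2.0 * (c_a / (c_a + 400.0)) * (n_f / (n_f + 0.02))
    npp = lue * i0 * cue
    for _ in range(500):
        npp = lue * i0 * (1.0 - math.exp(-k * sigma * a_f * npp / s_f)) * cue
    return npp

def npp_recycle(n_f, n_in=1.0, l_n=0.05, a_f=0.25, a_w=0.45, a_r=0.30):
    """VL-term nutrient recycling constraint, Eq. (A10), with tissue
    N:C ratios assumed proportional to n_f (placeholder coefficients)."""
    n_fl, n_w, n_r = 0.7 * n_f, 0.15 * n_f, 0.7 * n_f
    return n_in * (1.0 - l_n) / (l_n * (a_f * n_fl + a_w * n_w + a_r * n_r))

def solve_equilibrium(c_a, lo=1e-4, hi=0.1, iters=200):
    """Bisection on n_f for npp_photo(n_f) = npp_recycle(n_f): the
    photosynthetic curve increases and the recycling curve decreases
    with n_f, so their difference changes sign exactly once."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if npp_photo(mid, c_a) < npp_recycle(mid):
            lo = mid
        else:
            hi = mid
    n_f = 0.5 * (lo + hi)
    return n_f, npp_photo(n_f, c_a)

n_f_amb, npp_amb = solve_equilibrium(c_a=400.0)
n_f_ele, npp_ele = solve_equilibrium(c_a=700.0)
```

Raising C[a] shifts the photosynthetic curve upward while leaving the recycling curve unchanged, so the equilibrium moves toward a lower foliar N:C ratio and a higher NPP, mirroring the graphical argument of Fig. 1.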
BEM and MJ designed the study; MJ, BEM, and SZ performed the analyses; APW, MGDK, and SZ designed the priming effect equations; all authors contributed to results interpretation and paper writing.
The authors declare that they have no conflict of interest.
This paper builds heavily on ideas originally developed by Ross McMurtrie and Hugh Comins (now deceased). We would like to acknowledge their intellectual leadership and inspiration.
Sönke Zaehle and Silvia Caldararu were supported by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (QUINCY; grant no. 647204) and the
German Academic Exchange Service (DAAD; project ID 57318796). David S. Ellsworth and Mingkai Jiang were also supported by the DAAD.
This paper was edited by David Lawrence and reviewed by two anonymous referees.
Bonan, G. B. and Levis, S.: Quantifying carbon-nitrogen feedbacks in the Community Land Model (CLM4), Geophys. Res. Lett., 37, L07401, https://doi.org/10.1029/2010GL042430, 2010.
Comins, H. N.: Equilibrium Analysis of Integrated Plant – Soil Models for Prediction of the Nutrient Limited Growth Response to CO[2] Enrichment, J. Theor. Biol., 171, 369–385, 1994.
Comins, H. N. and McMurtrie, R. E.: Long-term response of nutrient-limited forests to CO[2] enrichment; equilibrium behavior of plant-soil models, Ecol. Appl., 3, 666–681, 1993.
Corbeels, M., McMurtrie, R. E., Pepper, D. A., and O'Connell, A. M.: A process-based model of nitrogen cycling in forest plantations: Part I. Structure, calibration and analysis of the decomposition
model, Ecol. Model., 187, 426–448, 2005.
De Kauwe, M. G., Medlyn, B. E., Zaehle, S., Walker, A. P., Dietze, M. C., Wang, Y.-P., Luo, Y., Jain, A. K., El-Masri, B., Hickler, T., Wårlind, D., Weng, E., Parton, W. J., Thornton, P. E., Wang,
S., Prentice, I. C., Asao, S., Smith, B., McCarthy, H. R., Iversen, C. M., Hanson, P. J., Warren, J. M., Oren, R., and Norby, R. J.: Where does the carbon go? A model–data intercomparison of
vegetation carbon allocation and turnover processes at two temperate forest free-air CO[2] enrichment sites, New Phytol., 203, 883–899, 2014.
Dewar, R. C. and McMurtrie, R. E.: Analytical model of stemwood growth in relation to nitrogen supply, Tree Physiol., 16, 161–171, 1996.
Dietze, M. C., Serbin, S. P., Davidson, C., Desai, A. R., Feng, X., Kelly, R., Kooper, R., LeBauer, D., Mantooth, J., McHenry, K., and Wang, D.: A quantitative assessment of a terrestrial biosphere
model's data needs across North American biomes, J. Geophys. Res.-Biogeo., 119, 286–300, 2014.
Dijkstra, F. A. and Cheng, W.: Interactions between soil and tree roots accelerate long-term soil carbon decomposition, Ecol. Lett., 10, 1046–1053, 2007.
Dybzinski, R., Farrior, C. E., and Pacala, S. W.: Increased forest carbon storage with increased atmospheric CO[2] despite nitrogen limitation: a game-theoretic allocation model for trees in
competition for nitrogen and light, Glob. Change Biol., 21, 1182–1196, 2014.
Fernández-Martínez, M., Vicca, S., Janssens, I. A., Sardans, J., Luyssaert, S., Campioli, M., Chapin Iii, F. S., Ciais, P., Malhi, Y., Obersteiner, M., Papale, D., Piao, S. L., Reichstein, M., Rodà,
F., and Peñuelas, J.: Nutrient availability as the key regulator of global forest carbon balance, Nat. Clim. Change, 4, 471–476, https://doi.org/10.1038/NCLIMATE2177, 2014.
Finzi, A. C., Abramoff, R. Z., Spiller, K. S., Brzostek, E. R., Darby, B. A., Kramer, M. A., and Phillips, R. P.: Rhizosphere processes are quantitatively important components of terrestrial carbon
and nutrient cycles, Glob. Change Biol., 21, 2082–2094, 2015.
Fisher, J. B., Sitch, S., Malhi, Y., Fisher, R. A., Huntingford, C., and Tan, S. Y.: Carbon cost of plant nitrogen acquisition: A mechanistic, globally applicable model of plant nitrogen uptake,
retranslocation, and fixation, Global Biogeochem. Cy., 24, GB1014, https://doi.org/10.1029/2009GB003621, 2010.
Friend, A. D., Lucht, W., Rademacher, T. T., Keribin, R., Betts, R., Cadule, P., Ciais, P., Clark, D. B., Dankers, R., Falloon, P. D., Ito, A., Kahana, R., Kleidon, A., Lomas, M. R., Nishina, K.,
Ostberg, S., Pavlick, R., Peylin, P., Schaphoff, S., Vuichard, N., Warszawski, L., Wiltshire, A., and Woodward, F. I.: Carbon residence time dominates uncertainty in terrestrial vegetation responses
to future climate and atmospheric CO[2], P. Natl. Acad. Sci. USA, 111, 3280–3285, 2014.
Gerber, S., Hedin, L. O., Oppenheimer, M., Pacala, S. W., and Shevliakova, E.: Nitrogen cycling and feedbacks in a global dynamic land model, Global Biogeochem. Cy., 24, GB1001, https://doi.org/10.1029/2008GB003336, 2010.
Ghimire, B., Riley, W. J., Koven, C. D., Mu, M., and Randerson, J. T.: Representing leaf and root physiological traits in CLM improves global carbon and nitrogen cycling predictions, J. Adv. Model. Earth Syst., 8, 598–613, 2016.
Goll, D. S., Brovkin, V., Parida, B. R., Reick, C. H., Kattge, J., Reich, P. B., van Bodegom, P. M., and Niinemets, Ü.: Nutrient limitation reduces land carbon uptake in simulations with a model of
combined carbon, nitrogen and phosphorus cycling, Biogeosciences, 9, 3547–3569, https://doi.org/10.5194/bg-9-3547-2012, 2012.
Goll, D. S., Vuichard, N., Maignan, F., Jornet-Puig, A., Sardans, J., Violette, A., Peng, S., Sun, Y., Kvakic, M., Guimberteau, M., Guenet, B., Zaehle, S., Penuelas, J., Janssens, I., and Ciais, P.:
A representation of the phosphorus cycle for ORCHIDEE (revision 4520), Geosci. Model Dev., 10, 3745–3770, https://doi.org/10.5194/gmd-10-3745-2017, 2017a.
Goll, D. S., Winkler, A. J., Raddatz, T., Dong, N., Prentice, I. C., Ciais, P., and Brovkin, V.: Carbon–nitrogen interactions in idealized simulations with JSBACH (version 3.10), Geosci. Model Dev.,
10, 2009–2030, https://doi.org/10.5194/gmd-10-2009-2017, 2017b.
Guenet, B., Moyano, F. E., Peylin, P., Ciais, P., and Janssens, I. A.: Towards a representation of priming on soil carbon decomposition in the global land biosphere model ORCHIDEE (version 1.9.5.2),
Geosci. Model Dev., 9, 841–855, https://doi.org/10.5194/gmd-9-841-2016, 2016.
Guenet, B., Camino-Serrano, M., Ciais, P., Tifafi, M., Maignan, F., Soong Jennifer, L., and Janssens Ivan, A.: Impact of priming on global soil carbon stocks, Glob. Change Biol., 24, 1873–1883,
Huntzinger, D. N., Michalak, A. M., Schwalm, C., Ciais, P., King, A. W., Fang, Y., Schaefer, K., Wei, Y., Cook, R. B., Fisher, J. B., Hayes, D., Huang, M., Ito, A., Jain, A. K., Lei, H., Lu, C.,
Maignan, F., Mao, J., Parazoo, N., Peng, S., Poulter, B., Ricciuto, D., Shi, X., Tian, H., Wang, W., Zeng, N., and Zhao, F.: Uncertainty in the response of terrestrial carbon sink to environmental
drivers undermines carbon-climate feedback predictions, Sci. Rep., 7, 4765, https://doi.org/10.1038/s41598-017-03818-2, 2017.
Jiang, M., Zaehle, S., De Kauwe, M. G., Walker, A. P., Caldararu, S., Ellsworth, D. S., and Medlyn, B. E.: The quasi-equilibrium framework analytical platform, Zenodo, https://doi.org/10.5281/
zenodo.2574192, 2019.
Kirschbaum, M. U. F., King, D. A., Comins, H. N., McMurtrie, R. E., Medlyn, B. E., Pongracic, S., Murty, D., Keith, H., Raison, R. J., Khanna, P. K., and Sheriff, D. W.: Modeling forest response to
increasing CO[2] concentration under nutrient-limited conditions, Plant Cell Environ., 17, 1081–1099, 1994.
Kirschbaum, M. U. F., Medlyn, B. E., King, D. A., Pongracic, S., Murty, D., Keith, H., Khanna, P. K., Snowdon, P., and Raison, R. J.: Modelling forest-growth response to increasing CO[2]
concentration in relation to various factors affecting nutrient supply, Glob. Change Biol., 4, 23–41, 1998.
Koven, C. D., Chambers, J. Q., Georgiou, K., Knox, R., Negron-Juarez, R., Riley, W. J., Arora, V. K., Brovkin, V., Friedlingstein, P., and Jones, C. D.: Controls on terrestrial carbon feedbacks by
productivity versus turnover in the CMIP5 Earth System Models, Biogeosciences, 12, 5211–5228, https://doi.org/10.5194/bg-12-5211-2015, 2015.
Kowalczyk, E. A., Wang, Y. P., Law, R. M., Davies, H. L., McGregor, J. L., and Abramowitz, G.: The CSIRO Atmosphere Biosphere Land Exchange (CABLE) model for use in climate models and as an offline
model, CSIRO, Australia, 2006.
Liang, J., Qi, X., Souza, L., and Luo, Y.: Processes regulating progressive nitrogen limitation under elevated carbon dioxide: a meta-analysis, Biogeosciences, 13, 2689–2699, https://doi.org/10.5194/
bg-13-2689-2016, 2016.
Lombardozzi, D. L., Bonan, G. B., Smith, N. G., Dukes, J. S., and Fisher, R. A.: Temperature acclimation of photosynthesis and respiration: A key uncertainty in the carbon cycle-climate feedback,
Geophys. Res. Lett., 42, 8624–8631, 2015.
Lovenduski, N. S. and Bonan, G. B.: Reducing uncertainty in projections of terrestrial carbon uptake, Environ. Res. Lett., 12, 044020, https://doi.org/10.1088/1748-9326/aa66b8, 2017.
Ludwig, D., Jones, D. D., and Holling, C. S.: Qualitative Analysis of Insect Outbreak Systems: The Spruce Budworm and Forest, J. Anim. Ecol., 47, 315–332, 1978.
Luo, Y., Su, B., Currie, W. S., Dukes, J. S., Finzi, A., Hartwig, U., Hungate, B., McMurtrie, R. E., Oren, R., Parton, W. J., Pataki, D. E., Shaw, R. M., Zak, D. R., and Field, C. B.: Progressive
Nitrogen Limitation of Ecosystem Responses to Rising Atmospheric Carbon Dioxide, BioScience, 54, 731–739, 2004.
McMurtrie, R. and Comins, H. N.: The temporal response of forest ecosystems to doubled atmospheric CO[2] concentration, Glob. Change Biol., 2, 49–57, 1996.
McMurtrie, R. E., Dewar, R. C., Medlyn, B. E., and Jeffreys, M. P.: Effects of elevated [CO[2]] on forest growth and carbon storage: a modelling analysis of the consequences of changes in litter
quality/quantity and root exudation, Plant Soil, 224, 135–152, 2000.
McMurtrie, R. E., Medlyn, B. E., and Dewar, R. C.: Increased understanding of nutrient immobilization in soil organic matter is critical for predicting the carbon sink strength of forest ecosystems
over the next 100 years, Tree Physiol., 21, 831–839, 2001.
McMurtrie, R. E., Norby, R. J., Medlyn, B. E., Dewar, R. C., Pepper, D. A., Reich, P. B., and Barton, C. V. M.: Why is plant-growth response to elevated CO[2] amplified when water is limiting, but
reduced when nitrogen is limiting? A growth-optimisation hypothesis, Funct. Plant Biol., 35, 521–534, 2008.
McMurtrie, R. E., Iversen, C. M., Dewar, R. C., Medlyn, B. E., Näsholm, T., Pepper, D. A., and Norby, R. J.: Plant root distributions and nitrogen uptake predicted by a hypothesis of optimal root
foraging, Ecol. Evol., 2, 1235–1250, 2012.
Medlyn, B. E. and Dewar, R. C.: A model of the long-term response of carbon allocation and productivity of forests to increased CO[2] concentration and nitrogen deposition, Glob. Change Biol., 2,
367–376, 1996.
Medlyn, B. E., McMurtrie, R. E., Dewar, R. C., and Jeffreys, M. P.: Soil processes dominate the long-term response of forest net primary productivity to increased temperature and atmospheric CO[2]
concentration, Can. J. For. Res., 30, 873–888, 2000.
Medlyn, B. E., Robinson, A. P., Clement, R., and McMurtrie, R. E.: On the validation of models of forest CO[2] exchange using eddy covariance data: some perils and pitfalls, Tree Physiol., 25,
839–857, 2005.
Medlyn, B. E., Duursma, R. A., Eamus, D., Ellsworth, D. S., Prentice, I. C., Barton, C. V. M., Crous, K. Y., De Angelis, P., Freeman, M., and Wingate, L.: Reconciling the optimal and empirical
approaches to modelling stomatal conductance, Glob. Change Biol., 17, 2134–2144, 2011.
Medlyn, B. E., Zaehle, S., De Kauwe, M. G., Walker, A. P., Dietze, M. C., Hanson, P. J., Hickler, T., Jain, A. K., Luo, Y., Parton, W., Prentice, I. C., Thornton, P. E., Wang, S., Wang, Y.-P., Weng,
E., Iversen, C. M., McCarthy, H. R., Warren, J. M., Oren, R., and Norby, R. J.: Using ecosystem experiments to improve vegetation models, Nat. Clim. Change, 5, 528–534, 2015.
Medlyn, B. E., De Kauwe Martin, G., Zaehle, S., Walker Anthony, P., Duursma Remko, A., Luus, K., Mishurov, M., Pak, B., Smith, B., Wang, Y. P., Yang, X., Crous Kristine, Y., Drake John, E., Gimeno
Teresa, E., Macdonald Catriona, A., Norby Richard, J., Power Sally, A., Tjoelker Mark, G., and Ellsworth David, S.: Using models to guide field experiments: a priori predictions for the CO[2]
response of a nutrient- and water-limited native Eucalypt woodland, Glob. Change Biol., 22, 2834–2851, 2016.
Meyerholt, J. and Zaehle, S.: The role of stoichiometric flexibility in modelling forest ecosystem responses to nitrogen fertilization, New Phytol., 208, 1042–1055, 2015.
Meyerholt, J., Zaehle, S., and Smith, M. J.: Variability of projected terrestrial biosphere responses to elevated levels of atmospheric CO[2] due to uncertainty in biological nitrogen fixation,
Biogeosciences, 13, 1491–1518, https://doi.org/10.5194/bg-13-1491-2016, 2016.
Norby, R. J., Warren, J. M., Iversen, C. M., Medlyn, B. E., and McMurtrie, R. E.: CO[2] enhancement of forest productivity constrained by limited nitrogen availability, P. Natl. Acad. Sci. USA, 107,
19368–19373, 2010.
Oleson, K. W., Dai, Y. J., Bonan, G. B., Bosilovich, M., Dichinson, R., Dirmeyer, P., Hoffman, F., Houser, P., Levis, S., Niu, G.-Y., Thornton, P. E., Vertenstein, M., Yang, Z. L., and Zeng, X.:
Technical description of the Community Land Model (CLM), National Center for Atmospheric Research, Boulder, Colorado, USA, 2004.
Rastetter, E. B. and Shaver, G. R.: A Model of Multiple-Element Limitation for Acclimating Vegetation, Ecology, 73, 1157–1174, 1992.
Rastetter, E. B., Ågren, G. I., and Shaver, G. R.: Responses Of N-Limited Ecosystems To Increased CO[2]: A Balanced-Nutrition, Coupled-Element-Cycles Model, Ecol. Appl., 7, 444–460, 1997.
Reich, P. B. and Hobbie, S. E.: Decade-long soil nitrogen constraint on the CO[2] fertilization of plant biomass, Nat. Clim. Change, 3, 278–282, https://doi.org/10.1038/NCLIMATE1694, 2012.
Rogers, A., Medlyn Belinda, E., Dukes Jeffrey, S., Bonan, G., Caemmerer, S., Dietze Michael, C., Kattge, J., Leakey Andrew, D. B., Mercado Lina, M., Niinemets, Ü., Prentice, I. C., Serbin Shawn, P.,
Sitch, S., Way Danielle, A., and Zaehle, S.: A roadmap for improving the representation of photosynthesis in Earth system models, New Phytol., 213, 22–42, 2016.
Sands, P.: Modelling Canopy Production. II. From Single-Leaf Photosynthesis Parameters to Daily Canopy Photosynthesis, Funct. Plant Biol., 22, 603–614, 1995.
Shi, M., Fisher, J. B., Brzostek, E. R., and Phillips, R. P.: Carbon cost of plant nitrogen acquisition: global carbon cycle impact from an improved plant nitrogen cycle in the Community Land Model,
Glob. Change Biol., 22, 1299–1314, 2015.
Shi, Z., Xu, X., Hararuk, O., Jiang, L., Xia, J., Liang, J., Li, D., and Luo, Y.: Experimental warming altered rates of carbon processes, allocation, and carbon storage in a tallgrass prairie,
Ecosphere, 6, 1–16, 2015.
Shi, Z., Crowell, S., Luo, Y., and Moore, B.: Model structures amplify uncertainty in predicted soil carbon responses to climate change, Nat. Communi., 9, 2171, https://doi.org/10.1038/
s41467-018-04526-9, 2018.
Sigurdsson, B. D., Medhurst, J. L., Wallin, G., Eggertsson, O., and Linder, S.: Growth of mature boreal Norway spruce was not affected by elevated [CO[2]] and/or air temperature unless nutrient
availability was improved, Tree Physiol., 33, 1192–1205, 2013.
Smith, B., Prentice, I. C., and Sykes, M. T.: Representation of vegetation dynamics in the modelling of terrestrial ecosystems: comparing two contrasting approaches within European climate space,
Global Ecol. Biogeogr., 10, 621–637, 2001.
Sokolov, A. P., Kicklighter, D. W., Melillo, J. M., Felzer, B. S., Schlosser, C. A., and Cronin, T. W.: Consequences of Considering Carbon–Nitrogen Interactions on the Feedbacks between Climate and
the Terrestrial Carbon Cycle, J. Climate, 21, 3776–3796, 2008.
Stocker, B. D., Prentice, I. C., Cornell, S. E., Davies-Barnard, T., Finzi, A. C., Franklin, O., Janssens, I., Larmola, T., Manzoni, S., Näsholm, T., Raven, J. A., Rebel, K. T., Reed, S., Vicca, S.,
Wiltshire, A., and Zaehle, S.: Terrestrial nitrogen cycling in Earth system models revisited, New Phytol., 210, 1165–1168, 2016.
Sulman, B. N., Phillips, R. P., Oishi, A. C., Shevliakova, E., and Pacala, S. W.: Microbe-driven turnover offsets mineral-mediated storage of soil carbon under elevated CO[2], Nat. Clim. Change, 4,
1099, https://doi.org/10.1038/NCLIMATE2436, 2014.
Thomas, R. Q., Brookshire, E. N. J., and Gerber, S.: Nitrogen limitation on land: how can it occur in Earth system models?, Glob. Change Biol., 21, 1777–1793, 2015.
Thornton, P. E., Lamarque, J. F., Rosenbloom Nan, A., and Mahowald, N. M.: Influence of carbon-nitrogen cycle coupling on land model response to CO[2] fertilization and climate variability, Global
Biogeochem. Cy., 21, GB4018, https://doi.org/10.1029/2006GB002868, 2007.
Thornton, P. E., Doney, S. C., Lindsay, K., Moore, J. K., Mahowald, N., Randerson, J. T., Fung, I., Lamarque, J.-F., Feddema, J. J., and Lee, Y.-H.: Carbon-nitrogen interactions regulate
climate-carbon cycle feedbacks: results from an atmosphere-ocean general circulation model, Biogeosciences, 6, 2099–2120, https://doi.org/10.5194/bg-6-2099-2009, 2009.
van Groenigen, K. J., Qi, X., Osenberg, C. W., Luo, Y., and Hungate, B. A.: Faster Decomposition Under Increased Atmospheric CO[2] Limits Soil Carbon Storage, Science, 344, 508–509, https://doi.org/
10.1126/science.1249534, 2014.
Walker, A. P., Hanson, P. J., De Kauwe, M. G., Medlyn, B. E., Zaehle, S., Asao, S., Dietze, M., Hickler, T., Huntingford, C., Iversen, C. M., Jain, A., Lomas, M., Luo, Y. Q., McCarthy, H., Parton, W.
J., Prentice, I. C., Thornton, P. E., Wang, S. S., Wang, Y. P., Warlind, D., Weng, E. S., Warren, J. M., Woodward, F. I., Oren, R., and Norby, R. J.: Comprehensive ecosystem model-data synthesis
using multiple data sets at two temperate forest free-air CO[2] enrichment experiments: Model performance at ambient CO[2] concentration, J. Geophys. Res.-Biogeo., 119, 937–964, 2014.
Walker, A. P., Zaehle, S., Medlyn, B. E., De Kauwe, M. G., Asao, S., Hickler, T., Parton, W., Ricciuto, D. M., Wang, Y.-P., Wårlind, D., and Norby, R. J.: Predicting long-term carbon sequestration in
response to CO[2] enrichment: How and why do current ecosystem models differ?, Global Biogeochem. Cy., 29, 476–495, 2015.
Wang, Y. P., Houlton, B. Z., and Field, C. B.: A model of biogeochemical cycles of carbon, nitrogen, and phosphorus including symbiotic nitrogen fixation and phosphatase production, Global
Biogeochem. Cy., 21, GB1018, https://doi.org/10.1029/2006GB002797, 2007.
Wang, Y. P., Kowalczyk, E., Leuning, R., Abramowitz, G., Raupach, M. R., Pak, B., van Gorsel, E., and Luhar, A.: Diagnosing errors in a land surface model (CABLE) in the time and frequency domains,
J. Geophys. Res.-Biogeo., 116, G01034, https://doi.org/10.1029/2010JG001385, 2011.
Wårlind, D., Smith, B., Hickler, T., and Arneth, A.: Nitrogen feedbacks increase future terrestrial ecosystem carbon uptake in an individual-based dynamic vegetation model, Biogeosciences, 11,
6131–6146, https://doi.org/10.5194/bg-11-6131-2014, 2014.
Weng, E. and Luo, Y.: Soil hydrological properties regulate grassland ecosystem responses to multifactor global change: A modeling analysis, J. Geophys. Res.-Biogeo., 113, G03003, https://doi.org/
10.1029/2007JG000539, 2008.
Xia, J. Y., Luo, Y. Q., Wang, Y.-P., Weng, E. S., and Hararuk, O.: A semi-analytical solution to accelerate spin-up of a coupled carbon and nitrogen land model to steady state, Geosci. Model Dev., 5,
1259–1271, https://doi.org/10.5194/gmd-5-1259-2012, 2012.
Xia, J. Y., Luo, Y., Wang, Y.-P., and Hararuk, O.: Traceable components of terrestrial carbon storage capacity in biogeochemical models, Glob. Change Biol., 19, 2104–2116, 2013.
Yang, X., Wittig, V., Jain, A. K., and Post, W.: Integration of nitrogen cycle dynamics into the Integrated Science Assessment Model for the study of terrestrial ecosystem responses to global change,
Global Biogeochem. Cy., 23, GB4029, https://doi.org/10.1029/2009GB003474, 2009.
Zaehle, S. and Friend, A. D.: Carbon and nitrogen cycle dynamics in the O-CN land surface model: 1. Model description, site-scale evaluation, and sensitivity to parameter estimates, Global
Biogeochem. Cy., 24, GB1005, https://doi.org/10.1029/2009GB003521, 2010.
Zaehle, S., Friend, A. D., Friedlingstein, P., Dentener, F., Peylin, P., and Schulz, M.: Carbon and nitrogen cycle dynamics in the O-CN land surface model: 2. Role of the nitrogen cycle in the
historical terrestrial carbon balance, Global Biogeochem. Cy., 24, GB1006, https://doi.org/10.1029/2009GB003522, 2010.
Zaehle, S., Medlyn, B. E., De Kauwe, M. G., Walker, A. P., Dietze, M. C., Hickler, T., Luo, Y. Q., Wang, Y. P., El-Masri, B., Thornton, P., Jain, A., Wang, S. S., Warlind, D., Weng, E. S., Parton,
W., Iversen, C. M., Gallet-Budynek, A., McCarthy, H., Finzi, A. C., Hanson, P. J., Prentice, I. C., Oren, R., and Norby, R. J.: Evaluation of 11 terrestrial carbon-nitrogen cycle models against
observations from two temperate Free-Air CO[2] Enrichment studies, New Phytol., 202, 803–822, 2014.
Zaehle, S., Jones, C. D., Houlton, B., Lamarque, J.-F., and Robertson, E.: Nitrogen Availability Reduces CMIP5 Projections of Twenty-First-Century Land Carbon Uptake, J. Climate, 28, 2494–2511,
Zhou, S., Liang, J., Lu, X., Li, Q., Jiang, L., Zhang, Y., Schwalm, C. R., Fisher, J. B., Tjiputra, J., Sitch, S., Ahlström, A., Huntzinger, D. N., Huang, Y., Wang, G., and Luo, Y.: Sources of
Uncertainty in Modeled Land Carbon Storage within and across Three MIPs: Diagnosis with Three New Techniques, J. Climate, 31, 2833–2851, 2018.
|
{"url":"https://gmd.copernicus.org/articles/12/2069/2019/","timestamp":"2024-11-05T09:10:03Z","content_type":"text/html","content_length":"398464","record_id":"<urn:uuid:c2a99d59-7a29-43b0-8f47-e9a9316c0701>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00575.warc.gz"}
|
Uniformly accelerated circular motion
A mass point moves with a constant angular acceleration \(c_1\) on a circular path of radius R. How large must the angular acceleration be so that the mass point comes to a standstill after exactly one full revolution?
\[ \ddot \phi = c_1\]
\[\tag{IC 1} \dot \phi (t=0) = \omega_0\]
\[\tag{IC 2} \phi (t=0) = \phi_0\]
Since the motion takes place on a circle of constant radius, it is sufficient to work with the angle of rotation, the angular velocity, and the angular acceleration. Integrating the angular acceleration gives the angular velocity
\[\tag{1} \int \ddot \phi dt \rightarrow \dot \phi (t) = c_1 \cdot t + c_2\]
With initial condition IC 1, the constant \(c_2\) can be determined.
\[\tag{2} \dot \phi (t = 0) = \omega_0\]
\[\tag{3} c_2 = \omega_0\]
\[\tag{4} \dot \phi (t) = c_1 \cdot t + \omega_0 \]
By integrating the angular velocity it follows
\[\tag{5} \int \dot \phi dt \rightarrow \phi (t) = \frac{1}{2} c_1 \cdot t^2 + \omega_0 t + c_3\]
Using initial condition IC 2, we can find \(c_3\):
\[\tag{6} \phi (t = 0) = \phi_0\]
\[\tag{7} c_3 = \phi_0\]
\[\tag{8} \phi (t) = \frac{1}{2} c_1 \cdot t^2 + \omega_0 t + \phi_0\]
The point in time at which the movement comes to a standstill is currently unknown and is referred to below as T. At time T, a complete revolution has taken place
\[\tag{9} \phi (t = T) = 2 \pi\]
\[\tag{10} 2 \pi = \frac{1}{2} c_1 T^2 + \omega_0 T + \phi_0\]
and the angular velocity is zero.
\[\tag{11} \dot \phi (t = T) = 0\]
\[\tag{12} 0 = c_1 T + \omega_0\]
\[\tag{13} c_1 = - \frac{\omega_0}{T}\]
Substituting the determined \(c_1\) into equation 10 and solving for T gives
\[\tag{14} T = \frac{2 \cdot (2 \pi - \phi_0)}{\omega_0} \]
Substituting T back into equation 13 yields the angular acceleration we are looking for.
\[\tag{15} c_1 = - \frac{\omega_0^2}{2 \cdot (2 \pi - \phi_0)} \]
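As a quick numerical check of equations 14 and 15, the sketch below picks hypothetical initial values (\(\omega_0 = 2\) rad/s and \(\phi_0 = 0\), neither of which is given in the exercise) and verifies that the resulting motion satisfies both end conditions, \(\phi(T) = 2\pi\) and \(\dot\phi(T) = 0\):

```python
from math import pi

# Hypothetical initial values (not specified in the exercise statement).
omega_0 = 2.0   # initial angular velocity in rad/s
phi_0 = 0.0     # initial angle in rad

# Closed-form results from equations 14 and 15.
T = 2 * (2 * pi - phi_0) / omega_0
c_1 = -omega_0**2 / (2 * (2 * pi - phi_0))

# Check against the kinematic equations 8 and 4.
phi_T = 0.5 * c_1 * T**2 + omega_0 * T + phi_0   # should equal 2*pi
omega_T = c_1 * T + omega_0                      # should equal 0

print(phi_T - 2 * pi)   # ~0: one full revolution completed
print(omega_T)          # ~0: motion has come to a standstill
```

Any positive \(\omega_0\) and any \(\phi_0 < 2\pi\) behave the same way: both end conditions hold exactly, which confirms the algebra.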
The expected value for a question was:
E(X) = (1/theta)((e56)/(and51))
Using the E(X) above, compare this expected value with the expected value of the Expo(theta) distribution, and comment.
Hint: the exponential distribution has E(X) = 1/theta.
What comments would I make about this?
Lesson 7
What Fraction of a Group?
7.1: Estimating a Fraction of a Number (5 minutes)
In this warm-up, students estimate the value of a fraction of a number (e.g., \(\frac13\) of 7) using what they know about the size of the given fraction. Then, they write multiplication expressions
to represent the verbal questions. The goal is to activate prior understandings that a fraction of a number can be found by multiplication, preparing students to explore division problems in which
the quotient is less than 1 whole.
Ask students to keep their materials closed. Display one estimation question at a time. (If all questions are displayed, ask students to work on one question at a time and to begin when cued.) Give
students 30 seconds of quiet think time per question and ask them to give a signal when they have an answer and can explain their strategy.
Select 1–2 students to briefly share their estimates and how they made them. Record and display their estimates for all to see. After discussing the third estimation question, ask students to write a
multiplication expression to represent each of the three questions.
Student Facing
1. Estimate the quantities:
1. What is \(\frac13\) of 7?
2. What is \(\frac45\) of \(9\frac23\)?
3. What is \(2\frac47\) of \(10\frac19\)?
2. Write a multiplication expression for each of the previous questions.
Anticipated Misconceptions
Some students may try to find the exact answers to the questions instead of estimating. Encourage them to think about benchmark fractions that could help them estimate.
Activity Synthesis
Ask a few students to share the expressions they wrote for the questions. Record and display the expressions for all to see. Ask the class to indicate if they agree or disagree with each expression.
If not already brought up in students’ explanations, highlight the idea that we can find the exact value of a fraction of a number (e.g., \(\frac45\) of \(9\frac23\)) by multiplying the fraction and
the number. It does not matter whether the number is a whole number, a mixed number, or another fraction.
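The multiplication idea behind the warm-up can be illustrated with exact rational arithmetic. This sketch reuses the first two warm-up questions (the values are the ones from the task):

```python
from fractions import Fraction

# "What is 1/3 of 7?" -> multiply the fraction by the number.
print(Fraction(1, 3) * 7)                       # 7/3

# Mixed numbers become improper fractions first: 9 2/3 = 29/3.
print(Fraction(4, 5) * (9 + Fraction(2, 3)))    # 4/5 of 9 2/3 = 116/15
```

The same multiplication works whether the second factor is a whole number, a mixed number, or another fraction, which is exactly the point of the synthesis.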
To involve more students in the conversation, consider asking:
• “Who can restate ___’s reasoning in a different way?”
• “Did anyone solve the problem the same way but would explain it differently?”
• “Did anyone solve the problem in a different way?”
• “Does anyone want to add on to _____’s strategy?”
• “Do you agree or disagree? Why?”
7.2: Fractions of Ropes (10 minutes)
This task helps to transition students from thinking about “how many groups?” to “what fraction of a group?”.
Students compare different lengths of ropes and express their relative lengths in multiplicative terms. Ropes B and C are 5 and \(2\frac12\) times as long as rope A, respectively, but rope D is shorter than rope A, so we say that it is \(\frac34\) times as long as rope A. Students see that the answer to a “how many groups?” question can be a number less than 1 when the given amount is smaller than the size of a group.
As students work, notice how they go about making multiplicative comparisons. Select students who write clear and concise questions for the equations in the last problem so they can share later.
Arrange students in groups of 2. Give students 4–5 minutes of quiet think time and then a couple of minutes to compare their responses with a partner and discuss any disagreements. Clarify that rope
A is 4 units long.
Students using the digital materials can use the applet at ggbm.at/kZUgANCC to compare the segments. The segments can be moved by dragging endpoints with open circles. The yellow “pins” can help
students keep track of the groups.
Student Facing
The segments in the applet represent 4 different lengths of rope. Compare one rope to another, moving the rope by dragging the open circle at one endpoint. You can use the yellow pins to mark off
1. Complete each sentence comparing the lengths of the ropes. Then, use the measurements shown on the grid to write a multiplication equation and a division equation for each comparison.
1. Rope B is _______ times as long as rope A.
2. Rope C is _______ times as long as rope A.
3. Rope D is _______ times as long as rope A.
2. Each equation can be used to answer a question about Ropes C and D. What could each question be?
1. \({?} \boldcdot 3=9\) and \(9 \div 3={?}\)
2. \({?} \boldcdot 9=3\) and \(3 \div 9= {?}\)
Anticipated Misconceptions
Some students might associate the wrong lengths with the ropes or confuse the order of comparison (e.g., comparing A to C instead of C to A). Encourage them to put the length of each rope next to the
diagram and attend more closely to the ropes being compared.
Activity Synthesis
Display the solutions to the first set of problems for all to see. Give students a minute to check their answers and ask questions. Then, focus class discussion on two key ideas:
• The connection between “how many groups?” questions and “how many times as long?” questions. Ask students how these two types of questions are similar and different. Make sure students see that
both have the structure of \(? \boldcdot a =b\), where \(a\) is the size of 1 group (or the unit we are using for comparison), and \(b\) is a given number.
• The language commonly used when referring to a situation in which the number of groups is less than 1 whole. Explain that we have seen equal-sized groups where the number of groups is greater
than 1, but some situations involve a part of 1 group. So instead of saying “the number of groups” or asking “how many groups?,” we would ask “what fraction of a group?” or “what part of a group?
”. For example, in the case of rope D, where the answer is less than 1, we can ask, “What fraction of rope A is rope D?”
Ask 1–2 previously identified students to share the question they wrote for the last pair of equations (\(? \boldcdot 9=3\) and \(3 \div 9= ?\)). Make sure students see that this pair of equations represents a situation with a fractional group (i.e., rope D is shorter than rope C, so the length of rope D is a fraction of the length of rope C).
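The two question types can be made concrete with exact rational arithmetic. This sketch assumes the rope lengths implied by the equations in the task (rope C is 9 units, rope D is 3 units):

```python
from fractions import Fraction

# Lengths taken from the equations in the task.
rope_c = Fraction(9)
rope_d = Fraction(3)

# "How many times as long as rope D is rope C?" has the
# structure ? * 3 = 9, i.e. 9 / 3.
print(rope_c / rope_d)   # 3 -> rope C is 3 times as long as rope D

# "What fraction of rope C is rope D?" has the
# structure ? * 9 = 3, i.e. 3 / 9.
print(rope_d / rope_c)   # 1/3 -> rope D is 1/3 as long as rope C
```

Both questions are answered by the same kind of division; only the roles of the two lengths swap, and the second answer is less than 1 because the given length is shorter than the unit of comparison.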
Representation: Internalize Comprehension. Use color and annotations to illustrate connections between representations. As students describe their reasoning about “how many groups?” and “how many
times as long?”, use color and annotations to scribe their thinking on a display of each problem so that it is visible for all students.
Supports accessibility for: Visual-spatial processing; Conceptual processing
Speaking: MLR8 Discussion Supports. As students compare and contrast these two types of division questions, provide a sentence frame such as: “Something these two types of questions have in common is
. . .” and "A difference between these two types of questions is . . . ." This will help students produce and make sense of the language needed to communicate their ideas about the relationship
between multiplication and division equations.
Design Principle(s): Support sense-making; Optimize output (for comparison)
7.3: Fractional Batches of Ice Cream (20 minutes)
In this activity, students make sense of quotients that are less than 1 and greater than 1 in the same context. Later in the task, students generalize their reasoning to solve division problems
(where the quotient is less than 1) without contexts.
Given the amount of milk required for 1 batch of ice cream (i.e., the size of 1 group), students find out how many batches (i.e., the number of groups or what fraction of a group) can be made with
different amounts of milk. They continue to use tape diagrams and write equations to reason about the situations, but this time, they are not prompted to write multiplication equations.
As students work, identify students who drew clear and effective diagrams for the ice cream problems. Select them to share later.
Keep students in groups of 2. Display an example of a tape diagram that students have used in a previous lesson. The diagram for the question “how many \(\frac45\)s are in 2?” from a previous
cool-down is shown here.
Point out how the diagram shows both full groups of \(\frac45\) and a partial group. Tell students that they will see more situations involving partial groups in this activity.
Give students 6–8 minutes of quiet work time for the first two sets of questions about ice cream. Ask students to make a quick estimate on whether each answer will be greater than or less than 1
before solving the problem. Provide access to colored pencils, as some students may find it helpful to identify whole groups and partial groups on a tape diagram by coloring.
Give students 2–3 minutes to discuss their responses with their partner. Follow with a whole-class discussion before students return to the last set of questions.
Ask previously identified students to share their diagrams for the ice cream problems, or display the following diagrams.
Monday and Tuesday:
Thursday and Friday:
To help students see the structure in the diagrams, ask: “How are the diagrams for Monday and Tuesday like and unlike those for Thursday and Friday?” If not brought up in their responses, point out
• The size of 1 group (or the amount of milk in 1 batch) is the same in all diagrams, but the amounts we are comparing to 1 group vary. Those amounts are greater than 1 batch (9 cups) on Monday and
Tuesday, and less than 1 batch on Thursday and Friday.
• This comparison to the size of 1 group is also reflected in the questions. We ask “how many batches?” for the first two, and “what fraction of a batch?” for the other two.
To help students notice the structure in the equations, ask: “How are the division equations for Monday and Tuesday different than those for Thursday and Friday? How are they the same?”
\(\displaystyle 12 \div 9 = ?\)
\(\displaystyle 22 \frac 12 \div 9 = ?\)
\(\displaystyle 6 \div 9 = ?\)
\(\displaystyle 7 \frac 12 \div 9 = ?\)
Highlight that, regardless of whether the answer is greater than 1 or less than 1, the equations show that the questions “how many batches (of 9 cups)?” and “what fraction of a batch (of 9 cups)?”
can be expressed with a division by 9, because the multiplication counterparts of these situations all have the structure of “what number times 9 equals a given amount of milk?” or \(\displaystyle
{?} \boldcdot 9 = b\) where \(b\) is a given amount of milk.
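This structure — every “how many batches?” or “what fraction of a batch?” question is a division by 9 — can be checked with exact fractions. The milk amounts below are the four values from the task:

```python
from fractions import Fraction

BATCH = Fraction(9)  # cups of milk per batch

milk_used = {
    "Monday": Fraction(12),
    "Tuesday": Fraction(45, 2),   # 22 1/2 cups
    "Thursday": Fraction(6),
    "Friday": Fraction(15, 2),    # 7 1/2 cups
}

# Each answer solves ? * 9 = milk, i.e. milk / 9.
for day, milk in milk_used.items():
    batches = milk / BATCH
    kind = "batches" if batches > 1 else "of a batch"
    print(f"{day}: {batches} {kind}")
```

The output (4/3, 5/2, 2/3, and 5/6) matches the quick estimates students make: answers are greater than 1 exactly when the amount of milk exceeds 9 cups.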
Give students quiet time to complete the last set of questions.
Reading: MLR6 Three Reads. Use this routine to support reading comprehension of this word problem, without solving it for students. Use the first read to orient students to the situation. After a
shared reading, ask students “what is this situation about?” (A chef makes different amounts of ice cream on different days). After the second read, students list any quantities that can be
counted or measured, without focusing on specific values (number of cups of milk needed for every batch of ice cream, number of cups of milk used each day). Listen for, and amplify, the two important
quantities that vary in relation to each other in this situation: number of cups of milk, and number of (or part of) batches of ice cream. After the third read, ask students to brainstorm possible
strategies to answer the question.
Design Principle(s): Support sense-making
Student Facing
One batch of an ice cream recipe uses 9 cups of milk. A chef makes different amounts of ice cream on different days. Here are the amounts of milk she used:
• Monday: 12 cups
• Tuesday: \(22 \frac12\) cups
• Thursday: 6 cups
• Friday: \(7 \frac12\) cups
1. How many batches of ice cream did she make on these days? For each day, write a division equation, draw a tape diagram, and find the answer.
1. Monday
2. Tuesday
2. What fraction of a batch of ice cream did she make on these days? For each day, write a division equation, draw a tape diagram, and find the answer.
1. Thursday
2. Friday
3. For each question, write a division equation, draw a tape diagram, and find the answer.
1. What fraction of 9 is 3?
2. What fraction of 5 is \(\frac 12\)?
Anticipated Misconceptions
If students are not sure how to begin representing a situation with a tape diagram, ask them to represent one quantity or number at a time. For example, they could begin by showing the amount of milk
used as a tape with a particular length, and then mark the second quantity (the amount of milk in 1 batch) on the same tape and with the same starting point. Or they could represent the amounts in
the opposite order.
Activity Synthesis
After students worked on the last set of problems, discuss the question “what fraction of 5 is \(\frac12\)?” To help students connect this question to previous ones, consider asking:
• “How can we tell if the answer is greater than 1 or less than 1 before calculating?” (The phrase “what fraction of” offers a clue that it is less than 1. Or, we are comparing \(\frac 12\) to 5
and can see that \(\frac 12\) is less than 5.)
• “What is the size of 1 group here? How do we know?” (We can tell that 5 is the size of 1 group because that is the value to which another number is being compared.)
• “How do we write a multiplication equation for this question? A division equation?” (\({?} \boldcdot 5 = \frac 12\), and \(\frac 12\div 5 = {?}\))
Select a student to display a correct diagram for the problem, or display this diagram for all to see. Discuss how the two given values and the solution are represented in the diagram.
Lesson Synthesis
In this lesson, we saw that a division problem can represent the idea of equal-sized groups but may have a total amount that is less than the size of one full group. Instead of “how many of this is
in that?”, the question is now “what fraction of this is that?”.
• “How can we tell if a division situation involves less than one whole group?” (The total amount is less than the size of a group, or the question asks “what fraction of. . .?”)
• “How do we find quotients that are less than 1?” (We can write a multiplication equation that corresponds to the situation and draw a tape diagram to help us reason about what fraction of 1 group
the given amount is.)
We also explored division problems as representing the answers to comparison questions about measurement. Instead of “how many groups?”, we can ask “how many times as long (or as heavy)?” For
example: \(16 \div 4 = \,?\) corresponds to \(? \boldcdot 4 = 16\), which can represent the question “how many times as long as 4 cm is 16 cm?” We can reason that 16 cm is 4 times as long as 4 cm.
In the same context, \(3 \div 4\) corresponds to \(? \boldcdot 4 = 3\), and would mean “how many times as long as 4 cm is 3 cm?” Here, we can see that the answer will be a fraction less than 1.
Because 3 is \(\frac34\) of 4, we can say “3 cm is \(\frac34\) as long as 4 cm.”
7.4: Cool-down - A Partially Filled Container (5 minutes)
Student Facing
It is natural to think about groups when we have more than one group, but we can also have a fraction of a group.
To find the amount in a fraction of a group, we can multiply the fraction by the amount in the whole group. If a bag of rice weighs 5 kg, \(\frac34\) of a bag would weigh \(\frac34 \boldcdot 5\) kg.
Sometimes we need to find what fraction of a group an amount is. Suppose a full bag of flour weighs 6 kg. A chef used 3 kg of flour. What fraction of a full bag was used? In other words, what
fraction of 6 kg is 3 kg?
This question can be represented by a multiplication equation and a division equation, as well as by a diagram.
\(\displaystyle {?} \boldcdot 6 = 3\) \(\displaystyle 3\div 6 = {?}\)
We can see from the diagram that 3 is \(\frac12\) of 6, and we can check this answer by multiplying: \(\frac12 \boldcdot 6 = 3\).
In any situation where we want to know what fraction one number is of another number, we can write a division equation to help us find the answer.
For example, “What fraction of 3 is \(2\frac14\)?” can be expressed as \({?} \boldcdot 3 = 2\frac14\), which can also be written as \(2\frac14\div 3 = {?}\).
The answer to “What is \(2\frac14 \div 3\)?” is also the answer to the original question.
The diagram shows that 3 wholes contain 12 fourths, and \(2\frac14\) contains 9 fourths, so the answer to this question is \(\frac{9}{12}\), which is equivalent to \(\frac34\).
We can use diagrams to help us solve other division problems that require finding a fraction of a group. For example, here is a diagram to help us answer the question “What fraction of \(\frac94\) is \(\frac32\)?”, which can be written as \(\frac32 \div \frac94 = {?}\).
We can see that the quotient is \(\frac69\), which is equivalent to \(\frac23\). To check this, let’s multiply. \(\frac23 \boldcdot \frac94 = \frac{18}{12}\), and \(\frac{18}{12}\) is, indeed, equal
to \(\frac32\).
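For teachers who want to verify these quotients computationally, here is an optional sketch (not part of the curriculum materials) using Python's standard-library `fractions` module:

```python
from fractions import Fraction

def fraction_of(part, whole):
    """Answer 'what fraction of `whole` is `part`?' via part ÷ whole."""
    return Fraction(part) / Fraction(whole)

# "What fraction of 6 kg is 3 kg?"  ->  1/2
print(fraction_of(3, 6))
# "What fraction of 3 is 2 1/4?"    ->  9/12 = 3/4
print(fraction_of(Fraction(9, 4), 3))
# "What fraction of 9/4 is 3/2?"    ->  6/9 = 2/3
print(fraction_of(Fraction(3, 2), Fraction(9, 4)))
```

Each call answers “what fraction of `whole` is `part`?” exactly, with no rounding, which mirrors the reasoning shown in the tape diagrams.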
Learning Convex Optimization Control Policies
Many control policies used in applications compute the input or action by solving a convex optimization problem that depends on the current state and some parameters. Common examples of such convex
optimization control policies (COCPs) include the linear quadratic regulator (LQR), convex model predictive control (MPC), and convex approximate dynamic programming (ADP) policies. These types of
control policies are tuned by varying the parameters in the optimization problem, such as the LQR weights, to obtain good performance, judged by application-specific metrics. Tuning is often done by
hand, or by simple methods such as a grid search. In this paper we propose a method to automate this process, by adjusting the parameters using an approximate gradient of the performance metric with
respect to the parameters. Our method relies on recently developed methods that can efficiently evaluate the derivative of the solution of a convex program with respect to its parameters. A longer
version of this paper, which illustrates our method on many examples, is available at https://web.stanford.edu/~boyd/papers/learning_cocps.html.
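The tuning loop described above can be sketched on a toy problem. The sketch below is illustrative only: it uses a hypothetical scalar system, and a finite-difference gradient stands in for the paper's analytical derivative of the convex program's solution map.

```python
# Toy illustration (not the paper's method): tune the gain k of the
# linear policy u = -k*x for the scalar system x+ = a*x + b*u by
# gradient descent on a simulated quadratic cost, using a
# finite-difference approximation of the performance gradient.

A, B = 0.95, 0.5          # assumed scalar dynamics (illustrative)

def cost(k, x0=1.0, T=50):
    """Simulated performance metric: sum of stage costs x^2 + u^2."""
    x, J = x0, 0.0
    for _ in range(T):
        u = -k * x
        J += x * x + u * u
        x = A * x + B * u
    return J

def tune(k=0.0, steps=200, lr=0.01, eps=1e-5):
    """Adjust k by descending an approximate gradient of the cost."""
    for _ in range(steps):
        g = (cost(k + eps) - cost(k - eps)) / (2 * eps)  # approx dJ/dk
        k -= lr * g
    return k

k_star = tune()
print(f"tuned gain k = {k_star:.2f}")
```

The tuned gain drives the simulated cost well below that of the untuned policy (`k = 0`), which is the essence of the automated tuning the paper proposes; the paper replaces the finite differences with efficient exact derivatives through the optimization problem.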
All Science Journal Classification (ASJC) codes
• Artificial Intelligence
• Software
• Control and Systems Engineering
• Statistics and Probability
• Stochastic control
• approximate dynamic programming
• convex optimization
Root Locus Design
Root locus design is a common control system design technique in which you edit the compensator gain, poles, and zeros in the root locus diagram.
As the open-loop gain, k, of a control system varies over a continuous range of values, the root locus diagram shows the trajectories of the closed-loop poles of the feedback system. For example, in
the following tracking system:
P(s) is the plant, H(s) is the sensor dynamics, and k is an adjustable scalar gain. The closed-loop poles are the roots of the characteristic equation 1 + kP(s)H(s) = 0.
The root locus technique consists of plotting the closed-loop pole trajectories in the complex plane as k varies. You can use this plot to identify the gain value associated with a desired set of
closed-loop poles.
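As a quick numeric illustration outside the app, take the hypothetical loop transfer function kP(s)H(s) = k/(s(s+2)); the characteristic equation 1 + k/(s(s+2)) = 0 reduces to s² + 2s + k = 0, whose roots can be traced as k varies with a few lines of Python (standard library only):

```python
import cmath

def closed_loop_poles(k):
    # Roots of s^2 + 2s + k = 0, i.e. s = -1 +/- sqrt(1 - k).
    disc = cmath.sqrt(1 - k)
    return (-1 + disc, -1 - disc)

for k in (0.0, 1.0, 2.0, 5.0):
    p1, p2 = closed_loop_poles(k)
    print(f"k={k}: poles {p1}, {p2}")

# For k <= 1 both poles are real (locus on the real axis);
# for k > 1 they break away at s = -1 and move vertically: -1 +/- j*sqrt(k-1).
```

Scanning k like this and reading off the pole locations is exactly what the root locus plot does graphically.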
Tune Electrohydraulic Servomechanism Using Root Locus Graphical Tuning
This example shows how to design a compensator for an electrohydraulic servomechanism using root locus graphical tuning techniques.
Plant Model
A simple version of an electrohydraulic servomechanism model consists of
• A push-pull amplifier (a pair of electromagnets)
• A sliding spool in a vessel of high-pressure hydraulic fluid
• Valve openings in the vessel to allow for fluid to flow
• A central chamber with a piston-driven ram to deliver force to a load
• A symmetrical fluid return vessel
The force on the spool is proportional to the current in the electromagnet coil. As the spool moves, the valve opens, allowing the high-pressure hydraulic fluid to flow through the chamber. The
moving fluid forces the piston to move in the opposite direction of the spool. For more information on this model, including the derivation of a linearized model, see [1].
You can use the input voltage to the electromagnet to control the ram position. When measurements of the ram position are available, you can use feedback for the ram position control, as shown in the
following, where Gservo represents the servomechanism:
Design Requirements
For this example, tune the compensator, C(s) to meet the following closed-loop step response requirements:
• The 2% settling time is less than 0.05 seconds.
• The maximum overshoot is less than 5%.
Open Control System Designer
At the MATLAB® command line, load a linearized model of the servomechanism, and open Control System Designer in the root locus editor configuration.
load ltiexamples Gservo
The app opens and imports Gservo as the plant model for the default control architecture, Configuration 1.
In Control System Designer, a Root Locus Editor plot and input-output Step Response open.
To view the open-loop frequency response and closed-loop step response simultaneously, click and drag the plots to the desired location.
The app displays Bode Editor and Step Response plots side-by-side.
In the closed-loop step response plot, the rise time is around two seconds, which does not satisfy the design requirements.
To make the root locus diagram easier to read, zoom in. In the Root Locus Editor, right-click the plot area and select Properties.
In the Property Editor dialog box, on the Limits tab, specify Real Axis and Imaginary Axis limits from -500 to 500.
Click Close.
Increase Compensator Gain
To create a faster response, increase the compensator gain. In the Root Locus Editor, right-click the plot area and select Edit Compensator.
In the Compensator Editor dialog box, specify a gain of 20.
In the Root Locus Editor plot, the closed-loop pole locations move to reflect the new gain value. Also, the Step Response plot updates.
The closed-loop response does not satisfy the settling time requirement and exhibits unwanted ringing.
Increasing the gain makes the system underdamped and further increases lead to instability. Therefore, to meet the design requirements, you must specify additional compensator dynamics. For more
information on adding and editing compensator dynamics, see Edit Compensator Dynamics.
Add Poles to Compensator
To add a complex pole pair to the compensator, in the Root Locus Editor, right-click the plot area and select Add Pole or Zero > Complex Pole. Click the plot area where you want to add one of the
complex poles.
The app adds the complex pole pair to the root locus plot as red X’s, and updates the step response plot.
In the Root Locus Editor, drag the new poles to locations near –140 ± 260i. As you drag one pole, the other pole updates automatically.
As you drag a pole or zero, the app displays the new value in the status bar, on the right side.
Add Zeros to Compensator
To add a complex zero pair to your compensator, in the Compensator Editor dialog box, right-click the Dynamics table, and select Add Pole or Zero > Complex Zero.
The app adds a pair of complex zeros at –1 ± i to your compensator.
In the Dynamics table, click the Complex Zero row. Then in the Edit Selected Dynamics section, specify a Real Part of -170 and an Imaginary Part of 430.
The compensator and response plots automatically update to reflect the new zero locations.
In the Step Response plot, the settling time is around 0.1 seconds, which does not satisfy the design requirements.
Adjust Pole and Zero Locations
The compensator design process can involve some trial and error. Adjust the compensator gain, pole locations, and zero locations until you meet the design criteria.
One possible compensator design that satisfies the design requirements is:
• Compensator gain of 10
• Complex poles at –110 ± 140i
• Complex zeros at –70 ± 270i
In the Compensator Editor dialog box, configure your compensator using these values. In the Step Response plot, the settling time is around 0.05 seconds.
To verify the exact settling time, right-click the Step Response plot area and select Characteristics > Settling Time. A settling time indicator appears on the response plot.
To view the settling time, move the cursor over the settling time indicator.
The settling time is about 0.043 seconds, which satisfies the design requirements.
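When working with raw response data rather than the app, the 2% settling time can be estimated from samples. The sketch below is a simple approximation (not MATLAB's exact algorithm), and the sampled step response in it is made up for illustration:

```python
def settling_time(t, y, y_final, tol=0.02):
    # Time of the last sample outside the +/- tol band around y_final;
    # the response is considered settled from that point on.
    band = tol * abs(y_final)
    for ti, yi in zip(reversed(t), reversed(y)):
        if abs(yi - y_final) > band:
            return ti
    return t[0]

# Hypothetical sampled step response:
t = [0.00, 0.01, 0.02, 0.03, 0.04, 0.05]
y = [0.00, 0.60, 0.95, 0.99, 1.005, 1.001]
print(settling_time(t, y, 1.0))  # 0.02
```

Here the response last leaves the ±2% band at 0.02 s, so it settles well within the 0.05 s requirement.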
[1] Clark, R. N. Control System Dynamics, Cambridge University Press, 1996.
See Also
Control System Designer | rlocusplot
A simple method to reduce torque ripple and mechanical vibration in direct torque controlled permanent magnet synchronous motor
The Direct Torque Control (DTC) technique of Permanent Magnet Synchronous Motor (PMSM) receives increasing attention due to its simplicity and robust dynamic response compared with other control
techniques. The classical switching-table-based DTC presents large flux and torque ripples and more mechanical vibration in the motor. Several studies on classical DTC have been reported in the literature; however, only a limited number actually evaluate it. This paper proposes a simple DTC method / switching table for PMSM to reduce flux and torque ripples as well as mechanical vibrations. Two DTC schemes are proposed: the six-sector methodology is considered in DTC scheme I and the twelve-sector methodology in DTC scheme II. In both DTC schemes a simple modification is made to the classical DTC structure: the two-level inverter of the classical DTC is replaced by a three-level Neutral Point Clamped (NPC) inverter. To
further improve the performance of the proposed DTC scheme I, the available 27 voltage vectors are grouped into different sets such as Large - Zero (LZ), Medium - Zero (MZ) and Small - Zero (SZ), whereas in DTC scheme II, all the voltage vectors are considered to form the switching table. Based on these groups, new switching tables are proposed. The proposed DTC schemes are comparatively investigated against the classical DTC and the existing literature through theoretical analysis and computer simulations. It can be observed that the proposed techniques significantly reduce the flux and torque ripples and the mechanical vibrations, and improve the quality of the current waveform compared with traditional and existing methods.
1. Introduction
About 40 years ago, in 1971, F. Blaschke proposed the concept of Field Oriented Control (FOC) for the induction motor [1]. Since that time, FOC has dominated the advanced AC drive market, even though it has a complicated structure. Thirteen years later, a new technique for the torque control of induction motors, Direct Torque Control (DTC), was proposed by I. Takahashi and T. Noguchi [2]. Two years after that, M. Depenbrock presented another control technique named Direct Self Control (DSC) [3]. The first follows a circular flux trajectory and the latter a hexagonal one. Both proved that good dynamic control of the torque is possible without any sensor on the mechanical shaft, so DTC and DSC can be considered sensorless control techniques.
The DTC scheme is normally preferred for low- and medium-power applications, whereas the DSC scheme is preferred for high-power applications. In this paper, attention is focused on the DTC scheme, which is best suited for low- and medium-power applications. DTC overcomes drawbacks of FOC such as the need for current regulators, coordinate transformations, and PWM signal generators. DTC also provides high efficiency, high power/torque density, and high reliability [4-7]. Due to its simplicity, DTC allows good torque control both in steady state and during start-up transients.
On the other hand, the classical DTC have some disadvantages and listed major disadvantages are as follows:
1) difficulty to control torque at very low speed,
2) high current and torque ripple,
3) more mechanical vibrations.
Most of the surveyed literature [8-17] analyzes classical DTC using a two-level inverter, and all of it reports a high degree of torque ripple under dynamic conditions, which is reflected in the speed and current as well. This paper focuses on possibilities for minimizing torque ripple and mechanical vibrations in DTC. The minimization of torque ripple is achieved by making improvements in two areas: the inverter and the switching table.
In this paper the conventional two-level inverter is replaced by a three-level Neutral Point Clamped (NPC) inverter, which provides 27 voltage vectors, whereas only 8 voltage vectors are available with classical DTC. The 27 voltage vectors comprise six large vectors, six medium vectors, twelve small vectors, and three zero voltage vectors.
Some of the literature [18-25] presents a three-level inverter with classical DTC, but utilizes all 27 voltage vectors to construct the switching table. This paper proposes three DTC methods to reduce torque ripple. In DTC method 1, only the large and zero voltage vectors are used to construct the switching table, whereas in DTC method 2, only the medium and zero voltage vectors are used. The small and zero voltage vectors are used to form the switching table of DTC method 3.
On the basis of the authors' experience, fair comparisons between all the methods are presented for both steady-state and external load disturbance / transient conditions. The comparison indicates to users which of the methods can be effectively utilized for the various applications that today require torque control.
2. Model of PMSM
2.1. Machine equations
The mathematical model of a PMSM can be expressed as:
$U_{\alpha} = -\frac{1}{\sqrt{3}}U_{ys} + \frac{1}{\sqrt{3}}U_{bs},$
$U_{\beta} = \frac{2}{3}U_{rs} - \frac{1}{3}U_{ys} - \frac{1}{3}U_{bs},$
and the stator flux equations are:
$\phi_{\alpha s} = \int \left(U_{\alpha} - R_s i_{\alpha}\right)\,dt,$
$\phi_{\beta s} = \int \left(U_{\beta} - R_s i_{\beta}\right)\,dt,$
$|\phi_s| = \sqrt{\phi_{\alpha s}^2 + \phi_{\beta s}^2},$
and the electromagnetic torque developed by a PMSM in a stationary reference frame is expressed as:
$T_e = \frac{3}{2}\,p\,\left(\phi_s \times i_s\right),$
$T_e = \frac{3}{2}\,p\,\left(i_{\beta}\phi_{\alpha s} - i_{\alpha}\phi_{\beta s}\right).$
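As a quick numerical sanity check, Eqs. (5) and (7) can be coded directly; the values used below are purely illustrative, not machine data:

```python
import math

def flux_magnitude(phi_alpha, phi_beta):
    # |phi_s| = sqrt(phi_alpha^2 + phi_beta^2), Eq. (5)
    return math.hypot(phi_alpha, phi_beta)

def electromagnetic_torque(p, phi_alpha, phi_beta, i_alpha, i_beta):
    # T_e = (3/2) * p * (i_beta*phi_alpha - i_alpha*phi_beta), Eq. (7)
    return 1.5 * p * (i_beta * phi_alpha - i_alpha * phi_beta)

# Illustrative numbers only:
print(flux_magnitude(0.3, 0.4))                       # ~0.5 Wb
print(electromagnetic_torque(3, 0.3, 0.4, 2.0, 5.0))  # ~3.15 Nm
```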
2.2. Voltage vector impact on torque
According to the principle of DTC, the electrical angle $\delta$ between the stator and rotor flux vectors controls the torque developed by the PMSM. In practice this is achieved by controlling the voltage vector, which is therefore the prime controllable input variable in DTC. Hence it is necessary to develop a relation between the developed torque and the voltage vector.
The voltage and stator flux equations in the stationary frame are expressed as:
$U_s = \frac{d\phi_s}{dt} + R_s i_s,$
$\phi_s = L_s i_s + \phi_r.$
From Eq. (8) and Eq. (9), we get:
$L_s \frac{di_s}{dt} = U_s - R_s i_s - \frac{d\phi_r}{dt},$
$L_s \frac{di_s}{dt} = U_s - R_s i_s - j\omega\phi_r.$
From Eq. (6), differentiating the torque with respect to time $t$ gives:
$\frac{dT_e}{dt} = \frac{3}{2}\,p\left(\frac{d\phi_s}{dt} \times i_s + \phi_s \times \frac{di_s}{dt}\right).$
Substituting Eq. (8), Eq. (9) and Eq. (11) into Eq. (12), we get:
$L_s \frac{dT_e}{dt} = \frac{3}{2}\,p\,\phi_r \times U_s - \frac{3}{2}\,p\,\omega\,\phi_r \cdot \phi_s - R_s T_e.$
It can be seen that Eq. (13) contains three components [26, 27]. The second component is negative and is a function of speed. The third component is also negative and depends on the stator resistance. The first component is always positive and depends on the voltage vector. From this it is concluded that the non-zero voltage vectors always increase the developed torque and the zero vectors always decrease it.
3. Classical DTC method
Based on the errors between the reference and the actual values of torque and flux, it is possible to directly control the inverter switching states in order to keep the torque and flux errors within prefixed band limits; that is why this technique is called Direct Torque Control. The block diagram of the classical DTC for PMSM is shown in Figure 1.
The basic principle of DTC is to select stator voltage vectors according to the differences between the reference and actual values. The reference and actual values of the stator flux are processed through a two-level hysteresis comparator. If the error is positive, the magnitude of the flux has to be increased, denoted as $d\phi_s = 1$. If the error is negative, the magnitude of the flux has to be decreased, denoted as $d\phi_s = 0$. The flux comparator conditions are given as:
$d\phi_s = 1 \quad \text{for} \quad |\phi_s| \le |\phi_{sref}| - |\Delta\phi_s|,$
$d\phi_s = 0 \quad \text{for} \quad |\phi_s| \ge |\phi_{sref}| + |\Delta\phi_s|.$
The rotor reference speed is compared with the actual rotor speed, and the resulting error is converted into a reference torque using a suitable PI regulator.
The reference and actual torque are processed through a three-level hysteresis comparator. If the error is positive, the magnitude of the torque has to be increased, denoted as $dT_e = 1$. If the error is negative, the magnitude of the torque has to be decreased, denoted as $dT_e = -1$. If the error is zero, the magnitude of the torque has to be kept constant, denoted as $dT_e = 0$. The torque comparator conditions are given as:
$dT_e = 1 \quad \text{for} \quad |T_e| \le |T_{ref}| - |\Delta T_e|,$
$dT_e = -1 \quad \text{for} \quad |T_e| \ge |T_{ref}| + |\Delta T_e|,$
$dT_e = 0 \quad \text{for} \quad |T_{ref}| - |\Delta T_e| \le |T_e| \le |T_{ref}| + |\Delta T_e|.$
Fig. 1. Block diagram of the classical DTC
Finally, the most suitable voltage vectors are selected from the switching table based on the flux and torque errors for all the sectors.
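The flux and torque comparator logic above can be sketched in a few lines of Python; the band values and the held previous output in this sketch are illustrative assumptions, not from the paper:

```python
def flux_comparator(phi, phi_ref, band, prev=1):
    # Two-level hysteresis: 1 = increase flux, 0 = decrease flux.
    if phi <= phi_ref - band:
        return 1
    if phi >= phi_ref + band:
        return 0
    return prev            # inside the band: hold the previous decision

def torque_comparator(Te, T_ref, band):
    # Three-level hysteresis: 1 = increase, -1 = decrease, 0 = hold.
    if Te <= T_ref - band:
        return 1
    if Te >= T_ref + band:
        return -1
    return 0

assert flux_comparator(0.08, 0.10, 0.01) == 1   # flux too low -> increase
assert torque_comparator(4.9, 4.5, 0.2) == -1   # torque too high -> decrease
```

The pair of outputs (together with the sector number) then indexes the switching table to pick the voltage vector.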
4. Proposed DTC method
The classical DTC uses a two-level inverter and produces only eight voltage vectors: six non-zero vectors and two zero vectors. This does not allow smooth variation of the flux and torque, and could be one of the main reasons for large flux and torque ripples and mechanical vibrations. In the proposed DTC methods, the two-level inverter is replaced by an NPC three-level inverter. Due to the increased inverter level, 27 voltage vectors are available to construct the switching table: six large vectors, six medium vectors, twelve small vectors, and three zero vectors.
The inference from Section 3 is that the switching table plays an important role in the DTC technique; with a properly designed switching table, the best results can be obtained. The structure of the proposed DTC schemes is shown in Figure 2.
Fig. 2. Block diagram of the proposed DTC schemes
In the proposed DTC scheme I, the available 27 voltage vectors are grouped into different sets of voltage vectors: Large - Zero (LZ), Medium - Zero (MZ) and Small - Zero (SZ). Based on these groups, new switching tables are proposed. In the proposed DTC scheme II, all the voltage vectors are considered for the switching table. The proposed DTC methods provide satisfactory results compared to classical DTC. Table 1 summarizes the technical ideas of the proposed DTC schemes.
Table 1. Technical differences among the classical and proposed DTC schemes

Techniques used                | Classical DTC | DTC method 1 | DTC method 2 | DTC method 3 | DTC method 4
Number of sectors used         | Six           | Six          | Six          | Six          | Twelve
Inverter level                 | Two           | Three        | Three        | Three        | Three
Nature of voltage vectors used | LZ            | LZ           | MZ           | SZ           | LMSZ

(DTC methods 1-3 constitute the proposed DTC scheme I; DTC method 4 constitutes the proposed DTC scheme II.)
4.1. Proposed DTC scheme I
The proposed DTC scheme I includes DTC method 1, DTC method 2 and DTC method 3, all of which follow the six-sector methodology.
4.1.1. DTC method 1
The proposed DTC method 1 utilizes the large and zero voltage vectors. In this method 9 voltage vectors are used, of which 6 are large voltage vectors and 3 are zero voltage vectors. This method nearly imitates the classical DTC, because in both cases only large and zero voltage vectors are used, and the switching table constructed from these 9 voltage vectors closely resembles the classical DTC switching table. The drawbacks of the classical DTC are therefore repeated in this method, because of the non-availability of intermediate voltage vectors.
4.1.2. DTC method 2
In the proposed DTC method 2, the medium and zero voltage vectors are used to construct the switching table; 6 medium voltage vectors and 3 zero voltage vectors are available. In DTC method 1 the switching is between large and zero voltage vectors, which produces large ripples in the flux and torque. In DTC method 2, because the medium and zero voltage vectors are used instead, the ripples in the flux and torque are considerably reduced compared to DTC method 1. This can be observed from Figures 8-12.
4.1.3. DTC method 3
The DTC method 2 produces slightly smaller torque ripples than the classical DTC method because no large voltage vectors are used. The experience of the previous methods shows that the choice of switched vectors plays an important role in flux and torque ripple reduction: switching from a zero voltage vector to a large or medium voltage vector increases the ripples in the flux and torque, the harmonic content, and the stress across the switching devices.
To overcome these problems, an appropriate switching table is constructed using only small and zero voltage vectors. Equation (13) tells us that the large voltage vectors contribute to the torque in the same direction, which leads to large errors in the actual torque; this is true for small voltage vectors also, but smaller torque ripples can be expected by combining small and zero voltage vectors. There are 12 small voltage vectors and 3 zero voltage vectors available in this method. The small voltage vectors exist in redundant pairs, i.e. six positive small vectors and six negative small vectors, so the switching table is formed using either the positive or the negative small vector in order to balance the neutral point potential. The small voltage vectors are selected to meet the flux and torque demand, as well as to reduce the flux and torque ripples and mechanical vibrations. This ensures the safe operation of the entire drive system.
4.2. Proposed DTC scheme II
The proposed DTC scheme II differs in the number of sectors used and in the technique used to form the switching table. In this scheme, all the voltage vectors are used to form the switching table. This is referred to as DTC method 4 hereafter.
5. Simulation and results
MATLAB / Simulink is used to perform the simulations for the proposed DTC methods and the classical DTC method. The machine parameters used in this paper are the same as in [26, 28] and are listed in Table 2. The simulation results of the classical DTC and DTC methods 1-4 are presented. For all the methods, the performance analysis is carried out from different points of view, such as performance at different operating points and performance during external load disturbances.
Table 2. Machine parameters
Number of pole pairs $p$ 3
Permanent magnet flux ${\phi }_{f}$ 0.1057 Wb
Stator resistance ${R}_{s}$ 1.8 Ω
$d$-aixs and $q$-axis inductance ${L}_{d},{L}_{q}$ 15 mH
Rated speed ${N}_{r}$ 2000 rpm
Rated torque ${T}_{r}$ 4.5 Nm
Rated line-line voltage ${U}_{r}$ 128 V
DC bus voltage ${U}_{dc}$ 200 V
5.1. Comparative study with existing work
First, the classical DTC is evaluated as a baseline to show the effectiveness of the proposed DTC methods. The proposed methods are also compared with existing work [26, 28]; the switching table used in Figure 2 is different from that of the literature [26, 28].
Figure 3 to Figure 7 present the responses at 1000 rpm with an external load of 3 Nm applied at 0.1 s for classical DTC, DTC method 1, DTC method 2, DTC method 3 and DTC method 4. From the top to
bottom, the waveforms are stator current, torque, flux, rotor speed and the harmonic analysis of stator current, respectively. It can be seen that the current waveform is more sinusoidal in the
proposed DTC methods as compared to existing methods. The classical DTC exhibits large flux and torque ripples. The Total Harmonic Distortion (THD) is calculated up to 6000 Hz.
The quantitative evaluation is carried out at 1000 rpm with a 3 Nm external load for all the methods. It is seen that the stator current THD of the proposed DTC method 3 is 4.40 %, much lower than the
5.85 % and 4.67 % of the existing DTC methods available in the papers [26, 28], respectively. The DTC method 4 provides lesser stator current THD as compared to classical DTC and [26]. Among all the
proposed DTC methods and existing DTC methods, the DTC method 3 provides better performance in the view of stator current THD. The dominant harmonics between 2000 Hz and 3000 Hz in proposed DTC
method 3 are much lesser than as compared to other proposed methods and classical DTC method.
The average commutation frequency is calculated as ${f}_{av}=N/K/0.05$, where $N$ is the total number of commutation instants over all the legs of the inverter during a fixed period (0.05 s in this paper) and $K$ is the number of switches.
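The formula is straightforward to script; for example (the commutation count and the 12-switch inverter below are hypothetical values, not taken from the paper):

```python
def avg_commutation_frequency(total_commutations, num_switches, window_s=0.05):
    """Average commutation frequency f_av = N / K / T, in Hz."""
    return total_commutations / num_switches / window_s

# Hypothetical count: 978 commutations across 12 switches in 0.05 s
print(round(avg_commutation_frequency(978, 12)))  # -> 1630
```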
The Root Mean Square (RMS) torque ripple is calculated for all the DTC methods in this paper. The proposed DTC method 3 exhibits the best performance in terms of torque ripple, stator current THD and average commutation frequency ${f}_{av}$, outperforming all the other proposed methods and the classical DTC method.
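One common way to obtain the RMS torque ripple is as the root-mean-square deviation of the torque samples from their mean; the minimal sketch below follows that assumption (the paper does not spell out its exact computation, so treat this as an illustration):

```python
import math

def rms_torque_ripple(torque_samples):
    """RMS deviation of the torque samples from their mean, in Nm."""
    mean = sum(torque_samples) / len(torque_samples)
    return math.sqrt(sum((t - mean) ** 2 for t in torque_samples) / len(torque_samples))

# Hypothetical samples oscillating around a 3 Nm load torque
print(round(rms_torque_ripple([3.1, 2.9, 3.1, 2.9]), 6))  # -> 0.1
```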
The average commutation frequency of the proposed DTC method 3 is 1.63 kHz, only 37 % and 51 % of those of the existing DTC methods of [26] and [28], respectively. This validates the superiority of the proposed DTC methods. DTC method 4 provides a lower average commutation frequency than [26], whereas its torque ripple is lower than that of both [26] and [28].
Table 3 also shows that all the proposed DTC methods produce less torque ripple than the classical DTC method. Compared with the existing DTC methods of [26, 28], the proposed DTC method 3 provides the lowest torque ripple: only 54 %, 89 % and 98 % of that of the classical DTC method, Zhang et al. [26] and Zhang et al. [28], respectively. DTC method 4 exhibits lower torque ripple than classical DTC, [26, 28], DTC method 1 and DTC method 2. In terms of average commutation frequency, torque ripple and stator current THD, DTC method 3 shows the best performance among all the DTC methods.
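The ripple percentages quoted above follow directly from the ${T}_{ripple}$ column of Table 3 and can be re-checked in a couple of lines:

```python
# Torque ripple values (Nm) taken from Table 3
ripple = {"classical": 0.2004, "Zhang [26]": 0.1222, "Zhang [28]": 0.1110}
method3 = 0.1092

for name, value in ripple.items():
    print(f"method 3 vs {name}: {100 * method3 / value:.0f} %")
# prints 54 %, 89 % and 98 % respectively
```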
Table 3. Quantitative comparison of the proposed DTC methods with existing DTC methods

Method | ${f}_{av}$ (Hz) | ${\varphi }_{ripple}$ (Wb) | ${T}_{ripple}$ (Nm) | THD of stator current
Classical DTC | 5.36 k | 0.0066 | 0.2004 | 6.73 %
Zhang Y., Zhu J. [26] | 4.41 k | 0.0043 | 0.1222 | 5.85 %
Zhang Y., Zhu J. [28] | 3.22 k | 0.0017 | 0.1110 | 4.67 %
Proposed DTC method 1 | 4.99 k | 0.0064 | 0.1956 | 6.97 %
Proposed DTC method 2 | 4.68 k | 0.0063 | 0.1563 | 5.70 %
Proposed DTC method 3 | 1.63 k | 0.0048 | 0.1092 | 4.40 %
Proposed DTC method 4 | 3.09 k | 0.0086 | 0.1107 | 5.28 %
Fig. 3. Response of classical DTC at 1000 rpm with external load of 3 Nm
Fig. 4. Response of DTC method 1 at 1000 rpm with external load of 3 Nm
5.2. Results at 10 % of rated speed
The proposed DTC methods are analyzed at different operating points; in Figure 8 the operating point is 200 rpm (10 % of the rated speed) without load. Figure 8 shows the flux and torque responses for classical DTC, DTC method 1, DTC method 2, DTC method 3 and DTC method 4. From top to bottom, the rows correspond to classical DTC and DTC methods 1 to 4, respectively, with the flux response on the left and the torque response on the right.
At 200 rpm, DTC method 1 gives almost the same performance as classical DTC because their switching patterns are almost identical. DTC method 2 gives lower torque ripple than classical DTC and DTC method 1, but produces instantaneous spikes in the flux. Since the main drawback of the DTC drive is its large torque ripple at low speed, this analysis leads to an important conclusion: DTC method 3 presents the best overall performance among the four proposed DTC methods and classical DTC.
Fig. 5. Response of DTC method 2 at 1000 rpm with external load of 3 Nm
Fig. 6. Response of DTC method 3 at 1000 rpm with external load of 3 Nm
Fig. 7. Response of DTC method 4 at 1000 rpm with external load of 3 Nm
Fig. 8. Steady state response at 200 rpm (10 % of the rated speed) for classical DTC, DTC method 1, DTC method 2, DTC method 3 and DTC method 4
Fig. 9. Steady state response at 500 rpm (25 % of the rated speed) for classical DTC, DTC method 1, DTC method 2, DTC method 3 and DTC method 4
5.3. Results at 25 % of rated speed
At this operating point, in terms of flux and torque ripple, DTC method 3 exhibits the best performance, followed by DTC method 4, DTC method 2, DTC method 1 and classical DTC. The ripples in the flux and torque waveforms are also significantly diminished in DTC method 3 compared with the other methods proposed in this paper, as can be observed in Figure 9.
5.4. Results at 50 % of rated speed
As shown in Figure 10, high ripples and distortion in the flux and torque waveforms can be noticed in all the methods. However, a remarkable reduction in torque ripple can be observed in DTC method 3. DTC method 3 and DTC method 4 provide almost the same torque ripple, but instantaneous spikes are noticed in the torque response of DTC method 4.
5.5. Results at 75 % of rated speed
DTC method 1 almost imitates the classical DTC, while DTC method 2 presents the lowest torque ripple among the methods. According to the switching table of this method, at any point in time the inverter provides half voltage on two lines and zero voltage on one line, which is not sufficient to rotate the rotor at this speed. DTC method 3 gives satisfactory operation only up to 70 % of the rated speed. However, DTC method 4 provides lower torque ripple than all the other methods.
5.6. Results at 100 % of rated speed
It is found that there is no significant improvement in DTC method 1 compared with classical DTC. Large vectors are used in both classical DTC and DTC method 1, which, according to equation (13), leads to large torque ripples. At the same time, DTC method 2 presents lower torque ripple than the other methods. Nevertheless, DTC method 3 is not able to track the reference speed at this operating point.
Fig. 10. Steady state response at 1000 rpm (50 % of the rated speed) for classical DTC, DTC method 1, DTC method 2, DTC method 3 and DTC method 4
Fig. 11. Steady state response at 1500 rpm (75 % of the rated speed) for classical DTC, DTC method 1, DTC method 2 and DTC method 4
Fig. 12. Steady state response at 2000 rpm (100 % of the rated speed) for classical DTC, DTC method 1 and DTC method 2
Fig. 13. Percentage of rated speed
5.7. Responses to external load disturbance
The responses to external disturbances are shown in Figure 14(a-e) for classical DTC, DTC method 1, DTC method 2, DTC method 3 and DTC method 4, respectively. The motor is operated at steady state with 2.5 Nm and 50 % of the rated speed, and then the load is suddenly removed in order to check the disturbance rejection capability of the classical and proposed DTC methods.
In a very short period, the motor speed returns to its original value thanks to the fast torque response. When the load is suddenly removed, the peak speed increase is about 3 % for the proposed DTC methods 1, 2, 3 and 4, compared with about 2.5 % for the classical DTC method. However, all the DTC methods, including classical DTC, take roughly the same time to return to the original speed after the load is removed. This comparison shows that the proposed DTC methods exhibit a faster torque response than classical DTC: although their peak speed increase is about 3 %, they reach steady state in less time than the classical DTC does from its 2.5 % peak. The classical DTC, DTC method 1 and DTC method 2 achieve their performance at the cost of larger torque ripple, whereas DTC method 3 provides lower torque ripple together with good disturbance rejection.
Fig. 14. Response to external load disturbance for: a) classical DTC, b) DTC method 1, c) DTC method 2, d) DTC method 3, e) DTC method 4
5.8. Harmonics and mechanical vibration reduction
The major disadvantage of the DTC-based PMSM drive is its high torque ripple, which leads to mechanical vibration and acoustic noise. For electric and hybrid vehicle applications in particular, torque ripple can result in mechanical vibration and acoustic noise; these phenomena are undesirable in most applications. In this paper the Total Harmonic Distortion of the current waveform, the RMS level of vibration and the noise have been examined, and their comparison is shown in Figure 15. The RMS level of vibration is calculated using LabVIEW software. The results show that the proposed DTC methods are able to suppress torque ripple and mechanical vibration.
Fig. 15. Response of DTC methods in the view of mechanical vibration: a) percentage THD of stator current, b) RMS level of vibration, c) noise produced in various DTC methods
6. Important observations
In this paper the classical DTC method and all the proposed DTC methods are comparatively investigated with respect to torque ripple, disturbance rejection during external load disturbance and mechanical vibration. The results indicate that the torque ripple of the proposed DTC methods is lower than that of the classical DTC method and of the existing literature. At most operating points the proposed DTC method 1 provides slightly higher torque ripple than proposed DTC methods 2, 3 and 4. Proposed DTC method 3, which uses small and zero voltage vectors, shows the best performance in terms of torque ripple compared with the classical DTC method and proposed DTC methods 1, 2 and 4. All the DTC methods exhibit almost the same decelerating capability, but proposed DTC method 3 provides lower ripple in the current waveform. All the proposed and existing DTC methods show good disturbance rejection characteristics, though at the cost of higher torque ripple, except proposed DTC method 3. In terms of mechanical vibration, all the proposed DTC methods produce less vibration than the classical DTC method.
7. Conclusions
In this paper, a simple method to minimize the torque ripple of DTC-based PMSM drives has been proposed. A new switching table is proposed in which only two of the four voltage vector classes (L, M, S, Z) made available by the increased inverter level are utilized at a time (LZ, MZ or SZ). The performance of the proposed DTC methods is comparatively investigated against classical DTC and the existing literature. The simulation results show that the proposed DTC methods are able to diminish the torque ripple at different operating points compared with classical DTC. The proposed DTC methods also give satisfactory performance under external load disturbance and are capable of suppressing mechanical vibration. The settling time of the torque is reduced compared with the classical DTC method, and the associated current ripple is also reduced. The proposed DTC methods retain the simplicity and robustness of DTC.
• Blaschke F. The principle of field orientation as applied to the new TRANSVECTOR closed loop control system for rotating field machines. Siemens Rev., Vol. 34, 1972, p. 217-220.
• Takahashi I., Noguchi T. A new quick-response and high efficiency control strategy of an induction motor. IEEE Trans. Ind. Appl., Vol. IA-22, No. 5, Sep. 1986, p. 820-827.
• Depenbrock M. Direct self-control (DSC) of inverter-fed induction machine. IEEE Trans. Power Electron., Vol. 3, No. 4, Oct. 1988, p. 420-429.
• Cheng B., Tesch T. R. Torque feed forward control technique for permanent-magnet synchronous motors. IEEE Trans. Ind. Electron., Vol. 57, No. 3, Mar. 2010, p. 969-974.
• Tursini M., Chiricozzi E., Petrella R. Feed forward flux-weakening control of surface-mounted permanent-magnet synchronous motors accounting for resistive voltage drop. IEEE Trans. Ind.
Electron., Vol. 57, No. 1, Jan. 2010, p. 440-448.
• Ortega C., Arias A., Caruana C., Balcells J., Asher G. M. Improved waveform quality in the direct torque control of matrix-converter-fed PMSM drives. IEEE Trans. Ind. Electron., Vol. 57, No. 6,
Jun. 2010, p. 2101-2110.
• Foo F., Rahman M. F. Sensorless sliding-mode MTPA control of an IPM synchronous motor drive using a sliding-mode observer and HF signal injection. IEEE Trans. Ind. Electron., Vol. 57, No. 4, Apr.
2010, p. 1270-1278.
• Buja G. S., Kazmierkowski M. P. Direct torque control of PWM inverter-fed AC motors – a survey. IEEE Trans. Ind. Electron., Vol. 51, No. 4, Aug. 2004, p. 744-757.
• Zhong L., Rahman M. F., Hu W., Lim K. Analysis of direct torque control in permanent magnet synchronous motor drives. IEEE Trans. Power Electron., Vol. 12, No. 3, May 1997, p. 528-536.
• Foo G., Rahman M. F. Sensorless direct torque and flux-controlled IPM synchronous motor drive at very low speed without signal injection. IEEE Trans. Ind. Electron., Vol. 57, No. 1, Jan. 2010, p.
• Pacas M., Weber J. Predictive direct torque control for the PM synchronous machine. IEEE Trans. Ind. Electron., Vol. 52, No. 5, Oct. 2005, p. 1350-1356.
• Kang J. K., Sul S. K. New direct torque control of induction motor for minimum torque ripple and constant switching frequency. IEEE Trans. Ind. Appl., Vol. 35, No. 5, Sep./Oct. 1999, p.
• Abad G., Rodriguez M. A., Poza J. Two-level VSC based predictive direct torque control of the doubly fed induction machine with reduced torque and flux ripples at low constant switching
frequency. IEEE Trans. Power Electron., Vol. 23, No. 3, May 2008, p. 1050-1061.
• Romeral L., Arias A., Aldabas E., Jayne M. Novel direct torque control (DTC) scheme with fuzzy adaptive torque-ripple reduction. IEEE Trans. Ind. Electron., Vol. 50, No. 3, Jun. 2003, p. 487-492.
• Morales Caporal R., Pacas M. Encoderless predictive direct torque control for synchronous reluctance machines at very low and zero speed. IEEE Trans. Ind. Electron., Vol. 55, No. 12, Dec. 2008,
p. 4408-4416.
• Shyu K. K., Lin J. K., Pham V. T., Yang M. J., Wang T. W. Global minimum torque ripple design for direct torque control of induction motor drives. IEEE Trans. Ind. Electron., Vol. 57, No. 9, Sep.
2010, p. 3148-3156.
• Flach E., Hoffmann R., Mutschler P. Direct mean torque control of an induction motor. Proc. EPE, Vol. 3, 1997, p. 672-677.
• Andreescu G. D., Pitic C., Blaabjerg F., Boldea I. Combined flux observer with signal injection enhancement for wide speed range sensorless direct torque control of IPMSM drives. IEEE Trans.
Energy Convers., Vol. 23, No. 2, Jun. 2008, p. 393-402.
• Zhang Y., Zhao Z., Lu T., Yuan L. Sensorless 3-level inverter-fed induction motor drive based on indirect torque control. Proc. IEEE 6th IPEMC Conf., 2009, p. 589-593.
• Tang L., Zhong L., Rahman M. F., Hu Y. A novel direct torque controlled interior permanent magnet synchronous machine drive with low ripple in flux and torque and fixed switching frequency. IEEE
Trans. Power Electron., Vol. 19, No. 2, Mar. 2004, p. 346-354.
• Casadei D., Profumo F., Serra G., Tani A. FOC and DTC: two viable schemes for induction motors torque control. IEEE Trans. Power Electron., Vol. 17, No. 5, Sep. 2002, p. 779-787.
• Kouro S., Bernal R., Miranda H., Silva C., Rodriguez J. High performance torque and flux control for multilevel inverter fed induction motors. IEEE Trans. Power Electron., Vol. 22, No. 6, Nov.
2007, p. 2116-2123.
• Lee K. B., Blaabjerg F. An improved DTC-SVM method for sensorless matrix converter drives using an overmodulation strategy and a simple nonlinearity compensation. IEEE Trans. Ind. Electron., Vol.
54, No. 6, Dec. 2007, p. 3155-3166.
• Zhang Y., Zhu J., Zhao Z., Xu W., Dorrell D. G. An improved direct torque control for three-level inverter-fed induction motor sensorless drive. IEEE Trans. Power Electron., Vol. 27, No. 3, Mar. 2012, p. 1502-1513.
• Zhang Y., Zhu J., Xu W. Predictive torque control of permanent magnet synchronous motor drive with reduced switching frequency. Proc. Int. Conf. Electr. Mach. Syst., 2010, p. 798-803.
• Zhang Y., Zhu J. Direct torque control of permanent magnet synchronous motor with reduced torque ripple and commutation frequency. IEEE Trans. Power Electron., Vol. 26, No. 1, Jan. 2011, p.
• Zhang Y., Zhu J., Xu W., Guo Y. A simple method to reduce torque ripple in direct torque-controlled permanent-magnet synchronous motor by using vectors with variable amplitude and angle. IEEE
Trans. Ind. Electron., Vol. 58, No. 7, 2011, p. 2848-2859.
• Zhang Y., Zhu J. A novel duty cycle control strategy to reduce both torque and flux ripples for DTC of permanent magnet synchronous motor drives with switching frequency reduction. IEEE Trans.
Power Electron., Vol. 26, No. 10, Oct. 2011, p. 3055-3067.
About this article
Keywords: AC motor drives, permanent magnet synchronous motor (PMSM) drives, ripple reduction, torque control, reduction of mechanical vibrations
Copyright © 2013 Vibroengineering
This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Benders decomposition method
We consider a general mixed-integer convex program. We first develop an algorithm for solving this problem, and show its finite convergence. We then develop a finitely convergent decomposition algorithm that separates binary variables from integer and continuous variables. The integer and continuous variables are treated as second stage variables. An oracle for generating a parametric …
How do you calculate thermal expansion?
What are issues with thermal expansion?
Many process plants operate equipment at temperatures well above ambient. The resulting temperature rise can cause significant increases in equipment size or pipe length, leading to potential damage from internal stress.
How do you calculate expansion in physics?
ΔL = αLΔT is the formula for linear thermal expansion, where ΔL is the change in length L, ΔT is the change in temperature, and α is the linear expansion coefficient, which varies slightly with temperature.
How do you calculate thermal expansion of metals?
Multiply the temperature change by 7.2 × 10^-6, the linear expansion coefficient for steel (per °F). Continuing the example, multiply 0.0000072 by 5 to get 0.000036. Then multiply the product of the expansion coefficient and the temperature increase by the original length of the steel.
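The worked example above can be scripted directly; the 7.2 × 10^-6 per °F coefficient and the 5 °F rise come from the text, while the 100-inch bar length is an assumed value for illustration:

```python
def linear_expansion(alpha, length, delta_t):
    """Change in length: dL = alpha * L * dT (same length unit as L)."""
    return alpha * length * delta_t

print(round(7.2e-6 * 5, 9))                          # -> 3.6e-05, as in the example
print(round(linear_expansion(7.2e-6, 100.0, 5), 6))  # -> 0.0036 in for a 100 in bar
```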
What are some examples of thermal expansion?
• Cracks in the road when the road expands on heating.
• Sags in electrical power lines.
• Windows of metal-framed need rubber spacers to avoid thermal expansion.
• Expansion joints (like joint of two railway tracks).
• The length of the metal bar getting longer on heating.
What formula is q = mcΔT?
The amount of heat gained or lost by a sample (q) can be calculated using the equation q = mcΔT, where m is the mass of the sample, c is the specific heat, and ΔT is the temperature change.
How do you calculate expansion length?
Formula for length change due to thermal expansion: ΔL = αLΔT, where L is the original length of the substance and ΔT is the change in temperature of the substance, in either degrees Celsius or kelvin.
What are the three types of thermal expansion?
Thermal expansion is of three types: Linear expansion. Area expansion. Volume expansion.
What is the cause and effect of thermal expansion?
Thermal expansion is caused by heating solids, liquids or gases, which makes the particles move faster or vibrate more (for solids). This means that the particles take up more space and so the
substance expands. Some everyday effects of thermal expansion are useful, but some are just a plain nuisance.
What is the equation for thermal expansion in two dimensions?
Thermal Expansion in Two Dimensions: ΔA = 2αAΔT, where ΔA is the change in area A, ΔT is the change in temperature, and α is the coefficient of linear expansion, which varies slightly with temperature.
Thermal Expansion and Your Plumbing System Since all the pipes in your home are full of water at any given time, thermal expansion creates pressure and stress that can cause damage or wear and tear.
What is thermal expansion in physics?
thermal expansion, the general increase in the volume of a material as its temperature is increased.
What is thermal stress formula?
Thermal Stress Formula: Consider a thermally conducting rod; on heating, the rod expands. The change in length is directly proportional to the amount of heat supplied and the coefficient of thermal expansion. Thus, we can mathematically write thermal stress as: δ_T = Lα(T_f − T_i).
How do you find the thermal expansion of a cylinder?
Δd = d0αΔt = d0α(t1 − t0), where d0 is the initial diameter of the cylinder, t0 the initial temperature, t1 the final temperature, and α is the coefficient of thermal expansion of brass.
How do you calculate the thermal expansion of a steel pipe?
ΔL = αL0(T2 − T1) (Equation 5). If the pipe is installed at an ambient temperature of 70 °F and the temperature of the pipe increases to 270 °F, we can expect about 1.5 in of expansion in the 100 ft unanchored run.
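That 1.5 in figure is consistent with the steel coefficient quoted elsewhere on this page (6.45 × 10^-6 in/in/°F); a quick verification:

```python
# Steel pipe: alpha per degF, 100 ft run expressed in inches, 70 -> 270 degF rise
alpha = 6.45e-6
length_in = 100 * 12
delta_t = 270 - 70

delta_l = alpha * length_in * delta_t
print(round(delta_l, 2))  # -> 1.55, i.e. about 1.5 in as stated
```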
How much does steel expand when heated?
“Steel will expand from 0.06 percent to 0.07 percent in length for each 100 °F rise in temperature. The expansion rate increases as the temperature rises. Heated to 1,000 °F, a steel member will expand 9½ inches over 100 feet of length…”
What is the thermal expansion of steel?
“The coefficient of thermal expansion for steel is 0.00000645 in/in/°F.”
What material expands the most when heated?
Gases expand the most upon heating because the intermolecular space is more than in solids or liquids.
What expands more when heated a solid or a liquid?
Liquids expand for the same reason, but because the bonds between separate molecules are usually less tight, they expand more than solids.
What is an example of thermal expansion in solids?
Solids also undergo thermal expansion. Railroad tracks and bridges, for example, have expansion joints to allow them to freely expand and contract with temperature changes.
What is q = mcΔH?
How do you calculate delta T?
Calculating Delta T is simple: just subtract the return air temperature from the supply air temperature. The difference, or delta, between the two is Delta T.
What unit is Q in q = mcΔT?
You want your q to be in units of joules or kJ. If you used q = mC(ΔT) with your given C, your q would be in units of (grams)(kJ). In the problem, you were given the heat capacity, not the specific heat capacity. Therefore, you don't need mass to calculate q.
Is thermal expansion coefficient constant?
The coefficient of thermal expansion (CTE) refers to the rate at which a material expands with increase in temperature. More specifically, this coefficient is determined at constant pressure and
without a phase change, i.e. the material is expected to still be in its solid or fluid form.
What is the linear coefficient of thermal expansion?
The Coefficient of Linear Thermal Expansion (CLTE, often referred to as “α”) is a material property which characterizes the ability of a plastic to expand under the effect of temperature elevation. It tells you how much the developed part will remain dimensionally stable under temperature variations.
Wronskian Calculator for Students and Educators
When wrestling with the Wronskian, you're engaging with an essential concept in differential equations, one that can either clarify or complicate your understanding of linear independence.
As a student or educator, you've likely encountered the challenge of calculating Wronskians by hand, a process that's both time-consuming and prone to errors.
Fortunately, a Wronskian calculator can streamline this task, offering quick and accurate assessments of functions' relationships.
You're probably wondering how this tool could change the way you approach differential equations and what insights it might unlock.
The answer lies in the nuanced assistance it provides—beyond mere number-crunching—to enhance your grasp of intricate mathematical principles.
Let's uncover how embracing this calculator can transform your engagements with differential equations, and perhaps, your entire mathematical journey.
Understanding the Wronskian
To grasp the concept of the Wronskian, it's essential to understand that it's a determinant used to determine the linear independence of a set of differentiable functions. In essence, if you're
dealing with multiple functions and you want to know if they're linearly independent, the Wronskian is your go-to tool.
Here's how it works: you construct a matrix whose entries are the functions themselves and their successive derivatives. This matrix is known as the Wronskian matrix. You then calculate the determinant of this matrix, which is the Wronskian. If the Wronskian is nonzero at least at one point, the functions are linearly independent. However, if it's identically zero for all points in the interval of interest, the functions may be linearly dependent.
Understanding matrix determinants is crucial here, as they're the backbone of the Wronskian. A determinant provides a scalar value that's a particular property of a matrix and, in the context of the
Wronskian, it's the deciding factor for linear independence. Remember, mastering the calculation of matrix determinants is a step towards effectively using the Wronskian to analyze function sets.
Benefits of Using a Calculator
While mastering matrix determinants equips you with the fundamental skills for computing the Wronskian, employing a calculator can significantly streamline the process, ensuring accuracy and saving
valuable time. In the educational setting, the strategic use of technology, like a Wronskian calculator, isn't about promoting calculator dependency but about enhancing your mathematical toolkit.
You're likely aware that manual calculation, especially of complex determinants, is prone to human error, which can be minimized with a calculator.
Research supports the pedagogical benefits of calculators, emphasizing their role in facilitating a deeper understanding of mathematical concepts rather than just mechanical computation. By
offloading the tedious arithmetic to a calculator, you can focus on the interpretation and application of results. This is particularly important when dealing with Wronskian determinants in
higher-level mathematics, where the conceptual challenges outweigh the computational ones.
However, it's crucial to balance the use of calculators with manual calculation practices to maintain your foundational skills. This balanced approach ensures that you don't become overly reliant on
technology, preserving your ability to perform calculations by hand when necessary. Thus, a calculator serves as a complementary tool that enhances learning and efficiency rather than replacing the
essential skill of manual computation.
How to Use the Wronskian Calculator
Employing a Wronskian calculator begins with inputting the functions whose independence you wish to examine. You're tasked with entering each function clearly and accurately, as any mistakes can lead
to incorrect conclusions about their independence. The Wronskian is essentially a matrix determinant, calculated from a special matrix composed of the functions and their derivatives.
Once you've input the functions, the Wronskian calculator will typically require you to specify the order of derivatives to be included. It's crucial to ensure that you include derivatives up to the
(n-1)th order for n functions to obtain a valid Wronskian. After this, the calculator will construct the matrix by placing the functions in the first row and their successive derivatives in the
following rows.
With the matrix set up, the Wronskian calculator computes the determinant of this matrix. If the determinant is non-zero at least at one point, the functions are linearly independent, which implies
function independence. However, a zero determinant throughout the domain suggests the potential for linear dependence, though it's not a definitive proof.
Understanding the underlying concept is key—function independence implies that no function in the set can be written as a linear combination of the others, a fundamental aspect in differential
equations and vector space theory.
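To make the calculation concrete, here is a minimal plain-Python sketch of a 2×2 Wronskian check; the function names and the sin/cos example are my own illustration, not taken from any particular calculator:

```python
import math

def wronskian2(f, df, g, dg, x):
    """2x2 Wronskian W(x) = f(x) * g'(x) - g(x) * f'(x)."""
    return f(x) * dg(x) - g(x) * df(x)

# sin and cos: W(x) = -sin(x)**2 - cos(x)**2 = -1 at every x,
# so the pair is linearly independent.
w = wronskian2(math.sin, math.cos, math.cos, lambda t: -math.sin(t), 1.0)
print(round(w, 12))  # -> -1.0
```

For larger sets of functions, the same idea extends to an n×n determinant of the functions and their derivatives up to order n-1.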
Common Mistakes to Avoid
When using a Wronskian calculator, it's crucial to double-check each step to avoid common errors that can lead to misleading results. Remember, the Wronskian is a determinant of a matrix composed of
functions and their derivatives. Accurately inputting these functions is paramount. One slip in notation or in the differentiation process can drastically alter the matrix determinants, skewing the
entire calculation.
In the context of differential equations, the Wronskian is used to determine the linear independence of a set of solutions. Ensure that you're applying the correct derivatives for each function in
your set. Substituting derivatives incorrectly is a frequent mishap that can invalidate your efforts to test for independence or dependence of solutions.
Moreover, it's not just about the numbers; understanding the theory behind the Wronskian is essential. For instance, a zero Wronskian may indicate linear dependence, but only if the functions are
continuously differentiable on the interval of interest. Without this conceptual clarity, you might misinterpret the results.
Resources for Further Learning
To build on your understanding of the Wronskian and avoid the pitfalls mentioned, explore these resources that will guide you through more complex examples and deepen your grasp of the subject. Start
with advanced textbooks on differential equations that delve into the Wronskian's role in determining solution uniqueness. These texts often provide a theoretical foundation coupled with practical
applications, allowing you to see the concept in action.
You'll also find online courses and lectures from universities that offer comprehensive overviews of differential equations, including the use of the Wronskian. These can be particularly helpful
because they present the information in a structured format, often with opportunities for practice and feedback.
For a more interactive experience, consider joining mathematics forums and discussion groups. Here, you can pose questions, share insights, and receive guidance from both peers and experts in the
field. Such communities are invaluable for troubleshooting and understanding the nuances of the Wronskian in various contexts.
Lastly, don't overlook research papers and articles published in academic journals. They can provide you with the latest findings and methodologies, pushing your understanding of the Wronskian and
its implications in differential equations to the forefront of mathematical research.
Now you've seen how a Wronskian calculator can streamline testing for linear independence. It's a handy tool, but remember to avoid common pitfalls like incorrect function entry.
Dive into the resources provided to bolster your understanding. Keep practicing, and you'll master this technique, enhancing your mathematical toolkit.
Always double-check your work for accuracy. Happy calculating!
Self gravitating cosmic strings and the Alexandrov's inequality for
Liouville-type equations
Accepted Paper
Inserted: 22 apr 2016
Last Updated: 31 oct 2018
Journal: Communications in Contemporary Mathematics
Year: 2015
Doi: 10.1142/S0219199715500686
Motivated by the study of self gravitating cosmic strings, we pursue the well known method by C. Bandle to obtain a weak version of the classical Alexandrov's isoperimetric inequality. In fact we
derive some quantitative estimates for weak subsolutions of a Liouville-type equation with conical singularities. Actually we succeed in generalizing previously known results, including Bol's
inequality and pointwise estimates, to the case where the solutions solve the equation just in the sense of distributions. Next, we derive some "new" pointwise estimates suitable to be applied to a class of singular cosmic string equations. Finally, interestingly enough, we apply these results to establish a minimal mass property for solutions of the cosmic string equation which are "supersolutions" of the singular Liouville-type equation.
Cite as
Galia R. Zimerman, Dina Svetlitsky, Meirav Zehavi, and Michal Ziv-Ukelson. Approximate Search for Known Gene Clusters in New Genomes Using PQ-Trees. In 20th International Workshop on Algorithms in
Bioinformatics (WABI 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 172, pp. 1:1-1:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)
@InProceedings{zimerman_et_al:LIPIcs.WABI.2020.1,
author = {Zimerman, Galia R. and Svetlitsky, Dina and Zehavi, Meirav and Ziv-Ukelson, Michal},
title = {{Approximate Search for Known Gene Clusters in New Genomes Using PQ-Trees}},
booktitle = {20th International Workshop on Algorithms in Bioinformatics (WABI 2020)},
pages = {1:1--1:24},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-161-0},
ISSN = {1868-8969},
year = {2020},
volume = {172},
editor = {Kingsford, Carl and Pisanti, Nadia},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.WABI.2020.1},
URN = {urn:nbn:de:0030-drops-127906},
doi = {10.4230/LIPIcs.WABI.2020.1},
annote = {Keywords: PQ-Tree, Gene Cluster, Efflux Pump}
}
Exploded Fibrations
1. Introduction
A symplectic manifold is a 2n-dimensional manifold M with a closed, maximally non-degenerate two-form ω, known as the symplectic form. In local coordinates (x, y) ∈ R^n × R^n on any such manifold, the symplectic form appears as ∑ dx_i ∧ dy_i. This implies symplectic manifolds lack local invariants, hence their study is referred to as symplectic topology.
One primary tool in symplectic topology is the study of holomorphic curves. Given a symplectic manifold (M, ω), it is possible to pick an almost complex structure J on M that is tamed by ω (the space of such choices is contractible). This means that ω(v, Jv) > 0 for any nonzero vector v. With such a choice of J, a holomorphic curve is a map f : (S, j) → (M, J) from a Riemann surface S with a complex structure j so that df∘j = J∘df.
2. The Moduli Space of Holomorphic Curves
The moduli space of holomorphic curves includes families of holomorphic curves where a bubble forms. This change in the domain's topology cannot occur in a connected smooth family of maps, so if the
behavior above is to be considered 'smooth', we need to redefine 'smooth'. This gives a hint that the smooth manifold category might not be suitable for the holomorphic curves theory.
3. Holomorphic Curves and Exploded Fibrations
A second reason for finding an extension of the smooth category with an optimal theory of holomorphic curves is that holomorphic curves, in general, are challenging to find in non-algebraic settings.
Many techniques for finding holomorphic curve invariants involve the degeneration of the almost complex structure J making holomorphic curves easier to find in the limit. Here, we discuss two types
of such degenerations which result in the breaking of a symplectic manifold into smaller, simpler pieces to compute holomorphic curve invariants.
4. Definition and Structure of an Exploded Fibrations
In section two of this paper, we will discuss the structure of exploded fibrations. In the third section, we will define the base's structure, and in the fourth section, we will explain how they fit
together. The category of exploded fibrations has well-defined products, so it is possible to deal with degenerations that look like products of this type locally.
5. Intersection Theory in Exploded Fibrations
As shown in section seven, exploded fibrations have a good intersection theory, further establishing the relevance of exploded fibrations in the study of holomorphic curves and symplectic topology.
6. The Perturbation Theory of Holomorphic Curves in Exploded Fibrations
The eighth section of this paper provides a sketch of the perturbation theory of holomorphic curves in exploded fibrations, broadening the understanding of the role of holomorphic curves within the
context of exploded fibrations.
Count based autoencoders and the future for scRNA-seq analysis
Two recent computational methods, scVI by Lopez et al and DCA by Eraslan & Simon et al, are the most promising methods for large scale scRNA-seq analysis in a while. This post will describe what they
do and why it is exciting, and demonstrate the results of running them on three recent datasets.
In almost all cases, to do anything useful in scRNA-seq the tens of thousands of genes measured need to be summarised and simplified. An extremely effective way to summarise many variables through their covariance structure is principal component analysis (PCA). However, scRNA-seq data consists of counts, which have particular behaviours that cause issues with the interpretation and application of PCA.
Potential ways of dealing with this is either figuring out how to transform count data to emulate the characteristics of continuous Gaussian data, or to reformulate PCA for the count setting. While
data transformations have historically had some success, they don’t perform so well for low count numbers, which is the case when massive scRNA-seq experiments are economically feasible.
A couple of years ago Risso et al successfully created a generalized linear factor analysis model for scRNA-seq counts called ZINB-WaVE, based on a zero-inflated negative binomial (ZINB) count
distribution. Omitting some further features, in this model underlying rates of observations of mRNA molecules from specific genes are modelled by a low-dimensional collection of continuous factors.
Every cell has a particular hidden value for these factors, and to fit the model all cells are investigated and assigned the most likely factor values. (In these equations, red color indicates
parameters that need to be inferred.)
\(\begin{align*} Y &\sim \text{ZINB}(\mu, \phi, \pi) \\ \log \mu &= {\color{red}{W}}^{T} {\color{red}{X}} \\ \cdots \end{align*}\)
With these ZINB based methods, the data does not need to be scaled, nor normalised, and in principle the common step of selecting highly variable genes is not necessary. The data just plugs in.
Inherently, learning the X means that each cell is compared to all other cells. The factors (W) can be investigated in attempts to deduce meaning, and in that way we gain knowledge. But if you show
the model a new cell y, it doesn’t know what to do with it. The inference will need to be rerun with the entire dataset including this new cell.
Two new methods, called scVI (single cell variational inference) and DCA (deep count autoencoder) rethinks this model, by moving from factor analysis to an autoencoder framework using the same ZINB
count distribution. (Their titles and abstracts phrase them as imputation methods, which is a bit odd and substantially undersell them!) The two methods have slightly different parameterizations, but conceptually (abusing notation a bit), this is what they both do:
\(\begin{align*} Y &\sim \text{ZINB}(\mu, \phi, \pi) \\ \log \mu &= f_{\mu}(X; {\color{red}{\theta_f}}) \\ \cdots \\ \hat{X} &= g(Y; {\color{red}{\theta_g}}) \end{align*}\)
A parametric function from the observed space (gene counts) to a low-dimensional space is fitted (g), at the same time as a function that maps the low dimensional space to ZINB parameters (f). For
any given cell, you can apply g to get its X-representation, then apply f on that to get parameters, and evaluate the likelihood of the cell. The functions g and f are parameterised as neural
networks because these are flexible and efficient.
This unlocks a lot of benefits. For inference, this setup makes it easier to use stochastic optimization, and is directly compatible with inference on mini-batches: you only need to look at a few
cells at a time, so no matter how many cells you have, you will not run out of memory. Scientifically, this allows you to generalize. You can apply g to any new cell and see where it ends up in the X-space.
By analysing how different regions of the X-space map to gene expression, markers and differential expression can be investigated. And on the converse, if you perform clustering in the X-space, you
can take new cells and evaluate which clusters the g function maps them to.
To illustrate what the methods do, we will apply them to three recent datasets. I picked smaller datasets on the order of ~2,500 cells because I wanted to quickly run them on desktop just for
testing. One from Rosenberg et al 2018, where I randomly sampled 3,000 out of 150,000 developing mouse brain cells. The second one is a taxon of Peripheral sensory neurons from mousebrain.org, which
is part of Zeisel et al 2018. And finally a dataset of male mouse germ cells that I found on GEO, but I couldn’t find an associated paper for (Lukassen 2018). I ran both DCA and scVI with default
parameters: DCA produces a 32-dimensional representation, and scVI a 10-dimensional. To qualitatively inspect the results I ran tSNE on the representations, and colored the cells based on labels
provided from the data sources.
The DCA method is implemented around the anndata Python package, and is very easy to run on any data you have. The scVI implementation requires you to manually wrangle your data into TensorFlow
tensors with correct data types, which can be frustrating if you are not used to it. This does however imply that if you want to scale the inference using out-of-core strategies, scVI directly
supports that.
In terms of run time, DCA was much faster than scVI, finishing in a few minutes. scVI took about an hour for each dataset. A large component is probably that DCA implements automatic early stopping,
while scVI will run for as many epochs as you tell it, even if the fit doesn’t improve.
The scVI method has the option to account for discrete nuisance variables (batch effects), but I did not try this. And even without it, it seems to align the two mice quite well in the Lukassen 2018 data.
I am curious if there is a way to also account for continuous nuisance parameters (e.g. amplification cycles). In ZINB-WaVE this is straightforward, because it is a GLM, but it is not so clear here.
I mentioned clustering in the X-space, it might be possible to formulate these models as structured autoencoders (SVAE), and encourage the g function to learn a representation that favours cell type
segmentation. Notebooks where I ran the methods on the different datasets are available here.
Coordinate methods for matrix games
We develop primal-dual coordinate methods for solving bilinear saddle-point problems of the form $\min_{x \in \mathcal{X}} \max_{y \in \mathcal{Y}} y^{\top} A x$, which contain linear programming, classification, and regression as special cases. Our methods push existing fully stochastic sublinear methods and variance-reduced methods towards their limits in terms of per-iteration complexity and sample complexity. We obtain nearly-constant per-iteration complexity by designing efficient data structures leveraging Taylor approximations to the exponential and a binomial heap. We improve sample complexity via low-variance gradient estimators using dynamic sampling distributions that depend on both the iterates and the magnitude of the matrix entries. Our runtime bounds improve upon those of existing primal-dual methods by a factor depending on sparsity measures of the $m \times n$ matrix $A$. For example, when rows and columns have constant $\ell_1/\ell_2$ norm ratios, we offer improvements by a factor of $m+n$ in the fully stochastic setting and $\sqrt{m+n}$ in the variance-reduced setting. We apply our methods to computational geometry problems, i.e. minimum enclosing ball, maximum inscribed ball, and linear regression, and obtain improved complexity bounds. For linear regression with an elementwise nonnegative matrix, our guarantees improve on exact gradient methods by a factor of $\sqrt{\mathrm{nnz}(A)(m+n)}$.
Publication series: Proceedings - Annual IEEE Symposium on Foundations of Computer Science, FOCS, Volume 2020-November, ISSN (Print) 0272-5428
Conference: 61st IEEE Annual Symposium on Foundations of Computer Science, FOCS 2020, Virtual, Durham, United States, 16/11/20 → 19/11/20
Funders: National Science Foundation (DGE-1656518, CCF-1844855)
• linear regression
• matrix games
• minimax optimization
• stochastic gradient methods
Using pseudorandomness for stronger signal integrity
Scrambling and Descrambling
I was recently asked for some support regarding the usage of scramblers and descramblers. To be honest, so far I didn't have much time to dig into the scrambler and descrambler problematic, but I do know that there are interfaces such as Xilinx's Aurora 8b/10b or Aurora 64b/66b which use these to enhance the protocol. Basically a scrambler/descrambler is an LFSR (Linear Feedback Shift Register) with a predefined length, custom feedback connections and a few XORs. As previously mentioned in one of my previous posts (HERE), these LFSRs are used in many fields and not only for link communications; very good examples are the Gold codes used for GPS satellite identification via a PRNS (pseudo-random noise sequence) generated by an LFSR.
The feedback is described by a generating polynomial; examples are given below. The order of the polynomial relates to the LFSR's length (the number of D-FFs). The implementation of any LFSR is quite straightforward in any Hardware Description Language (HDL). What is more interesting are the properties of the scramblers: they basically randomize the data. Why is that helpful? A randomized signal tends to have better interference properties, as it decreases the amount of spectral spurs to a minimum so that they don't interfere with other channels (PCIe or DisplayPort, for example).
• Xilinx's Aurora 8b/10b encoding: G(x) = 1 + x^3 + x^4 + x^5 + x^16 (PG046)
• Xilinx's Aurora 64b/66b encoding: G(x) = 1 + x^39 + x^58 (SP011)
• PCI-E v3.0 encoding: G(x) = 1 + x^2 + x^5 + x^8 + x^16 + x^21 + x^23
In my opinion, it is always a good decision to visualize things, so we start with a message signal, then scramble (randomize) it, and after that descramble it in order to verify that the scrambler and descrambler are working. I have in fact taken an example from (HERE) and implemented it in Matlab (although it is always a bit tricky to do things such as RTL outside the real hardware). The polynomials are defined for SMPTE 259M: G1(x) = 1 + x^4 + x^9, G2(x) = 1 + x^1. These are actually chained and must therefore be descrambled (unchained) in the reverse order.
As you can see, the scrambled data looks random, which is the expected result. Furthermore, the descrambled signal gives us back the original message delayed by a few bits, which is also the expected behavior. The power spectra of those signals differ in such a way that the power of the scrambled signal is spread out around its peak, while the data (depending on the data type; random data itself doesn't require scrambling) would have just a few peaks. That being said, sending "random" data is the best way to avoid interference. Of course a link/interface has to have a scrambler/descrambler pair on both the TX and RX, which should however be obvious. For testing and more variations of possible polynomials see WIKI. I have actually implemented the LFSR inside Matlab in a way that, at first, I compute the following states for each shift register (the T_ prefix variables), and after all are computed, I "clock" the calculated values into the registers in order to simulate RTL LFSR functionality.
Implementation of the scrambler/descrambler for Xilinx's Aurora 8b/10b is straightforward as well, given a properly stated generating polynomial and using the Fibonacci LFSR. The descrambler is a so-called "self-synchronizing" descrambler, as it doesn't possess any feedback. As a result, any error on the input causes the output to be invalid only for a limited time: after the error is shifted out, the descrambler is fully functional again. Furthermore, unlike SMPTE 259M with NRZI, Aurora 8b/10b requires only one polynomial, as stated before. "+" stands for XOR. The scheme for Aurora 8b/10b is shown below.
The code for the aurora simulation in Matlab is available ➡️ HERE ⬅️.
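To complement the Matlab version, here is a minimal pure-Python sketch of a self-synchronizing (multiplicative) scrambler/descrambler pair of the kind described above. The tap positions are my reading of the Aurora 8b/10b polynomial G(x) = 1 + x^3 + x^4 + x^5 + x^16, and the zero initial register state is an assumption; treat this as a behavioral illustration, not a verified RTL-equivalent implementation:

```python
def scramble(bits, taps, state_len):
    """Self-synchronizing (multiplicative) scrambler.

    Each output bit is the input bit XORed with the tapped shift-register
    bits; the register is fed with the *scrambled* bit, so the matching
    descrambler needs no separate synchronization.
    """
    state = [0] * state_len
    out = []
    for b in bits:
        s = b
        for t in taps:
            s ^= state[t - 1]
        out.append(s)
        state = [s] + state[:-1]  # shift the scrambled bit in
    return out

def descramble(bits, taps, state_len):
    """Feedforward descrambler: identical taps, fed with received bits."""
    state = [0] * state_len
    out = []
    for s in bits:
        b = s
        for t in taps:
            b ^= state[t - 1]
        out.append(b)
        state = [s] + state[:-1]  # fed with the received (scrambled) bit
    return out

# taps for G(x) = 1 + x^3 + x^4 + x^5 + x^16 (my reading of PG046)
taps, n = (3, 4, 5, 16), 16
msg = [1, 0, 1, 1, 0, 0, 1, 0] * 4

scrambled = scramble(msg, taps, n)
recovered = descramble(scrambled, taps, n)
print(recovered == msg)  # True: the pair round-trips
```

Because both registers see the same scrambled bit stream, an input error only corrupts the descrambler output while it sits in the register, which is exactly the self-synchronizing property mentioned above.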
The number of odd n-digit numbers is 3.
Execution time limit is 1 second
Runtime memory usage limit is 64 megabytes
Determine and output the count of odd natural numbers with exactly n digits that fall within the range [a, b].
A single line containing three natural numbers: n, a, and b, separated by spaces (1 ≤ n ≤ 12, 1 ≤ a, b ≤ 10^12).
The solution to the problem.
Submissions 649
Acceptance rate 19%
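Since the statement reduces to simple interval arithmetic, one possible solution can be sketched in Python: intersect [min(a, b), max(a, b)] with the n-digit range [10^(n-1), 10^n - 1], then count the odd numbers in the intersection in O(1). (Normalizing the interval with min/max is a defensive assumption, since the statement does not say a ≤ b.)

```python
def count_odd_n_digit(n, a, b):
    # n-digit numbers live in [10^(n-1), 10^n - 1]; for n = 1 that is [1, 9]
    lo_n = 10 ** (n - 1)
    hi_n = 10 ** n - 1
    lo = max(lo_n, min(a, b))
    hi = min(hi_n, max(a, b))
    if lo > hi:
        return 0
    # odd numbers in [lo, hi]: odds below hi+1 minus odds below lo
    return (hi + 1) // 2 - lo // 2

print(count_odd_n_digit(2, 10, 15))  # → 3 (namely 11, 13, 15)
```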
Econometric Sense
Social network analysis focuses on finding patterns in interactions between people or entities. These patterns may be described in the form of a network. Network analysis in general has many
applications including models of student integration and persistence, business to business supply chains, terrorist cells, or analysis of social media such as Facebook and Twitter. This presentation
provides a reference for basic concepts from social network analysis with examples using tweets from Twitter.
read full text pdf
HT: Yangchang via RDM (R datamining) group on LinkedIn.
Social Network Analysis with R
There is a tutorial on Network Analysis with package igraph by Gabor Csardi at http://igraph.sourceforge.net/igraphbook/. Although the tutorial is still under development, it provides some useful R
code examples on
- directed and undirected graphs;
- creating regular graphs, incl. full graphs, stars, rings, lattices and trees;
- creating graphs from real-world data;
- various random graphs;
- importing and exporting graphs in various formats, such as edge list files and Pajek format;
- Vertex and edge sequences and their indexing; and
- network flows and minimum cuts.
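The tutorial's examples are in R's igraph; as a language-neutral illustration of the most basic idea in that list (building a graph from real-world edge data), here is a pure-Python sketch that stores an undirected graph as an adjacency structure and reads off vertex degrees:

```python
from collections import defaultdict

def degrees(edge_list):
    """Vertex degrees of an undirected graph given as (u, v) edge pairs."""
    adj = defaultdict(set)
    for u, v in edge_list:
        adj[u].add(v)
        adj[v].add(u)
    return {v: len(nbrs) for v, nbrs in adj.items()}

# a small star graph: hub 'a' connected to three leaves
print(degrees([("a", "b"), ("a", "c"), ("a", "d")]))
# → {'a': 3, 'b': 1, 'c': 1, 'd': 1}
```

Anything beyond degree counts (layouts, community detection, flows) is where a dedicated package like igraph earns its keep.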
Another online resource on R for Social Network Analysis is available at
An online textbook on Introduction to social network methods can be found at http://www.faculty.ucr.edu/~hanneman/nettext/.
Drew Conway has also done a ton of work in this area and got me interested in this area some time ago.
Recently from R-Bloggers: http://www.r-bloggers.com/network-visualization-in-r-with-the-igraph-package/
I found the following from Denver SAS Users Group presentation (link):
I use SAS in our advanced undergraduate statistics courses.
The advantages of using SAS over the common spreadsheet-based statistical packages,
such as Minitab or SPSS, are:
1. The students are exposed to the logic of data management and data manipulations.
2. It involves creative programming, while other packages are mostly menu driven.
3. The wide scope of statistical capabilities enables us to work with more elaborate
4. Knowledge of SAS gives an advantage in the job market.
Yet, it is not too difficult for students to learn it (At the basic level).
I would agree, and this would just as easily apply to R as to SAS on all counts above. Most important, I think, is how spreadsheets and menu-driven packages overlook #1, which is just as important, I
repeat just as important as the actual analysis. In the age of data science, not being equipped to manage and manipulate data (hacking skills) can leave you high and dry. Who cares if you can run a
regression, the IT department probably isn't going to have time to get you the data exactly as you need it, and it might take several iterations of requests to get just what you need. And, without
hacking skills, you may not even have the ability to recognize the fact that the data you have is not in fact the data you think it is. And who says IT will always be able to hand you the data you
want. Learning statistics with an attitude that just assumes the data will be there, and then moving on with theory and analysis, is OK as long as you follow up with another course that includes hacking/coding/data management. And guess what, that's not going to be done easily without some scripting language vs. pointing and clicking. The mindset of "I'll master stats and the data will just come on its own" might make a great statistician, but a poor data scientist.
The importance of these skills are illustrated very clearly in a recent Radar O'Reilly piece Building Data Science Teams.
"Most of the data was available online, but due to its size, the data was in special formats and spread out over many different systems. To make that data useful for my research, I created a system
that took over every computer in the department from 1 AM to 8 AM. During that time, it acquired, cleaned, and processed that data. Once done, my final dataset could easily fit in a single computer's
RAM. And that's the whole point. The heavy lifting was required before I could start my research. Good data scientists understand, in a deep way, that the heavy lifting of cleanup and preparation
isn't something that gets in the way of solving the problem: it is the problem."
I think that last statement says it all.
And, besides valuable jobs skills, learning to code offers students a sense of empowerment. As quoted from a recent piece in Slate (HT Stephen Turner @ Getting Genetics Done):
“Learning to code demystifies tech in a way that empowers and enlightens. When you start coding you realize that every digital tool you have ever used involved lines of code just like the ones you're
writing, and that if you want to make an existing app better, you can do just that with the same foreach and if-then statements every coder has ever used.”
This has been kind of a rant. I'm not sure what the solution is. I'm not sure how much of this can actually be taught in the classroom, and the time constraints can be binding. I'm not sure data
management and statistical analysis both need to be part of the same course. I learned a lot of both on the job, and still have much to learn from a coding perspective. But I think at least making
the effort to acquaint yourself with a language that is used industry wide (like SAS or R, or even SPSS if the scripting language is introduced) as opposed to just any point and click interface with
little data management capability seems to me to at least be a start.
Obtaining permutations in a circle
Permutations in a row, or permutations in a line, of n = 4 objects taken r = 3 at a time, i.e. the length-3 permutations of {a, b, c, d}, are obtained in Mathematica using Permutations[{a, b, c, d}, {3}]. I do not know if Permutations[{a, b, c, d}, {3}] can be tweaked in Mathematica to obtain permutations in a circle of n = 4 objects taken r = 3 at a time. Regardless, how can permutations in a circle of n = 4 objects taken r = 3 at a time be obtained in Mathematica?
11 Replies
I misunderstood. Deleted.
Lol. I'd love to be invited to your next party!
Me, I don’t care if Fred is to my left and Sheila to my right, or vice versa. Both are gonna get cocktail sauce on them either way.
Per the question I was interested in r<n but then I want to believe that once it works for r<n it should work for r=n
Yes BIG MAN DANIEL, to permute 3 of 4 elements and keep one fixed, as is required for permutations in a circle, which uses the formula (n choose r) × (r−1)!, as opposed to permutations in a line or row, which uses the formula (n choose r) × r!
Maybe I'm misunderstanding the OP's question. I interpret it that there are 4 persons showing up for dinner but you only have 3 chairs to seat them at a round table. I think the question is to
generate all possible seating arrangements (with just 3 being seated at a time).
That means one would first find all of the ways 3 out of 4 dinner guests could be chosen. For each of those sets of 3, all possible circular arrangements would be generated. For this example that
would be (3-1)! = 2 ways for each set of 3.
So I see the question being general for $n \geq r \geq 2$. For $n = r \geq 2$, one just has to fix 1 dinner guest and then permute the rest as usual, resulting in $(n-1)!$ arrangements. No need for any reversing.
But the OP will need to clarify.
I do not know what it means to permute 3 of 4 elements. Keep one fixed?
As for reversals, (1,4,3,2) is the same on a circle as (1,2,3,4)— just travel ccw from 1 instead of cw.
You've described an approach when $r=n$. But I think the OP is also interested in $r<n$. (Although I'm not yet convinced that any reversing is necessary in the case you describe. I'll think about
that a bit more.)
Put element 1 first. Permute 2,3,…,n. If first exceeds last, reverse them. Then list 1 followed by that (possibly reversed) permutation.
This assumes I understand the problem correctly.
Here's an alternative approach:
circlePermutations[object_, r_?IntegerQ] := Module[{t},
If[r <= Length[object],
t = {#[[1]], Permutations[#[[2 ;;]]]} & /@ Subsets[object, {r}];
Flatten[MapThread[Join, {ConstantArray[{#[[1]]}, Length[#[[2]]]], #[[2]]}] & /@ t, 1],
Print[ToString[r] <> " is greater than the length of " <> ToString[object] <> "."]]]
circlePermutations[{a, b, c, d}, 3]
(* {{a, b, c}, {a, c, b}, {a, b, d}, {a, d, b}, {a, c, d}, {a, d, c}, {b, c, d}, {b, d, c}} *)
I couldn't find any built in functions or options that would produce circle permutations. Here is a way to do it.
First, define a function that puts a permutation into some canonical order. I chose to rotate the permutation until its "minimal" element (by sort order) is first.
CanonicalizePermutation[perm_List] :=
NestWhile[RotateLeft, perm, Not@*MatchQ[{1, ___}]@*Ordering]
If we take a list of permutations and put them all in this canonical form, we can then use Union or DeleteDuplicates or just DeleteDuplicatesBy.
DeleteDuplicatesBy[Permutations[{a, b, c, d}, {3}], CanonicalizePermutation]
(* {{a, b, c}, {a, b, d}, {a, c, b}, {a, c, d}, {a, d, b}, {a, d, c}, {b, c, d}, {b, d, c}} *)
DeleteDuplicates[CanonicalizePermutation /@ Permutations[{a, b, c, d}, {3}]]
(* {{a, b, c}, {a, b, d}, {a, c, b}, {a, c, d}, {a, d, b}, {a, d, c}, {b, c, d}, {b, d, c}} *)
Union[CanonicalizePermutation /@ Permutations[{a, b, c, d}, {3}]]
(* {{a, b, c}, {a, b, d}, {a, c, b}, {a, c, d}, {a, d, b}, {a, d, c}, {b, c, d}, {b, d, c}} *)
FIXED DISTANCE - Translation in Swedish - bab.la
The point returned by the Midpoint Formula is the same distance from each of the given points, and this distance is half of the distance between the given points. Therefore, the Midpoint Formula did
indeed return the midpoint between the two given points. Theorem 101: If the coordinates of two points are ( x 1, y 1) and ( x 2, y 2), then the distance, d, between the two points is given by the
following formula (Distance Formula). Example 1: Use the Distance Formula to find the distance between the points with coordinates (−3, 4) and (5, 2). These formulas are easily derived by constructing a right triangle with a leg on the hypotenuse of another (with the other leg orthogonal to the plane that contains the first triangle) and applying the Pythagorean theorem. This distance formula can also be expanded into the arc-length formula. Using Pythagoras' Theorem we can develop a formula for the distance d.
Distance in Euclidean space. In the Euclidean space R n, the distance between two points is usually given by the Euclidean distance (2-norm distance). How it works: Just type numbers into the boxes
below and the calculator will automatically calculate the distance between those 2 points . How to enter numbers: Enter any integer, decimal or fraction.
Distance formula for a 2D coordinate plane: Where (x 1, y 1) and (x 2, y 2) are the coordinates of the two points involved. Distance formula, Algebraic expression that gives the distances between
pairs of points in terms of their coordinates (see coordinate system). In two- and three-dimensional Euclidean space, the distance formulas for points in rectangular coordinates are based on the
Pythagorean theorem.
Distance Formula. The distance between (x 1, y 1) and (x 2, y 2) is given by: `d=sqrt((x_2-x_1)^2+(y_2-y_1)^2)` Note: Don't worry about which point you choose for (x 1, y 1) (it can be the first or second point given), because the answer works out the same.
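As a quick check of the formula above, here is a minimal Python version; the point values are the ones from Example 1 earlier in the text.

```python
import math

def distance(p, q):
    # d = sqrt((x2 - x1)^2 + (y2 - y1)^2)
    return math.sqrt((q[0] - p[0]) ** 2 + (q[1] - p[1]) ** 2)

d = distance((-3, 4), (5, 2))  # sqrt(8^2 + (-2)^2) = sqrt(68) ≈ 8.25
```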
You would have noticed that the body stops completely after covering a certain distance. This is called the stopping distance. The distance formula is a way of finding the distance between two points. It does this by creating a virtual right triangle and using the Pythagorean theorem. The distance formula has a 2D (two-dimensional) variation and a 3D (three-dimensional) variation. The 2D distance formula is given as: d = √((x₂ − x₁)² + (y₂ − y₁)²).
Instead, let's try the Distance Formula. The Distance Formula is a variant of the Pythagorean Theorem that you can use to find how far apart two points are. It also turns up when deriving equations for parabolas from the focus and the directrix, and equations for parabolas with a vertex at the origin using the general formulas.
An easy way to remember the formulae is to put distance, speed and time (or the letters D, S and T) into a triangle. The triangle will help you remember these three rules: Distance = Speed × Time, Speed = Distance ÷ Time, and Time = Distance ÷ Speed. Formulas are also known for computing distances between different types of objects, such as the distance from a point to a line. In advanced mathematics, the concept of distance has been generalized to abstract metric spaces, and other distances than Euclidean have been studied. The braking distance is affected by the vehicle's speed (quadratic increase; "raised to the power of 2"): 2 x higher speed = 4 x longer braking distance.
Other distances with other formulas are used in non-Euclidean geometry.
By the Pythagorean theorem, we can derive the distance formula; using the distance formula is much easier than applying the Pythagorean theorem directly. AB = √[(x2 - x1)² + (y2 - y1)²], where the points are A(x1, y1) and B(x2, y2). Let us look at how this formula is derived. The distance formula is a formula that is used to find the distance between two points. These points can be in any dimension. For example, you might want to find the distance between two points on a line (1d), two points in a plane (2d), or two points in space (3d).
The first set of input values are for the formula and graph below: H(x) = −0.0125x² + 0.5625x + 1.8, where H(x) indicates the height of the spear in meters and x indicates the distance. A distance attenuation calculator is a physics/math calculator for finding how the sound level in dB decreases with distance from the sound source. For distances on the Earth's surface there is the Haversine formula, which I found on Stack Overflow when looking to show a user's distance from a given store location. The distance formula between two points in the plane, in Java: public double euclidDistance(Point p1) { return Math.sqrt(Math.pow(p1.getX() - this.x, 2) + Math.pow(p1.getY() - this.y, 2)); }
|
{"url":"https://hurmanblirrikmfmf.web.app/42744/51437.html","timestamp":"2024-11-14T02:14:14Z","content_type":"text/html","content_length":"18597","record_id":"<urn:uuid:18bec54b-95aa-4a07-8359-be4ef53dc07a>","cc-path":"CC-MAIN-2024-46/segments/1730477028516.72/warc/CC-MAIN-20241113235151-20241114025151-00787.warc.gz"}
|
Kruskal Algorithm
Theoretical report on connected components and kruskal algorithm, with tests and related analysis.
The aim of this exercise is to find and evaluate the number of connected components using the Kruskal algorithm. The Kruskal algorithm is a widely used algorithm in graph theory for finding the
minimum spanning tree of a connected weighted graph.
To begin, we start by generating a random graph with a user-specified number of nodes. Each node in the graph represents a vertex, and we determine the probability of arcs (edges) between vertices.
The probability can range from 0 to 1, allowing us to control the density of connections in the graph.
After generating the random graph, we randomly assign weights to each arc. These weights represent the cost or distance associated with traversing the edge between two vertices. The weights can be
assigned according to various criteria, such as a uniform distribution or specific weight ranges.
Next, we employ the Kruskal algorithm, utilizing the efficient union-find data structure. The algorithm operates by iteratively selecting the edges with the lowest weights while ensuring that no
cycles are formed. By connecting the vertices through these selected edges, we construct a minimum spanning tree, which spans all the nodes in the graph while minimizing the total weight.
def Kruskal_algo(self, S, dictionary):
    result = []
    n = 0
    for node in self.nodes:
        self.makeSet(node, S)
    # sort the arcs by weight; dictionary maps (u, v) pairs to weights
    sortedArchs = {k: w for k, w in sorted(dictionary.items(), key=lambda item: item[1])}
    for arc in sortedArchs:
        if self.find(arc[0], S) != self.find(arc[1], S):
            self.union(arc[0], arc[1], S)
            result.append(arc)  # keep the accepted arc in the spanning forest
            n = n + 1
            if n == len(self.nodes) - 1:
                break
    return result  # also reached when the graph is disconnected
Once we have constructed the minimum spanning tree using the Kruskal algorithm, we can determine the number of connected components in the original graph. Connected components are subsets of vertices
within the graph where each vertex is connected to at least one other vertex in the subset. The number of connected components reflects the level of connectivity and can provide insights into the
graph’s structure.
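The relationship between the spanning forest and the component count can be made concrete. The following is a hedged, self-contained sketch (not the project's actual code): every accepted arc merges two components, so the number of components equals the number of nodes minus the number of arcs kept.

```python
def find(parent, x):
    # Union-find "find" with path halving
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def kruskal_components(nodes, weighted_edges):
    parent = {v: v for v in nodes}
    forest = []
    for u, v, w in sorted(weighted_edges, key=lambda e: e[2]):
        ru, rv = find(parent, u), find(parent, v)
        if ru != rv:            # accepting the arc merges two components
            parent[ru] = rv
            forest.append((u, v, w))
    # every accepted arc reduces the component count by one
    return len(nodes) - len(forest), forest

# toy graph with two components: {0, 1, 2} and {3, 4}
count, forest = kruskal_components(range(5), [(0, 1, 2), (1, 2, 1), (0, 2, 3), (3, 4, 5)])
```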
To evaluate the performance and behavior of the algorithm, we conduct several tests with different numbers of nodes in the graph. In particular, we choose to run the tests with 5, 50, and 500 nodes
to observe how the algorithm scales with varying graph sizes. By analyzing the results, we can gain a better understanding of the algorithm’s efficiency, scalability, and ability to accurately
identify connected components.
This exercise provides a hands-on opportunity to explore graph algorithms, specifically the Kruskal algorithm, and understand its practical applications in solving connectivity-related problems. The
implementation and analysis of the algorithm on graphs of different sizes offer valuable insights into its performance characteristics and its potential use in various real-world scenarios.
You can find my full project on this Github Repository.
|
{"url":"https://niccoloparlanti.com/projects/kruskal/","timestamp":"2024-11-11T07:52:56Z","content_type":"text/html","content_length":"8555","record_id":"<urn:uuid:c2b0a88a-2b8d-44e3-b125-969f345899e4>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00451.warc.gz"}
|
Problem configuring OCTOSPI on STM32H735 - writes going to the wrong location
2024-10-10 09:42 AM
We have been using the STM32H735G-DK eval board for a while with no issues with the STM OCTOSPI interface, which is connected to an S70KL1281DABHI023. When switching to our target we had to change to using an S27KS0642GABHI020 HyperBus RAM device.
We have been unsuccessful configuring the STM interface to allow valid communications with the HyperBus RAM and would really welcome some assistance.
The effect seen is that;
• write any value to the HyperBus RAM at address 0x04
• Read back 0x04 and it reports 0x00
• However using uVision "memory watch" the data written to 0x4 is now read back from address 0x00
• We can repeat this for 0x5, 0x6, and 0x7, and these read back in 0x1, 0x2, and 0x3 respectively
• write to 0x08, and the data disappears completely.
• write to 0xC and it appears in 0x08.
• And repeat through the memory.
The output of the write/read test for addresses 0x00 to 0x1f is shown below.
write h base=70000000 Base[0x0] w=0x00
write h base=70000000 Base[0x1] w=0x01
write h base=70000000 Base[0x2] w=0x02
write h base=70000000 Base[0x3] w=0x03
write h base=70000000 Base[0x4] w=0x04
write h base=70000000 Base[0x5] w=0x05
write h base=70000000 Base[0x6] w=0x06
write h base=70000000 Base[0x7] w=0x07
write h base=70000000 Base[0x8] w=0x08
write h base=70000000 Base[0x9] w=0x09
write h base=70000000 Base[0xa] w=0x0a
write h base=70000000 Base[0xb] w=0x0b
write h base=70000000 Base[0xc] w=0x0c
write h base=70000000 Base[0xd] w=0x0d
write h base=70000000 Base[0xe] w=0x0e
write h base=70000000 Base[0xf] w=0x0f
write h base=70000000 Base[0x10] w=0x10
write h base=70000000 Base[0x11] w=0x11
write h base=70000000 Base[0x12] w=0x12
write h base=70000000 Base[0x13] w=0x13
write h base=70000000 Base[0x14] w=0x14
write h base=70000000 Base[0x15] w=0x15
write h base=70000000 Base[0x16] w=0x16
write h base=70000000 Base[0x17] w=0x17
write h base=70000000 Base[0x18] w=0x18
write h base=70000000 Base[0x19] w=0x19
write h base=70000000 Base[0x1a] w=0x1a
write h base=70000000 Base[0x1b] w=0x1b
write h base=70000000 Base[0x1c] w=0x1c
write h base=70000000 Base[0x1d] w=0x1d
write h base=70000000 Base[0x1e] w=0x1e
write h base=70000000 Base[0x1f] w=0x1f
read h base=70000000 Base[0x0] r=0x04 =! w=0x 0
read h base=70000000 Base[0x1] r=0x05 =! w=0x 1
read h base=70000000 Base[0x2] r=0x06 =! w=0x 2
read h base=70000000 Base[0x3] r=0x07 =! w=0x 3
read h base=70000000 Base[0x4] r=0x00 =! w=0x 4
read h base=70000000 Base[0x5] r=0x00 =! w=0x 5
read h base=70000000 Base[0x6] r=0x00 =! w=0x 6
read h base=70000000 Base[0x7] r=0x00 =! w=0x 7
read h base=70000000 Base[0x8] r=0x0c =! w=0x 8
read h base=70000000 Base[0x9] r=0x0d =! w=0x 9
read h base=70000000 Base[0xa] r=0x0e =! w=0x a
read h base=70000000 Base[0xb] r=0x0f =! w=0x b
read h base=70000000 Base[0xc] r=0x00 =! w=0x c
read h base=70000000 Base[0xd] r=0x00 =! w=0x d
read h base=70000000 Base[0xe] r=0x00 =! w=0x e
read h base=70000000 Base[0xf] r=0x00 =! w=0x f
read h base=70000000 Base[0x10] r=0x14 =! w=0x10
read h base=70000000 Base[0x11] r=0x15 =! w=0x11
read h base=70000000 Base[0x12] r=0x16 =! w=0x12
read h base=70000000 Base[0x13] r=0x17 =! w=0x13
read h base=70000000 Base[0x14] r=0x00 =! w=0x14
read h base=70000000 Base[0x15] r=0x00 =! w=0x15
read h base=70000000 Base[0x16] r=0x00 =! w=0x16
read h base=70000000 Base[0x17] r=0x00 =! w=0x17
read h base=70000000 Base[0x18] r=0x1c =! w=0x18
read h base=70000000 Base[0x19] r=0x1d =! w=0x19
read h base=70000000 Base[0x1a] r=0x1e =! w=0x1a
read h base=70000000 Base[0x1b] r=0x1f =! w=0x1b
read h base=70000000 Base[0x1c] r=0x00 =! w=0x1c
read h base=70000000 Base[0x1d] r=0x00 =! w=0x1d
read h base=70000000 Base[0x1e] r=0x00 =! w=0x1e
read h base=70000000 Base[0x1f] r=0x00 =! w=0x1f
The main OCTOSPI settings are
#define OCTOSPI_DEVICE_SIZE 23
#define OCTOSPI_FIFO_THRESHOLD 4
#define OCTOSPI_FIFO_CLOCK_PRESCALER 4
#define OCTOSPI_CHIP_SELECT_HIGH_TIME 8
#define OCTOSPI_TRANSFER_RATE 250
#define OCTOSPI_HYPERRAM_RW_RECOVERY 3
#define OCTOSPI_MAX_TRANSFER 0
#define OCTOSPI_HYPERRAM_LATENCY 6
The OCTOSPI mux clock is 200 MHz.
|
{"url":"https://community.st.com/t5/stm32-mcus-products/problem-configuring-octospi-on-stm32h735-writes-going-to-the/td-p/729982","timestamp":"2024-11-07T02:16:30Z","content_type":"text/html","content_length":"287204","record_id":"<urn:uuid:a7f93bed-c54c-47b5-8bb3-c7233a98e6c5>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00390.warc.gz"}
|
Tennis Server - Tennis Science, Engineering and Technology - Trajectories 101A: The Flight Of The Tennis Ball
Trajectories 101A
The Flight Of The Tennis Ball
Jani Macari Pallis, Ph.D.
One of the most common questions that we received from folks during our Tennis Sport Science Project (wings.avkids.com/Tennis) was on the math and physical science governing the flight of tennis balls.
You know from experience the factors which govern a tennis ball's motion (speed, spin, height the ball is hit, etc.) but you may have wondered how to predict the effects on the ball's flight.
Especially as discussions and questions arose with the introduction and testing of the larger Type 3 ball, you might have wanted to know exactly how the ball's diameter would affect the speed or
impact point with the court. You even may have pulled out a high school or introductory college physics textbook only to find that those simplified trajectory equations you remembered didn't allow
you to determine what would happen if the ball's diameter changed.
This is Trajectories 101A. Yes, this implies that there will be a 101B! This month we'll go through the basic trajectory science and next month finish off by putting the math together with some fun examples.
At this point, to make sure everyone has the same knowledge, we'll start with very basic definitions. You may wonder why we're discussing some of this right now but it all ties together in the end.
To model the flight of a tennis ball mathematically you need to determine what factors affect the ball's flight. Each of these factors in turn will have a mathematical expression that we'll use next
time. For now and for this discussion we're going to limit ourselves to flight from the time the ball is struck to the instant before the first bounce. Let's just deal with serves right now.
What factors affect the flight of a tennis ball? Go ahead and jot them down. Your list might include: speed of the ball, angle of the racquet on impact with the ball, height at which the ball is
struck, dwell time of the ball on the racquet, spin direction (topspin, underspin, slice), spin rate, environmental conditions (like wind on the court, altitude), ball size, type of ball (court
specific, pressurized or pressureless), ball condition (new, used), etc. I've actually played "Devil's Advocate" here. Some of these factors do affect the trajectory equations directly while others do not.
There's actually a method to solving all of these types of problems. You start by defining the forces that affect the situation. The term "forces" in physics and aerodynamics has a very specific
meaning and very specific numerical units (like a newton) associated with it. Subsequently, the math equation for the forces around the tennis ball is related to the equation for the ball's acceleration (rate of change of velocity), which in turn is related to equations for velocity and position on the court. So when we determine these forces, we'll have the key to solving the math for
velocity along any point on the trajectory as well as where the ball will land.
The trajectory of a tennis ball flight is affected by three forces: the ball's weight, the drag (or retarding force caused by the air moving across the ball's fabric cover, against the ball's round
shape and even by its spin) and lift. Lift is the force that pushes an airplane or a bird up against its weight and is created primarily by the movement of the air around the wings. The word "lift"
is a little misleading, because one meaning of the word is "to rise." On an airplane normally (and thankfully) that means "up." But in aerodynamics lift has a very specific definition and is not
always "up" as you'll see. On a tennis ball aerodynamic lift generally is created through spin. (The last segment of this article is devoted to spin and lift.) These three forces act in a
"tug-of-war" or "arm-wrestle" between each other. The ball's path is the result of the combined strength and direction of these forces. Change one of these forces and the trajectory changes.
Lift and drag are both dependent on air density, velocity, and the ball's diameter. Additionally, lift is dependent on a coefficient of lift (C[l]) and drag on a coefficient of drag (C[d]).
C[l] and C[d] are numbers determined through wind tunnel tests or a series of trajectory tests. C[d] is affected by the object's surface roughness, shape and, for some sports balls, even the spin rate. C[l] is
affected by the ball's spin rate. C[l] and C[d] both change with velocity, so as the ball slows down from the time it is struck to the time it first hits the court, C[l] and C[d] may change values.
You may have already figured out some factors, like the weight of the ball, will directly "plug into" the trajectory math equations. Other factors, such as the altitude of the court, will be used to
calculate one of the variables that will be plugged in. In this case, the altitude of the court affects the value of the air's density used in the lift and drag equations. Still other factors
influence one of the variables, but are not used in the calculations. Dwell time on the racquet and racquet tension may affect the ball's speed, as would player biomechanics but only the ball's
resulting velocity is plugged into the trajectory equations regardless of how that velocity was generated. Finally, some factors will have no influence on the math equations at all - once the ball is
set in motion whether a ball is pressurized or pressureless will not affect the trajectory equations as long as the balls' weight, lift and drag are the same. So when we "do the math" next month,
we'll be separating out the items in our list that plug right into the math, versus the ones that "influence" one of these factors.
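To make the force "tug-of-war" concrete, here is a rough numerical sketch using the standard aerodynamic force magnitude F = ½ρv²AC. The serve speed, ball mass, and coefficient value below are illustrative assumptions, not measurements from the article:

```python
import math

def aero_force(rho, v, diameter, coeff):
    # F = 0.5 * rho * v^2 * A * C, with A the ball's cross-sectional area
    area = math.pi * (diameter / 2) ** 2
    return 0.5 * rho * v ** 2 * area * coeff

rho = 1.225     # kg/m^3, air density near sea level
v = 50.0        # m/s, roughly a fast serve (assumed)
d = 0.067       # m, approximate tennis ball diameter (assumed)
drag = aero_force(rho, v, d, 0.55)   # assumed drag coefficient
weight = 0.0577 * 9.81               # N, weight for an assumed 57.7 g ball
```

With these assumed numbers the drag force (about 3 N) is several times the ball's weight (about 0.57 N), which is one reason a served ball slows so noticeably in flight.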
I wanted to share one final aspect of ball trajectories. Maybe you've wondered why a topspin ball drops faster than a ball with underspin?
Here are some photos from a wind tunnel test conducted with my colleague Dr. Rabi Mehta from the NASA Ames Research Center. Wind tunnels are used to visually study the flow over the ball as well as
to determine the numerical values of the lift and drag coefficients. This tunnel (and in general most wind tunnels) is not used to observe a ball's trajectory.
In Figure 1, the tennis ball is not spinning. The ball is stationary while the smoke is moving from left to right, simulating a ball moving from right to left. Early aerodynamicists recognized it
didn't matter if an object moved through a fluid (a liquid or a gas) at a certain speed or the object was held stationary and the fluid moved over the a stationary object at that same speed - the
forces would be the same. Clearly, wind tunnel testing makes determining whether or not an object like a new aircraft wing design will have enough lift much simpler and safer than just building the
wing and flying it on the aircraft.
There is a lot of information that can be determined just from studying videos of the tunnel tests and still photos. You can see that the flow over the top of the ball looks just about like a mirror
image (symmetrical) to the flow across the bottom of the ball. You can see that the smoke and air follow the contour of the ball until the smoke reaches the right side of the ball. Then the airflow
isn't able to negotiate the turn around the backside of the ball. The airflow has really separated from the ball. So although the top and bottom halves of the photo look the same, the right and left
sides of the photo do not. That empty voided looking area on the right hand side of the photo behind the ball, in the middle of the picture is the wake. The wake is an indication of how much drag
there is. For comparison, Figure 2 is a bowling ball -- much, much smoother than the tennis ball nap cover. You can see on the bowling ball that the wake on the right hand side is much smaller. There
is a lot less drag force on the bowling ball.
Now let's examine spinning balls. Figure 3 is a ball with topspin and Figure 4 is a ball with underspin. What's different? Let's start with Figure 3 (topspin). The top and bottom halves of the photo
no longer look identical. On the right side of the topspin ball photo the wake is tilted up. The airflow behind the topspin ball is being forced upwards. Compare this with Figure 4 (underspin) where
the wake is pointed down.
Okay - but in topspin the ball drops faster. Why is the wake tilted up for topspin - wouldn't that mean that topspin ball is being pushed up and not dropping faster? The direction of the wakes might
be a little puzzling at first. Now we need to remember Newton's Third Law Of Motion: "For every action there is an equal and opposite reaction." The fact that the wake is pointed upwards signifies
that the ball is actually being forced down. For underspin the opposite is true - the wake pointing down signifies that the ball is being forced up. If instead of a ball you held a pen in your hand
and pushed the right side of the pen up, which direction would the left side of the pen point? It would point down. The same thing basically happens on the ball, the airflow pushes the right side of
the ball up, pushing the left side of the ball moving along the flight path down.
That's all for Trajectories 101A for now. Hope you will visit the Tennis SET column next month for the conclusion. We'll specifically go through the math equations that take you from forces to the
location on the court. We'll do a flight comparison for the US Open in New York (located at about sea level) versus a ball under all the same conditions at the imaginary "Mount Everest Open" located
at 29,000 feet or 8800 meters above sea level. (You'll see curving a ball with spin is much more difficult at higher altitudes.)
Don't hesitate to write me using this form if you have questions.
Wishing You A Healthy And Prosperous New Year!
|
{"url":"http://www.tennisserver.com/set/set_02_01.html","timestamp":"2024-11-09T01:31:00Z","content_type":"application/xhtml+xml","content_length":"53128","record_id":"<urn:uuid:84d1bd21-a369-46d1-99b6-95a09a4c1e20>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00643.warc.gz"}
|
Quantitative Techniques in Management (VVN)
We Also Provide SYNOPSIS AND PROJECT.
Contact www.kimsharma.co.in for best and lowest cost solution or
Email: solvedstudymaterial@gmail.com
Call: +91 82907-72200 (Call/WhatsApp) or +91 88003-52777 (WhatsApp Only)
1 . What do you understand by a Linear Programming Problem? What are its limitations? Discuss briefly the applications of linear programming in any functional area of management.
2 . Solve the following transportation problem for optimal solution.
W1 W2 W3 W4 W5 quantity
P1 20 28 32 55 70 50
P2 48 36 40 44 25 100
P3 35 35 22 45 48 150
Demand 100 70 50 40 40
3 . What is the Hungarian method for the assignment problem?
4 . What do you mean by correlation? Explain various types of correlation with the help of examples.
5 . What is the probability of getting a sum ‘FOUR’ when two dice are thrown?
6 . Discuss various components of time series with the help of examples.
7 . For the given values of a random variable x and associated probabilities (given in rows 1 and 2 of the following table), work out the variance and standard deviation
X 2 3 4 5 6 7 8 9 10 Total
P(x) .05 .10 .30 .20 .05 .10 .05 .10 .05 1.00
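Question 7 can be checked numerically. Below is a hedged Python sketch (not part of the assignment material) computing E[X], Var(X) = E[X²] − (E[X])², and the standard deviation from the table's values:

```python
import math

x = [2, 3, 4, 5, 6, 7, 8, 9, 10]
p = [0.05, 0.10, 0.30, 0.20, 0.05, 0.10, 0.05, 0.10, 0.05]

mean = sum(xi * pi for xi, pi in zip(x, p))                     # E[X] = 5.4
variance = sum(xi**2 * pi for xi, pi in zip(x, p)) - mean**2    # 33.9 - 29.16 = 4.74
sd = math.sqrt(variance)                                        # ≈ 2.18
```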
8 . Boys of a certain age are known to have a mean weight of μ = 45 Kilograms. A complaint is made that the boys living in a municipal children’s home are underfed. As one bit of evidence, n = 50 boys (of the same age) are weighed and found to have a mean weight of x̄ = 41.5 Kilograms. It is known that the population standard deviation σ is 5.6 Kilograms (the unrealistic part of this example!). Based on the available data, what should be concluded concerning the complaint?
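For Question 8, since σ is known and n = 50 is large, a one-sample z-test applies. A hedged Python sketch of the computation (the critical value in the comment assumes a one-tailed test at the 5% level):

```python
import math

mu0, xbar, sigma, n = 45.0, 41.5, 5.6, 50
z = (xbar - mu0) / (sigma / math.sqrt(n))
# z ≈ -4.42, far below the one-tailed 5% critical value of -1.645,
# so we reject H0: mu = 45 — the data support the underfeeding complaint
```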
Case Detail :
The length of life of an instrument produced by a machine has a normal distribution with a mean of 14 months and a standard deviation of 2.5 months. Find the probability that an instrument produced by this machine will last
1. between 10 and 14 months.
2. less than 10 months.
3. more than 10 months.
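The case questions follow from the normal CDF with the stated mean 14 and standard deviation 2.5. A hedged Python sketch using the standard library's statistics.NormalDist:

```python
from statistics import NormalDist

life = NormalDist(mu=14, sigma=2.5)

p_between = life.cdf(14) - life.cdf(10)   # ≈ 0.4452 (z from -1.6 to 0)
p_less = life.cdf(10)                     # ≈ 0.0548
p_more = 1 - life.cdf(10)                 # ≈ 0.9452
```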
1. Scatter diagram is also called _________________
Correlation graph
Dot Chart
Zero correlation
None of these
2. Correlation can be ____________________________________________
Positive only
Positive or negative
Negative only
None of these
3. In correlation analysis, P.E. = ________________. x 0.6745
Standard Error
Probable Error
Correlation analysis
None of these
4. Regression lines are also called ________________________.
Correlation graph
Scatter diagram
Estimating lines
None of these
5. The arithmetic mean of bxy and byx is ____________________________.
Equal to 1
Equal to 2
Greater than r
Less than r
6. ____________________________. refers to the chance of happening or not happening of an event.
None of these
7. An event whose occurrence is impossible, is called ______________________
Sure event
Impossible event
Uncertain event
None of these
8. If two events, A and B are not mutually exclusive, the P(AUB) = __________________
P(A) + P(B)
P(A) + P(B) – P(A and B)
P(A) + P(B) + P(A and B)
None of these
9. The definition of priori probability was originally given by ____________________________
Pierre de Fermat
James Bernoulli
10. Three dies are thrown, probability of getting a sum of 3 is ____________________.
11. Binomial distribution is a ________________________________ probability distribution
Continuous distribution
None of these
12. When probability is revised on the basis of all the available information, it is called ____________.
Priori probability
Posterior probability
None of these
13. The height of persons in a country is a ________________________. random variable.
Discrete as well as continuous
None of these
14. For a binomial distribution with probability p of a success and of q of a failure, the relation between mean and variance is ____________________________.
Mean is greater than variance
Mean is less than variance
Mean is equal than variance
Mean is greater than or equal to variance
15. In a binomial distribution, if n =8 and p = 1/3, then variance = ________________________
16. Poisson distribution is the limiting form of ______________________________.
Binomial distribution
Normal distribution
None of these
17. Poisson distribution is a ____________________________probability distribution.
None of these
18. In Poisson distribution, the value of ‘e’ = __________________________
19. Mean and variance of Poisson distribution is equal to ______________________________.
20. __________________________.distribution gives a normal bell shaped curve.
None of These
21. The height of normal curve is at its maximum at the ______________________.
None of these
22. Normal distribution is ______________________
All of these
23. An approximate relation between MD about mean and SD of a normal distribution is
5MD = 4 SD
3MD = 3 SD
3MD = 2 SD
4MD = 5 SD
24. In a ________________________. distribution, quartiles are equi-distant from median
None of These
25. A normal distribution requires two parameters, namely the mean and ______________
Standard deviation
mean deviation
26. Mean ± 2 S.D. covers ______________.% area of normal curve.
27. A __________________________ is a function of sample values.
None of these
28. Test of hypothesis and ________________________ are the two branches of statistical inference
Statistical analysis
None of these
29. Quartile deviation of normal distribution is equal to ____________________
2/3 S.D.
4/5 S.D.
3/4 S.D.
1 S.D.
30. Type I error is denoted by the symbol ________________________________.
None of these
31. A sample is treated as large sample when its sample size is ____________________________
More than 30
More than 100
More than 20
More than 50
32. Degrees of freedom for Chi-square in case of contingency table of (4×3) order are __________________.
33. By test of significance , we mean ____________________________
A significant procedure in statistics
A method of making a significant statement
A rule of accepting or rejecting hypothesis
A significant estimation problem
34. When sample is small, ________________________ test is applied.
z- test
y- test
i- test
t- test
35. Who developed F-test ?
R.A. Fischer
Karl Pearson
James Bernoulli
Charles Babage
36. Chi-square test was developed by __________________
R.A. Fischer
Karl Pearson
William Gosset
James Bernoulli
37. In a normal curve, the significance level is usually termed as ______________________region
Acceptance region
Critical region
Level of acceptance
None of these
38. Chi-square test was first used by____________________________
R.A. Fischer
Karl Pearson
William Gosset
James Bernoulli
39. If two samples of size 9 and 11 have means 6.8 and 8.8, and variance 36 and 25
respectively, then value of t = ____________________.
None of these
40. In one way ANOVA, the variances are ______________________
Between samples
Within samples
Both 1&2
Neither 1 nor 2 option
|
{"url":"http://kimsharma.co.in/2017/03/09/quantitative-techniques-in-management-vvn/","timestamp":"2024-11-06T10:27:37Z","content_type":"text/html","content_length":"47389","record_id":"<urn:uuid:cccf9932-2a03-4045-a67e-f4535665923e>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00246.warc.gz"}
|
On the two-dimensional model of quantum Regge gravity
The Ashtekar-like variables are introduced in the Regge calculus. A simplified model of the resulting theory is quantized canonically. The consequences related to quantization of Regge areas are
Pub Date:
Keywords: Calculus; Dimensional Analysis; Field Theory (Physics); Quantum Mechanics; Quantum Theory; Two Dimensional Models; Gravitation; Transformations (Mathematics); Thermodynamics and Statistical Physics
|
{"url":"https://ui.adsabs.harvard.edu/abs/1991otdm.book.....K/abstract","timestamp":"2024-11-10T16:10:06Z","content_type":"text/html","content_length":"32248","record_id":"<urn:uuid:bfab1281-5aa1-417e-a7c6-babaea5c9bd4>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00431.warc.gz"}
|
All Implemented Interfaces:
This class represents a 3D region: a set of polyhedrons.
• Nested Class Summary
Modifier and Type
static class
Container for Boundary REPresentation (B-Rep).
Nested classes/interfaces inherited from interface org.hipparchus.geometry.partitioning.Region
• Constructor Summary
Build a polyhedrons set representing the whole space.
PolyhedronsSet(double xMin, double xMax, double yMin, double yMax, double zMin, double zMax, double tolerance)
Build a parallelepipedic box.
Build a polyhedrons set from a Boundary REPresentation (B-rep) specified by sub-hyperplanes.
Build a polyhedrons set from a Boundary REPresentation (B-rep) specified by connected vertices.
Build a polyhedrons set from a Boundary REPresentation (B-rep) specified by connected vertices.
Build a polyhedrons set from a BSP tree.
• Method Summary
Modifier and Type
Build a region using the instance as a prototype.
protected void
Compute some geometrical properties.
Get the first sub-hyperplane crossed by a semi-infinite line.
Get the boundary representation of the instance.
Rotate the region around the specified point.
Translate the region by the specified amount.
Methods inherited from class org.hipparchus.geometry.partitioning.AbstractRegion
applyTransform, checkPoint, checkPoint, checkPoint, checkPoint, contains, copySelf, getBarycenter, getBoundarySize, getSize, getTolerance, getTree, intersection, isEmpty, isEmpty, isFull, isFull,
projectToBoundary, setBarycenter, setBarycenter, setSize
Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
• Constructor Details
□ PolyhedronsSet
public PolyhedronsSet(double tolerance)
Build a polyhedrons set representing the whole space.
tolerance - tolerance below which points are considered identical
□ PolyhedronsSet
Build a polyhedrons set from a BSP tree.
The leaf nodes of the BSP tree must have a Boolean attribute representing the inside status of the corresponding cell (true for inside cells, false for outside cells). In order to avoid
building too many small objects, it is recommended to use the predefined constants Boolean.TRUE and Boolean.FALSE
This constructor is aimed at expert use, as building the tree may be a difficult task. It is not intended for general use, and for performance reasons it does not check its input thoroughly, as this would require walking the full tree each time. Failing to provide a tree with the proper attributes will therefore generate problems like NullPointerException or ClassCastException only later on. This limitation is known and explains why this constructor is for expert use only. The caller has the responsibility to provide correct arguments.
tree - inside/outside BSP tree representing the region
tolerance - tolerance below which points are considered identical
□ PolyhedronsSet
Build a polyhedrons set from a Boundary REPresentation (B-rep) specified by sub-hyperplanes.
The boundary is provided as a collection of sub-hyperplanes. Each sub-hyperplane has the interior part of the region on its minus side and the exterior on its plus side.
The boundary elements can be in any order, and can form several non-connected sets (like for example polyhedrons with holes or a set of disjoint polyhedrons considered as a whole). In fact,
the elements do not even need to be connected together (their topological connections are not used here). However, if the boundary does not really separate an inside open from an outside open
(open having here its topological meaning), then subsequent calls to the checkPoint method will not be meaningful anymore.
If the boundary is empty, the region will represent the whole space.
boundary - collection of boundary elements, as a collection of SubHyperplane objects
tolerance - tolerance below which points are considered identical
□ PolyhedronsSet
public PolyhedronsSet(List<Vector3D> vertices, List<int[]> facets, double tolerance)
Build a polyhedrons set from a Boundary REPresentation (B-rep) specified by connected vertices.
The boundary is provided as a list of vertices and a list of facets. Each facet is specified as an integer array containing the indices of its vertices in the vertices list. Each facet normal is oriented by the right-hand rule applied to the facet's vertices list.
Some basic sanity checks are performed but not everything is thoroughly assessed, so it remains under caller responsibility to ensure the vertices and facets are consistent and properly
define a polyhedrons set.
vertices - list of polyhedrons set vertices
facets - list of facets, as vertices indices in the vertices list
tolerance - tolerance below which points are considered identical
MathIllegalArgumentException - if some basic sanity checks fail
□ PolyhedronsSet
Build a polyhedrons set from a Boundary REPresentation (B-rep) specified by connected vertices.
Some basic sanity checks are performed but not everything is thoroughly assessed, so it remains under caller responsibility to ensure the vertices and facets are consistent and properly
define a polyhedrons set.
brep - Boundary REPresentation of the polyhedron to build
tolerance - tolerance below which points are considered identical
MathIllegalArgumentException - if some basic sanity checks fail
□ PolyhedronsSet
public PolyhedronsSet(double xMin, double xMax, double yMin, double yMax, double zMin, double zMax, double tolerance)
Build a parallelepipedic box.
xMin - low bound along the x direction
xMax - high bound along the x direction
yMin - low bound along the y direction
yMax - high bound along the y direction
zMin - low bound along the z direction
zMax - high bound along the z direction
tolerance - tolerance below which points are considered identical
• Method Details
□ buildNew
Build a region using the instance as a prototype.
This method allows creating new instances without knowing the exact type of the region. It is an application of the prototype design pattern.
The leaf nodes of the BSP tree must have a Boolean attribute representing the inside status of the corresponding cell (true for inside cells, false for outside cells). In order to avoid
building too many small objects, it is recommended to use the predefined constants Boolean.TRUE and Boolean.FALSE. The tree also must have either null internal nodes or internal nodes
representing the boundary as specified in the getTree method).
Specified by:
buildNew in interface Region<Euclidean3D>
Specified by:
buildNew in class AbstractRegion<Euclidean3D,Euclidean2D>
tree - inside/outside BSP tree representing the new region
the built region
□ getBRep
Get the boundary representation of the instance.
The boundary representation can be extracted only from bounded polyhedrons sets. If the polyhedrons set is unbounded, a MathRuntimeException will be thrown.
The boundary representation extracted is not minimal, as for example canonical facets may be split into several smaller independent sub-facets sharing the same plane and connected by their edges.
As the B-Rep representation does not support facets with several boundary loops (for example facets with holes), an exception is triggered when attempting to extract B-Rep from such complex
polyhedrons sets.
boundary representation of the instance
MathRuntimeException - if the polyhedrons set is unbounded
□ firstIntersection
Get the first sub-hyperplane crossed by a semi-infinite line.
point - start point of the part of the line considered
line - line to consider (contains point)
the first sub-hyperplane crossed by the line after the given point, or null if the line does not intersect any sub-hyperplane
□ rotate
Rotate the region around the specified point.
The instance is not modified, a new instance is created.
center - rotation center
rotation - vectorial rotation operator
a new instance representing the rotated region
□ translate
Translate the region by the specified amount.
The instance is not modified, a new instance is created.
translation - translation to apply
a new instance representing the translated region
|
{"url":"https://hipparchus.org/apidocs-3.1/org/hipparchus/geometry/euclidean/threed/PolyhedronsSet.html","timestamp":"2024-11-14T04:59:30Z","content_type":"text/html","content_length":"39853","record_id":"<urn:uuid:0e13b400-c35c-4959-90c9-9d2aad79b56c>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00451.warc.gz"}
|
How to Not Suck At Math
a guide for helping yourself and your kids (part 1)
This is part of a new series about how to help yourself or your kids improve in mathematics. It’s based partly on my own experience of having to learn how to learn mathematics, partly on extensive
tutoring experience, and partly on many conversations with homeschooling parents, as well as parents struggling to understand Common Core mathematics. Many future editions are already planned, but
feel free to leave suggestions for future editions in the comments (open for paid subscribers) or by email to hollymathnerd at gmail dot com.
Holly’s Substack is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.
I may put future editions behind the paywall; I’m not sure yet. If you want to subscribe to be sure you get them all, this link will give you 10% off. If you can’t afford a paid subscription, email
me and I’ll give you a free one.
Part 1: Addition and Subtraction
First, a refresher. Most of you have been using calculators or apps for years now, so I’m writing out a refresher to help. To grasp both addition and subtraction, the first core concept is place
Now that we remember how place values work, here is a refresher on addition:
And one on subtraction, including borrowing:
But What ARE Addition and Subtraction?
Addition and subtraction are the mathematical operations for combining or separating. We think of them as opposites, but they are in fact the same thing.
To understand this counter-intuitive thought, let’s look at the number line:
Addition moves to the right on the number line, as in these examples:
Subtraction moves to the left, as in these examples:
Consider thinking of it like this: subtraction is addition of negative numbers. We just usually leave off the addition sign—but we don’t have to! These are all perfectly valid:
With a little practice, this makes dealing with positive and negative numbers much easier.
Adding positive numbers is moving to the right.
Adding negative numbers (subtraction) is moving to the left.
I will get into subtracting a negative from a negative in two posts. Next up: multiplication and division. After that: sign rules. Yes, you will finally understand why negative times negative is positive.
Open Your Calculator App
This isn’t feeling very intuitive for some of you, so open your calculator app on your phone and prove it to yourself.
Do 100 - 70 and get 30.
Now do 100 + (-70) and get 30.
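The same check can be scripted; a minimal Python sketch:

```python
# Subtraction and adding a negative are the same move on the number
# line: both shift 70 units to the left of 100.
assert 100 - 70 == 30
assert 100 + (-70) == 30

# The identity x - y == x + (-y) holds for any numbers.
for x, y in [(16, 9), (60, 4), (2.5, 7.25)]:
    assert x - y == x + (-y)
```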
What to Focus On with Little Kids
For young children, tactile objects that they can count are the way to start. Save the number line for when they’re old enough to handle a bit more abstraction.
The first light bulb moment for little kids is the trick of completing a ten. 9 + 7 = 16, and the way to think of this is to take what you need from the 7 to complete the 10, and then the answer is
easily obvious. 10 + 6 = 16.
Once they have that, guide them, with questions if possible, to the realization that everything starts over with tens. 9 + 7 = 16, and 19 + 7 = 26, and 29 + 7 = 36, and so on. It’s beautifully
predictable, endlessly repeating cycles.
When these ideas are solid, working with subtraction is an equally important, and simple, matter for drilling. If 9 + 7 = 16, what’s 16 - 9? Your kid may do the reverse in their head first, asking
themselves 9 plus what equals 16? Or they may do 16 - 10 is 6, plus 1 is 7. Both are excellent ways to think about these problems and should be heavily praised.
When Your Kid Is Ready
Drilling is of crucial importance. These things should be second nature.
If you want your kid to be good at math, then by the time they’re ready for algebra, all of these basic arithmetic skills should be automatic and rote. The California frameworks and other educational
strategies that don’t use memorization are shortchanging kids tremendously.
The reason why kids struggle with algebra is that their arithmetic skills are weak. If every single step of an arithmetic problem requires intense concentration and focus and cognitive energy, they
will tire easily. They will have the opportunity to make a mistake at every stage.
But if your kid knows that 60 - 4 is 56 because they made the connection that 6 + 4 is 10 and 10 - 4 is 6 and that these cycles repeat over and over, then when they see 7x + 4 = 60, they'll see 7x = 56 instantly, without intense focus and thought. And because you'll have made sure they memorize the multiplication tables (justification will be in the next issue), they'll know that x = 8.
The difference between a kid whose arithmetic is solid and one whose arithmetic is weak is the difference between one who can look at 7x + 4 = 60 and get x = 8 almost without thinking and one who has to do multiple arithmetic steps, using a lot of energy and potentially making a mistake, at every point.
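The two mental steps can be written out explicitly; a small Python sketch (the helper name is mine, not from the post):

```python
def solve_linear(a, b, c):
    """Solve a*x + b = c by the same two mental steps:
    subtract b from both sides, then divide by a."""
    return (c - b) / a

# 7x + 4 = 60  ->  7x = 56  ->  x = 8
assert solve_linear(7, 4, 60) == 8
```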
Part 2: multiplication and division, why they’re the same thing, and how understanding that lets you subtract negative numbers from negative numbers.
Haven't read through all this yet, but wondering how many times you've seen or heard "New Math" by Tom Lehrer.
I love this. Where can I get your lessons? Are you selling instruction? Are you selling a book?
|
{"url":"https://hollymathnerd.substack.com/p/how-to-not-suck-at-math","timestamp":"2024-11-13T12:55:48Z","content_type":"text/html","content_length":"287218","record_id":"<urn:uuid:69866e0e-668c-4014-a3de-0231e56dc314>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00638.warc.gz"}
|
distance geometry problem
In this paper, we study the complexity of the selection of a graph discretization order with a stepwise linear cost function. Finding such a vertex ordering has been proved to be an essential step to solve discretizable distance geometry problems (DDGPs). DDGPs constitute a class of graph realization problems where the vertices can be ordered in such … Read more
A multiplicative weights update algorithm for MINLP
We discuss an application of the well-known Multiplicative Weights Update (MWU) algorithm to non-convex and mixed-integer nonlinear programming. We present applications to: (a) the distance geometry
problem, which arises in the positioning of mobile sensors and in protein conformation; (b) a hydro unit commitment problem arising in the energy industry, and (c) a class of … Read more
|
{"url":"https://optimization-online.org/tag/distance-geometry-problem/","timestamp":"2024-11-12T07:30:44Z","content_type":"text/html","content_length":"85391","record_id":"<urn:uuid:a9450ba8-6ffb-4bce-8725-d7efbde94dbd>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00110.warc.gz"}
|
Di Eybike Mame (The Eternal Mother): Women in Yiddish Theater and Popular Song, 1905–1929
Extended text version, SM 16252 (CD WERGO)
The recordings presented on this anthology represent a cross-section of women's contribution to the Yiddish-language popular song culture which began to develop in the urban centers of Eastern Europe in the mid-19th century. The first expression of this emerging culture were the broder-zinger, solo performers and troupes of singer-songwriters who performed songs and skits in the secular surroundings of the inns, wine cellars and restaurant gardens of Jewish centers in Austro-Hungary, Romania and Russia. The broder-zinger were generally maskilim, followers of the haskalah, the
|
{"url":"http://health-articles.net/s/sunju.org1.html","timestamp":"2024-11-11T13:19:56Z","content_type":"text/html","content_length":"55030","record_id":"<urn:uuid:6bddfe41-29dd-4d71-8ae2-e7b684519f7c>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00154.warc.gz"}
|
Lesson 5
Bases and Heights of Parallelograms
Problem 1
Select all parallelograms that have a correct height labeled for the given base.
Problem 2
The side labeled \(b\) has been chosen as the base for this parallelogram.
Draw a segment showing the height corresponding to that base.
Problem 3
Find the area of each parallelogram.
Problem 4
If the side that is 6 units long is the base of this parallelogram, what is its corresponding height?
Problem 5
Find the area of each parallelogram.
Problem 6
Do you agree with each of these statements? Explain your reasoning.
1. A parallelogram has six sides.
2. Opposite sides of a parallelogram are parallel.
3. A parallelogram can have one pair or two pairs of parallel sides.
4. All sides of a parallelogram have the same length.
5. All angles of a parallelogram have the same measure.
Problem 7
A square with an area of 1 square meter is decomposed into 9 identical small squares. Each small square is decomposed into two identical triangles.
1. What is the area, in square meters, of 6 triangles? If you get stuck, consider drawing a diagram.
2. How many triangles are needed to compose a region that is \(1\frac 12\) square meters?
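For reference, the unit-fraction reasoning behind Problem 7 can be verified with exact arithmetic; a short Python sketch (note it computes the answers):

```python
from fractions import Fraction

square = Fraction(1)        # the big square: 1 square meter
small = square / 9          # one of the 9 identical small squares
triangle = small / 2        # one of its two identical triangles: 1/18 m^2

# Part 1: the area of 6 triangles
assert 6 * triangle == Fraction(1, 3)

# Part 2: triangles needed to compose 1 1/2 square meters
assert Fraction(3, 2) / triangle == 27
```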
|
{"url":"https://curriculum.illustrativemathematics.org/MS/teachers/1/1/5/practice.html","timestamp":"2024-11-07T23:26:50Z","content_type":"text/html","content_length":"86700","record_id":"<urn:uuid:f4439504-4d4d-4793-b379-337cc7fb98c9>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00121.warc.gz"}
|
SPSS: Stats Practically Short and Simple
by Sidney Tyrrell
Publisher: BookBoon 2009
ISBN-13: 9788776814748
Number of pages: 83
This free textbook is for people who want to know how to use SPSS for analyzing data, who want practical help in as short a time as possible. The author has considerable experience of teaching many
such people and assumes they know the basics of statistics but nothing about SPSS, or as it is now known, PASW.
Download or read it online for free here:
Download link
(4.5MB, PDF)
Similar books
Octave Programming Tutorial
Henri Amuasi
Wikibooks
Octave is a high-level language, primarily intended for numerical computations. The purpose of this collection of tutorials is to get you through most (and eventually all) of the available Octave functionality from a basic level.
Alain Le Stang
Wikibooks
Maple is a computer algebra system offering many possibilities for math problems. Users can enter mathematics in traditional mathematical notation. This book aims to give all tools needed to be autonomous with this software.
Advanced Scientific Computing
Zdzislaw Meglicki
Indiana University
Topics: linear algebra and fast Fourier transform packages and algorithms, Message Passing Interface (MPI) and parallel I/O (MPI/IO), 3D visualisation of scientific data sets, implementation of problem solving environments, quantum computing.
wxMaxima for Calculus I and II
Zachary Hannan
These books introduce the free computer algebra system wxMaxima in the context of single variable calculus. Each book can serve as a lab manual for a one-unit semester calculus lab, a source of supplemental CAS exercises or a tutorial reference.
|
{"url":"http://www.e-booksdirectory.com/details.php?ebook=3948","timestamp":"2024-11-09T12:53:03Z","content_type":"text/html","content_length":"11311","record_id":"<urn:uuid:9599a08e-4161-4cb1-8060-529f21d04b6b>","cc-path":"CC-MAIN-2024-46/segments/1730477028118.93/warc/CC-MAIN-20241109120425-20241109150425-00465.warc.gz"}
|
Super Mario 169
Problem I
Super Mario 169
Siggy is quite the video game enthusiast, and he’s been playing lots of Super Mario 169 lately (the highly obscure sequel to the more popular Super Mario 64). This game takes place entirely in an
ocean, which can be modelled with a 3 dimensional coordinate system. The player’s objective is to swim around as the titular character, Mario, and collect all of the coins, of which there can be up
to 169.
The coins are not simply in plain sight, however! Instead, there are up to 13 switches which Mario can press by touching them. Pressing any switch causes up to 13 coins to appear. Additionally, each
switch can only be pressed once, and when it’s pressed, any other uncollected coins (which had been revealed by the previously-pressed switch, if any) disappear, meaning that Mario will be unable to
collect them in the future. All switches and coins are small enough that they can be treated as points.
To make sure that he’s playing the game as optimally as possible, Siggy wants to know the minimum possible distance required to collect all of the coins.
There will be a single test case in the input. This test case will begin with a single line with four integers:
n mx my mz
where $n$ ($1 \le n \le 13$) is the number of switches, and the 3D point $(m_ x,m_ y,m_ z)$ is Mario’s starting point.
The following pattern is then repeated $n$ times, once for each switch. The pattern begins with a single line with four integers:
k sx sy sz
where $k$ ($1 \le k \le 13$) is the number of coins activated by this switch, and the 3D point $(s_ x,s_ y,s_ z)$ is the point where the switch is found. Following this there will be $k$ lines with
three integers:
cx cy cz
where $(c_ x,c_ y,c_ z)$ is the 3D point of one of the coins activated by this switch. All coordinates $x$, $y$, $z$ of all points will be in the range $-1\, 000 \le x,y,z \le 1\, 000$, and all
points in a test case, whether Mario’s starting point, switch or coin, will be unique.
Output a single number equal to the minimum distance for Mario to travel in order to collect all of the coins. Your result should have an absolute or relative error of less than $10^{-3}$.
Sample Input 1
-11 -1 0
-11 1 0
-10 0 0

Sample Output 1
44.224463
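To make the problem's structure concrete, here is a brute-force sketch (mine, not the intended contest solution): it tries every order of pressing the switches and, within each group, every order of collecting its coins, since all of a switch's coins must be collected before the next switch is pressed. It is exponential, so only toy instances are feasible; a real solution would need something like bitmask dynamic programming over switches.

```python
import math
from itertools import permutations

def solve(start, groups):
    """Exhaustive search over tiny instances. `groups` is a list of
    (switch_point, [coin_points]) pairs of 3D tuples; returns the
    minimum distance to press every switch and collect every coin."""
    if not groups:
        return 0.0
    best = math.inf
    for i, (switch, coins) in enumerate(groups):
        rest = groups[:i] + groups[i + 1:]
        for order in permutations(coins):
            # Walk to this switch, then through all of its coins;
            # they must all be collected before pressing the next
            # switch, since that makes uncollected coins disappear.
            d = math.dist(start, switch)
            pos = switch
            for coin in order:
                d += math.dist(pos, coin)
                pos = coin
            best = min(best, d + solve(pos, rest))
    return best

# Hypothetical toy instance (not the sample above): one switch at
# (3,0,0) revealing one coin at (3,4,0), Mario at the origin: 3 + 4 = 7.
assert abs(solve((0, 0, 0), [((3, 0, 0), [(3, 4, 0)])]) - 7.0) < 1e-9
```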
|
{"url":"https://open.kattis.com/contests/na19warmup4/problems/supermario169","timestamp":"2024-11-10T01:56:10Z","content_type":"text/html","content_length":"31569","record_id":"<urn:uuid:67d79061-1c0f-4a74-ac70-9ef0739e5d5e>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.3/warc/CC-MAIN-20241110005602-20241110035602-00630.warc.gz"}
|
One-way relationships
Let’s break down how subjective randomness allows us to cut out all other influences. Remember that our definition of subjective randomness had 3 parts: 1) sensitivity to conditions (or influences);
2) non-repetition of conditions; and 3) a lack of knowledge of the conditions. In this post, we’ll look at just part 1 of this definition.
In posts gone by, we explored how we could estimate the causal effect of how we get to work on travel time without pesky confounders messing up our measurements. We came up with a list of possible
confounders, things that could affect both our choice of how to get to work and travel time. Including these in our causal model made it look like this:
Basing our choice on a coin toss would cut out the influences on how we get to work, which would also remove the confounding. But wouldn’t some of these influences, like mood and energy, also affect
the coin toss anyway? So perhaps the new causal model would be something like this:
All the influences into how to get to work have been cut by the green arrow. The black arrows that remain are pretty straightforward. For example, it’s reasonable to say that the more energy you
have, the less time it will take you to walk to work. But those grey arrows are a bit odd. We know mood and energy will affect the coin toss outcome, but we’re a bit fuzzy on how. For example, we
can’t just say the more (or less) energy you have, the more likely you are to flip a head. In fact, it’s not really clear what we can say about the relationship between energy and the coin toss
outcome, other than that there is one. Will this create an indirect association between coin toss and travel time, that isn’t due to the effect of the coin toss (via how to get to work) on travel
time? If so, that would mess up our causal measurements — and so we wouldn’t have gained anything by adding the coin toss.
Let’s focus on the indirect relationship between coin toss and travel time to see if it causes a problem. We’ll simplify our model and focus just on energy as our only common influence on coin toss
and travel time, removing everything else. (All the simplifications we make will be aimed at strengthening the indirect relationship and removing the direct relationship.) Since we’re removing how we get to work, we’ll just assume we walk every time. Let’s also introduce the coin’s flipping speed (the number of times the coin
rotates per second while in the air) explicitly, as this will be important later. This gives us:
We can reason that, if we have more energy, we’ll take less time to walk to work. Let’s measure energy on a scale from 0-10 (say, 0 meaning that you feel like you have the least possible energy, and
10 the most), and for this post we’ll say our energy is completely random day-to-day on this scale. (Being more precise, we’ll say that it’s uniform random on the continuous range 0-10, inclusive.) The specifics about energy, how it’s defined and its distribution won’t be important. Let’s also assume the relationship between energy and travel time is very simple and deterministic. It might look
like this if we graphed it:
That tells us it will take 21 minutes to get to work if we rate our energy level as a 0, 18 minutes if we rate our energy level as a 5, and 15 minutes if we rate our energy level as a 10.
What about energy‘s effect on coin toss? There’s two parts to this now: energy‘s effect on flipping speed, and then flipping speed‘s effect on the coin toss. It’s reasonable to suppose that the more
energy you have, the more vigorous the flip and the faster the flipping speed. So let’s make that another simple and deterministic relationship:
So the coin might flip around 30 times per second if our energy is at its lowest, and 50 times per second if it’s at its highest. (Incidentally, the midpoint is very roughly in line with the average flipping speed measured in Persi Diaconis’s paper.)
Great! We’ve now got just one more relationship to specify: how does flipping speed affect the coin toss outcome?
This one’s different — there isn’t a nice increasing or decreasing relationship. In fact, the coin toss outcome bounces back and forth, and will depend on how long the coin spends in the air and
which side of the coin started up. We’re going to assume it stays in the air for 0.5 of a second and that it starts up heads (yes, deterministically), and that there are no other factors that affect the outcome. The coin toss now only depends (deterministically) on flipping speed. Very roughly, the relationship looks like this:
This relationship seems very different to our other two! If the flipping speed is 30 rotations per second, we would get heads. But if we increase it slightly to about 30.5, we’d get tails. Increase
it again slightly to 31.5 rotations per second, and we’d go back to heads again. And so on as we keep increasing the flipping speed.
There is something really interesting about this relationship. The coin toss outcome can’t tell us whether the flipping speed was fast or slow. The outcome certainly tells us something about the
flipping speed — say, if we get tails, we can rule out the ranges 29.5-30.5, 31.5-32.5, and so on, knocking out a 1 unit range of flipping speeds every 1 rotation per second. In fact, we can rule out
about 50% of all the possible flipping speeds. But it doesn’t tell us whether the flipping speed was fast or slow — a tail is still equally likely to have been due to a flipping speed below 40 (slow)
as above 40 (fast).
Even more interestingly, the average flipping speed doesn’t change after learning the coin toss outcome. The average flipping speed remains at 40 whether the coin turns up heads or tails. In this
case, this also means the average energy level doesn’t change, which in turn means the average travel time doesn’t change. (If the relationship between energy and flipping speed were more complex, it might change. But we won’t dive into those complexities here.) And ultimately, this means that there is no association between coin toss outcomes and average travel times, even if there is an
association between coin toss outcomes and the distribution of travel times. (In this case, there won’t even be an association with the standard deviation of measured travel times, or other moments of the travel time distribution. All these statements are approximations, but the approximation can be made arbitrarily good by increasing the number of head/tail alternations within the flipping speed range, which could be achieved relatively easily by simply increasing the size of the flipping speed range itself.)
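The whole setup can be simulated; a Python sketch under the post's stated assumptions (the two linear relationships read off the graphs, 0.5 s of airtime, and a heads/tails rule that is my reconstruction of the sawtooth graph: heads when the coin completes a near-even number of half-turns):

```python
import math
import random
import statistics

def travel_time(energy):
    """Walking time in minutes: 21 at energy 0, 18 at 5, 15 at 10."""
    return 21 - 0.6 * energy

def coin_toss(energy):
    """Deterministic toss: energy -> flipping speed -> outcome."""
    speed = 30 + 2 * energy          # 30 rps at energy 0, 50 rps at 10
    half_turns = speed * 0.5 * 2     # 0.5 s airtime, 2 half-turns/rotation
    # Starting heads up, the visible face alternates every half-turn,
    # so the coin lands heads when the nearest whole count is even.
    return "H" if math.floor(half_turns + 0.5) % 2 == 0 else "T"

random.seed(0)
energies = [random.uniform(0, 10) for _ in range(200_000)]
heads = [travel_time(e) for e in energies if coin_toss(e) == "H"]
tails = [travel_time(e) for e in energies if coin_toss(e) == "T"]

# Energy fully determines both variables, yet the average travel time
# barely differs between heads days and tails days.
assert abs(statistics.mean(heads) - statistics.mean(tails)) < 0.1
```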
Being very strict with our language, this means energy is a common cause (and a very strong one), but not a confounder! This seems surprising at first, since energy completely determines both the
coin toss outcome and the travel time. If we don’t measure energy, won’t it cause problems for everything else? But the determinism between energy and coin toss only goes one way. In the other
direction, the coin toss outcome gives limited new information about flipping speed and energy, and no new information at all about the average flipping speed and energy.
In this sense, it’s not just any kind of sensitivity that contributes to randomness — it’s specifically when every small part of the input (or cause) space maps to the same part of the outcome (or
effect) space repeatedly. In this case, every small change in flipping speed (say, +1 rotation per second) can produce both possible coin toss outcomes (heads and tails). This kind of sensitivity
turns a deterministic relationship in the causal direction into a weak relationship in the other direction. In fact, in the case of the average, there is no relationship in the other direction.
We can very much call this a one-way relationship. One-way relationships are especially useful for us because they allow us to cut out other influences (that probably aren’t one-way) without simply
introducing a new confounding problem.
|
{"url":"https://causalbayes.org/2024-04/one-way-relationships/","timestamp":"2024-11-01T18:47:13Z","content_type":"text/html","content_length":"47290","record_id":"<urn:uuid:a22bc927-958f-4714-9c94-c80d00065be3>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00095.warc.gz"}
|
How to Understand Betting Odds and How They Work
Once you learn how to understand betting odds, you’re more likely to succeed in the medium and long term. If you fail to grasp this concept, you are bound to take poor value prices continually.
Ultimately, if you regularly take bad odds, there’s practically no chance of making a profit beyond the occasional lucky win.
Increasing your knowledge of betting odds helps you better appreciate the risks you take when gambling. Instead of blindly wagering $50 on an outcome and hoping to win, you’ll have a fair idea of the
likelihood of success. Analyzing the odds you bet at may prove revealing if you are one of the many people who often lose.
Betting Odds Explained: What Are the Main Formats?
Bookmakers usually use one of the following three pricing formats:
• Fractional
• Decimal
• American
You can understand betting odds by selecting the option that’s suitable for you. Most crypto betting sites allow you to switch formats at your leisure. Keep reading to see which one seems easiest for
you to use.
Fractional Odds
Fractional odds are generally the territory of the “old school” bettor and are the main format used in the United Kingdom.
They are written with a slash (or sometimes a hyphen), so six-to-one is represented as 6/1. A simple way to calculate returns is to remember that you must bet the amount on the right to receive the profit on the left. Therefore, a
6/1 bet means you receive $6 in profit for every $1 bet if you win, or $7 in total.
Adopting this practice makes it easy to learn how to read betting odds that are a little more complicated. Suppose the San Francisco 49ers are 13/8 to defeat the Kansas City Chiefs on the money line.
You will earn a profit of $13 for every $8 you wager.
What if you want to bet $100? In that case, you multiply your stake by the figure on the left and divide by the number on the right to get your possible profit:
100 x 13 = 1300
1300/8 = 162.5
Therefore, if the Niners win, your profit is $162.5, and your total returns are $262.50.
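The fraction arithmetic above can be captured in a few lines (a minimal sketch; the function name is ours):

```python
def fractional_profit(stake, numerator, denominator):
    """Profit on a winning bet at fractional odds numerator/denominator."""
    return stake * numerator / denominator

profit = fractional_profit(100, 13, 8)  # $100 at 13/8
print(profit, 100 + profit)             # 162.5 262.5
```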
Decimal Odds
Decimal odds are popular in Europe and are a little easier to understand than fractional ones. The odds represent how much you’ll win for every $1 wagered.
For instance, if you back a team to win at odds of 1.77, you get $1.77 for every $1 you wager, $0.77 of which is profit.
As you can see, decimal odds are unsophisticated. However, remember that when you multiply your stake by the odds, you get the total return. You must subtract your original stake from that figure to
calculate your profit.
Suppose you pick Liverpool to defeat Manchester United in the English Premier League. Liverpool’s price is 2.45, and you want to risk $60 on the wager. Here is how much you’ll win if the Anfield team wins:
2.45 x 60 = 147
147 – 60 = 87
Your total return is $147, with a profit of $87.
American Odds
This is the preferred option for North American sportsbooks. Initially, you may wonder, “what do these betting odds mean?” However, once you get the hang of it, you’ll find that American odds are
easy to calculate.
If the price is odds-on (shorter than even money), it is designated by a minus sign. This indicates how much you need to risk to win $100.
For example, imagine if the Los Angeles Lakers are heavy favorites to defeat the Golden State Warriors at -200 on the money line. It means you must wager $200 to make a $100 profit. It is the
equivalent of 1/2 in fractional or 1.50 in decimal odds.
Prices longer than evens have a plus sign before the figure.
Let’s say the Brooklyn Nets are underdogs against the Boston Celtics at odds of +400. You earn a profit of $400 for every $100 you risk if you win. It is the equivalent of 4/1 in fractional and 5.00
in decimal odds.
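Both the plus and minus cases can be handled in one small helper (a sketch; the function name is ours):

```python
def american_profit(stake, odds):
    """Profit on a winning bet at American odds such as -200 or +400."""
    if odds < 0:
        return stake * 100 / -odds  # must risk |odds| to win 100
    return stake * odds / 100       # risking 100 wins `odds`

print(american_profit(200, -200))  # 100.0
print(american_profit(100, 400))   # 400.0
```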
How Do Betting Odds Work?
The most important thing to remember is that betting odds represent the chances of an outcome occurring. The longer the odds, the less likely the event will happen and the more money you win if you
make a correct prediction.
When you learn how to understand betting odds, you must know the answers to the following two questions:
• What Is Implied Probability?
• What Are True Odds?
What Is Implied Probability?
The implied probability of an event is a conversion of the bookmaker’s odds into a percentage chance of winning.
The easiest way to determine the implied probability of an outcome is to divide 100 by the decimal odds. For example, odds of 2.00 mean a 50% chance of the event happening (100 / 2 = 50).
Things get more challenging when you use fractional or American odds. With fractional odds, you can divide the number on the left by the number on the right and add one. For example, if the odds are 8/13:
8 / 13 = 0.6153
0.6153 + 1 = 1.6153
Rounding to two decimal places, this becomes 1.62. Next, 100 is divided by 1.62:
100 / 1.62 = 61.73
Therefore, fractional odds of 8/13 equate to roughly 1.62 in decimal odds and about a 61.7% chance of the event occurring. (Computing directly as 13 / 21 gives 61.9%; the small discrepancy comes from the intermediate rounding.)
With American odds, you must first calculate the ultimate payout. Suppose the odds are -170. You must risk $170 to win $100 for a total payout of $270. Next, you divide the money risked by the
final possible payout:
170 / 270 = 0.63
Therefore, there is a 63% chance of the event occurring.
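These conversions chain together naturally: turn any format into decimal odds first, then divide into 100. A sketch (function names are ours; skipping the intermediate rounding used in the text changes the last digits slightly):

```python
def decimal_from_fractional(numerator, denominator):
    return numerator / denominator + 1

def decimal_from_american(odds):
    return 1 + (100 / -odds if odds < 0 else odds / 100)

def implied_probability(decimal_odds):
    return 100 / decimal_odds

# 8/13 fractional: ~1.62 decimal, ~61.9% implied (61.73% with rounding)
print(round(implied_probability(decimal_from_fractional(8, 13)), 1))
# -170 American: ~63% implied, as computed above
print(round(implied_probability(decimal_from_american(-170)), 1))
```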
Of course, you also have to consider the bookmaker’s edge, which is how they profit. In reality, you’re typically giving up several percent per bet. However, the bookmaker isn’t always right, and if
you learn how to calculate the true odds of an event, crypto betting success could be yours.
Understanding True Odds – Getting an Edge
The trickiest part of sports betting is finding an edge over the bookmaker. The only way this is possible is through understanding the odds. If you believe a bet has a 60% chance of winning (true
odds of 1.67), but the bookie thinks the likelihood is only 50% (implied odds of 2.00), you have a value bet.
Here’s another example. Let’s say you believe the Argentina football team has a 40% chance of beating Brazil in the Copa America. In your eyes, the true odds should be 2.50 (100 / 40). However, a
crypto bookie offers odds of 3.00, suggesting that the Argentines have a 33.33% chance of victory (100 / 3). Therefore, it is clear that backing them at the available odds is a worthwhile exercise.
You can probably see the problem here. How can you possibly know the true probability of a sports outcome? It is easy to do in casino games where the house edge is known. For instance, the house edge
in European (one zero) roulette is 2.7%. Already, you know that you’re at a disadvantage.
You can’t say the same when betting on individual or team sports. Professional bettors tend to create something called an “odds tissue.” The process involves analyzing a host of factors to decide the real odds of an event. It is a common practice in horse racing, but you can use it in every other sport.
The process is time-consuming but worthwhile when it uncovers a steady stream of value bets. Other bettors pay services that use artificial intelligence to determine when an event moves into ‘value
bet’ territory.
If You Don’t Understand Betting Odds, Crypto Bookmakers Will Win
Casual bettors who aren’t concerned about making a long-term profit perhaps don’t care about the minutiae of odds calculation. However, it’s a good idea to learn how betting odds work to get the best
bang for your buck. As for serious bettors, understanding betting odds is crucial to keep your bankroll intact for as long as possible.
If you don’t know how to read betting odds, you hand the advantage to crypto bookmakers. At that point, you’ll only win through sheer luck.
Here is a table outlining selected odds across the three formats. We have also included the implied probability of winning in each case. Check out this handy calculator, which lets you find the
implied probability of any odds in the three formats.
American Odds Fractional Odds Decimal Odds Implied Probability of Winning
-500 1/5 1.2 83.33%
-425 1/4 1.24 80.95%
-350 2/7 1.29 77.78%
-250 2/5 1.4 71.43%
-200 1/2 1.5 66.67%
-175 4/7 1.57 63.64%
-125 4/5 1.8 55.56%
+100 1/1 2.0 50%
+125 5/4 2.25 44.44%
+200 2/1 3 33.33%
+275 11/4 3.75 26.67%
+350 7/2 4.5 22.22%
+600 6/1 7 14.29%
Take particular note of the implied probability of winning. Doing so lets you quickly understand the likelihood of your bet succeeding. If you focus on longer odds bets, lengthy losing streaks are
certain. Therefore, you need to create a bankroll large enough to cope with downturns and psychologically prepare yourself for them.
Are There Only Three Betting Odds Formats?
No. There are many other formats used around the world. Three of the most notable are Hong Kong, Malaysian, and Indonesian odds. They are generally easy to understand and convert into decimal odds.
Which Betting Odds Format is the Best?
There is no single ‘best’ format. It all depends on your preferences. Decimal odds are the simplest solution for individuals looking to calculate their overall returns easily. American odds are also
straightforward once you get the hang of using them. Fractional odds take more getting used to but remain the preferred option amongst many professional gamblers.
Can I Calculate the True Probability of an Event Occurring?
Arguably, you can’t calculate true probability and odds with a degree of certainty. However, a growing number of bettors are using artificial intelligence in the hope that machine learning can bring
them closer to knowing the true odds of an event. Doing so lets them know they have an edge on any particular bet.
Is There Anything I Can Do to Improve the Likelihood of Discovering True Odds?
Yes. Apart from creating an odds tissue, you can use certain metrics to improve your chances of knowing the outcome of an event. Poisson distribution, for instance, is used to measure the probability
of independent events happening a specific number of times within a certain period. You can also learn to calculate the expected value of bets to determine whether you’re getting value.
Should I Bet Differently If I Think a Wager Has a Strong Chance of Winning?
This is a difficult question to answer. On the one hand, it makes sense to bet a little more if you’re confident of a higher chance of success. Many tipsters implement this ‘confidence’ model when
sending selections to customers.
However, there’s a big difference between using your gut instinct and a well-designed betting system. If your data shows that a bet has a far higher edge than usual, and you trust this information,
it is perhaps worth taking the additional risk. In contrast, if you continually wager higher amounts than normal based on nothing more than a feeling, your bank balance will probably plummet sooner
rather than later.
What is a Bookmaker’s Overround?
This is another name for the bookmaker’s profit margin. This edge varies from one bookie to another, so it pays to shop around for the best odds. You can calculate the overround by calculating the
implied probability for each possible outcome in an event. Next, add these figures together; anything above 100 indicates a bookmaker’s profit.
In a tennis match, player A is 1.50 to win, while player B is 2.50. The implied probability of player A winning is 66.67% (100 / 1.50). The implied probability of player B is 40% (100 / 2.50). Add
66.67 and 40 to get 106.67. Subtract 100 from 106.67, and you’re left with 6.67%, which is the bookmaker’s overround for this particular event.
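The overround calculation is mechanical enough to script (a minimal sketch; computed without intermediate rounding, the exact margin for these odds is 6.67%):

```python
def overround(decimal_odds):
    """Bookmaker's margin: total implied probability minus 100."""
    return sum(100 / d for d in decimal_odds) - 100

print(round(overround([1.50, 2.50]), 2))  # 6.67
```

A fair book (no margin) sums to exactly 100, giving an overround of zero.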
|
{"url":"https://cryptobetting.org/blog/odds/","timestamp":"2024-11-09T07:50:36Z","content_type":"text/html","content_length":"75868","record_id":"<urn:uuid:133cf550-9e90-48f5-80af-b08d93119b3a>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00489.warc.gz"}
|
ACP Seminar (Astronomy - Cosmology - Particle Physics)
Speaker: Serguey Todorov Petcov (SISSA/INFN, Kavli IPMU)
Title: Predictions for the Dirac CP Violation in the Lepton Sector
Date Wed, Mar 12, 2014, 13:30 - 14:30
Place: Seminar Room A
Related 1161.pdf
Abstract: After the successful determination of the reactor neutrino mixing angle $\theta_{13} \cong 0.16 \neq 0$, a new feature suggested by the current neutrino oscillation data is a sizeable deviation of the atmospheric neutrino mixing angle $\theta_{23}$ from $\pi/4$. Using the fact that the neutrino mixing matrix $U = U^\dagger_{e}U_{\nu}$, where $U_{e}$ and $U_{\nu}$ result from the diagonalisation of the charged lepton and neutrino mass matrices, and assuming that $U_{\nu}$ has a i) bimaximal (BM), ii) tri-bimaximal (TBM) form, or else iii) corresponds to the conservation of the lepton charge $L' = L_e - L_\mu - L_{\tau}$ (LC), we investigate quantitatively what are the minimal forms of $U_e$, in terms of angles and phases it contains, that can provide the requisite corrections to $U_{\nu}$ so that $\theta_{13}$, $\theta_{23}$ and the solar neutrino mixing angle $\theta_{12}$ have values compatible with the current data. In the case of the ``standard'' ordering of the 12 and the 23 rotations in $U_e$, the Dirac CP violation phase $\delta$, present in the PMNS matrix $U$, is predicted to have a value in a narrow interval around i) $\delta \cong \pi$ in the BM (or LC) case, ii) $\delta \cong 3\pi/2$ or $\pi/2$ in the TBM case, the CP conserving values $\delta = 0, \pi, 2\pi$ being excluded in the TBM case at more than $4\sigma$.
|
{"url":"http://research.ipmu.jp/seminar/?seminar_id=1161","timestamp":"2024-11-14T18:15:18Z","content_type":"text/html","content_length":"15043","record_id":"<urn:uuid:d768b345-1786-41fe-996d-965ec583257e>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00562.warc.gz"}
|
What is a Histogram and What are the Things You Should Know About it? - LittleByties
Graphical forms are the basis for representing and analyzing numerical data and the frequency of a collection, and they are easy to understand and assess. One such graphical tool, showing the frequency distribution of the values of the data, is the histogram. The concept forms an important part of statistics and data handling.
Definition of histogram
A histogram is a diagrammatic and graphical representation consisting of rectangles whose areas are proportional to the frequencies and whose widths equal the extent of the class intervals. The representation organizes a group of data points. Though its appearance is similar to that of a bar graph, it is not one. A histogram can be defined as the representation of a grouped frequency distribution according to class intervals.
What is the purpose of using a histogram?
The representation is a well-known statistical pictorial tool used to summarise data on an interval basis. Often used to present descriptive data in detailed and precise ways, a histogram makes the concept simple to understand. The frequency and the height of each rectangle are proportional.
When should you use the concept?
The graph (histogram) works under certain necessary conditions, which are:
• The data are numerical.
• The distribution of the data is to be analysed.
• The process changes from one period to another.
• A determination about the output is needed.
• Customer needs are to be analysed according to the process preferred.
• The processing of the data is to be analysed.
How is a histogram different from a bar graph?
A histogram is a two-dimensional figure, while a bar graph is a representation in a single dimension.
In a histogram, the area of each rectangle measures the frequency, while in a bar graph the length of the bar gives the frequency and the width carries no meaning.
The rectangles of a histogram are continuous and joined, whereas the bars of a bar graph are separated from each other by equal spaces.
The given differences can be assessed by anyone after analysing the pictorial representation of both.
Explaining different types of histograms
The classification is basically made on the basis of the distribution of frequency. Even the distributions are classified as normal, skewed, bimodal, multimodal, comb, edge peak, etc. The various
types of histograms can be named uniform histogram, symmetric, probability, and bimodal histogram.
Uniform histogram
This type of histogram is used when the class intervals are small, the frequencies of the data are approximately the same, and several peaks are involved. Such a representation reflects consistency in the data.
Bimodal histogram
As the name suggests, such a histogram shows two modes on the same graph. Such graphical representations are used to show two different data sets or to compare two different kinds of information. The two modes represent independent data with a gap between them.
Symmetric histogram
A histogram is symmetric if the two halves on either side of a vertical line through its centre are identical in both shape and size. A representation showing perfect symmetry is called a symmetric histogram. (Being symmetric means the left half of the graph is identical to the right half of the same histogram.)
Probability histogram
A probability distribution is represented by a probability histogram. The area of each rectangle equals the corresponding probability of the data, so the heights of the rectangles give the probabilities. Construction of the histogram starts with the selection of class intervals.
Construction of histogram
The following steps make the construction of the histogram simple and easy.
• Mark the class intervals on the horizontal (x) axis and the frequency on the vertical (y) axis.
• Use identical scales on both axes.
• Prefer class intervals that are exclusive in nature.
• Construct rectangles with the class intervals as bases and the corresponding frequencies as heights.
• The height of each rectangle shows its frequency, and the intervals are taken to be equal.
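The binning behind these steps can be sketched in plain Python (the marks data and class intervals here are invented for illustration):

```python
def histogram_counts(data, edges):
    """Frequency of values in each class interval [edges[i], edges[i+1]).

    Intervals are half-open on the right, matching the 'exclusive'
    class intervals recommended above.
    """
    counts = [0] * (len(edges) - 1)
    for x in data:
        for i in range(len(counts)):
            if edges[i] <= x < edges[i + 1]:
                counts[i] += 1
                break
    return counts

marks = [12, 35, 7, 22, 28, 41, 15, 33, 9, 26]
edges = [0, 10, 20, 30, 40, 50]
print(histogram_counts(marks, edges))  # [2, 2, 3, 2, 1]
```

Drawing the histogram is then just a matter of plotting one touching rectangle per interval with these counts as heights.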
In order to have a better understanding of the topic, download Cuemath today. The app also helps you explore various other concepts, like the linear graph. A straight-line graph is known as a linear graph; this type of graph represents linear equations in two variables, showing the relation between two quantities.
|
{"url":"https://littlebyties.com/what-is-a-histogram/","timestamp":"2024-11-07T19:39:25Z","content_type":"text/html","content_length":"112712","record_id":"<urn:uuid:2e22caff-ada0-47e2-9ce7-92bfd4011581>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00677.warc.gz"}
|
Can I define $⊂=$ to show as $subset.eq$?
I am wondering if defining my own shortcuts by linking operators (analogously to how <= is lt.eq) is possible, and if so how. Clearly I can’t simply write
let ⊂= = subset.eq
To define a shortcut for ⊆, you can simply define a variable
#let sse = math.subset.eq
As you said, a variable name must obey specific naming constraints. See
Thanks for your reply! I’m aware of how to define variables, I was just wondering if defining a composition of operators similarly to <= but ⊂= would be possible in some way. Thanks for the link to
the naming constraints, that’s useful. But perhaps there is another way to do this?
hmmm, I don’t think it’s possible in Typst, but you can open an issue at the Typst repo to add new shorthands!
You can use the quick-maths package which traverses the equation content and replaces any direct occurrences of given shorthands with their respective symbols. This method is relatively naive, so it
may break when defining more complex shorthands.
#import "@preview/quick-maths:0.1.0": shorthands
#show: shorthands.with(
  ($⊂=$, $subset.eq$)
)
$ A ⊂= B $
(Note: I am the author of that package)
This is great :-) thanks for the info and your work on the package
|
{"url":"https://forum.typst.app/t/can-i-define-to-show-as-subset-eq/1389","timestamp":"2024-11-03T16:05:58Z","content_type":"text/html","content_length":"21384","record_id":"<urn:uuid:05dc0711-1229-48c2-a74a-f6d86c092374>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00099.warc.gz"}
|
Details of the Mathematical Database Essay and Comic Competition 2005/06 - www.mathdb.org
Details of the Mathematical Database Essay and Comic Competition 2005/06
Details of
Mathematical Database
Website Resource Competition 2006/07
For a printable version of the regulations, please click here
Please click here for the entry form
I. Aim
The competition aims at arousing the interest of secondary school students in mathematics by encouraging them to create web resources related to mathematics. It also aims at broadening students’
horizons in mathematics and promoting the learning of mathematics outside the classroom.
II. Eligibility and Grouping
1. All full-time secondary school students in Hong Kong (except members of ‘Mathematical Database’) are eligible to join the competition.
2. Participants may join as an individual or in groups (up to a maximum of 6 people).
3. The competition is divided into the Junior Section (Secondary 1 to 3) and the Senior Section (Secondary 4 to 7). For group entries, the section will be determined according to the highest form
among all group members.
III. Format of Entries
1. Participants are free to create any entries which can be posted on the web, including articles, book reviews, poems, stories, riddles, comics, movie clips, games, etc. They may also integrate
more than one of the above formats in the same entry.
2. There is no restriction on the topic and level of difficulty of the entries as long as they are related to mathematics.
3. Texts in the entries should be mainly in Chinese or English. Texts in Chinese should mainly consist of formal written Chinese for the ease of reading.
4. Each participant may submit up to three entries, whether they are individual or group entries.
IV. Specifications of Entries
1. Entries may be submitted in the form of electronic files or hard copies.
2. Articles, book reviews, poems, stores or riddles may be saved as .doc or .txt files (.doc is preferred) using word-processing programmes like Microsoft Word, Quick Word and Open Office.
Accompanying figures may be inserted into the .doc file or saved separately as .jpg files. All electronic submissions will be printed on A4-sized paper for adjudication. Such entries may also be
handwritten neatly on A4-sized paper using black or blue ball pen, or printed on A4-sized paper (in case participants have difficulty in inputting mathematical symbols in the computer file, they
may also include handwritten mathematical symbols on the computer printouts). Alternatively, participants may submit such entries in the form of a webpage by putting all files into a folder or
creating a .zip file.
3. Comics may be saved as .gif, .jpg or .bmp files. All electronic submissions will be printed on A4 paper for adjudication. Such entries may also be submitted in hard copies by drawing directly on
A4-sized paper, or printing on A4-sized paper, or creating a homepage, in which case all files should be saved into a folder or put into a .zip file.
4. Movie clips should be saved as .wmv files. Participants are advised to strike a balance between visual effect and limitations of web traffic in choosing the most suitable resolution. If
participants wish to save movie clips into other file formats, they should e-mail mathdb@gmail.com to enquire.
5. Games should be put in the form of a webpage, and all files should be saved into a folder or in the form of a .zip file.
6. If comics, movies and texts are accompanied by separate texts, the same specifications on articles will apply to the texts part.
7. There is no restriction on the number of words, colours, length, etc., though entries should be designed so that they can easily be put on the web.
8. For other types of entries, participants may e-mail mathdb@gmail.com to enquire for any other specifications.
9. Each entry must be accompanied by an entry form during submission. In case a participant submits more than one entry, he/she must submit one entry form for each entry.
V. Submission of Entries
1. Participants may submit their entries with the entry form by e-mailing to mathdb@gmail.com. The size of the e-mail must not exceed 9 MB.
2. Participants may also send their entries or discs containing their entries and the entry form to the following address by mail:
To: Mathematical Database (Website Resource Competition 2006/07) c/o Miss Li Ching Man Department of Mathematics University of Hong Kong
Pokfulam Road, Hong Kong.
3. An e-mail confirmation will be sent within 7 days of the receipt of entries.
4. The deadline for submission is Wednesday, 28th February, 2007, based on the date of the postal chop (in case of hard copy submission) or the time of the e-mail server (in case of electronic submission).
VI. Adjudication
1. All entries must satisfy the required specifications. The criteria for adjudication will be as follows:
(★ indicates the approximate weighting of each adjudication criterion; an ellipsis marks text missing from the original table.)

Articles: Write an article to give an interesting and insightful exposition of a certain topic. Articles which emphasise too much on knowledge (e.g. materials commonly found in textbooks) are not suitable.
• A deep and insightful view of the topic ★★★
• Thought-provoking and … ★★★
• Accuracy and rigour ★★★
• Concise explanation of deep principles ★★
• Precise but concise use of words ★

Book reviews: Pick a book (or even a movie or a TV programme) and describe and comment on its mathematical contents, possibly with your own opinion. A mere description of its contents is not suitable.
• Insightful personal view ★★★
• Introduction of the mathematical contents ★★
• Concise explanation of deep principles ★★
• Accuracy and rigour ★

Stories / Poems / Riddles: Create a story / one or more poems / one or more riddles related to mathematics. In case of poems and riddles, participants may include a separate text for explanation.
• Creativity ★★★
• Relation with … ★★★
• Lively and interesting ★★
• Precise but concise use of words ★

Comics / Movie clips: Create one or a series of comics / a movie clip to bring out a certain theme in mathematics. Preference goes to those comics / movie clips which can bring out the intended message more effectively than using texts. Participants may include a separate text for explanation.
• Creativity ★★★
• Relation with … ★★★
• Effective … ★★
• Lively and interesting ★★
• Artistic and technical … ★

Games: Create a game related to mathematics so that it can be played in common internet browsers. Games which require no additional plug-ins are preferred. Participants may accompany the game with explanations in texts.
• Creativity ★★★
• Relation with … ★★★
• Educational value ★★
• Lively and interesting ★★
• Compatibility in various systems ★
2. If there are other types of entries or entries combining more than one format, a separate set of adjudication criteria will be suitably created.
3. The panel of adjudicators consists of the following members:
• Prof AU Kwok Keung Thomas (Department of Mathematics, CUHK)
• Dr CHEUNG Ka Luen (The Hong Kong Institute of Education)
• Prof CHENG Shiu Yuen (Department of Mathematics, HKUST)
• Prof LI Kin Yin (Department of Mathematics, HKUST)
• Prof SIU Man Keung (Department of Mathematics, HKU)
VII. Announcement of Results
1. Results of the competition will be announced in the Mathematical Database website (http://www.mathdb.org) in April 2007.
2. The Prizing Ceremony will be held in May 2007. Details will also be announced in the Mathematical Database website in due course.
3. Notification letters will be sent to the schools to which the winners belong.
VIII. Awards and Prizes
1. There are gold, silver and bronze awards in each of the Junior and Senior Sections. The number of awards depends on the quality of the entries with no preset limit. The adjudication of entries
from the Junior Section will be based on a slightly lower standard than those from the Senior Section.
2. The criteria for gold, silver and bronze awards are as follows:
• Gold Award: Outstanding performance in all items of the adjudication criteria
• Silver Award: Outstanding performance in most items of the adjudication criteria
• Bronze Award: Good performance in most items of the adjudication criteria
3. Entries with a very impressive performance in some aspects will be presented a Special Award on top of the original award they get.
4. Awardees of the special, gold, silver and bronze awards are entitled to the following prizes:
Special Award: $500 book coupon and a grand prize
Gold Award: $300 book coupon and a souvenir
Silver Award: $200 book coupon and a souvenir
Bronze Award: $100 book coupon and a souvenir
IX. Other Details
1. Entries must be the original work of the participants (quotations of others’ works must be specified) and have not been published before in public means such as newspapers, magazines,
competitions and on the web.
2. Submitted entries will not be returned to the participants.
3. We may ask winners to produce proof of their study grade level.
4. The copyrights of the winning entries are jointly owned by the participants and our website.
5. We reserve the right to publish the winning entries in our website, possibly with suitable modifications.
6. Participants who violate the above regulations may be disqualified.
7. In order to be fair, members of ‘Mathematical Database’ may not enter this competition.
8. For enquiries, feel free to email to mathdb@gmail.com.
X. Appendix: Examples of Entries
Here are some examples of topics for entries. Participants may follow these topics and formats, or they may create an entry of any topic or format that can be released on the web, as long as it is related to mathematics.
• Scoring Systems of Soccer Tournaments – Write an article to discuss the various scoring systems used in different soccer tournaments, and discuss the pros and cons of each.
• The Mathematics of ‘Sudoku’ – Write an article to discuss the mathematics of ‘Sudoku’ or its variations, or write a computer program to solve ‘Sudoku’ problems, or create a ‘Sudoku’ game.
• An Exploration of ‘Geom Lab’ – Create a webpage to discuss experiments in geometry using the ‘Geometric Drawing Pad’ in ‘Mathematical Database’ with demonstrations.
• Create comics or movie clips based on a joke related to mathematics.
• The Mathematics of Da Vinci Code – substantiate the parts in Da Vinci Code that are related to mathematics and include your personal view.
|
{"url":"https://www.mathdb.org/competition/essay/0607/e_competition.htm","timestamp":"2024-11-06T02:33:46Z","content_type":"text/html","content_length":"40121","record_id":"<urn:uuid:6a1e6575-d467-4872-b9ca-0ea4ba0685f2>","cc-path":"CC-MAIN-2024-46/segments/1730477027906.34/warc/CC-MAIN-20241106003436-20241106033436-00696.warc.gz"}
|
A box has dimensions of 19 inches long, 1.7 feet wide, and 6 inches high. What is the volume of the box? | Socratic
1 Answer
The volume is $2325.6$ cubic inches
A box is a rectangular prism.
The volume is calculated by the formula
length x width x height
$V = l w h$
All dimensions must be in the same unit, so first convert the width: $1.7$ feet $= 1.7 \times 12 = 20.4$ inches.
$l = 19$in
$w = 20.4$in
$h = 6$in
$\left(19\right) \left(20.4\right) \left(6\right) = V$
$2325.6 = V$
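For anyone checking the arithmetic, the same calculation in Python, converting units before multiplying:

```python
# Convert every dimension to inches before applying V = l * w * h
length_in = 19.0
width_in = 1.7 * 12.0   # 1.7 feet = 20.4 inches
height_in = 6.0

volume = length_in * width_in * height_in
print(volume)  # ~2325.6 cubic inches
```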
Who's That Mathematician? Paul R. Halmos Collection - Page 26
For more information about Paul R. Halmos (1916-2006) and about the Paul R. Halmos Photograph Collection, please see the introduction to this article on page 1. A new page featuring six photographs
will be posted at the start of each week during 2012.
Halmos photographed probabilist Mark Kac (1914-1984) in 1962. According to J. J. O’Connor and E. F. Robertson of the MacTutor Archive, “Mark Kac pioneered the modern development of mathematical
probability, in particular its applications to statistical physics.” Kac earned his Ph.D. at the University of Lwów (Lvov) in Poland in 1937 under the direction of Hugo Steinhaus. He managed to
escape Poland in 1938, when he took up a scholarship at Johns Hopkins University in Baltimore. He was on the mathematics faculty at Cornell University in Ithaca, New York, from 1939 to 1961,
Rockefeller University in New York City from 1961 to 1981, and the University of Southern California in Los Angeles from 1981 onward. (Source: MacTutor Archive)
Richard Kadison (1925-2018) was photographed by Halmos in December of 1950 during the AMS Winter Meeting at the University of Florida in Gainesville, to which he carpooled from the Institute for
Advanced Study in Princeton with Shizuo Kakutani, Irving Segal, and Alan Hoffman. He remembers well that Halmos was also at the meeting:
Paul was sitting next to me as I waited for my turn to speak [during a session of 10-minute talks]. When I was about to leave my seat to give my talk, Paul tugged at my sleeve and said, out of
the side of his mouth in a low voice, with just the hint of a growl, "Don't give an uninvited address." That put me in a wonderfully humorous mood for my lecture!
Kadison wasn't the only one who drove from New Jersey to Florida for the meeting; apparently, IAS permanent faculty member John von Neumann did, too:
A little later at that conference, Dec. 28, von Neumann's birthday, arrived. I knew von Neumann from my stay at the Institute, but only in a formal way. Paul and Shizuo were dear friends of his.
At their suggestion I was invited along to his "birthday party." The four of us drove in von Neumann's large Cadillac to a nearby county that wasn't "dry" for a small celebration; it was the
thrill of a lifetime for me.
Kadison earned his Ph.D. from the University of Chicago (where Halmos was on the faculty) in 1950 with the dissertation “A Representation Theory for Commutative Topological Algebra,” written under
advisor Marshall Stone (pictured on page 4 of this collection), and visited IAS during 1950-52. After teaching at Columbia University from 1952 to 1964, he became Kuemmerle Professor of Mathematics
at the University of Pennsylvania. According to the citation for his 1999 AMS Steele Prize for Lifetime Achievement, “For almost half a century, Dick Kadison has been one of the world leaders in the
subject of operator algebras, and the tremendous flourishing of this subject in the last thirty years is largely due to his efforts.” (Sources: IAS; UPenn Mathematics; “1999 Steele Prizes,” AMS
Notices 46:4 (April 1999), 461-462)
Halmos photographed functional analyst and ergodic theorist Shizuo Kakutani (1911-2004) in 1965. Kakutani earned his Ph.D. in 1941 from the University of Osaka, Japan, where he had taught since 1934.
At the invitation of Hermann Weyl, he spent the two years 1940-42 at the Institute for Advanced Study (IAS) in Princeton, New Jersey, where he met Halmos (page 1 of this collection), Warren Ambrose (
page 1), Joseph Doob (pages 1, 2, 12, 14), Paul Erdös (pages 3, 14), and John von Neumann. Despite Japan and the U.S. being at war beginning Dec. 7, 1941, Kakutani was able to complete his two-year
visit to IAS and return to Japan in 1942. He was on the mathematics faculty at the University of Osaka from 1942 to 1948, at IAS during 1948-49, and at Yale University, where he advised at least 32
Ph.D. students, from 1949 onward. (Sources: MacTutor Archive, Mathematics Genealogy Project)
Algebraist Irving Kaplansky (1917-2006) was photographed by Halmos in July of 1983, probably at the University of Chicago, where Kaplansky was professor of mathematics from 1945 to 1984. Halmos was
also a faculty member at Chicago from 1946 to 1961. Kaplansky, who advised at least 55 Ph.D. students at Chicago, earned his own Ph.D. from Harvard University in 1941 with the dissertation “Maximal
Fields with Valuations,” written under advisor Saunders Mac Lane. He remained at Harvard until 1944, then spent a year at Columbia doing war work before joining the Chicago faculty in 1945. In 1984,
he became director of the Mathematical Sciences Research Institute (MSRI), which at that time was housed at the University of California, Berkeley. According to O’Connor and Robertson of the MacTutor
Archive, “Kaplansky has made major contributions to ring theory, group theory and field theory.” He was president of the American Mathematical Society during 1985 and 1986. (Sources: MacTutor Archive
, AMS Presidents)
Halmos photographed mathematical physicist and operator theorist Tosio Kato (1917-1999) in Chicago on May 20, 1968. Kato earned his D.Sci. in physics in 1951 from the University of Tokyo with the
dissertation “On the convergence of the perturbation method” on partial differential equations. He was a physics faculty member at the University of Tokyo from 1951 to 1962. In 1962, he moved to the
University of California, Berkeley, Mathematics Department, where he spent the rest of his career, advising at least 21 Ph.D. students and publishing his well known Perturbation theory for linear
operators in 1966. (Source: MacTutor Archive)
Halmos photographed Yitzhak Katznelson in August of 1960 at a location he identified only as “Dunes” on the back of the photo. Halmos was still at the University of Chicago at the time, and “Dunes”
in 1960s Chicago probably referred to the Indiana Dunes State Park on Lake Michigan. Born in Jerusalem, Katznelson earned his Ph.D. from the University of Paris in 1959 under advisor Szolem
Mandelbrojt. After teaching at UC Berkeley, Hebrew University (in Jerusalem), Yale, and Stanford, he joined the mathematics faculty of Hebrew University in 1966. In 1988, he moved to Stanford
University, where he is now professor emeritus of mathematics and lists his research interests as harmonic analysis and ergodic theory. He is the author of the AMS Steele Prize-winning book An
Introduction to Harmonic Analysis. (Sources: Mathematics Genealogy Project; Stanford Mathematics; “2002 Steele Prizes,” AMS Notices 49:4 (April 2002), 466-467)
For an introduction to this article and to the Paul R. Halmos Photograph Collection, please see page 1. Watch for a new page featuring six new photographs each week during 2012.
Regarding sources for this page: Information for which a source is not given either appeared on the reverse side of the photograph or was obtained from various sources during 2011-12 by archivist
Carol Mead of the Archives of American Mathematics, Dolph Briscoe Center for American History, University of Texas, Austin.
A post by Codie Wood, PhD student on the Compass programme.
This blog post is an introduction to structure preserving estimation (SPREE) methods. These methods form the foundation of my current work with the Office for National Statistics (ONS), where I am
undertaking a six-month internship as part of my PhD. During this internship, I am focusing on the use of SPREE to provide small area estimates of population characteristic counts and proportions.
Small area estimation
Small area estimation (SAE) refers to the collection of methods used to produce accurate and precise population characteristic estimates for small population domains. Examples of domains may include
low-level geographical areas, or population subgroups. An example of an SAE problem would be estimating the national population breakdown in small geographical areas by ethnic group [2015_Luna].
Demographic surveys with a large enough scale to provide high-quality direct estimates at a fine-grain level are often expensive to conduct, and so smaller sample surveys are often conducted instead.
SAE methods work by drawing information from different data sources and similar population domains in order to obtain accurate and precise model-based estimates where sample counts are too small for
high quality direct estimates. We use the term small area to refer to domains where we have little or no data available in our sample survey.
SAE methods are frequently relied upon for population characteristic estimation, particularly as there is an increasing demand for information about local populations in order to ensure correct
allocation of resources and services across the nation.
Structure preserving estimation
Structure preserving estimation (SPREE) is one of the tools used within SAE to provide population composition estimates. We use the term composition here to refer to a population break down into a
two-way contingency table containing positive count values. Here, we focus on the case where we have a population broken down into geographical areas (e.g. local authority) and some subgroup or
category (e.g. ethnic group or age).
Original SPREE-type estimators, as proposed in [1980_Purcell], can be used in the case when we have a proxy data source for our target composition, containing information for the same set of areas and
categories but that may not entirely accurately represent the variable of interest. This is usually because the data are outdated or have a slightly different variable definition than the target.
We also incorporate benchmark estimates of the row and column totals for our composition of interest, taken from trusted, quality assured data sources and treated as known values. This ensures
consistency with higher level known population estimates. SPREE then adjusts the proxy data to the estimates of the row and column totals to obtain the improved estimate of the target composition.
An illustration of the data required to produce SPREE-type estimates.
In an extension of SPREE, known as generalised SPREE (GSPREE) [2004_Zhang], the proxy data can also be supplemented by sample survey data to generate estimates that are less subject to bias and
uncertainty than it would be possible to generate from each source individually. The survey data used is assumed to be a valid measure of the target variable (i.e. it has the same definition and is
not out of date), but due to small sample sizes may have a degree of uncertainty or bias for some cells.
The GSPREE method establishes a relationship between the proxy data and the survey data, with this relationship being used to adjust the proxy compositions towards the survey data.
An illustration of the data required to produce GSPREE estimates.
GSPREE is not the only extension to SPREE-type methods, but those are beyond the scope of this post. Further extensions such as Multivariate SPREE are discussed in detail in [2016_Luna].
Original SPREE methods
First, we describe original SPREE-type estimators. For these estimators, we require only well-established estimates of the margins of our target composition.
We will denote the target composition of interest by $\mathbf{Y} = (Y_{aj})$, where $Y_{aj}$ is the cell count for small area $a = 1,\dots,A$ and group $j = 1,\dots,J$. We can write $\mathbf Y$ in the
form of a saturated log-linear model as the sum of four terms,
$$ \log Y_{aj} = \alpha_0^Y + \alpha_a^Y + \alpha_j^Y + \alpha_{aj}^Y.$$
There are multiple ways to write this parameterisation, and here we use the centered constraints parameterisation given by $$\alpha_0^Y = \frac{1}{AJ}\sum_a\sum_j\log Y_{aj},$$ $$\alpha_a^Y = \frac{1}{J}\sum_j\log Y_{aj} - \alpha_0^Y,$$ $$\alpha_j^Y = \frac{1}{A}\sum_a\log Y_{aj} - \alpha_0^Y,$$ $$\alpha_{aj}^Y = \log Y_{aj} - \alpha_0^Y - \alpha_a^Y - \alpha_j^Y,$$
which satisfy the constraints $\sum_a \alpha_a^Y = \sum_j \alpha_j^Y = \sum_a \alpha_{aj}^Y = \sum_j \alpha_{aj}^Y = 0.$
Using this expression, we can decompose $\mathbf Y$ into two structures:
1. The association structure, consisting of the set of $AJ$ interaction terms $\alpha_{aj}^Y$ for $a = 1,\dots,A$ and $j = 1,\dots,J$. This determines the relationship between the rows (areas) and
columns (groups).
2. The allocation structure, consisting of the sets of terms $\alpha_0^Y, \alpha_a^Y,$ and $\alpha_j^Y$ for $a = 1,\dots,A$ and $j = 1,\dots,J$. This determines the size of the composition, and
differences between the sets of rows (areas) and columns (groups).
Suppose we have a proxy composition $\mathbf X$ of the same dimensions as $\mathbf Y$, and we have the sets of row and column margins of $\mathbf Y$ denoted by $\mathbf Y_{a+} = (Y_{1+}, \dots, Y_
{A+})$ and $\mathbf Y_{+j} = (Y_{+1}, \dots, Y_{+J})$, where $+$ substitutes the index being summed over.
We can then use iterative proportional fitting (IPF) to produce an estimate $\widehat{\mathbf Y}$ of $\mathbf Y$ that preserves the association structure observed in the proxy composition $\mathbf
X$. The IPF procedure is as follows:
1. Rescale the rows of $\mathbf X$ as $$ \widehat{Y}_{aj}^{(1)} = X_{aj} \frac{Y_{+j}}{X_{+j}},$$
2. Rescale the columns of $\widehat{\mathbf Y}^{(1)}$ as $$ \widehat{Y}_{aj}^{(2)} = \widehat{Y}_{aj}^{(1)} \frac{Y_{a+}}{\widehat{Y}_{a+}^{(1)}},$$
3. Rescale the rows of $\widehat{\mathbf Y}^{(2)}$ as $$ \widehat{Y}_{aj}^{(3)} = \widehat{Y}_{aj}^{(2)} \frac{Y_{+j}}{\widehat{Y}_{+j}^{(2)}}.$$
Steps 2 and 3 are then repeated until convergence occurs, and we have a final composition estimate denoted by $\widehat{\mathbf Y}^S$ which has the same association structure as our proxy
composition, i.e. we have $\alpha_{aj}^X = \alpha_{aj}^Y$ for all $a \in \{1,\dots,A\}$ and $j \in \{1,\dots,J\}.$ This is a key assumption of the SPREE implementation, which in practice is often
restrictive, motivating a generalisation of the method.
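A minimal NumPy sketch of this IPF procedure (the function name `spree_ipf`, the proxy composition, and the benchmark margins are all ours, for illustration):

```python
import numpy as np

def spree_ipf(X, row_totals, col_totals, max_iter=1000, tol=1e-12):
    """Original SPREE: adjust the proxy composition X by iterative
    proportional fitting until its margins match the benchmark
    totals, preserving X's association structure."""
    Y_hat = np.asarray(X, dtype=float).copy()
    for _ in range(max_iter):
        Y_hat *= col_totals / Y_hat.sum(axis=0)             # match column totals
        Y_hat *= (row_totals / Y_hat.sum(axis=1))[:, None]  # match row totals
        if np.allclose(Y_hat.sum(axis=0), col_totals, rtol=tol):
            break
    return Y_hat

# Made-up proxy composition and benchmark margins
X = np.array([[10., 20.],
              [30., 40.]])
Y_hat = spree_ipf(X, row_totals=np.array([35., 65.]),
                     col_totals=np.array([45., 55.]))
```

Because each step only rescales whole rows or whole columns, the cross-product (odds) ratios of the proxy composition carry over unchanged to the estimate, which is exactly the sense in which the association structure is preserved.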
Generalised SPREE methods
If we can no longer assume that the proxy composition and target compositions have the same association structure, we instead use the GSPREE method first introduced in [2004_Zhang], and incorporate
survey data into our estimation process.
The GSPREE method relaxes the assumption that $\alpha_{aj}^X = \alpha_{aj}^Y$ for all $a \in \{1,\dots,A\}$ and $j \in \{1,\dots,J\},$ instead imposing the structural assumption $\alpha_{aj}^Y = \beta \alpha_{aj}^X$, i.e. the association structure of the proxy and target compositions are proportional to one another. As such, we note that SPREE is a particular case of GSPREE where $\beta = 1$.
Continuing with our notation from the previous section, we proceed to estimate $\beta$ by modelling the relationship between our target and proxy compositions as a generalised linear structural model
(GLSM) given by
$$\tau_{aj}^Y = \lambda_j + \beta \tau_{aj}^X,$$ with $\sum_j \lambda_j = 0$, and where $$ \begin{align} \tau_{aj}^Y &= \log Y_{aj} - \frac{1}{J}\sum_j\log Y_{aj},\\
&= \alpha_{aj}^Y + \alpha_j^Y,
\end{align}$$ and analogously for $\mathbf X$.
It is shown in [2016_Luna] that fitting this model is equivalent to fitting a Poisson generalised linear model to our cell counts, with a $\log$ link function. We use the association structure of our
proxy data, as well as categorical variables representing the area and group of the cell, as our covariates. Then we have a model given by $$\log Y_{aj} = \gamma_a + \tilde{\lambda}_j + \tilde{\beta}
\alpha_{aj}^X,$$ with $\gamma_a = \alpha_0^Y + \alpha_a^Y$, $\tilde\lambda_j = \alpha_j^Y$ and $\tilde\beta \alpha_{aj}^X = \alpha_{aj}^Y.$
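A Poisson GLM of this form can be sketched in plain NumPy via iteratively reweighted least squares. Everything below is invented for illustration (the design, the data, and the true $\beta$); it is a sketch of the model class, not the ONS implementation:

```python
import numpy as np

def fit_poisson_glm(y, Z, n_iter=100):
    """Poisson GLM with log link, fitted by iteratively
    reweighted least squares (Newton's method)."""
    coef = np.zeros(Z.shape[1])
    for _ in range(n_iter):
        mu = np.exp(Z @ coef)                  # fitted means
        working = Z @ coef + (y - mu) / mu     # working response
        WZ = mu[:, None] * Z                   # weights = Poisson variance
        coef = np.linalg.solve(Z.T @ WZ, WZ.T @ working)
    return coef

# Toy GSPREE-style design: area dummies, group dummies (one level
# dropped), and the proxy association structure alpha_X, whose
# coefficient is the estimate of beta.
A, J = 4, 3
rng = np.random.default_rng(0)
logX = rng.uniform(1.5, 4.0, size=(A, J))      # made-up log proxy counts
alpha_X = (logX - logX.mean(axis=1, keepdims=True)
                - logX.mean(axis=0, keepdims=True) + logX.mean())

area = np.repeat(np.arange(A), J)
group = np.tile(np.arange(J), A)
Z = np.column_stack([
    (area[:, None] == np.arange(A)).astype(float),       # gamma_a
    (group[:, None] == np.arange(1, J)).astype(float),   # lambda_j
    alpha_X.ravel(),
])
beta_true = 0.7
theta = np.concatenate([rng.normal(scale=0.5, size=A + J - 1), [beta_true]])
y = np.exp(Z @ theta)            # noiseless counts, so beta is recovered
beta_hat = fit_poisson_glm(y, Z)[-1]
print(beta_hat)
```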
When fitting the model we use survey data $\tilde{\mathbf Y}$ as our response variable, and are then able to obtain a set of unbenchmarked estimates of our target composition. The GSPREE method then benchmarks these to estimates of the row and column totals, following a procedure analogous to that undertaken in the original SPREE methodology, to provide a final set of estimates for our target composition.
ONS applications
The ONS has used GSPREE to provide population ethnicity composition estimates in intercensal years, where the detailed population estimates resulting from the census are outdated [2015_Luna]. In this
case, the census data is considered the proxy data source. More recent works have also used GSPREE to estimate counts of households and dwellings in each tenure at the subnational level during
intercensal years [2023_ONS].
My work with the ONS has focussed on extending the current workflows and systems in place to implement these methods in a reproducible manner, allowing them to be applied to a wider variety of
scenarios with differing data availability.
[1980_Purcell] Purcell, Noel J., and Leslie Kish. 1980. ‘Postcensal Estimates for Local Areas (Or Domains)’. International Statistical Review / Revue Internationale de Statistique 48 (1): 3–18.
[2004_Zhang] Zhang, Li-Chun, and Raymond L. Chambers. 2004. ‘Small Area Estimates for Cross-Classifications’. Journal of the Royal Statistical Society Series B: Statistical Methodology 66 (2):
479–96. https://doi.org/10/fq2ftt.
[2015_Luna] Luna Hernández, Ángela, Li-Chun Zhang, Alison Whitworth, and Kirsten Piller. 2015. ‘Small Area Estimates of the Population Distribution by Ethnic Group in England: A Proposal Using
Structure Preserving Estimators’. Statistics in Transition New Series and Survey Methodology 16 (December). https://doi.org/10/gs49kq.
[2016_Luna] Luna Hernández, Ángela. 2016. ‘Multivariate Structure Preserving Estimation for Population Compositions’. PhD thesis, University of Southampton, School of Social Sciences. https://
[2023_ONS] Office for National Statistics (ONS), released 17 May 2023, ONS website, article, Tenure estimates for households and dwellings, England: GSPREE compared with Census 2021 data
Student Perspectives: Impurity Identification in Oligonucleotide Drug Samples
A post by Harry Tata, PhD student on the Compass programme.
Oligonucleotides in Medicine
Oligonucleotide therapies are at the forefront of modern pharmaceutical research and development, with recent years seeing major advances in treatments for a variety of conditions. Oligonucleotide
drugs for Duchenne muscular dystrophy (FDA approved) [1], Huntington’s disease (Phase 3 clinical trials) [2], and Alzheimer’s disease [3] and amyotrophic lateral sclerosis (early-phase clinical
trials) [4] show their potential for tackling debilitating and otherwise hard-to-treat conditions. With continuing development of synthetic oligonucleotides, analytical techniques such as mass
spectrometry must be tailored to these molecules and keep pace with the field.
Working in conjunction with AstraZeneca, this project aims to advance methods for impurity detection and quantification in synthetic oligonucleotide mass spectra. In this blog post we apply a
regularised version of the Richardson-Lucy algorithm, an established technique for image deconvolution, to oligonucleotide mass spectrometry data. This allows us to attribute signals in the data to
specific molecular fragments, and therefore to detect impurities in oligonucleotide synthesis.
Oligonucleotide Fragmentation
If we have attempted to synthesise an oligonucleotide $\mathcal O$ with a particular sequence, we can take a sample from this synthesis and analyse it via mass spectrometry. In this process,
molecules in the sample are first fragmented — broken apart into ions — and these charged fragments are then passed through an electromagnetic field. The trajectory of each fragment through this
field depends on its mass/charge ratio (m/z), so measuring these trajectories (e.g. by measuring time of flight before hitting some detector) allows us to calculate the m/z of fragments in the
sample. This gives us a discrete mass spectrum: counts of detected fragments (intensity) across a range of m/z bins [5].
To get an idea of how much of $\mathcal O$ is in a sample, and what impurities might be present, we first need to consider what fragments $\mathcal O$ will produce. Oligonucleotides are short strands
of DNA or RNA; polymers with a backbone of sugars (such as ribose in RNA) connected by linkers (e.g. a phosphodiester bond), where each sugar has an attached base which encodes genetic information
On each monomer, there are two sites where fragmentation is likely to occur: at the linker (backbone cleavage) or between the base and sugar (base loss). Specifically, depending on which bond within
the linker is broken, there are four modes of backbone cleavage [7,8].
We include in $\mathcal F$ every product of a single fragmentation of $\mathcal O$ — any of the four backbone cleavage modes or base loss anywhere along the nucleotide — as well as the results of
every combination of two fragmentations (different cleavage modes at the same linker are mutually exclusive).
Sparse Richardson-Lucy Algorithm
Suppose we have a chemical sample which we have fragmented and analysed by mass spectrometry. This gives us a spectrum across n bins (each bin corresponding to a small m/z range), and we represent
this spectrum with the column vector $\mathbf{b}\in\mathbb R^n$, where $b_i$ is the intensity in the $i^{th}$ bin. For a set $\{f_1,\ldots,f_m\}=\mathcal F$ of possible fragments, let $x_j$ be the
amount of $f_j$ that is actually present. We would like to estimate the amounts of each fragment based on the spectrum $\mathbf b$.
If we had a sample comprising a unit amount of a single fragment $f_j$, so $x_j=1$ and $x_{k\ne j}=0,$ and this produced a spectrum $\begin{pmatrix}a_{1j}&\ldots&a_{nj}\end{pmatrix}^T$, we can say the
intensity contributed to bin $i$ by $x_j$ is $a_{ij}.$ In mass spectrometry, the intensity in a single bin due to a single fragment is linear in the amount of that fragment, and the intensities in a
single bin due to different fragments are additive, so in some general spectrum we have $b_i=\sum_j x_ja_{ij}.$
By constructing a library matrix $\mathbf{A}\in\mathbb R^{n\times m}$ such that $\{\mathbf A\}_{ij}=a_{ij}$ (so the columns of $\mathbf A$ correspond to fragments in $\mathcal F$), then in ideal
conditions the vector of fragment amounts $\mathbf x=\begin{pmatrix}x_1&\ldots&x_m\end{pmatrix}^T$ solves $\mathbf{Ax}=\mathbf{b}$. In practice this exact solution is not found — due to experimental
noise and potentially because there are contaminant fragments in the sample not included in $\mathcal F$ — and we instead make an estimate $\mathbf {\hat x}$ for which $\mathbf{A\hat x}$ is close to
$\mathbf b$.
Note that the columns of $\mathbf A$ correspond to fragments in $\mathcal F$: the values in a single column represent intensities in each bin due to a single fragment only. We $\ell_1$-normalise
these columns, meaning the total intensity (over all bins) of each fragment in the library matrix is uniform, and so the values in $\mathbf{\hat x}$ can be directly interpreted as relative abundances
of each fragment.
The observed intensities — as counts of fragments incident on each bin — are realisations of latent Poisson random variables. Assuming these variables are i.i.d., it can be shown that the estimate of
$\mathbf{x}$ which maximises the likelihood of the system is approximated by the iterative formula
$\mathbf {\hat{x}}^{(t+1)}=\left(\mathbf A^T \frac{\mathbf b}{\mathbf{A\hat x}^{(t)}}\right)\odot \mathbf{\hat x}^{(t)}.$
Here, quotients and the operator $\odot$ represent (respectively) elementwise division and multiplication of two vectors. This is known as the Richardson-Lucy algorithm [9].
In practice, when we enumerate oligonucleotide fragments to include in $\mathcal F$, most of these fragments will not actually be produced when the oligonucleotide passes through a mass spectrometer;
there is a large space of possible fragments and (beyond knowing what the general fragmentation sites are) no well-established theory allowing us to predict, for a new oligonucleotide, which
fragments will be abundant or negligible. This means we seek a sparse estimate, where most fragment abundances are zero.
The Richardson-Lucy algorithm, as a maximum likelihood estimate for Poisson variables, is analogous to ordinary least squares regression for Gaussian variables. Likewise lasso regression — a regularised least squares regression which favours sparse estimates, interpretable as a maximum a posteriori estimate with Laplace priors — has an analogue in the sparse Richardson-Lucy algorithm:
$\mathbf {\hat{x}}^{(t+1)}=\left(\mathbf A^T \frac{\mathbf b}{\mathbf{A\hat x}^{(t)}}\right)\odot \frac{ \mathbf{\hat x}^{(t)}}{\mathbf 1 + \lambda},$
where $\lambda$ is a regularisation parameter [10].
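The update above is simple to implement in NumPy. The toy library matrix and spectrum below are invented for illustration, and the function name is ours:

```python
import numpy as np

def sparse_richardson_lucy(A, b, lam=0.01, n_iter=2000):
    """Regularised Richardson-Lucy update for non-negative fragment
    abundances x with A x ~ b. Columns of A are l1-normalised fragment
    spectra; lam > 0 drives small abundances towards zero."""
    x = np.full(A.shape[1], b.sum() / A.shape[1])  # flat positive start
    for _ in range(n_iter):
        ratio = b / np.maximum(A @ x, 1e-12)       # guard against 0/0
        x = (A.T @ ratio) * x / (1.0 + lam)
    return x

# Two-fragment toy library (columns l1-normalised) and a spectrum
# generated from the first fragment only.
A = np.array([[0.7, 0.1],
              [0.2, 0.3],
              [0.1, 0.6]])
b = 100.0 * A[:, 0]
x_hat = sparse_richardson_lucy(A, b)
print(x_hat)  # abundance concentrates on the first fragment
```

Note that the multiplicative form keeps every abundance non-negative automatically, which is why no explicit projection step is needed.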
Library Generation
For each oligonucleotide fragment $f\in\mathcal F$, we smooth and bin the m/z values of the most abundant isotopes of $f$, and store these values in the columns of $\mathbf A$. However, if these are
the only fragments in $\mathcal F$ then impurities will not be identified: the sparse Richardson-Lucy algorithm will try to fit oligonucleotide fragments to every peak in the spectrum, even ones that
correspond to fragments not from the target oligonucleotide. Therefore we also include ‘dummy’ fragments corresponding to single peaks in the spectrum — the method will fit these to
non-oligonucleotide peaks, showing the locations of any impurities.
For a mass spectrum from a sample containing a synthetic oligonucleotide, we generated a library of oligonucleotide and dummy fragments as described above, and applied the sparse Richardson-Lucy
algorithm. Below, the model fit is plotted alongside the (smoothed, binned) spectrum and the ten most abundant fragments as estimated by the model. These fragments are represented as bars with binned
m/z at the peak fragment intensity, and are separated into oligonucleotide fragments and dummy fragments indicating possible impurities. All intensities and abundances are Anscombe transformed ($x\rightarrow\sqrt{x+3/8}$) for clarity.
As the oligonucleotide in question is proprietary, its specific composition and fragmentation is not mentioned here, and the bins plotted have been transformed (without changing the shape of the
data) so that individual fragment m/z values are not identifiable.
We see the data is fit extremely closely, and that the spectrum is quite clean: there is one very pronounced peak roughly in the middle of the m/z range. This peak corresponds to one of the
oligonucleotide fragments in the library, although there is also an abundant dummy fragment slightly to the left inside the main peak. Fragment intensities in the library matrix are smoothed, and it
may be the case that the smoothing here is inappropriate for the observed peak, hence other fragments being fit at the peak edge. Investigating these effects is a target for the rest of the project.
We also see several smaller peaks, most of which are modelled with oligonucleotide fragments. One of these peaks, at approximately bin 5352, has a noticeably worse fit if excluding dummy fragments
from the library matrix (see below). Using dummy fragments improves this fit and indicates a possible impurity. Going forward, understanding and quantification of these impurities will be improved by
including other common fragments in the library matrix, and by grouping fragments which correspond to the same molecules.
[1] Junetsu Igarashi, Yasuharu Niwa, and Daisuke Sugiyama. “Research and Development of Oligonucleotide Therapeutics in Japan for Rare Diseases”. In: Future Rare Diseases 2.1 (Mar. 2022), FRD19.
[2] Karishma Dhuri et al. “Antisense Oligonucleotides: An Emerging Area in Drug Discovery and Development”. In: Journal of Clinical Medicine 9.6 (6 June 2020), p. 2004.
[3] Catherine J. Mummery et al. “Tau-Targeting Antisense Oligonucleotide MAPTRx in Mild Alzheimer’s Disease: A Phase 1b, Randomized, Placebo-Controlled Trial”. In: Nature Medicine (Apr. 24, 2023),
pp. 1–11.
[4] Benjamin D. Boros et al. “Antisense Oligonucleotides for the Study and Treatment of ALS”. In: Neurotherapeutics: The Journal of the American Society for Experimental NeuroTherapeutics 19.4 (July
2022), pp. 1145–1158.
[5] Ingvar Eidhammer et al. Computational Methods for Mass Spectrometry Proteomics. John Wiley & Sons, Feb. 28, 2008. 299 pp.
[6] Harri Lönnberg. Chemistry of Nucleic Acids. De Gruyter, Aug. 10, 2020.
[7] S. A. McLuckey, G. J. Van Berkel, and G. L. Glish. “Tandem Mass Spectrometry of Small, Multiply Charged Oligonucleotides”. In: Journal of the American Society for Mass Spectrometry 3.1 (Jan.
1992), pp. 60–70.
[8] Scott A. McLuckey and Sohrab Habibi-Goudarzi. “Decompositions of Multiply Charged Oligonucleotide Anions”. In: Journal of the American Chemical Society 115.25 (Dec. 1, 1993), pp. 12085–12095.
[9] Mario Bertero, Patrizia Boccacci, and Valeria Ruggiero. Inverse Imaging with Poisson Data: From Cells to Galaxies. IOP Publishing, Dec. 1, 2018.
[10] Elad Shaked, Sudipto Dolui, and Oleg V. Michailovich. “Regularized Richardson-Lucy Algorithm for Reconstruction of Poissonian Medical Images”. In: 2011 IEEE International Symposium on Biomedical
Imaging: From Nano to Macro. Mar. 2011, pp. 1754–1757.
Compass students attending the Workshop on Functional Inference and Machine Intelligence (FIMI) at ISM Tokyo
A post by Compass CDT students Edward Milsom, Jake Spiteri, Jack Simons, and Sam Stockman.
We (Edward Milsom, Jake Spiteri, Jack Simons, Sam Stockman) attended the 2023 Workshop on Functional Inference and Machine Intelligence (FIMI) taking place on the 14, 15 and 16th of March at the
Institute of Statistical Mathematics in Tokyo, Japan. Our attendance to the workshop was to further collaborative ties between the two institutions. The in-person participants included many
distinguished academics from around Japan as well as our very own Dr Song Liu. Due to the workshop's modest size, there was an intimate atmosphere which nurtured many productive research discussions.
Whilst staying in Tokyo, we inevitably sampled some Japanese culture, from Izakayas to cherry blossoms and sumo wrestling!
We thought we’d share some of our thoughts and experiences. We’ll first go through some of our most memorable talks, and then talk about some of our activities outside the workshop.
Sho Sonoda – Ridgelet Transforms for Neural Networks on Manifolds and Hilbert Spaces
We particularly enjoyed the talk given by Sho Sonoda, a Research Scientist from the Deep Learning Theory group at Riken AIP on “Ridgelet Transforms for Neural Networks on Manifolds and Hilbert
Spaces.” Sonoda’s research aims to demystify the black box nature of neural networks, shedding light on how they work and their universal approximation capabilities. His talk provided valuable
insights into the integral representations of neural networks, and how they can be represented using ridgelet transforms. Sonoda presented a reconstruction formula from which we see that if a neural
network can be represented using ridgelet transforms, then it is a universal approximator. He went on to demonstrate that various types of networks, such as those on finite fields, group
convolutional neural networks (GCNNs), and networks on manifolds and Hilbert spaces, can be represented in this manner and are thus universal approximators. Sonoda’s work improves upon existing
universality theorems by providing a more unified and direct approach, as opposed to the previous case-by-case methods that relied on manual adjustments of network parameters or indirect conversions
of (G)CNNs into other universal approximators, such as invariant polynomials and fully-connected networks. Sonoda’s work is an important step toward a more transparent and comprehensive understanding
of neural networks.
Greg Yang – The unreasonable effectiveness of mathematics in large scale deep learning
Greg Yang is a researcher at Microsoft Research who is working on a framework for understanding neural networks called “tensor programs”. Similar to Neural Tangent Kernels and Neural Network Gaussian
Processes, the tensor program framework allows us to consider neural networks in the infinite-width limit, where it becomes possible to make statements about the properties of very wide networks.
However, tensor programs aim to unify existing work on infinite-width neural networks by allowing one to take the infinite limit of a much wider range of neural network architectures using one single framework.
In his talk, Yang discussed his most recent work in this area, concerning the "maximal update parametrisation". In short, they show that in this parametrisation, the optimal hyperparameters of very
wide neural networks are the same as those for much smaller neural networks. This means that hyperparameter search can be done using small, cheap models, and then applied to very large models like
GPT-3, where hyperparameter search would be too expensive. The result is summarised in this figure from their paper “Tensor Programs V: Tuning Large Neural Networks via Zero-Shot Hyperparameter
Transfer”, which shows how this is not possible in the standard parametrisation. This work was only possible by building upon the tensor program framework, thereby demonstrating the value of having a
solid theoretical understanding of neural networks.
Statistical Seismology Seminar Series
In addition to the workshop, Sam attended the 88th Statistical Seismology seminar in the Risk Analysis Research Centre at ISM https://www.ism.ac.jp/~ogata/Ssg/ssg_statsei_seminarsE.html. The
Statistical Seismology Research Group at ISM was created by Emeritus Professor Yosihiko Ogata and is one of the leading global research institutes for statistical seismology. Its most significant
output has been the Epidemic-Type Aftershock Sequence (ETAS) model, a point process based earthquake forecasting model that has been the most dominant model for forecasting since its creation by
Ogata in 1988.
As part of the Seminar series, Sam gave a talk on his most recent work (Forecasting the 2016-2017 Central Apennines Earthquake Sequence with a Neural Point Process’, https://arxiv.org/abs/2301.09948)
to the research group and other visiting academics.
Japan’s interest is earthquake science is due to the fact that they record the most earthquakes in the world. The whole country is in a very active seismic area, and they have the densest seismic
network. So even though they might not actually have the most earthquakes in the world (which is most likely Indonesia) they certainly document the most. The evening before flying back to the UK, Sam
and Jack felt a magnitude 5.2 earthquake 300km north of Tokyo in the Miyagi prefecture. At that distance all that was felt was a small shudder…
It’s safe to say that the abundance of delicious food was the most memorable aspect of our trip. In fact, we never had a bad meal! Our taste buds were taken on a culinary journey as we tried a
variety of Japanese dishes. From hearty, broth-based bowls of ramen and tsukemen, to fun conveyor-belt sushi restaurants, and satisfying tonkatsu (breaded deep-fried pork cutlet) with sticky rice or
spicy udon noodles, we were never at a loss for delicious options. We even had the opportunity to cook our own food at an indoor barbecue!
Aside from the food, we thoroughly enjoyed our time in Tokyo – exploring the array of second-hand clothes shops, relaxing in bath-houses, and trying random things from the abundance of vending machines.
Compass students at AISTATS 2023
Congratulations to Compass students Josh Givens, Hannah Sansford and Alex Modell who, along with their supervisors, have had their papers accepted for publication at AISTATS 2023.
‘Implications of sparsity and high triangle density for graph representation learning’
Hannah Sansford, Alexander Modell, Nick Whiteley, Patrick Rubin-Delanchy
Hannah: In this paper we explore the implications of two common characteristics of real-world networks, sparsity and triangle-density, for graph representation learning. An example of where these
properties arise in the real-world is in social networks, where, although the number of connections each individual has compared to the size of the network is small (sparsity), often a friend of a
friend is also a friend (triangle-density). Our result counters a recent influential paper that shows the impossibility of simultaneously recovering these properties with finite-dimensional
representations of the nodes, when the probability of connection is modelled by the inner-product. We, by contrast, show that it is possible to recover these properties using an infinite-dimensional
inner-product model, where representations lie on a low-dimensional manifold. One of the implications of this work is that we can ‘zoom-in’ to local neighbourhoods of the network, where a
lower-dimensional representation is possible.
The paper has been selected for oral presentation at the conference in Valencia (<2% of submissions).
Density Ratio Estimation and Neyman Pearson Classification with Missing Data
Josh Givens, Song Liu, Henry W J Reeve
Josh: In our paper we adapt the popular density ratio estimation procedure KLIEP to make it robust to missing not at random (MNAR) data and demonstrate its efficacy in Neyman-Pearson (NP)
classification. Density ratio estimation (DRE) aims to characterise the difference between two classes of data by estimating the ratio between their probability densities. The density ratio is a
fundamental quantity in statistics appearing in many settings such as classification, GANs, and covariate shift making its estimation a valuable goal. To our knowledge there is no prior research into
DRE with MNAR data, a missing data paradigm where the likelihood of an observation being missing depends on its underlying value. We propose the estimator M-KLIEP and provide finite sample bounds on
its accuracy which we show to be minimax optimal for MNAR data. To demonstrate the utility of this estimator we apply it to the field of NP classification. In NP classification we aim to create a
classifier which strictly controls the probability of incorrectly classifying points from one class. This is useful in any setting where misclassification for one class is much worse than the other
such as fault detection on a production line where you would want to strictly control the probability of classifying a faulty item as non-faulty. In addition to showing the efficacy of our new
estimator in this setting we also provide an adaptation to NP classification which allows it to still control this misclassification probability even when fit using MNAR data.
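The NP construction described here can be illustrated with a toy numerical sketch (ours, far simpler than the paper's M-KLIEP estimator): for two equal-variance Gaussian classes the density ratio is monotone in x, so thresholding x at the empirical (1 − α)-quantile of the class-0 scores controls the false-alarm rate at roughly α.

```python
import numpy as np

rng = np.random.default_rng(0)
non_faulty = rng.normal(0.0, 1.0, 5000)  # class 0: items we must not misclassify
faulty     = rng.normal(2.0, 1.0, 5000)  # class 1
alpha = 0.05                             # tolerated P(classify non-faulty as faulty)

# For two equal-variance Gaussians the density ratio is monotone in x,
# so thresholding x itself is equivalent to thresholding the density ratio.
threshold = np.quantile(non_faulty, 1.0 - alpha)

false_alarm = np.mean(non_faulty > threshold)  # close to alpha by construction
power       = np.mean(faulty > threshold)      # detection rate on faulty items
```

With MNAR missingness the class-0 quantile itself becomes biased, which is exactly the failure mode the paper's adaptation addresses.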
Compass Conference 2022
Our first Compass Conference was held on Tuesday 13^th September 2022, hosted in the newly refurbished Fry Building, home to the School of Mathematics.
Student Perspectives: Contemporary Ideas in Statistical Philosophy
A post by Alessio Zakaria, PhD student on the Compass programme.
Probability theory is a branch of mathematics centred around the abstract manipulation and quantification of uncertainty and variability. It forms a basic unit of the theory and practice of
statistics, enabling us to tame the complex nature of observable phenomena into meaningful information. It is through this reliance that the debate over the true (or more correct) underlying nature
of probability theory has profound effects on how statisticians do their work. The current opposing sides of the debate in question are the Frequentists and the Bayesians. Frequentists believe that
probability is intrinsically linked to the numeric regularity with which events occur, i.e. their frequency. Bayesians, however, believe that probability is an expression of someone's degree of belief or confidence in a certain claim. In everyday parlance we use both of these concepts interchangeably: I estimate one in five people have Covid; I was 50% confident that the football was coming
home. It should be noted that the latter of the two is not a repeatable event per se. We cannot roll back time to check what the repeatable sequence would result in.
Online Optimal Active Sensing Control
Paolo Salaris, Riccardo Spica, Paolo Robuffo Giordano and Patrick Rives
Abstract— This paper deals with the problem of active sensing control for nonlinear differentially flat systems. The objective is to improve the estimation accuracy of an observer by determining the
inputs of the system that maximise the amount of information gathered by the outputs over a time horizon. In particular, we use the Observability Gramian (OG) to quantify the richness of the acquired
information. First, we define a trajectory for the flat outputs of the system by using B-Spline curves. Then, we exploit an online gradient descent strategy to move the control points of the B-Spline
in order to actively maximise the smallest eigenvalue of the OG over the whole planning horizon. While the system travels along its planned (optimized) trajectory, an Extended Kalman Filter (EKF) is
used to estimate the system state. In order to keep memory of the past acquired sensory data for online re-planning, the OG is also computed on the past estimated state trajectories. This is then
used for an online replanning of the optimal trajectory during the robot motion which is continuously refined by exploiting the state estimation obtained by the EKF. In order to show the
effectiveness of our method we consider a simple but significant case of a planar robot with a single range measurement. The simulation results show that, along the optimal path, the EKF converges
faster and provides a more accurate estimate than along other possible (non-optimal) paths.
The performance of robots and sensing devices is highly influenced by the quality and amount of sensor information, especially in case of limited sensing capabilities and/or low cost devices. Indeed,
such information is often used to estimate the state of the system. As a consequence, the problem of optimal information gathering has been studied in the literature in a variety of contexts.
In the field of optimal sensor placements, in [1] an observability-based procedure is used to characterise the gyroscopic sensing distribution of the wings of the hawkmoth Manduca sexta. In [2] the
problem of finding the optimal location of sensors on a wearable sensing glove was addressed in order to minimise the error statistics in the reconstruction of the hand pose.
In the field of localization and exploration for mobile robots, it is important to establish if, for a given system, the observation problem, that consists in finding an estimate of the state of the
robot/environment from the knowledge of the inputs and the outputs over a period of time, admits a solution [3]. For instance, in [4] a complete observability analysis of the planar bearing-only
localization and mapping problem for a nonholonomic vehicle and for all configurations of landmarks with known (markers) and unknown
∗ are with INRIA Sophia-Antipolis Méditerranée, 2004 Route des Lucioles, 06902 Sophia Antipolis, France, e-mail: psalaris,[email protected]. † are with the CNRS at Irisa and Inria Rennes Bretagne Atlantique, Campus de Beaulieu, 35042 Rennes Cedex, France, e-mail: riccardo.spica,[email protected].
(targets) positions was studied by using the Observability Rank Condition (ORC) tool. The problem is shown to be locally weakly observable [5] if and only if the number of markers is equal to or greater
than two. For nonlinear systems, however, the observability may also depend on the inputs. In particular, for the above example, one can show the existence of singular inputs that do not allow the
reconstruction of the state: this occurs with the vehicle aiming at the two markers or at the target directly.
The problem of developing intelligent control strategies applied to the data acquisition process [6] in order to maximise the amount of information coming from sensors, i.e. to maximise the "distance" from the singular trajectories, is an active area of research often known as active sensing control, active perception or optimal information gathering. One crucial point in this context is the choice of an appropriate measure of observability to optimise.
In [7] the condition number of the Observability Gramian (OG) is used as cost function to find the optimal observability trajectory for an Unmanned Aerial Vehicle (UAV) with GPS measurement and an
Autonomous Underwater Vehicle (AUV) with range measurements from a single marker. One peculiarity of this work consists in finding an interesting relationship between the entries of the OG and the
geometric properties of the trajectory, which can be useful to characterise the shape of the optimal path. In [8] the authors find optimal observability trajectories for first-order nonholonomic
systems with nonholonomic states as outputs by maximising the smallest eigenvalue of the OG.
By using the concept of entropy, introduced by Shannon, the authors of [9] devised an observability measure and an optimal navigation strategy for a unicycle vehicle with three bearing measurements
w.r.t. known markers. The problem of finding informative paths in Gaussian fields has also been tackled in [10], where the authors propose to maximise a cost function based on the concept of mutual
information between the state and the measurements. In [11], a Bayesian optimisation approach for determining the most informative path is used. A search based method for planning in information
space was developed in [12] to provide an adaptive policy for mobile sensors with non-linear sensing models. In [13], the optimal trajectories for a team of robots moving on a plane and tracking a
target by using relative measurements (distances and bearings) were obtained by minimising the trace of the target’s position estimate covariance matrix under suitable constraints.
In the field of simultaneous localization and mapping (SLAM), [14] presents sub-optimal paths that minimise an adaptively weighted combination of the uncertainty of the vehicle pose and of the map
features. In [15], a trajectory
optimisation for target localization by using 3D bearing-only measurements from small unmanned aerial vehicles is proposed. The active sensing problem consists in minimising the trace of the inverse
of the Fisher Information Matrix (FIM). Similarly, in [16], the objective is to find the best trajectories and camera configurations for a group of aerial Dubins vehicles, equipped with Pan-Tilt-Zoom
cameras, that maximise the trace of the information matrix of the Extended Information Filter (EIF) used for estimating the position of a target on the ground.
Because of the difficulty of the problem only few papers tried to tackle active sensing control from an analytic point of view. In [17], the effects of observer motion on estimation accuracy for bearing-only measurements was addressed. In [18], the problem of maximising the smallest eigenvalue of the OG was cast within the calculus of variations ([19]) and solved for a flat 2D system
with only one output measurement.
In this paper, we consider a differentiable approximation of the smallest eigenvalue of the OG (i.e. the Schatten norm) as a measure of observability in order to avoid the non-differentiability in
case of repeated eigenvalues. The originality of our method is that it combines an online gradient-descent optimization strategy with a concurrent estimation scheme (an Extended Kalman Filter in our
case) meant to recover an estimation of the true (but unknown) state during motion. The need for an online solution is motivated by the fact that, for a nonlinear system, the observability Gramian is
a function of the state trajectories, which, in a real scenario, are not assumed available. By using an offline optimisation method that relies on the initial estimation, the resulting trajectory
would be sub-optimal – e.g. in a worst-case scenario of a system that admits singular inputs, the optimal trajectory from the estimated initial position may be the singular one from the real initial position.
In order to make the online optimisation problem tractable from an optimisation point of view, we restrict our attention to the case of non-linear differentially flat systems [20] and we represent
the flat outputs with a family of curves (B-Spline) function of a finite number of parameters. To check the effectiveness of our method we will consider a planar robot with a single nonlinear output
measurement (the squared distance from a marker) for which a partial analytic solution can also be obtained by applying the results in [18] (see the Appendix in [21]) — an analytic analysis that can
serve as a ‘ground truth’ for validating the results of the proposed gradient-based method (which can, instead, be applied to any differentially flat system and output map for which an analytic
analysis may not be possible).
We believe that the formulation of our problem is quite general and the computational efficiency and simplicity of the proposed solution allow applying it to more complex systems than the one
considered in this paper as a case study, as, e.g. unicycles and quadrotor UAVs. Moreover, the method can be further generalised by including the environment (e.g. targets) as state variables to be
estimated and by introducing additional constraints in order to, e.g., avoid
obstacles or reach points of interest.
The paper is structured as follows. In Section II the optimal control problem is introduced while in Section III a solution that combines an online gradient-descent optimization strategy with a concurrent estimation scheme is provided. In Section IV we apply our method to a flat 2D system. Finally, the paper ends with some conclusions.
II. PROBLEM STATEMENT

Let us consider a generic nonlinear dynamics
q̇(t) = f(q(t), u(t)),   q(t0) = q0   (1)

z(t) = h(q(t)) + ν   (2)
where q(t) ∈ R^n represents the state of the system, u(t) ∈ U is the control input (U is a subset of R^m), z(t) ∈ R^p is the sensor output (the measurements available through the onboard sensors), f and h are analytic functions, and ν ∼ N(0, R(t)) is a normally-distributed Gaussian output noise with zero mean and covariance matrix R(t). A well-known observability criterion for system (1)–(2), related to the concept of local indistinguishable states [3], [18], is the Observability Gramian (OG) Go(t0, tf) ∈ R^{n×n}:
Go(t0, tf) ≜ ∫_{t0}^{tf} Φ(τ, t0)ᵀ H(τ)ᵀ W(τ) H(τ) Φ(τ, t0) dτ   (3)

where tf > t0, H(τ) = ∂h(q(τ))/∂q(τ), and W(τ) ∈ R^{p×p} is a symmetric positive definite weight matrix (a design parameter) that may be used for, e.g., accounting for outputs with different units and/or, as in this paper, for considering the reliability of outputs with different noise levels. The matrix Φ(t, t0) ∈ R^{n×n}, also known as the sensitivity matrix, is given as Φ(t, t0) = ∂q(t)/∂q(t0) and verifies the following differential equation

Φ̇(t, t0) = (∂f(q(t), u(t))/∂q(t)) Φ(t, t0),   Φ(t0, t0) = I.   (4)

If the (symmetric and positive definite) matrix Go is full rank over the time interval [t0, tf], then system (1)–(2) is locally weakly observable, i.e. it is
possible to (locally) recover the state trajectory q(t) from the knowledge of z(t) and u(t), t ∈ [t0, tf]. The OG, similarly to the well-known Observability Rank (OR) condition [5], can hence be
exploited for verifying the observability of a given nonlinear system. However, while the OR condition can only provide a “binary answer” about the observability of the system, the OG also provides a
measure of the amount of information gathered by the sensors along the trajectory followed during motion [22]. One can then attempt maximization of some performance index of the OG (typically a
function of its eigenvalues) w.r.t. the system inputs in order to produce a system trajectory with maximum information content over a future time horizon.
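The claim that the OG measures the information content of a trajectory can be made concrete with a small numerical sketch (ours, inspired by but not identical to the paper's case study): a planar single integrator observing the squared distance to a marker at the origin, h(q) = ½‖q‖², for which Φ = I and H(τ) = q(τ)ᵀ. A path circling the marker yields a full-rank Gramian, while a purely radial path leaves one direction unobserved.

```python
import numpy as np

def observability_gramian(traj, dt):
    """Go = ∫ Φᵀ Hᵀ W H Φ dτ for a planar single integrator, with Φ = I,
    W = I, and output h(q) = 0.5 * ||q||², so that H(τ) = q(τ)ᵀ."""
    G = np.zeros((2, 2))
    for q in traj:
        H = q.reshape(1, 2)
        G += H.T @ H * dt
    return G

ts = np.linspace(0.0, 2.0 * np.pi, 2000)
dt = ts[1] - ts[0]
circle = np.stack([np.cos(ts), np.sin(ts)], axis=1)                  # around the marker
radial = np.stack([ts / (2.0 * np.pi), np.zeros_like(ts)], axis=1)   # straight at it

lam_circle = np.linalg.eigvalsh(observability_gramian(circle, dt)).min()
lam_radial = np.linalg.eigvalsh(observability_gramian(radial, dt)).min()
# lam_circle ≈ π (full rank); lam_radial ≈ 0 (a singular trajectory)
```

The radial path is exactly the kind of singular input discussed in the introduction: the range measurement never disambiguates the tangential direction.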
In this paper, we will consider the smallest eigenvalue of the OG as performance index, i.e. λmin(Go(t0, tf)), and we will determine the optimal control strategy u∗(t), t ∈ [t0, tf] that maximises
λmin(Go(t0, tf)). Indeed, the inverse of the
smallest eigenvalue of the OG is proportional to the maximum estimation uncertainty, and hence, its maximisation is expected to minimize the maximum estimation uncertainty of any estimation strategy that could be used, e.g. an EKF [8]. By increasing the value of the smallest eigenvalue, we also expect the convergence rate of the observer to increase. However, it is well-known that considering the
smallest eigenvalue of a matrix A as a cost function can be ill-conditioned from a numerical point of view in case of repeated eigenvalues. For this reason, as also done in [23], we will consider the
following cost function (also known as the Schatten norm)

‖A‖_µ = ( Σ_{i=1}^{n} λ_i^µ(A) )^{1/µ}   (5)

with µ ≪ −1, as a differentiable approximation of λmin(A). Moreover, to ensure well-posedness of the optimisation problem, we will constrain the solution to be such that the "control effort" (or energy) needed by the robot for moving along the trajectory from t0 to tf is fixed and equal to Ē, i.e.

E(t0, tf) = ∫_{t0}^{tf} √( u(τ)ᵀ M u(τ) ) dτ = Ē.¹   (6)
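As a quick sanity check, the approximation in (5) can be evaluated numerically (a sketch; the helper name is ours): for a symmetric positive definite matrix, the norm approaches λmin from below as µ becomes more negative.

```python
import numpy as np

def schatten_mu(A, mu):
    """Differentiable approximation of the smallest eigenvalue of a
    symmetric positive definite matrix A: (sum_i λ_i^µ)^(1/µ) with µ << -1."""
    lam = np.linalg.eigvalsh(A)
    return np.sum(lam ** mu) ** (1.0 / mu)

A = np.diag([1.0, 2.0, 5.0])        # λ_min = 1
approx_10 = schatten_mu(A, -10.0)   # already close to 1, from below
approx_50 = schatten_mu(A, -50.0)   # closer still
```

Unlike λmin itself, this surrogate stays differentiable when eigenvalues cross, which is what the gradient scheme below relies on.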
The rest of the paper is hence devoted to proposing an online solution to our problem that combines an online gradient-descent optimization strategy with a concurrent estimation scheme (an EKF in our case) meant to recover an estimation ˆq(t) of the true (but unknown) state q(t) during motion. The need for an online solution is motivated as follows: the Gramian Go is a function of the whole state trajectory q(t), t ∈ [t0, tf], but the state and, in particular, the initial condition q(t0), is not assumed available. Therefore, in order to perform an offline optimization of Go, one would necessarily need to rely on some estimate ˆq(t) (for example, obtained by integrating the system dynamics from an initial estimation ˆq(t0)). Since ˆq(t) is only an approximation of the true state evolution q(t), the resulting optimized path would then represent, in general, a sub-optimal one. On the other hand, during the robot motion, it is possible to exploit a state estimation algorithm, such as an EKF, for improving online the current estimation ˆq(t) of the true state q(t), with ˆq(t) → q(t) in the limit. Availability of a converging state estimate ˆq(t) then makes it possible to continuously refine (online) the previously optimised path by leveraging the newly acquired information during motion.
Before presenting our proposed solution in the next Section, we conclude with three remarks.
Remark 1 In general, a closed-form expression for the sensitivity matrix Φ may not be available, since finding a solution for (4) is as complex as finding a solution for (1). However, in some particular cases of interest (e.g. the unicycle) the matrix Φ can be found in closed form. For all the other cases, a numerical integration of (4) is required.

Remark 2 The integrand in (3) can have full rank only if p ≥ n, i.e. if the number of available measurements is larger than or equal to the number of state variables (such as, for instance, in Structure from Motion (SfM) problems [24]). When p < n (as in the case study reported in Section IV), maximization of λmin (or of its differentiable approximation) is still possible, but only in an integral sense (i.e. over the whole time horizon T) as shown in the reported results.

Remark 3 The method we propose in this paper can be used not only with the OG but also with other measures of information, e.g. the Kalman filter/smoother covariance matrix, related to the mutual information, or the Fisher information matrix. A sensible conclusion is that all these measures are related and hence the obtained solutions should be similar. Future works will be dedicated to determining an analytical relationship of equivalency between all these measures.

¹ Indeed, in general λmin(Go(t0, tf)) could be unbounded from above without the control effort constraint E.
We assume, as explained before, that an EKF is run by the robot during its motion for producing an estimation ˆq(t) (and an associated estimated covariance matrix ˆP(t)) from the collected measurements and applied inputs. Let ¯t ∈ [t0, tf] be a generic time instant during the robot motion and partition (3) as

Go(t0, tf) = Go(t0, ¯t) + Go(¯t, tf).   (7)

The first term Go(t0, ¯t) represents a "memory" of the past information already collected via the available measurements during t ∈ [t0, ¯t] and can be computed by numerically integrating (3) along the past estimated trajectory. This term is obviously constant and cannot be optimised any longer at time t = ¯t. On the other hand, the second term Go(¯t, tf) represents the information yet to be collected during t ∈ [¯t, tf] and can, instead, still be optimised.
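The additivity in (7) is simply the additivity of the integral, which an implementation can exploit by accumulating the past term sample by sample (a minimal sketch, ours, assuming Φ = I and precomputed output Jacobians):

```python
import numpy as np

rng = np.random.default_rng(1)
H_samples = rng.normal(size=(100, 1, 2))  # sampled output Jacobians H(τ), each 1x2
R_inv = np.eye(1)                         # output weight W = R^{-1}
dt = 0.01

def gramian(H_seq):
    # Discretized Go over the given samples, taking Phi = I for simplicity
    G = np.zeros((2, 2))
    for H in H_seq:
        G += H.T @ R_inv @ H * dt
    return G

G_total = gramian(H_samples)                                  # Go(t0, tf)
G_split = gramian(H_samples[:40]) + gramian(H_samples[40:])   # Go(t0, t̄) + Go(t̄, tf)
# both decompositions yield the same matrix
```

In the online scheme only the second summand is re-evaluated as the predicted trajectory changes; the first is frozen history.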
We note that, although the term Go(t0, ¯t) is a constant parameter w.r.t. the optimization variables, one still needs to include it in the cost function since in general λmin(A + B) ≥ λmin(A) + λmin(B) (Weyl's inequality). Furthermore, as explained, both terms in (7) are functions of the state evolution q(t) which is assumed unknown: therefore, Go(t0, ¯t) must be evaluated on the estimated state trajectory ˆq(t), t ∈ [t0, ¯t], and Go(¯t, tf) on a "predicted" state trajectory generated from the current ˆq(¯t).
In order to make the problem more tractable from an optimization point of view (and, thus, to better cope with the real-time constraint of an online implementation) we now make two simplifying working assumptions. First, we restrict our attention to the case of non-linear differentially flat² systems [20]: as is well-known, for these systems one can find a set of outputs ζ ∈ R^m, termed flat, such that the state and inputs of the original system can be expressed algebraically in terms of these outputs and a finite number of their derivatives. In the context of this work, the differential flatness assumption allows avoiding the numerical integration of the nonlinear dynamics (1) for generating the future state evolution ˆq(t), t ∈ [¯t, tf], from the planned inputs u(t) and the current state estimate ˆq(¯t). Second, we represent the flat outputs (and, as a consequence, also the state and inputs of the considered system) with a family of parametric curves. This assumption then allows reducing the complexity of the problem from an infinite-dimensional optimization into a finite-dimensional one.

² The class of flat systems includes some of the most common robotic platforms such as, e.g. unicycles, cars with trailers and quadrotor UAVs, and in general any system which can be (dynamically) feedback linearized [25].
Among the many possibilities, and taking inspiration from [26], in this work we leverage the family of B-Splines [27] as parametric curves. B-Spline curves are linear combinations, through a finite number of control points xc = (xc,1ᵀ, xc,2ᵀ, …, xc,Nᵀ)ᵀ ∈ R^{m·N}, of basis functions B_j^α : S → R for j = 1, …, N. Each B-Spline is given as

γ(xc, ·) : S → R^m,   s ↦ Σ_{j=1}^{N} xc,j B_j^α(s) = B(s) xc   (8)

where S is a compact subset of R and B(s) ∈ R^{m×N} is the collection of basis functions, with B_j^α the j-th basis function of degree α evaluated at s, obtained by means of the Cox-de Boor recursion formula [27]. The degree α > 0 and the knots (s1, s2, …, sℓ) are constant parameters³. In the following, the control points xc will hence become the optimization variables.
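The Cox-de Boor recursion can be sketched directly (a minimal implementation, ours; it makes no attempt at the paper's arc-length machinery):

```python
import numpy as np

def basis(j, k, s, knots):
    """Cox-de Boor recursion: B-spline basis function B_{j,k} evaluated at s."""
    if k == 0:
        return 1.0 if knots[j] <= s < knots[j + 1] else 0.0
    left = right = 0.0
    if knots[j + k] != knots[j]:
        left = (s - knots[j]) / (knots[j + k] - knots[j]) * basis(j, k - 1, s, knots)
    if knots[j + k + 1] != knots[j + 1]:
        right = (knots[j + k + 1] - s) / (knots[j + k + 1] - knots[j + 1]) \
                * basis(j + 1, k - 1, s, knots)
    return left + right

def gamma(xc, s, knots, degree):
    """Curve point: linear combination of the control points through the basis."""
    return sum(basis(j, degree, s, knots) * np.asarray(xc[j]) for j in range(len(xc)))

# a degree-1 B-spline on a clamped knot vector reduces to linear interpolation
knots = [0.0, 0.0, 1.0, 2.0, 2.0]
xc = [0.0, 1.0, 0.0]
mid = gamma(xc, 0.5, knots, 1)   # halfway up the first linear segment
```

Because (8) is linear in xc, derivatives of the curve (and hence of the Gramian integrand) w.r.t. the control points are straightforward, which is what makes the control points convenient optimization variables.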
Notice that the value s(t) of the parameter s corresponding to the time t depends on the desired timing law along the path. Without loss of generality, in the following we will assume that an arc-length parametrization is used, and hence s(t) can be simply computed by integrating ṡ(t) = v(xc, s)⁻¹, where s(t) is the value of the parameter s corresponding to the path length t.
We are now able to state the online optimal active sensing control problem expressed in terms of B-Splines.
Problem 1 (On-line active sensing control via B-Spline) Given the nonlinear system (1)–(2), an estimation ˆq(¯t) of the true state q(¯t) at time ¯t with covariance matrix P(¯t), and a time horizon T − ¯t = tf − t0 − ¯t > 0, find the optimal position of the control points xc∗ such that

xc∗ = arg max_{xc} ‖ Go(t0, ¯t) + Go(xc, s) ‖_µ   (9)

where

Go(xc, s) = ∫_{s(¯t)}^{s(tf)} Qᵀ(xc, σ) R⁻¹ Q(xc, σ) v(xc, σ) dσ,

and such that

∫_{s(¯t)}^{s(tf)} √( u(xc, σ)ᵀ M u(xc, σ) ) dσ = Ē − E(t0, ¯t)   (10)

where M is a constant weight matrix and Ē is a constant design parameter. Moreover,

Q(xc, σ) = (∂h(q)/∂q) Φ(xc, σ),

Go(t0, ¯t) = ∫_{t0}^{¯t} Φ(τ, t0)ᵀ (∂h(q(τ))/∂q(τ))ᵀ R⁻¹ (∂h(q(τ))/∂q(τ)) Φ(τ, t0) dτ,

E(t0, ¯t) = ∫_{s(t0)}^{s(¯t)} √( u(xc, σ)ᵀ M u(xc, σ) ) dσ,

v(xc, σ) = ‖ ∂γ(xc, σ)/∂s ‖₂.

³ The relation between ℓ, α and N is ℓ = N − α + 1. The degree α is chosen in order to guarantee the continuity of all the state variables that, in turn, depend on the flat outputs and a finite number of their derivatives. Once this property is guaranteed, both α and N can be chosen as a trade-off between the computational cost and the possibility of obtaining a better trajectory (increasing the value of the smallest eigenvalue of the OG).
We now proceed to detail our proposed solution to Problem 1, under the stated assumptions, which will consist in a gradient-based action affecting the location of the control points xc and, thus, the overall shape of the trajectory followed by the system.

It is important to note that in (9) only the integral over the time interval [¯t, tf] depends on the positions of the control points. The same applies to the computation of (10).
Problem 1 can be solved by adopting an online gradient-descent optimization strategy. We introduce a time dependency xc(t) so that the B-Spline path becomes a time varying path. Moreover, we assume that the control points move according to the following simple update law

ẋc(t) = uc(t),   xc(t0) = xc,0,   (11)

where uc(t) ∈ R^{m·N}.
Remark 4 During motion, while the observer is updating ˆq(¯t), it is also important to guarantee that the B-Spline curves, which define the future state trajectories, pass through the current state ˆq(¯t) at time ¯t. Indeed, ˆq(¯t) depends on the update law of the EKF and is independent from the B-Spline shaping due to the gradient-descent optimisation strategy. Once this constraint is satisfied, the control point movements, due to the gradient of the cost functional in (9), while also guaranteeing the control effort constraint, must not violate this requirement. Further local properties at time ¯t may have to be guaranteed during control point movements. This imposes some continuity constraints on the flat outputs at ¯t and some of their derivatives (those affecting the system state) which, in turn, results in additional constraints on the motion of the B-Spline coefficients.
The gradient update rule for the control points that solves Problem 1 while guaranteeing Remark 4 can be generated online as
\[
u_c(\bar t) = u_{\hat q}(\bar t) + N_{u_C}\left( u_E(\bar t) + N_{u_E}\, \nabla_{x_c} \| G_o \|_\mu \right) \tag{12}
\]
In (12), the term \(u_{\hat q}\) guarantees that \(e_{\hat q}(\bar t) = q_\gamma(\gamma(x_c(\bar t), s(\bar t)), \dot\gamma(x_c(\bar t), s(\bar t)), \ldots) - \hat q(\bar t) \equiv 0\), i.e. it realises the first requirement of Remark 4. Indeed, since
\[
\dot q_\gamma(\gamma(x_c(t), s(t))) = \frac{\partial q_\gamma(\gamma(x_c(t), s(t)))}{\partial s}\,\dot s(t) + \frac{\partial q_\gamma(\gamma(x_c(t), s(t)))}{\partial x_c}\, u_c(t),
\]
one has
\[
u_{\hat q} = -J_\gamma^{\dagger} \left( K_{\hat q}\, e_{\hat q}(\bar t) + \left. \frac{\partial q_\gamma(\gamma(x_c(t), s(t)))}{\partial s} \right|_{x_c(\bar t),\, s(\bar t)} \dot s(\bar t) \right),
\]
where \(J_\gamma = \left. \frac{\partial q_\gamma(\gamma(x_c(t), s(t)))}{\partial x_c} \right|_{x_c(\bar t),\, s(\bar t)}\) and \(K_{\hat q}\) is a constant parameter. On the other hand, the matrix \(N_{u_C} = I_{2N} - J_C^{\dagger} J_C\) is the projector onto the null space of
\[
J_C = \left. \left[ \frac{\partial \gamma(x_c, s)}{\partial x_c},\; \frac{\partial}{\partial s}\frac{\partial \gamma(x_c, s)}{\partial x_c},\; \cdots,\; \frac{\partial^{(k)}}{\partial s^{(k)}}\frac{\partial \gamma(x_c, s)}{\partial x_c} \right] \right|_{x_c(\bar t),\, s(\bar t)} .
\]
This projector guarantees that the requirement of maximising the smallest eigenvalue of the OG while maintaining \(E(\bar t, t_f) = \bar E - E(t_0, \bar t)\) can be accomplished without changing the primary task \(e_{\hat q}(\bar t) \equiv 0\), hence realising the second requirement of Remark 4, while also preserving other local properties of the B-Spline at \(\gamma(x_c(\bar t), s(\bar t))\), e.g. the tangency, the curvature, and so on. For instance, if some nonholonomic constraints must be satisfied by the nonlinear system, the tangency at \(\gamma(x_c(\bar t), s(\bar t))\) must not be influenced by the movement of the control points, and the matrix \(N_{u_C}\) would be exploited for enforcing this constraint.
The term \(u_E(\bar t)\) is designed so as to guarantee the control effort constraint, i.e. \(e_E(\bar t) = E(\bar t, t_f) - (\bar E - E(t_0, \bar t)) \equiv 0\), where \(E(t_0, \bar t)\) at time \(\bar t\) is considered a constant since it can no longer be modified by moving the control points. As a consequence, \(u_E(\bar t) = -K_E J_E\, e_E(\bar t)\), where
\[
J_E = \int_{s(\bar t)}^{s(t_f)} \frac{\partial \sqrt{u(x_c, \sigma)^T M\, u(x_c, \sigma)}}{\partial x_c}\; d\sigma .
\]
The matrix \(N_{u_E} = I_{2N} - J_E^{\dagger} J_E\) is the projector onto the null space of \(J_E\). It guarantees that \(\nabla_{x_c} \|G_o\|_\mu\) does not affect the control effort constraint. Finally,
\[
\nabla_{x_c} \|G_o\|_\mu = \left( \sqrt[\mu]{\textstyle\sum_{i=1}^{n} \lambda_i^{\mu}(G_o)} \right)^{1-\mu} \sum_{i=1}^{n} \lambda_i^{\mu-1}(G_o)\; v_i^T\, \frac{\partial G_o}{\partial x_c}\, v_i \,,
\]
where \(v_i\) is the eigenvector associated to the eigenvalue \(\lambda_i\) and
\[
\frac{\partial G_o}{\partial x_c} = \int_{s(\bar t)}^{s(t_f)} \frac{\partial}{\partial x_c}\, \frac{Q^T(x_c, \sigma)\, R^{-1}\, Q(x_c, \sigma)}{v(x_c, \sigma)}\; d\sigma .
\]

IV. SIMULATION RESULTS
In order to prove the effectiveness of our optimal active sensing control strategy, in this section we apply the proposed method to a planar robot that needs to estimate its position by using, as its
only output, the squared distance from the origin of a fixed global reference frame. The objective here is to determine the most informative trajectory, i.e. the trajectory that, by limiting the control effort to \(\bar E\), maximizes the smallest eigenvalue of the OG and hence reduces as much as
possible the maximum estimation uncertainty. Let us hence consider the following dynamic system
\[
\dot x(t) = u_x(t), \qquad \dot y(t) = u_y(t), \qquad z(t) = h(x(t), y(t)) + \nu, \tag{13}
\]
that represents a planar robot with position \(q(t) = [x(t), y(t)]^T\). The quantity \(\nu \sim \mathcal{N}(0, R)\) is the output noise and \(R \in \mathbb{R}\) is the constant variance of \(\nu\). We will consider as output of the system the
squared distance, hereafter named range, w.r.t. a marker located at the origin of a global reference frame
\[
z(t) = h(q(t)) + \nu = x(t)^2 + y(t)^2 + \nu . \tag{14}
\]
The sensitivity matrix is \(\Phi(t, t_0) = I\) for (13), while \(\partial h(q)/\partial q = 2\,[x(t),\, y(t)]\). As a consequence, the OG is given by
\[
G_o(t_0, t_f) = \int_{t_0}^{t_f} 4 R^{-1} \begin{bmatrix} x(\tau)^2 & x(\tau)\, y(\tau) \\ x(\tau)\, y(\tau) & y(\tau)^2 \end{bmatrix} d\tau . \tag{15}
\]
With \(M = I\), the control effort constraint in Problem 1 is exactly the length of the path, which, without loss of generality, the vehicle follows at constant speed equal to 1. The \(N\) control points that define the path are 2D points, i.e. \(x_c \in \mathbb{R}^2\), i.e. \(m = n\), and \(p(t) = \gamma(x_c(t), s(t)) \in \mathbb{R}^2\) is the planar trajectory of the robot.
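As a numeric sanity check of (15), the sketch below accumulates the observability Gramian for the squared-range output along two sampled paths and compares their smallest eigenvalues: a radial path through the origin (the output Jacobian always points in the same direction, so one direction stays unobservable) versus a circular arc (the Jacobian direction rotates, so the Gramian becomes full rank). The paths, step size, and noise value are illustrative assumptions, not the simulation settings of the paper.

```python
import numpy as np

def range_gramian(path, dt, R=0.3):
    """Discretized observability Gramian (15) for h(q) = x^2 + y^2:
    G = sum_k (1/R) * J_k^T J_k * dt, with J_k = [2*x_k, 2*y_k]."""
    G = np.zeros((2, 2))
    for x, y in path:
        J = np.array([2.0 * x, 2.0 * y])   # dh/dq evaluated along the path
        G += np.outer(J, J) / R * dt
    return G

dt = 0.01
tau = np.arange(0.0, 5.0, dt)

# Radial path through the origin: every Jacobian is parallel to (1, 1),
# so the Gramian is rank one and its smallest eigenvalue is (numerically) zero.
radial = np.column_stack([1.0 + tau, 1.0 + tau]) / np.sqrt(2.0)
G_radial = range_gramian(radial, dt)

# Circular arc at constant radius: the Jacobian rotates with the robot,
# so information is collected in both directions and the Gramian is full rank.
arc = np.column_stack([2.0 * np.cos(tau), 2.0 * np.sin(tau)])
G_arc = range_gramian(arc, dt)

lam_radial = np.linalg.eigvalsh(G_radial)  # eigenvalues in ascending order
lam_arc = np.linalg.eigvalsh(G_arc)
```

This mirrors the observation made below for the simulated robot: a path radially aligned with the origin is almost unobservable, and bending the path is what makes the smallest eigenvalue grow.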
In order to improve the estimation of the position of the system, we will use, as explained, an EKF as the state estimation algorithm.
As a consequence, the optimal control problem for (13) with output (14) consists in determining the path of length \(\bar E \equiv T\) that maximises the smallest eigenvalue of the OG. The solution to this problem can be obtained by applying the control law given by (12) to the control points.
Fig. 1 compares the estimation performance of an EKF when the system moves either along a straight line path, or along the optimal path obtained by solving Problem 1 for the considered system.
Moreover, all data and numerical results are also reported in Table I. It is important to point out that, once an estimate of the robot's initial position is available, the optimal path from this position is obtained (the green line path in Fig. 1, upper right corner) starting from an initial guess that, in our simulation, coincides with the straight line path used for the comparison. Of course, given the non-convexity of the considered optimisation problem, the initial guess will determine towards which local minimum the algorithm converges. Global optimization procedures (which could, e.g., iterate over a number of different initial guesses) can clearly be adopted. However, for the sake of space, this analysis is omitted in this paper and left to future developments. Interestingly, however, we have heuristically verified that the dynamic system considered in this section is not very sensitive to this choice.
At the very first iteration, the vehicle only starts moving once the control points have reached their optimal configuration, and hence an optimal B-Spline path is obtained for the current estimated position. Note that, due to estimation
TABLE I: Numerical simulation results of Fig. 1. For both simulations, the vehicle starts from q(t0) = [1.41, 1.41]^T [m] and the initial estimate is q̂(t0) = [1.11, 1.91]^T [m] with Po = 0.5 I. The initial estimation error is e(t0) = [0.3, 0.5]^T. The control effort along the path, which in this case coincides with the length of the path, is for both cases E(t0, tf) = 5.64 [m]. The output noise covariance is R = 3·10^-1. The number of control points is N = 5 and the degree of the B-Spline is α = 3.

Path                 | q(tf) [m]            | q̂(tf) [m]            | e(tf) [m] ×10^-2        | RMS(e(tf)) | λmin(Go(tf)) | λMAX(Go(tf))
Optimal path         | x = 0.80, y = −2.86  | x̂ = 0.81, ŷ = −2.84  | ex = −1.56, ey = −2.69  | 2.20·10^-2 | 58.33        | 50.41
Straight line path   | x = 7.05, y = 1.41   | x̂ = 7.03, ŷ = 1.50   | ex = 2.57, ey = −9.04   | 6.65·10^-2 | 11.03        | 501.29

[Fig. 1: four panels — the optimal path and the straight line path in the (x, y) plane; the estimation errors and RMS over time; the eigenvalues of the observability Gramian over time; end-of-path uncertainty ellipses (optimal path: σmin = 0.0051, σMAX = 0.0051; straight line path: σmin = 0.0006, σMAX = 0.0271).]
Fig. 1. The estimation performance with the EKF along the optimal path is compared with a simple straight line path (which coincides with the initial guess for the optimization procedure). At the end of the optimal path the smallest eigenvalue reaches a value that is double the one reached at the end of the straight line path. Moreover, the eigenvalues almost coalesce at the end of the optimal path (they do not reach exactly the same value because of the use of the Schatten norm). Notice that the eigenvalues are computed to less than R. Along the optimal path the RMS of the estimation error reaches a smaller value, and the error ellipse is uniformly shaped and smaller than along the straight line path. As a consequence, the active sensing control gives rise to a more precise and accurate estimation.
errors, this path might be sub-optimal. However, while the robot moves along the planned path, the EKF reduces the estimation error and the gradient descent strategy keeps updating online the shape of the optimal path. As a consequence, the final B-Spline path will, in general, differ from the one computed at the beginning (compare the blue line path with the green one in Fig. 1, upper right corner) because of the better estimated state provided by the EKF during motion. In Fig. 1, upper right corner, the real robot trajectory and the estimated one are also reported, in black and red lines, respectively. For completeness, the optimal path from the real initial position of the robot is also reported in cyan. Notice that until 1 s of simulation, the straight line path outperforms the optimal one in terms of convergence rate to zero of the RMS of the estimation error (see Fig. 1, bottom left). Indeed, the optimal path starts along a direction that is almost unobservable, i.e. almost tangent to the straight line passing through the origin and the initial position of the system. It is clear that along this first part of the optimal path the smallest eigenvalue has a negligible increase, giving rise to poor information. However, once the system changes direction and aligns with the second part of the path, the smallest eigenvalue increases rapidly until it reaches its maximum value, which is almost equal to the largest eigenvalue. Indeed, at the end, along the optimal path the RMS of the estimation error is three times smaller than the one reached along the straight line path. This demonstrates the fundamental non-local nature of the observability property, expressed by means of the smallest eigenvalue of the OG for which, obviously, λmin(A + B) ≠ λmin(A) + λmin(B).
In Fig. 1, the estimation uncertainty ellipses at the end of each path are also reported. Since, along the optimal path, the smallest eigenvalue of the OG is maximised, the ellipse is much less elongated along the eigenvector associated to the largest eigenvalue of the
covariance matrix. As a consequence, along the optimal path, a more precise and accurate estimation can be obtained. Moreover, as the observation time is enough for the smallest eigenvalue to reach
the value of the largest one (see Fig. 1, in the bottom right corner), the estimation uncertainty ellipse is almost a circle and hence a uniform estimation uncertainty is obtained.
V. CONCLUSIONS AND FUTURE WORKS

In this paper, the problem of active sensing control for non-linear differentially flat systems has been tackled. The smallest eigenvalue of the observability Gramian has been used to quantify the richness of the acquired information. Then, we have represented the flat outputs with a family of B-Splines whose shape can be adjusted by changing a finite number of parameters. We have hence exploited an online gradient descent strategy to move the control points of such B-Splines in order to actively maximise the smallest eigenvalue of the OG, while, at the same time, an Extended Kalman Filter (EKF) has been used to estimate the system state. By applying our strategy to a planar robot, we have shown that with an EKF the maximum estimation uncertainty is significantly reduced and the convergence rate significantly improved along the optimal path, thus giving rise to an improved estimate of the state at the end of the path.

Future works will mainly consist in applying the proposed method to more complex systems, e.g. unicycles and quadrotors, with multiple markers and different kinds of measurements. The problem will also be extended to multiple robot systems. An important step towards the use of our method in SLAM problems is to include the environment within the set of variables to be estimated. It would then be interesting to observe the differences between the optimal trajectories obtained in the case of targets and in the case of markers at the same position. As a final point, it will be important to address Remark 3.
Analog Lab 3 + Minilab MKii - Midi Mapping in Logic Pro X. GRRRRRRR!
I have scoured the internet for an answer to no avail. Admitted novice with MIDI mapping, but for the life of me, I can't find an answer on how to get the AL3 AU plugin in Logic Pro X to work the way AL3 does in standalone mode. Arturia support has proven worthless thus far. Is there a script/automated way of doing this? (I get the sense there isn't.) If the only solution is manual mapping, can anyone provide detailed instructions? Even better, a YouTube video tutorial specific to the ML MkII would get you good karma for life!
Below is an email chain between myself and Arturia support describing the problem a bit more.
When in standalone mode in Analog Lab, the MiniLab's controller assignments work perfectly. However, in Logic, the default mapping is completely illogical and not at all like that in Analog Lab (where the knob/pad you adjust on the physical controller turns the same parameter on the 'virtual' controller onscreen).
Admittedly, I'm new to Logic and don't know how to manually route the settings, particularly for every virtual instrument in AL3 (if that's even a thing). Is there a 'plug-and-play' script/setting/whatever that I can 'drop' into Logic such that the controls will emulate those in standalone AL3?
I see that for the KeyLab there's an option in the MIDI Control Center to set the DAW as Logic. But no such thing exists for the MiniLab.
I'm frustrated and stuck. Any help would be greatly appreciated; the more detailed the better.
Basically, the mappings should be the same as in standalone, and you shouldn't have to configure anything to get them to work.
1. Please confirm that the MiniLab MkII is selected on Analog Lab 3 interface > MIDI Controller:
2. Make sure that:
- The MiniLab MkII is not enabled as a control surface in Logic
- None of the MiniLab controls have been mapped in Logic's MIDI assignments
I have checked all settings are as you said but I’m still confused.
I'm referring to the Analog Lab 3 AU plugin INTERNAL to Logic. When no other software is running, I go into Logic and select 'Software Instrument' -> Instrument -> AU Instrument -> Arturia -> Analog Lab 3 -> Stereo, and the plug-in pops up. But because I've disabled the MiniLab as an input device, when I hit a key on the MiniLab, nothing happens. This would make sense because I've disabled it as an input device, but I know it's capable of producing sound b/c if I click on a key in the plugin with my mouse, it produces a sound. Conversely, when I enable it as an input device, it does 'work', but again the mappings are all off and I'm back where I started.
If AL3 were capable of running as a ReWire device, your instructions below would make more sense to me. Can you please clarify that your instructions below are for AL3 as an AU instrument and, if not, what configuration you are speaking of?
Can you please provide step-by-step instructions (as if you were talking to a novice), from the first step, on how I can use AL3 in conjunction with Logic and have the mappings be consistent?
I apologize if I’m simply missing it, but it’s still not making sense to me.
Data Science Simplified Part 10: An Introduction to Classification Models - DataScienceCentral.com
Webster defines classification as follows:
A systematic arrangement in groups or categories according to established criteria.
The world around us is full of classifiers. Classifiers help in preventing spam e-mails. Classifiers help in identifying customers who may churn. Classifiers help in predicting whether or not it will rain. This supervised learning method is ubiquitous in business applications. We take it for granted.
In this blog post, I will discuss key concepts of Classification models.
Classification Categories
Regression models estimate numerical variables, a.k.a. dependent variables; for regression models, the target is always a number. Classification models have a qualitative target. These targets are also called categories.

In a large number of classification problems, the targets are designed to be binary. Binary implies that the target will only take a 0 or 1 value. This type of classifier is called a binary classifier. Let us take an example to understand this.
A bank's loan approval department wants to use machine learning to identify potential loan defaulters. In this case, the machine learning model
will be a classification model. Based on what the model learns from the data fed to it, it will classify the loan applicants into binary buckets:
• Bucket 1: Potential defaulters.
• Bucket 2: Potential non-defaulters.
The target, in this case, will be an attribute like "will_default_flag." This target applies to each loan applicant and takes values of 0 or 1. If the model predicts 1, it means that the applicant is likely to default. If the model predicts 0, it means that the applicant is likely not to default. Some classifiers can also classify the input into many buckets. These classifiers are called multi-class classifiers.
Linear and Non-Linear Classifiers
Let us say that we want to build a classifier that classifies potential loan defaulters. The features of income and credit rating determine potential defaulters.
The diagram above depicts the scenario. For simplicity, let us say that the feature space is spanned by income and credit rating. The green dots are non-defaulters and the pink dots are defaulters. The classifier learns based on the input features (income and credit rating) of the data. The classifier creates a line. The line splits the feature space into two parts. The classifier creates a model that classifies the data in the following manner:
• Anyone who falls on the left side of the line is a potential defaulter.
• Anyone who falls on the right side of the line is a potential non-defaulter.
A classifier that can split the feature space with a line is called a linear classifier.
In this example, there are only two features. If there are three features, the classifier will fit a plane that divides the space into two parts. If there are more than three features, the classifier creates a hyperplane.
This was a simplistic scenario. A line or a plane can classify the data points into two buckets. What if the data points were distributed in the following manner:
Here a linear classifier cannot do its magic. The classifier needs to carve a curve to separate defaulters from non-defaulters. Such classifiers are called non-linear classifiers.
There are a lot of algorithms that can be used to create classification models. Some algorithms like logistic regression are good linear classifiers. Others like neural networks are good non-linear classifiers.
The intuition of the classifier is the following:
Divide the feature space with a function (linear or non-linear). Divide it such that one part of the feature space has data from one class, and the other part of the feature space has data from the other class.
We have an intuition of how classifiers work. How do we measure whether a classifier is doing a good job or not? Here comes the concept of the confusion matrix.
Let us take an example to understand this concept. We built a loan-defaulter classifier. This classifier takes input data, trains on it, and produces the following results on 100 applicants:
• The classifier classifies 35 applicants as defaulters.
• The classifier classifies 65 applicants as non-defaulters.
Based on the way classifier has performed, four more metrics are derived:
1. From those classified as defaulters, only 12 were actual defaulters. This metric is called True Positive (TP).
2. From those classified as defaulters, 23 were actual non-defaulters. This metric is called False Positive (FP).
3. From those classified as non-defaulters, only 57 were actual non-defaulters. This metric is called True Negative (TN).
4. From those classified as non-defaulters, 8 were actual defaulters. This metric is called False Negative (FN).
These four metrics can be tabulated in a matrix called the Confusion Matrix.
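The four counts can be computed directly from labels and predictions. A minimal sketch (label 1 = defaulter, 0 = non-defaulter; the small data set below is made up for illustration):

```python
def confusion_counts(y_true, y_pred):
    """Return (TP, FP, TN, FN) for binary labels, with 1 as the positive class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp, fp, tn, fn

y_true = [1, 0, 1, 1, 0, 0, 0, 1]   # actual outcomes
y_pred = [1, 0, 0, 1, 1, 0, 0, 0]   # classifier's predictions
tp, fp, tn, fn = confusion_counts(y_true, y_pred)
```

The four counts always add up to the number of predictions, which is a handy sanity check.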
From these four metrics, we will derive evaluation metrics for a classifier. Let us discuss these evaluation metrics.
Accuracy measures how often the classifier is correct for both true positives and true negative cases. Mathematically, it is defined as:
Accuracy = (True Positive + True Negative)/Total Predictions.
In the example, the accuracy of the loan-default classifier is: (12+57) / 100 = 0.69 = 69%.
Sensitivity or Recall:
Recall measures how many of the actual positives the classifier identified correctly. Mathematically, it is defined as:
Recall = True Positive/(True Positive + False Negative)
In the example, the recall of the loan-default classifier is: 12/(12+8) = 0.60 = 60%.
Specificity measures how many of the actual negatives the classifier identified correctly. Mathematically, it is defined as:
Specificity = (True Negative)/(True Negative + False Positive)
In the example, the specificity of the loan-default classifier is: 57/(57+23) = 0.7125 = 71.25%.
Precision measures, of the total predicted to be positive, how many were actually positive. Mathematically, it is defined as:
Precision = (True Positive)/(True Positive + False Positive)
In the example, the precision of the loan-default classifier is: 12/(12+23) ≈ 0.34 = 34%.
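The four metrics for the loan-default example can be reproduced from the confusion-matrix counts. A quick sketch (the helper names are my own, not from the article):

```python
def accuracy(tp, fp, tn, fn):
    """Fraction of all predictions that were correct."""
    return (tp + tn) / (tp + fp + tn + fn)

def recall(tp, fn):
    """Fraction of actual positives that were correctly identified."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Fraction of actual negatives that were correctly identified."""
    return tn / (tn + fp)

def precision(tp, fp):
    """Fraction of predicted positives that were actually positive."""
    return tp / (tp + fp)

# Loan-default example: TP = 12, FP = 23, TN = 57, FN = 8 (100 applicants).
tp, fp, tn, fn = 12, 23, 57, 8
acc = accuracy(tp, fp, tn, fn)
rec = recall(tp, fn)
spec = specificity(tn, fp)
prec = precision(tp, fp)
```

Running this reproduces the worked numbers: accuracy 0.69, recall 0.60, specificity 0.7125, and precision 12/35 ≈ 0.343.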
These are a lot of metrics. Which metric should we rely upon? That very much depends on the business context. In any case, one metric alone will not give a full picture of how good the classifier is. Let us take an example.
We built a classifier that flags fraudulent transactions. This classifier determines whether a transaction is genuine or not. Historical patterns show that there are two fraudulent transactions for every hundred transactions. The classifier we built has the following confusion matrix.
• The Accuracy is 98%
• The Recall is 100%
• Precision is 98%
• Specificity is 0%
If this model is deployed based on the metrics of accuracy, recall, and precision, the company will be doomed for sure. Although the model appears to be performing well, it is, in fact, a dumb model. It is not doing the very thing that it is supposed to do, i.e. flag fraudulent transactions. The most important metric for this model is specificity. Its specificity is 0%.
Since a single metric cannot be relied on for evaluating a classifier, more sophisticated metrics have been created. These sophisticated metrics are combinations of the above metrics. A few key ones are explained here.
F1 Score:
F1-score is the harmonic mean of precision and recall. The regular mean treats all values equally; the harmonic mean gives much more weight to low values. As a result, the classifier will only get a high F1 score if both recall and precision are high. It is defined as:

F1 = 2 × (precision × recall) / (precision + recall)
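A short sketch comparing the harmonic mean used by F1 with the arithmetic mean, showing how F1 punishes a single low component (the example values are made up):

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall (defined as 0 if both are 0)."""
    if precision + recall == 0:
        return 0.0
    return 2.0 * precision * recall / (precision + recall)

# When precision and recall are equal, F1 equals the arithmetic mean.
balanced = f1_score(0.8, 0.8)

# When one component collapses, the arithmetic mean would still report 0.5,
# but the harmonic mean drags F1 down toward the weaker component.
imbalanced = f1_score(0.98, 0.02)
```

This is exactly why F1 is preferred over a plain average: a classifier cannot hide a terrible recall behind an excellent precision (or vice versa).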
Receiver Operating Characteristics (ROC) and Area Under Curve (AUC):
Receiver Operating Characteristic, a.k.a. ROC, is a visual metric. It is a two-dimensional plot with False Positive Rate (or 1 − specificity) on the X-axis and True Positive Rate (or Sensitivity) on the Y-axis.
In the ROC plot, there is a line that shows how a random classifier would trade off TPR and FPR. It is straight, as a random classifier has an equal probability of predicting 0 or 1. If a classifier is doing a better job, then it should have a higher proportion of TPR compared to FPR. This pushes the curve towards the north-west.
Area Under Curve (AUC) is the area under the ROC curve. If the AUC is 1, i.e. 100%, it implies a perfect classifier. If the AUC is 0.5, i.e. 50%, it implies that the classifier is no better than a coin toss.
There are a lot of evaluation metrics to test a classifier. A classifier needs to be evaluated based on the business context, and the right metrics need to be chosen based on that context. There is no one magic metric.
In this post, we have seen the basics of a classifier. Classifiers are ubiquitous in data science. There are many algorithms that implement classifiers. Each has its own strengths and weaknesses. We will discuss a few of these algorithms in the next posts of this series.
Originally published here on September 18, 2017.
Prof. Dr. Atish Dabholkar (ICTP): Quantum black holes: an encounter between Hawking and Ramanujan
Date of publication: 24. 11. 2022
Monday physics colloquium
Main Lecture Hall (Velika predavalnica), IJS
Attention: unusual time and place
Black holes are an astonishing prediction of Einstein's general relativity with bizarre causal and quantum properties. Hawking discovered that in quantum theory a black hole is not really black but
is slowly emitting radiation. Understanding the implication of Hawking radiation has proved to be a very valuable guide in our search to unify general relativity with quantum mechanics and to learn
about the quantum structure of spacetime. Explorations of quantum black holes in string theory have led to unexpected connections with the mathematical structures created by the great Indian
mathematician Ramanujan from a century ago. In this lecture I will describe the fascinating history, physics and mathematics of quantum black holes.
Cable Theory for Skeptics - Townshend Audio Forum
It does all sound a bit far fetched. Admittedly. But there is a theory or hypothesis for almost every doubt expressed on the subject. In fact, it is often surprising how simple some of the
explanations are.
We could state simply that the better a cable is electrically, the better it will sound. However, the exact analogue of this hypothesis as regards amplifiers (that improved THD figures automatically imply improved sound) has been well debunked, so we should be rash to claim that. The point is, both for amplifiers and cables, that the ear functions in such a strange manner (as far as anyone can tell, which isn't all that far) that "better" is not always an obvious direction. In fact, it can be argued very convincingly on philosophical grounds that "better" must always be defined empirically, i.e. by ear, subjectively.
From the point of view of audio design, this is highly unsatisfactory. What is needed in design is a well-defined set of design criteria, and subjective judgement is not by any means the best way to derive such. However, one must do the best one can, by correlation between many observations of many phenomena under many circumstances, by many observers. Science tends to come down to this. It's all very confusing really, especially if one is accustomed to believe in the objectivity of knowledge.
So when we do say the Isolda cable has better electrical properties than other cables, there are two important factors to bear in mind. The first is that for a component to be "perfect", optimum electrical characteristics are a sine qua non. The second is that the cable design arose by application of theoretical principles to a practical situation, with evaluation carried out, ultimately, by ear.
It is the business of impedance matching in audio cables that raises the most eyebrows. Impedance matching is not normally considered an issue at frequencies where the cable length is less than
about a quarter of the signal wavelength. However, cable theory does not in fact predict any variation in wave behaviour with cable length or frequency, as regards reflection from a mistermination. Signal reflection still occurs, but its effects at low frequencies in short cables are over in such a short time, as a proportion of the signal period, that they are generally negligible.
The attached graphs show what happens in various cases when very fast risetime pulses are applied to a length of cable, terminated correctly or otherwise. Basically, the output signal from the cable
rises in a staircase fashion, where the width of each step is equal to the transit time along the cable twice (end to end and back), and the height of each step is dependent on the degree of
mistermination – gross mistermination gives smaller steps, hence more of them, hence a slower rate of rise. Rate of rise is also slowed by high loss dielectric material.
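The staircase behaviour described here can be sketched numerically with a simple bounce-diagram calculation. The impedance values below (a 0.2 ohm amplifier output, 8 ohm and 75 ohm lines, an 8 ohm load) are illustrative figures taken from the discussion, and the function name is our own:

```python
# Staircase step response of a misterminated lossless line (bounce diagram).
# A 1 V step is applied through source resistance Rs; the load voltage rises
# one step per round trip along the line.
def staircase(Rs, Z0, RL, round_trips=8):
    gs = (Rs - Z0) / (Rs + Z0)      # source reflection coefficient
    gl = (RL - Z0) / (RL + Z0)      # load reflection coefficient
    v_inc = Z0 / (Rs + Z0)          # first incident wave launched into the line
    levels, v_load = [], 0.0
    for _ in range(round_trips):
        v_load += v_inc * (1 + gl)  # each arrival adds incident plus reflected wave
        levels.append(v_load)
        v_inc *= gl * gs            # next incident wave after one round trip
    return levels

matched = staircase(0.2, 8.0, 8.0)      # 8 ohm line into 8 ohm load
mismatched = staircase(0.2, 75.0, 8.0)  # 75 ohm line into the same load
print(matched[0])       # settles to RL/(Rs+RL) ~ 0.976 on the first transit
print(mismatched[:3])   # a slow staircase creeping toward the same final value
```

The matched line settles in a single transit, while the misterminated one rises in many small steps toward the same final value — the slower rate of rise described above.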
Bearing this in mind, the ideal cable for any AC application is one which is impedance matched to at least one end and which has very low losses in the dielectric. To see how relevant this was to
audio cable, we simply made up some cables complying with this script and tried them out. Unfortunately, the results were sufficiently marked to give us significant pause for thought about how on
Earth the ear/brain manages to pick up such apparently subtle effects so clearly. Even more surprising was the effect of reducing series resistance in the cable from low to even lower. We have yet
to think of a convincing way of demonstrating the effect of this using test instruments.
As mentioned in the Technical Note on Isolda cable, the most intriguing characteristic of the ear is its extremely refined pitch discrimination ability. The author of this note has measured his own skill at this as better than 0.1% frequency shift at 10kHz, and can also tune a concert "A" (440Hz) to about 0.2% absolute. The latter is a less common ability, but the former does not appear to be exceptional. If, as has been suggested, the ear times cycles of a waveform, this could imply that its resolution is less than 100ns. However it does it, this characteristic bears some thinking about.
It has been pointed out that it is not possible to match the impedance of a loudspeaker exactly in a cable, as it varies over quite a wide range. This is true; however, it seems reasonable that one
should do the best possible, which means in practice taking an average impedance, in most cases the quoted nominal impedance of the loudspeaker. This minimises the standing wave ratio in the cable.
Since standing wave effects are more important at higher frequencies, it is arguably not vital to allow for low frequency aberrations in impedance characteristic, and many loudspeakers have only
relatively minor impedance variations above about 4kHz.
The top two of the attached graphs, Fig 1 and Fig 2, show the effect of mistermination, comparing pulse response of an 8ohm cable into an 8ohm load with that of a 75ohm cable into the same load.
Both cables have very low loss.
The third graph, Fig 3. shows the effect of poor quality dielectric material: the negative going pulse should be an exact mirror image of the positive going one.
The recent upsurge of interest in audio cables has produced a staggering variety of loudspeaker cables in assorted forms. These cables vary from the simple (house wiring T&E) to the complex (MIT shotgun) and from one extreme of bulk (Van Den Hul SCS 2) to the other (DNM). About the only thing these cables have in common is that they have been designed almost exclusively on the basis of an incomplete analysis of the factors affecting "cable sound".
The chief factor, and the one most frequently overlooked, is the hearing mechanism. The old, "safe" assumptions about human hearing ("the ear has a response from 20Hz to 15 or 20kHz and is not sensitive to phase, amplitude variations of less than 1dB, frequency response nonlinearities of less than 2dB or distortion of less than 0.3% THD") give a drastically incomplete, indeed inaccurate picture; an analogy would be to assert that the Earth is spherical, ignoring completely the equatorial bulge and surface geography. A more nearly complete description assigns to the ear (and internal sound detection mechanisms) a frequency response (highly nonlinear) from 5Hz to 45kHz, a phase sensitivity of perhaps under a microsecond, a rise time of 11us, a sensitivity to amplitude variations of only 0.2dB and frequency shifts of less than 0.05%, and a sensitivity to certain forms of linear and nonlinear distortion of 0.01%.
Given these parameters, audio design is seen to be a much less straightforward matter than is frequently assumed. Laboratory instruments regularly achieve better accuracy than is demanded of audio systems, or better frequency response, or better static (linear) distortion, but it is well worth noting that few if any instruments require all of these specifications simultaneously.
The second neglected factor in consideration of audio cables is the interaction of the cable with the transmitting and receiving circuits at either end of it. It is relatively trivial to measure the
loss along a piece of cable by differential methods (but see note 3 below), but this does not take into account the behaviour of the driving amplifier when loaded with the cable. (An extreme case is
the oscillatory behaviour of certain power amplifiers when connected to a loudspeaker via cable of low inductance- in this case; the amplifier requires the cable inductance to act as part of the
Zobel network at its output which maintains stability.) Because of the response of amplifiers, particularly high feedback amplifiers, to very high frequencies, it is necessary to consider cable
interaction to frequencies well above the audio band (note 4).
Classical transmission line theory predicts that a transmission line (cable or waveguide) has an associated characteristic impedance, Z, defined for a lossless line as
Z = √(L/C),
where L and C are inductance and capacitance per unit length of line.
When a transmitting or receiving circuit with an output or input impedance equal to Z is connected to such a line there is a complete transfer of power to and from the line with no reflection. If a
circuit of impedance not equal to Z is connected, there will be some reflection at the end of the cable. This is true for any frequency, “from DC to daylight”.
“Impedance matching” is employed for example in radio antenna circuits, where a 75ohm aerial feeds a 75ohm input via 75ohm cable.
It is normally assumed that impedance matching is only an issue at frequencies where wavelengths are comparable to the length of cable, since only at frequencies around or above this will standing
waves be set up, resulting in drastic power loss and even transmitter damage. However, the wave reflections still occur at lower frequencies, and in the case of a line misterminated at both ends
multiple reflections will travel up and down the cable for a long time, depending on the degree of mismatch and the cable loss.
The majority of loudspeaker cables consist of two conductors side by side, insulated with either PVC or PTFE. This form of construction typically gives a characteristic impedance of around 80ohm,
which is a poor match to the average loudspeaker impedance of 8ohm and a very poor match to the average amplifier output impedance of around 0.2ohm. Hence reflections can be expected, leading to
audible problems. Such widely spaced cables are also susceptible to radio frequency interference and are noticeably sensitive to their surroundings, as the electromagnetic field associated with the
signal passing through them is not confined in space.
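As a rough numerical sketch of this mismatch (the inductance figure below is a back-calculated assumption, chosen only so that the 600pF/m quoted later in Note 1 and an 8 ohm impedance are mutually consistent; it is not a measured value):

```python
import math

# Characteristic impedance of a lossless line, Z = sqrt(L/C), and the
# reflection coefficient at a termination, gamma = (ZL - Z0)/(ZL + Z0).
def char_impedance(L_per_m, C_per_m):
    return math.sqrt(L_per_m / C_per_m)

def reflection(Z_load, Z0):
    return (Z_load - Z0) / (Z_load + Z0)

# 600 pF/m with ~38.4 nH/m gives the quoted ~8 ohm impedance:
print(char_impedance(38.4e-9, 600e-12))   # 8.0 ohm

# Typical side-by-side cable (~80 ohm) into an 8 ohm speaker:
print(reflection(8.0, 80.0))              # ~-0.82: a strong reflection
print(reflection(8.0, 8.0))               # 0.0: matched, no reflection
```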
By contrast, Townshend “Isolda” Impedance Matched cable has a characteristic impedance close to 8ohm for optimum loudspeaker matching and minimum reflections in the audio band. The cable is also
comprised of two flat strips, a construction which confines the signal’s EM field within the cable and minimises the effect of surrounding objects and interfering fields.
Additionally, the DC resistance of the cable is low; this is another basic factor affecting cable performance, as can be seen from the very simplest analysis. The low DC resistance and minimal, very
high performance dielectric material, ensure that the cable impedance is constant, and losses are low, up to several hundred megahertz.
The Townshend “Isolda” cable satisfies the basic criteria of low loss lumped parameters and matched characteristic impedance. It has very low DC resistance and low loss at high frequencies, a
consideration which tends to result in somewhat improved subjective performance in audio cables for reasons which are not immediately obvious (since “high frequencies” means around the GHz region).
Subjectively, the gains in using these cables are surprisingly marked. Compared with other cables, detail and bass performance are much improved. This gives an interesting insight into
psychoacoustic phenomena, as the measured performance of different cables tends to be very similar at low frequencies, differences only becoming apparent at relatively high frequencies.
Note 1 there has long been confusion in the hi-fi world about “high capacitance cable”. Townshend cables have a very high capacitance by most standards, 600pF/m. However, the point of impedance
matching is that, with an 8ohm load on the end of an 8ohm cable, the amplifier sees only 8ohms (resistive), with no series inductance or shunt capacitance. With high impedance cables, the load is
seen in series with some or all of the cable inductance. With cables of lower than 8ohm impedance, the load is seen in parallel with more or less of the cable capacitance. Of course, this only
applies to a theoretical 8ohm load, unrepresentative of most loudspeakers, but the deviations are relatively small and the approximation is generally valid. In the extreme case where the loudspeaker
impedance tends to infinity, an amplifier might see as much as 6000pF of cable capacitance with 10m of cable. If the amplifier has a high output impedance of 1ohm, the effective time constant is
6ns, equivalent to a -3dB point of 25MHz.
Note 2 Strictly speaking, the equation given above for Z is only accurate when both L and C are loss-free (no series resistance with L or dielectric loss with C). This is true for C as the loss
factor (D) of open circuit capacitance of these cables is less than 0.1% at audio frequencies. However, the quality factor (Q) of the short circuit inductance is less than one at frequencies below
3kHz. This prompts two observations. First, a cable with a Q of 100 at 1kHz (say) would have to use 7500sq.mm in both conductors, and would have thickness of 20cm and width of 38cm. Second,
measurement at audio frequencies shows that the cable does still behave substantially as a constant impedance cable – load it with 8ohm and measure its input impedance, and the result is purely
resistive. Load it with an incorrect impedance, and the load resistor appears in series or parallel with some of the cable inductance or capacitance, depending on the precise resistor value.
Note (3); it is interesting that the “time of flight” of a signal down a typical audio cable should be easily detectable at audio frequencies by differential methods. The speed of light in a typical
cable is 200,000,000 m/s; so for a 5m cable the time of flight is 25ns. Thus if a 10kHz input is applied to such a cable at one end, with wave function
V(in) = sin(20,000π·t).
The output at the other end will be
V(out) = sin(20,000π·(t − 2.5×10⁻⁸)).
Using sin(a − b) = sin a·cos b − cos a·sin b, V(in) − V(out) is seen to be
V(diff) ≈ 5×10⁻⁴·π·cos(20,000π·t), i.e. there should be a difference waveform 90 degrees out of phase with the input, with amplitude approximately 0.0016·V(in) – about 56dB down.
This prediction was verified in practice, implying that differential testing of cables and amplifiers is perhaps not as simple as it may seem. This differential signal is of course not an error.
A typical propagation delay in a sold state amplifier is about 150ns, which in a differential test should give a signal at -40dB at 10kHz, -34dB at 20kHz, and -60dB at 1kHz. A valve amplifier on
test gave a time delay of 700ns, with associated differential signal of -47dB at 1kHz, -27dB at 10kHz and -21dB (9%) at 20kHz.
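All of the quoted levels follow from the small-delay approximation: the difference between sin(ωt) and sin(ω(t − τ)) has peak amplitude ≈ ωτ = 2πfτ. A quick check (the helper name is ours):

```python
import math

# Peak level, in dB relative to the input, of the differential signal
# produced by a pure time delay tau at frequency f.
def diff_level_db(f_hz, tau_s):
    amplitude = 2 * math.pi * f_hz * tau_s  # small-angle approximation
    return 20 * math.log10(amplitude)

print(round(diff_level_db(10e3, 25e-9), 1))    # 5 m cable: about -56 dB
print(round(diff_level_db(10e3, 150e-9), 1))   # solid-state amp: about -40 dB
print(round(diff_level_db(20e3, 700e-9), 1))   # valve amp: about -21 dB
```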
Note (4) bearing in mind that in the case of a typical amplifier with feedback from output to input the output impedance is subject to the same “time delay” as a signal passing from input to output
(although somewhat counterintuitive, this model is strictly speaking correct), consideration of the effect of cable reflections on a circuit becomes extremely complex.
Note (5); in a very simple experiment, we established that the ear can easily detect a frequency shift of 0.1% at 10kHz. (This is the best resolution our signal generator would allow.) If one
assumes that the ear discriminates frequencies by measuring time intervals between successive zeros or maxima of a waveform, as is suggested by some psychoacoustic research, then it can apparently
differentiate between the 100us per cycle of 10kHz and the 99.9us per cycle of 10.01kHz. This in turn implies that the effective resolution of the ear is better than 100ns. If so, the reasons for
audible differences between cables are entirely obvious (as, incidentally, are some of the reasons for the inferiority of 44kHz digital recording). The subjective quality of the effects is however
still mysterious.
Q. How many five-digit prime numbers can be obtained by using all the digits 1, 2, 3, 4 and 5 without repetition of digits? - Sociology OWL
Q. How many five-digit prime numbers can be obtained by using all the digits 1, 2, 3, 4 and 5 without repetition of digits?
(a) Zero
(b) One
(c) Nine
(d) Ten
Correct Answer: (a) Zero
Question from UPSC Prelims 2020 CSAT Paper
Explanation :
Five-digit prime numbers
To determine how many five-digit prime numbers can be formed using all the digits 1, 2, 3, 4, and 5 without repetition, recall the divisibility rule for 3: a number is divisible by 3 exactly when the sum of its digits is.
The sum of the digits of any such five-digit number is 1 + 2 + 3 + 4 + 5 = 15, which is divisible by 3. Every arrangement of these digits is therefore divisible by 3, and since each is greater than 3, none of them can be prime. Hence the answer is zero.
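The divisibility argument can also be confirmed by exhaustive search; the helper below is our own illustration, not part of the exam solution:

```python
from itertools import permutations

# Form every five-digit number from the digits 1-5 (no repetition)
# and test each for primality by trial division.
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

candidates = [int("".join(p)) for p in permutations("12345")]
primes = [n for n in candidates if is_prime(n)]
print(len(candidates))  # 120 arrangements in total
print(len(primes))      # 0 -- every arrangement is divisible by 3
```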
ETHZ_IR_2015_Proj2_G9 Assignment Report
Creative Commons CC BY 4.0
We develop three term-based models (naive tf, log tf, and BM25), a unigram language model, and a pointwise online AdaGrad approach to select the top 100 documents for each query. We use MIN((TP+FN), 100) as the denominator when calculating AP. We also apply several methods in the preprocessing and running stages to obtain a better MAP as well as a shorter running time for the whole program. Two rounds of scanning are needed in our system.
VTU Control Systems - May 2016 Exam Question Paper | Stupidsid
Total marks: --
Total time: --
(1) Assume appropriate data and state your reasons
(2) Marks are given to the right of every question
(3) Draw neat diagrams wherever necessary
1(a) What are the properties of good control system?
4 M
1(b) Construct mathematical model for the mechanical system shown in Fig. Q1(b). Then draw electrical equivalent circuit based on F-V analogy.
8 M
1(c) For electrical system shown in Fig. Q1(C), obtain transfer function V[2](s)/V[1](s).
8 M
2(b) Obtain the transfer function for the block diagram shown in Fig. Q2(b), using block diagram reduction method.
8 M
2(c) For the electrical circuit shown in Fig. Q2(c), obtain over all transfer function using Mason's gain formula.
8 M
3(a) What are static error coefficients? Derive expression for the same.
6 M
3(b) An unity feedback system has \( G(s)=\dfrac{20(1+s)}{s^2(2+s)(4+s)},\) calculate its steady state error co-efficients when the applied input r(t) = 40 + 2t + 5t^2.
6 M
3(c) An R-L-C series circuit is an example of a second order system. If R = 1 Ω, L = 1 H and C = 1 F, find the response for a step voltage of 10 V connected as input, with output taken across R.
8 M
4(a) List the advantages and disadvantages of Routh's criterion (R-H-criterion).
4 M
4(b) A unity feedback control system has \( G(s)=\dfrac{k(s+13)}{s(s+3)(s+7)}.\) Using Routh's criterion calculates the range of k for which the system is i) stable ii) has closed loop poles more
negative than -1.
10 M
4(c) Find the range of k for which the system, whose characteristic equation is given below is stable. F(s) = s^3 + (k + 0.5) s^2 + 4ks + 50.
6 M
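As a numerical cross-check of 4(c) (not part of the required Routh working), one can sweep k and test the roots of the characteristic polynomial directly; the Routh array gives stability for k > (−1 + √201)/4 ≈ 3.29:

```python
import numpy as np

# Stability of F(s) = s^3 + (k + 0.5)s^2 + 4k s + 50 for a given k:
# stable iff all roots lie strictly in the left half plane.
def stable(k):
    roots = np.roots([1.0, k + 0.5, 4.0 * k, 50.0])
    return bool(np.all(roots.real < 0))

ks = np.arange(0.1, 10.0, 0.01)
stable_ks = ks[[stable(k) for k in ks]]
print(round(float(stable_ks[0]), 2))  # boundary near k ~ 3.29-3.30
```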
5(a) Sketch the root locus for unity feedback having \( G(s)=\dfrac{k(s+1)}{s(s+2)(s^2+2s+2)}.\) Determine the range of k for the system stability.
16 M
5(b) Explain how to determine angle of arrival from poles and zeros to complex zeros.
4 M
6(a) What are the limitations of frequency response methods?
4 M
6(b) A control system having \( G(s)=\dfrac{k(1+0.5s)}{s(1+2s)\left ( 1+\dfrac{s}{20} +\dfrac{s^2}{8}\right )}.\) draw bode plot, with k = 4 and find gain margin and phase margin.
16 M
7(a) What is polar plot? Explain procedure to sketch polar plot for type 0 and type 1 systems.
8 M
7(b) Sketch the Nyquist plot of a unity feedback control system having the open loop transfer function \( G(s)=\dfrac{5}{s(1-s)}.\) Determine the stability of the system using the Nyquist stability criterion.
12 M
8(a) Find the transfer function for a system having the state model given below: \[\dot{x}=\begin{bmatrix} 0 & 1\\ -2 & -3 \end{bmatrix}x+\begin{bmatrix} 1\\ 0 \end{bmatrix}u,\ \ y=[1\ \ 0]x.\]
8 M
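For 8(a), the transfer function follows from G(s) = C(sI − A)⁻¹B; a symbolic sketch (sympy is our choice of tool, not mandated by the question):

```python
import sympy as sp

# Transfer function G(s) = C (sI - A)^(-1) B for the given state model.
s = sp.symbols('s')
A = sp.Matrix([[0, 1], [-2, -3]])
B = sp.Matrix([[1], [0]])
C = sp.Matrix([[1, 0]])

G = sp.simplify((C * (s * sp.eye(2) - A).inv() * B)[0])
print(G)
```

Working it out by hand: sI − A = [[s, −1], [2, s+3]], det = s² + 3s + 2, so G(s) = (s + 3)/(s² + 3s + 2) = (s + 3)/((s + 1)(s + 2)).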
8(b) Obtain the state model for the electrical system given in Fig. Q8(b) choosing the state variables as i[1](t), i[2](t) and V[C](t).
12 M
More question papers from Control Systems
Tales of the Tails: (Not so) mysterious Heavy Tail worldbeyond the Bell Curve
Climb with me beyond the Bell Curve as we unravel the marvels of heavy tails in this exciting journey.
The scientific world would not be what it is today without the normal distribution. It is the foundation of many statistical models for several good reasons. Most importantly, it appears commonly in
nature. For instance, if you collect height data from people at your workplace or school and create a histogram, you will likely observe the familiar bell curve. This is because human
characteristics, such as weight and height, follow a normal distribution, like many other natural phenomena. One could even go so far as to call the normal distribution nature's default pattern.
But this is only the tip of the iceberg. For example, roll a die many times and count the average number of times you rolled a six. In the beginning, your results might appear quite chaotic - three
sixes in a row and then none for a long time… However, as you continue rolling, a familiar pattern emerges; the distribution of the average starts to resemble a normal distribution. This phenomenon
turns out to be quite universal as the shape of the original data typically does not matter; if you add up enough of it, the result starts looking like a normal distribution.
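The die-rolling experiment described above is easy to simulate; the sample sizes below are arbitrary illustration choices:

```python
import random

# Simulate the dice example: the fraction of sixes in 1000 rolls,
# repeated 2000 times, clusters tightly around 1/6.
random.seed(0)

def sixes_fraction(n_rolls):
    return sum(random.randint(1, 6) == 6 for _ in range(n_rolls)) / n_rolls

averages = [sixes_fraction(1000) for _ in range(2000)]
mean = sum(averages) / len(averages)
spread = max(averages) - min(averages)
print(round(mean, 2))    # close to 1/6 ~ 0.17
print(round(spread, 2))  # small: the averages concentrate in a narrow band
```

A histogram of `averages` would show the familiar bell shape, even though a single die roll is uniform, not normal.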
This principle is known as Central Limit Theorem (CLT) and it is responsible for the widespread use of normal distributions in many models. So, is this it? Is the normal distribution all you need to
remember from your statistics course? Definitely not! What is more, things can go terribly wrong when we assume that something is normal, when it is not. Let me show you that this is true with one
simple example.
You have probably heard about the financial crisis in 2008. Up until that point, pricing and risk models in finance, such as the Black-Scholes model, relied heavily on the assumption of a normal
distribution. However, this assumption was more often than not violated in practice. In consequence, the models were underestimating the risk of extreme or rare events which is commonly called the
tail risk.
This means that people and financial institutions did not fully realize how bad things could get if an extreme event happened, and therefore were not prepared or insured against it. Although the 2008
crisis was years in the making and already in 2006 we could observe its first effects on the U.S. housing market, it is the fall of Wall Street bank Lehman Brothers in September 2008, the largest
bankruptcy in U.S. history, that tipped the scales.
Chaos ensued as everyone abruptly began losing huge amounts of money, leading to a worldwide financial crisis.
While the causes of the crisis were numerous and complex, it is the underestimation of tail risk and improper risk management that can be considered the primary culprits. So, how do we properly
account for the tail risk? Heavy tails can help with this! Whether this is a totally new territory for you or if you are already versed in heavy tails but seeking an engaging read, join me on this
journey where we (re)discover heavy tails and some of their magical properties and applications.
What are heavy-tails?
Heavy tails, or more precisely heavy-tailed distributions, represent a type of data where the likelihood of extreme events is greater compared to more common distributions, such as the normal or the
exponential distribution. They are used to describe situations where rare or unusual things occur more often than you would think. For example, earthquake magnitudes are heavy-tailed. This means that
small earthquakes occur frequently, almost continuously and typically we do not even notice them. But, once in a while, an extreme earthquake happens, like the Japan earthquake in 2011 or the Indian
Ocean earthquake in 2004. Prediction models based on normal distribution would deem an earthquake on such scale as almost impossible, while two of them already happened in this century, causing
hundreds of thousands of casualties. This shows that heavy tails are crucial to properly understand and model the risk of extreme earthquakes.
The name "heavy tail" comes from a visual representation of the distributions (see Figure 1 below). Here we compare the right tails of an exponential, normal, and Pareto distribution — the most famous example of a heavy-tailed distribution. The line corresponding to the Pareto distribution is highest for large $x$, indicating that the probability of a very large data point or event is higher than for the other two distributions.
Figure 1: Tail comparison of Pareto, Normal and Exponential distribution.
Heavy tails may seem mysterious simply because they are less known. In the early evolution of probability theory, the focus primarily rested on the elegance of normal distributions and their
widespread applicability. It wasn’t until the 20th century that scientists like Vilfredo Pareto and Paul Lévy began advocating the existence of distributions with heavier tails. However, this was not
enough to convince the scientific world to depart from the comforts of the normal distribution and venture towards the unknown heavy tails.
For many years, heavy-tailed theory was studied only by a few and considered more as a mathematical curiosity rather than a tool that is useful in practice. People simply were not convinced that such
a high likelihood of extreme events could be true in real life. However, nature has mysterious ways of surprising us and this holds true for heavy tails as well. With increasing digitization, we
became more and more capable of collecting and analyzing data and suddenly we realized that there is an entire world of heavy tails beyond the "bell curve" and that examples of heavy tails are found
all around us. To name a few, the following can be heavy-tailed:
• Natural disasters such as magnitudes of earthquake distributions;
• City sizes;
• Packet sizes in Internet traffic;
• Insurance claim sizes;
• Number of connections in real-world networks;
• Sizes of disease outbreaks.
With these new findings, we began adapting our mathematical models to reflect the possibility of rare events that many of the classical approaches ignored. Unfortunately, this revolution has been
primarily driven by catastrophic events like the financial crisis, but better late than never!
Unorthodox properties of heavy-tails
Another reason why, for a long time, people did not believe in the occurrence of heavy tails in practice is their somewhat unorthodox properties. For example, a heavy-tailed distribution can have an
infinite variance, or even an infinite mean. This is problematic for a couple of reasons. First, the classical statistics revolve around averages and variances. We use them to describe and compare
data in a meaningful way, perform hypothesis testing, etc. However, if the mean or the variance is infinite, none of these methods can be applied. Second, imagine that some natural phenomenon has an
infinite mean; think for example of earthquake distributions. If you collect a sample, no matter how large, you will be able to compute its average value and it will always be finite. This is a bit
counterintuitive and makes the estimation of heavy-tailed phenomena less straightforward than that of light-tailed (not heavy-tailed) phenomena.
What about the Central Limit Theorem that “magically” transforms distributions into a normal distribution? Can we use it to make some sense out of the heavy-tailed distributions? Yes! … and no. CLT
requires the variance to be finite, and, as we know by now, not all heavy tails have that. However, it does not mean that there is no regularity to these heavy-tailed distributions. Instead of being
transformed into a normal distribution, they can be transformed into another heavy-tailed distribution with infinite variance.
Yet another non-conforming feature of heavy tails becomes evident when examining Figure 2. There, we took a sample of 1000 data points and for each value $n$ on the $x$-axis, we plotted the sum of
the first $n$ data points in our set. Mathematically, we would call this object a random walk, which is an extremely useful model for analyzing time-dependent processes such as the movement of
particles or stock market prices.
Figure 2: Different behavior of random walks with heavy- and light-tailed increments.
Looking back at the graph, if the samples come from any light-tailed distributions, we could approximate the plot with a straight line. However, for the Pareto case (which is a heavy-tailed
distribution), we observe visible jumps, caused by extreme events. In this case, a straight-line approximation no longer seems like a good idea. It seems that some distributions just do not want to
conform and there is nothing more we can do other than accept them as they are. But that is alright, because, as it turns out, some of their properties are intuitive, well-understood, and can make
analysis quite simple.
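The contrast in Figure 2 is easy to reproduce in a few lines; the tail index below (alpha = 0.8, an infinite-mean Pareto) is our own choice, picked to make the jumps dramatic:

```python
import random

# Compare light-tailed (exponential) and heavy-tailed (Pareto) increments:
# for heavy tails, a single step can dominate the whole running sum.
random.seed(1)

def pareto_sample(alpha):
    # Inverse-transform sampling: u^(-1/alpha) has a Pareto(alpha) tail.
    return random.random() ** (-1.0 / alpha)

light = [random.expovariate(1.0) for _ in range(1000)]
heavy = [pareto_sample(0.8) for _ in range(1000)]

print(round(max(light) / sum(light), 3))  # tiny: no single step stands out
print(round(max(heavy) / sum(heavy), 3))  # large: one "jump" dominates
```

Plotting the running sums of `light` and `heavy` reproduces the straight line versus visible jumps seen in Figure 2.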
Conspiracy vs. Catastrophe Principle
A well-known and intuitive characteristic principle of heavy tails is the catastrophe principle. Let me explain it using an example. If the total wealth of people in a train is a few million dollars,
then most likely you are traveling with one millionaire and the rest of the passengers have an average wealth. This is because wealth distribution is typically heavy-tailed. This example can be
generalized to a catastrophe principle, which tells us that if a sum of heavy-tailed data points is large, it is most likely due to one data point being extremely large, a.k.a a catastrophe.
Now, imagine that you travel in a train and the average height of the passengers is more than two meters. Does that mean that you are traveling with a 10-meter-tall giant? Probably not! It is more
likely that you travel with a basketball team where all players are exceptionally tall. This is an example of a conspiracy principle, as all data points in your height sample “conspired” by having an
above-average height. Height distribution is light-tailed because extremely tall people do not occur due to biological constraints. This is why the conspiracy principle applies in this case. These
two examples illustrate fundamental differences in the behavior of heavy-tailed distributions, as opposed to the light-tailed distributions we are more familiar with.
The intuition that comes from the catastrophe principle is extremely useful when analyzing processes related to heavy tails and especially their extrema. For example, imagine a supermarket queue
where the number of items each customer buys is heavy-tailed. We are interested in the probability of a very large waiting time. Although infinitely many different scenarios could lead to this, we
only need to care about one! Most likely the large waiting time is caused by only one extreme event, for example, a customer who decided to stock up for the entire year. This is where the beauty of
heavy tails lies: using the catastrophe principle we can bring down a complex problem to the analysis of a single instance that is tractable. But there is so much more.
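The catastrophe principle is easy to see numerically. As an illustrative sketch (not from the article; the distribution choices, sample sizes, and function name are mine), the following compares how much of a sum is explained by its single largest term for a heavy-tailed Pareto sample versus a light-tailed exponential one:

```python
import random

random.seed(0)

def max_share(draw, n=500, trials=300):
    """Average fraction of the sum contributed by the largest sample."""
    total = 0.0
    for _ in range(trials):
        xs = [draw() for _ in range(n)]
        total += max(xs) / sum(xs)
    return total / trials

pareto = lambda: random.paretovariate(1.1)  # heavy tail (tail index 1.1)
expo = lambda: random.expovariate(1.0)      # light tail

print("Pareto max share:     ", max_share(pareto))
print("Exponential max share:", max_share(expo))
```

Under the catastrophe principle, the Pareto maximum accounts for a large fraction of its sum, while the exponential maximum contributes only a sliver of its sum.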
Over the years, this idea has been polished and perfected, resulting in theorems for heavy-tailed processes which allow us to understand more and more complex heavy-tailed problems. The recently
published mathematics book The Fundamentals of Heavy Tails provides a comprehensive account of the properties, emergence, and estimation of heavy tails. This book can help you navigate through the world
of heavy tails and reveal other properties that could not be covered in this text.
To sum up, through this blog, I aim to show that there is more to statistics than the familiar bell curve and other light-tailed distributions. Heavy tails are prevalent, and they adhere to
non-standard yet intuitive principles. What is more, things can go very wrong when we ignore tail behavior. So, the next time you stumble upon heavy tails, do not ignore them; embrace them. Despite
appearances, they turn out to be quite tamable.
Weiss, R., Glösekötter, P., Prestes, E. et al., Hybridisation of Sequential Monte Carlo Simulation with Non-linear Bounded-error State Estimation Applied to Global Localisation of Mobile Robots, J
Intell Robot Syst 99, 335–357 (2020) DOI: 10.1007/s10846-019-01118-7.
Accurate self-localisation is a fundamental ability of any mobile robot. In Monte Carlo localisation, a probability distribution over a space of possible hypotheses accommodates the inherent
uncertainty in the position estimate, whereas bounded-error localisation provides a region that is guaranteed to contain the robot. However, this guarantee is accompanied by a constant
probability over the confined region and therefore the information yield may not be sufficient for certain practical applications. Four hybrid localisation algorithms are proposed, combining
probabilistic filtering with non-linear bounded-error state estimation based on interval analysis. A forward-backward contractor and the Set Inverter via Interval Analysis are hybridised with a
bootstrap filter and an unscented particle filter, respectively. The four algorithms are applied to global localisation of an underwater robot, using simulated distance measurements to
distinguishable landmarks. As opposed to previous hybrid methods found in the literature, the bounded-error state estimate is not maintained throughout the whole estimation process. Instead, it
is only computed once in the beginning, when solving the wake-up robot problem, and after kidnapping of the robot, which drastically reduces the computational cost when compared to the existing
algorithms. It is shown that the novel algorithms can solve the wake-up robot problem as well as the kidnapped robot problem more accurately than the two conventional probabilistic filters.
Rao-Blackwellized Particle Filter SLAM with grid maps in which particles do not contain the whole map but only a part
H. Jo, H. M. Cho, S. Jo and E. Kim, Efficient Grid-Based Rao–Blackwellized Particle Filter SLAM With Interparticle Map Sharing, IEEE/ASME Transactions on Mechatronics, vol. 23, no. 2, pp. 714-724,
DOI: 10.1109/TMECH.2018.2795252.
In this paper, we propose a novel and efficient grid-based Rao-Blackwellized particle filter simultaneous localization and mapping (RBPF-SLAM) with interparticle map sharing (IPMS). The proposed
method aims at saving the computational memory in the grid-based RBPF-SLAM while maintaining the mapping accuracy. Unlike conventional RBPF-SLAM in which each particle has its own map of the
whole environment, each particle has only a small map of the nearby environment called an individual map in the proposed method. Instead, the map of the remaining large environment is shared by
the particles. The part shared by the particles is called a base map. If the individual small maps become reliable enough to trust, they are merged with the base map. To determine when and which
part of an individual map should be merged with the base map, we propose two map sharing criteria. Finally, the proposed IPMS RBPF-SLAM is applied to the real-world datasets and benchmark
datasets. The experimental results show that our method outperforms conventional methods in terms of map accuracy versus memory consumption.
The problem of the interdependence among particles in PF after the resampling step, and an approach to solve it
R. Lamberti, Y. Petetin, F. Desbouvries and F. Septier, Independent Resampling Sequential Monte Carlo Algorithms, IEEE Transactions on Signal Processing, vol. 65, no. 20, pp. 5318-5333, DOI: 10.1109/
Sequential Monte Carlo algorithms, or particle filters, are Bayesian filtering algorithms, which propagate in time a discrete and random approximation of the a posteriori distribution of
interest. Such algorithms are based on importance sampling with a bootstrap resampling step, which aims at struggling against weight degeneracy. However, in some situations (informative
measurements, high-dimensional model), the resampling step can prove inefficient. In this paper, we revisit the fundamental resampling mechanism, which leads us back to Rubin’s static resampling
mechanism. We propose an alternative rejuvenation scheme in which the resampled particles share the same marginal distribution as in the classical setup, but are now independent. This set of
independent particles provides a new alternative to compute a moment of the target distribution and the resulting estimate is analyzed through a CLT. We next adapt our results to the dynamic case
and propose a particle filtering algorithm based on independent resampling. This algorithm can be seen as a particular auxiliary particle filter algorithm with a relevant choice of the
first-stage weights and instrumental distributions. Finally, we validate our results via simulations, which carefully take into account the computational budget.
Varying the number of particles in a PF in order to improve the speed of convergence, with a short related work about adapting the number of particles for other goals
V. Elvira, J. Míguez and P. M. Djurić, “Adapting the Number of Particles in Sequential Monte Carlo Methods Through an Online Scheme for Convergence Assessment,” in IEEE Transactions on Signal
Processing, vol. 65, no. 7, pp. 1781-1794, April 2017. DOI: 10.1109/TSP.2016.2637324.
Particle filters are broadly used to approximate posterior distributions of hidden states in state-space models by means of sets of weighted particles. While the convergence of the filter is
guaranteed when the number of particles tends to infinity, the quality of the approximation is usually unknown but strongly dependent on the number of particles. In this paper, we propose a novel
method for assessing the convergence of particle filters in an online manner, as well as a simple scheme for the online adaptation of the number of particles based on the convergence assessment.
The method is based on a sequential comparison between the actual observations and their predictive probability distributions approximated by the filter. We provide a rigorous theoretical
analysis of the proposed methodology and, as an example of its practical use, we present simulations of a simple algorithm for the dynamic and online adaptation of the number of particles during
the operation of a particle filter on a stochastic version of the Lorenz 63 system.
A novel particle filter algorithm with an adaptive number of particles, and a curious and interesting table I about the pros and cons of different sensors
T. de J. Mateo Sanguino and F. Ponce Gómez, “Toward Simple Strategy for Optimal Tracking and Localization of Robots With Adaptive Particle Filtering,” in IEEE/ASME Transactions on Mechatronics, vol.
21, no. 6, pp. 2793-2804, Dec. 2016. DOI: 10.1109/TMECH.2016.2531629.
The ability of robotic systems to autonomously understand and/or navigate in uncertain environments is critically dependent on fairly accurate strategies, which are not always optimally achieved
due to effectiveness, computational cost, and parameter settings. In this paper, we propose a novel and simple adaptive strategy to increase the efficiency and drastically reduce the
computational effort in particle filters (PFs). The purpose of the adaptive approach (dispersion-based adaptive particle filter – DAPF) is to provide higher number of particles during the initial
searching state (when the localization presents greater uncertainty) and fewer particles during the subsequent state (when the localization exhibits less uncertainty). With the aim of studying
the dynamical PF behavior regarding others and putting the proposed algorithm into practice, we designed a methodology based on different target applications and a Kinect sensor. The various
experiments conducted for both color tracking and mobile robot localization problems served to demonstrate that the DAPF algorithm can be further generalized. As a result, the DAPF approach
significantly improved the computational performance over two well-known filtering strategies: 1) the classical PF with fixed particle set sizes, and 2) the adaptive technique named
Kullback-Leibler distance.
Combination of several mobile robot localization methods in order to achieve high accuracy in industrial environments, with interesting figures for current localization accuracy achievable by
standard solutions
Goran Vasiljević, Damjan Miklić, Ivica Draganjac, Zdenko Kovačić, Paolo Lista, High-accuracy vehicle localization for autonomous warehousing, Robotics and Computer-Integrated Manufacturing, Volume 42,
December 2016, Pages 1-16, ISSN 0736-5845, DOI: 10.1016/j.rcim.2016.05.001.
The research presented in this paper aims to bridge the gap between the latest scientific advances in autonomous vehicle localization and the industrial state of the art in autonomous
warehousing. Notwithstanding great scientific progress in the past decades, industrial autonomous warehousing systems still rely on external infrastructure for obtaining their precise location.
This approach increases warehouse installation costs and decreases system reliability, as it is sensitive to measurement outliers and the external localization infrastructure can get dirty or
damaged. Several approaches, well studied in scientific literature, are capable of determining vehicle position based only on information provided by on board sensors, most commonly wheel
encoders and laser scanners. However, scientific results published to date either do not provide sufficient accuracy for industrial applications, or have not been extensively tested in realistic,
industrial-like operating conditions. In this paper, we combine several well established algorithms into a high-precision localization pipeline, capable of computing the pose of an autonomous
forklift to sub-centimeter precision. The algorithms use only odometry information from wheel encoders and range readings from an on board laser scanner. The effectiveness of the proposed
solution is evaluated by an extensive experiment that lasted for several days, and was performed in a realistic industrial-like environment.
A variant of particle filters that uses feedback to model how particles move towards the real posterior
T. Yang, P. G. Mehta, S. P. Meyn, Feedback particle filter, IEEE Transactions on Automatic Control, 58 (10) (2013), pp. 2465–2480, DOI: 10.1109/TAC.2013.2258825.
The feedback particle filter introduced in this paper is a new approach to approximate nonlinear filtering, motivated by techniques from mean-field game theory. The filter is defined by an
ensemble of controlled stochastic systems (the particles). Each particle evolves under feedback control based on its own state, and features of the empirical distribution of the ensemble. The
feedback control law is obtained as the solution to an optimal control problem, in which the optimization criterion is the Kullback-Leibler divergence between the actual posterior, and the common
posterior of any particle. The following conclusions are obtained for diffusions with continuous observations: 1) The optimal control solution is exact: The two posteriors match exactly, provided
they are initialized with identical priors. 2) The optimal filter admits an innovation error-based gain feedback structure. 3) The optimal feedback gain is obtained via a solution of an
Euler-Lagrange boundary value problem; the feedback gain equals the Kalman gain in the linear Gaussian case. Numerical algorithms are introduced and implemented in two general examples, and a
neuroscience application involving coupled oscillators. In some cases it is found that the filter exhibits significantly lower variance when compared to the bootstrap particle filter.
A gentle introduction to Box-Particle Filters
A. Gning, B. Ristic, L. Mihaylova and F. Abdallah, An Introduction to Box Particle Filtering [Lecture Notes], in IEEE Signal Processing Magazine, vol. 30, no. 4, pp. 166-171, July 2013. DOI: 10.1109/
Resulting from the synergy between the sequential Monte Carlo (SMC) method [1] and interval analysis [2], box particle filtering is an approach that has recently emerged [3] and is aimed at
solving a general class of nonlinear filtering problems. This approach is particularly appealing in practical situations involving imprecise stochastic measurements that result in very broad
posterior densities. It relies on the concept of a box particle that occupies a small and controllable rectangular region having a nonzero volume in the state space. Key advantages of the box
particle filter (box-PF) against the standard particle filter (PF) are its reduced computational complexity and its suitability for distributed filtering. Indeed, in some applications where the
sampling importance resampling (SIR) PF may require thousands of particles to achieve accurate and reliable performance, the box-PF can reach the same level of accuracy with just a few dozen box
particles. Recent developments [4] also show that a box-PF can be interpreted as a Bayes' filter approximation allowing the application of box-PF to challenging target tracking problems [5].
Implementation of PF SLAM in FPGAs and a good state of the art of the issue
B.G. Sileshi, J. Oliver, R. Toledo, J. Gonçalves, P. Costa, On the behaviour of low cost laser scanners in HW/SW particle filter SLAM applications, Robotics and Autonomous Systems, Volume 80, June
2016, Pages 11-23, ISSN 0921-8890, DOI: 10.1016/j.robot.2016.03.002.
Particle filters (PFs) are computationally intensive sequential Monte Carlo estimation methods with applications in the field of mobile robotics for performing tasks such as tracking,
simultaneous localization and mapping (SLAM) and navigation, by dealing with the uncertainties and/or noise generated by the sensors as well as with the intrinsic uncertainties of the
environment. However, the application of PFs with an important number of particles has traditionally been difficult to implement in real-time applications due to the huge number of operations
they require. This work presents a hardware implementation on FPGA (field programmable gate arrays) of a PF applied to SLAM which aims to accelerate the execution time of the PF algorithm with
moderate resource. The presented system is evaluated for different sensors including a low cost Neato XV-11 laser scanner sensor. First the system is validated by post processing data provided by
a realistic simulation of a differential robot, equipped with a hacked Neato XV-11 laser scanner, that navigates in the Robot@Factory competition maze. The robot was simulated using SimTwo, which
is a realistic simulation software that can support several types of robots. The simulator provides the robot ground truth, odometry and the laser scanner data. Then the proposed solution is
further validated on standard laser scanner sensors in complex environments. The results achieved from this study confirmed the possible use of low cost laser scanner for different robotics
applications which benefits in several aspects due to its cost and the increased speed provided by the SLAM algorithm running on FPGA.
Interesting approach to PF-based localization and active localization when the map contains semantic information
Nikolay Atanasov, Menglong Zhu, Kostas Daniilidis, and George J. Pappas, Localization from semantic observations via the matrix permanent, The International Journal of Robotics Research January–March
2016 35: 73-99, first published on October 6, 2015, DOI: 10.1177/0278364915596589.
Most approaches to robot localization rely on low-level geometric features such as points, lines, and planes. In this paper, we use object recognition to obtain semantic information from the
robot’s sensors and consider the task of localizing the robot within a prior map of landmarks, which are annotated with semantic labels. As object recognition algorithms miss detections and
produce false alarms, correct data association between the detections and the landmarks on the map is central to the semantic localization problem. Instead of the traditional vector-based
representation, we propose a sensor model, which encodes the semantic observations via random finite sets and enables a unified treatment of missed detections, false alarms, and data association.
Our second contribution is to reduce the problem of computing the likelihood of a set-valued observation to the problem of computing a matrix permanent. It is this crucial transformation that
allows us to solve the semantic localization problem with a polynomial-time approximation to the set-based Bayes filter. Finally, we address the active semantic localization problem, in which the
observer’s trajectory is planned in order to improve the accuracy and efficiency of the localization process. The performance of our approach is demonstrated in simulation and in real
environments using deformable-part-model-based object detectors. Robust global localization from semantic observations is demonstrated for a mobile robot, for the Project Tango phone, and on the
KITTI visual odometry dataset. Comparisons are made with the traditional lidar-based geometric Monte Carlo localization.
Synchronization and fluctuations: Coupling a finite number of stochastic units
It is well established that ensembles of globally coupled stochastic oscillators may exhibit a nonequilibrium phase transition to synchronization in the thermodynamic limit (infinite number of
elements). In fact, since the early work of Kuramoto, mean-field theory has been used to analyze this transition. In contrast, work that directly deals with finite arrays is relatively scarce in the
context of synchronization. And yet it is worth noting that finite-number effects should be seriously taken into account since, in general, the limits N→∞ (where N is the number of units) and t→∞
(where t is time) do not commute. Mean-field theory implements the particular choice first N→∞ and then t→∞. Here we analyze an ensemble of three-state coupled stochastic units, which has been widely
studied in the thermodynamic limit. We formally address the finite-N problem by deducing a Fokker-Planck equation that describes the system. We compute the steady-state solution of this Fokker-Planck
equation (that is, finite N but t→∞). We use this steady state to analyze the synchronic properties of the system in the framework of the different order parameters that have been proposed in the
literature to study nonequilibrium transitions.
Bibliographical note
Funding Information:
A.R. acknowledges the financial support of CNPq (Grant No. 308344/2018-9). I.P. acknowledges the financial support of FACEPE (Grant No. BFP-0146-1.05/18). D.E. and J.C. thank funding from
Fondecyt-Chile (Grant No. 1170669).
Publisher Copyright:
© 2020 American Physical Society.
Free Betting Odds Calculator
A betting odds calculator helps you make smart bets. You can enter how much you want to bet, and input the odds in American, Fractional, or Decimal format. The calculator will show you how much you
can win or lose. It will also tell you the chances of winning or losing through the probability calculator. This way, you can see the risks and rewards before you bet.
How To Use A Betting Odds Calculator
Odds calculators, or bet calculators, are typically used to calculate how much your bet will win. By inputting how much you want to bet (your stake) and the odds given, our sports betting calculator
will generate how much your bet will win, which is why it is also known as a payout calculator. The betting odds calculator will also calculate the implied odds of your bet, so you can understand
whether you’re getting a value bet or not. You can also use a betting odds calculator to:
• Convert American odds to Decimal
• Decimal odds to Fractional
• Fractional odds to American
• Or convert back the other way for all of the above!
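For reference, the standard formulas behind these conversions fit in a few lines of Python. This is an illustrative sketch, not any specific calculator's implementation, and the function names are my own:

```python
from fractions import Fraction

def american_to_decimal(odds):
    """Convert American moneyline odds to decimal odds."""
    return 1 + odds / 100 if odds > 0 else 1 + 100 / abs(odds)

def decimal_to_american(dec):
    """Convert decimal odds back to American odds."""
    profit = dec - 1
    return round(profit * 100) if profit >= 1 else round(-100 / profit)

def decimal_to_fractional(dec):
    """Express the net profit per unit staked as a fraction."""
    return Fraction(dec - 1).limit_denominator(100)

def implied_probability(dec):
    """Implied win probability of decimal odds."""
    return 1 / dec

print(american_to_decimal(-150))   # 5/3, about 1.667
print(decimal_to_american(2.5))    # 150
print(decimal_to_fractional(2.5))  # 3/2
print(implied_probability(2.0))    # 0.5
```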
Why Should You Convert Odds?
One of the most important aspects of betting is finding the best odds possible. After all, placing good value bets depends on finding odds that net you a worthwhile payout. Usually, this is done by
finding odds that aren’t worth what they should be given the true implied probability. You can find the implied probability with a betting odds calculator, or a probability calculator, and compare it
against what you think the probability actually is. In other words, it’s not just about predicting which team will win; you also need to get paid well for the risk you’re taking on. That’s why you
need to know how to use a betting odds calculator.
Typically, odds calculators require you to input one or multiple sets of odds, along with a wager amount, to determine the value of wagers. A variety of calculators exist, depending on the type of
wager or the system you’re attempting to utilize when determining the value of your bets. If you want to find more betting related resources, our betting guide is the right place for you.
How To Find Value Using Bet Calculators
If you want to make money from betting in the long run, you need to be able to find and exploit mistakes by sportsbooks. Essentially, you want to do a bookmaker’s job better than they can. A value
bet occurs when the odds a sportsbook gives to an event, which have implied odds calculated from them, are lower or higher than they should be. When we say “should be” we’re referring to actual
probability — or the probability you can calculate yourself using your own betting model.
A good way to find out the value of a bookmaker’s mistake is through arbitrage betting. If two bookmakers have differing opinions on the favourite for a specific event, you can bet on both outcomes
and gain some profit. That said, sportsbooks are usually pretty quick to detect discrepancies between their book and another’s, so you have to keep your eye on the opening line if you want to find an
arbitrage bet.
Implied Probability & Actual Probability
A bookmaker will thoroughly study all the parameters that can affect the outcome of a certain event. These include player fitness, standings, team or player form, and weather conditions, among
others. Afterwards, they will input these parameters into a predictive model with several types of betting odds calculators that will output a set of odds for each outcome. These odds are what we
call “implied odds”, and they are essentially your target to beat.
You want to study more rigorously and consider variables the bookmaker may not be aware of. You also need to develop a better predictive model and come up with better “true odds” according to your
research and your own odds calculator. Once you’ve calculated the “true odds” for a certain outcome, you need to look up the “implied odds” for it on different sportsbooks and calculate which one has
a more favourable return.
For example, let’s assume that you’ve thoroughly studied an upcoming soccer match using different sports betting calculators and determined that the favourite has a 60% chance of winning. This means
that the true odds of this outcome are approximately -150. If you look around different sportsbooks and find the same outcome at -135 or -140, you should place a bet on it. While the difference might
seem small, it can be very profitable in the long run.
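To make the worked example concrete, here is a hedged sketch of the two conversions involved, using the numbers from the paragraph above (a fair-odds formula with no bookmaker margin; the function names are mine):

```python
def prob_to_american(p):
    """Fair American odds for win probability p (no bookmaker margin)."""
    return -100 * p / (1 - p) if p > 0.5 else 100 * (1 - p) / p

def american_to_prob(odds):
    """Implied win probability of an American moneyline."""
    return abs(odds) / (abs(odds) + 100) if odds < 0 else 100 / (odds + 100)

true_odds = prob_to_american(0.60)  # your model's fair line: -150
book_prob = american_to_prob(-140)  # the book's -140 implies about 58.3%
print(true_odds, book_prob)
```

Since the book's implied probability (about 58.3%) sits below your estimated 60%, the -140 line pays better than fair odds, which is exactly the value-bet condition described above.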
More Types Of Bet Calculators
Most sportsbooks will generally have a built-in betting calculator to allow you to see how much your payout will be before you place your bet. This is particularly useful if your betting strategy
revolves around variable bet sizes rather than a fixed amount. Nevertheless, some specific bet types or betting strategies require more advanced calculations to determine how much your profit will
be. In these cases, some particular sports betting calculators will do all the math for you to maximize your earnings.
Moneyline Odds Calculator
Moneyline betting is a straightforward yet popular form of sports betting. Essentially, it involves picking the outright winner of a game or match, without any point spreads or handicaps. However,
predicting the outcome of a game can be tricky, even for seasoned bettors. To help you, moneyline betting odds calculators are useful tools for calculating potential payouts for moneyline bets.
Simply input the moneyline odds for each team or player and the desired wager amount, and the betting calculator will do the rest. With this information from the odds calculator, you can determine
the potential payout and whether the wager is worth the risk.
Using a moneyline betting calculator can help you make more informed betting decisions. Not only can you assess the risk of a particular wager, but you can also compare potential payouts for
different bets and adjust your strategy accordingly. For instance, if you have a limited budget for sports betting, you can use an implied odds calculator to determine the most profitable bets based
on the available funds. Money line betting odds calculators are also valuable for hedging bets. If you’ve placed a wager on one team or player, you can use a moneyline betting calculator to determine
how much you need to bet on the other team or player to minimize your losses.
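The hedging use case in the last sentence can be sketched as a simple equal-payout calculation in decimal odds. The stake and odds below are hypothetical:

```python
def hedge_stake(stake_a, odds_a, odds_b):
    """Stake on side B (decimal odds) that equalizes payout with side A."""
    return stake_a * odds_a / odds_b

# You hold $100 on side A at decimal odds 2.5; side B is available at 1.8.
h = hedge_stake(100, 2.5, 1.8)
# Net result is now identical whichever side wins
# (it can be negative, in which case it is a minimized loss).
locked = 100 * 2.5 - (100 + h)
print(f"Hedge ${h:.2f} on B to lock in ${locked:.2f}")
```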
Arbitrage Betting Calculator
Arbitrage betting involves wagering on every potential outcome of an event at odds which guarantee a payout. These types of opportunities don’t arise often, but every now and then you might come
across a scenario where hedging your bets guarantees a win. The odds calculator is extremely handy in these situations.
While you’re always guaranteed to make some profit from arbitrage betting, you’ll need an arbitrage bet calculator to maximize your earnings. An arbitrage calculator will give you the amount of money
you need to bet on each outcome to win a certain payout. This is very important for serious bettors who want to maximize their profits from a bookmaker’s mistake. However, keep in mind that some
sportsbooks are not very friendly toward arbitrage bettors, and may reserve the right to limit or outright ban your account.
Kelly Betting Calculator
Advanced and professional bettors know that you need proper bankroll management to make a consistent profit through sports betting. If you just bet random amounts of money based on your “feeling”,
you’ll either miss out on a lot of value or end up ruining your bankroll. That’s why most serious bettors rely on the Kelly Criterion, or Kelly Betting Calculator, to determine their bet size as a
percentage of their bankroll. Since the Kelly Criterion formula can be complicated, people rely on the Kelly bet calculator to do their odds conversions and determine their expected growth and optimal bet size.
John Larry Kelly Jr. was a scientist who figured out a way to determine an optimal betting amount according to bankroll and the type of odds you face when sports betting. Since pros tend to work with
a long-term strategy, a Kelly Odds Calculator helps to determine a standard betting amount which performs well. This way, if you tend to win half of your bets at odds around +120, the Kelly Bet
Calculator may suggest a specific percentage of your roll to wager for each event.
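The Kelly formula itself is short. Here is a minimal sketch using the scenario above (winning half of your bets at +120); the function name is mine:

```python
def kelly_fraction(p, american_odds):
    """Kelly stake f* = (b*p - q) / b, where b is net profit per unit staked."""
    b = american_odds / 100 if american_odds > 0 else 100 / abs(american_odds)
    q = 1 - p
    return (b * p - q) / b

f = kelly_fraction(0.5, 120)
print(f"Bet {f:.1%} of bankroll per wager")  # 8.3%
```

A negative result means the bet has no edge, and the Kelly-optimal stake is zero.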
Parlay Betting Calculator
When you’re stringing together a multiple bet parlay, you can enter the odds of each individual bet and the stake wagered into a parlay betting calculator to determine the total payout value of the
parlay. Most sportsbooks will provide you with the total amount of money you’ll win from a parlay bet anyways. That said, a proper odds payout calculator for parlays will allow you to make swift
changes and determine the amount you need to bet to gain a certain profit.
This type of parlay calculator is useful when determining the value of different parlay possibilities, while double-checking the value provided by your sportsbook. If your sportsbooks offer a
significantly lower payout than the amount you should get based on the odds, you might want to switch to a different sportsbook. Other than being fun and exciting, parlays are also a great way to bet
on multiple outcomes with low odds that you wouldn’t want to place a single bet on. Just ensure not to include extremely low odds, as they don’t add significant value, but they risk ruining your entire slip.
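The arithmetic behind a parlay payout calculator is simply multiplication of the legs' decimal odds. A brief sketch, with made-up example odds:

```python
from math import prod

def parlay_payout(stake, leg_decimal_odds):
    """Total return if every leg of the parlay wins."""
    return stake * prod(leg_decimal_odds)

# Hypothetical three-leg parlay at decimal odds 1.91, 2.10, and 1.65.
print(parlay_payout(10, [1.91, 2.10, 1.65]))
```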
Poisson Odds Calculator
One of the more complex type of odds calculations, Poisson probabilities are most often used in sports betting to approximate the strength of offensive and defensive capabilities of a team, compared
to the rest of the competition. Attack strength for an NHL team would be calculated by measuring the average goal production for the entire league and comparing the results with an individual team.
Defensive strength is calculated by comparing the average number of goals allowed league-wide with a single team. These numbers then inform probability distributions, giving you an idea of the
chances that a team will score or allow a specific number of goals. Poisson distributions may be utilized to determine percentages for outright winners, spreads, and over/under bets.
A Poisson betting calculator is also excellent for finding value in team or player prop bets. Using Poisson probabilities and knowing the historical average of a specific prop can help you find the
chance of that outcome being an over or an under. For instance, let’s assume you want to place a bet on Steph Curry’s made 3pt shots. You know his season average of 4.3 3pt shots per game, and the
moneyline is at 4.5 3pt shots. The Poisson calculator will tell you the exact probability for the over/under and what money line odds you should look for to get value.
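The Curry example can be computed directly from the Poisson distribution. A sketch, assuming (as the paragraph does) that made threes per game are roughly Poisson with mean 4.3, so "under 4.5" means four or fewer:

```python
from math import exp, factorial

def poisson_pmf(k, lam):
    """Probability of exactly k events under a Poisson(lam) model."""
    return lam ** k * exp(-lam) / factorial(k)

def prob_under(line, lam):
    """Probability of falling under a half-point line."""
    return sum(poisson_pmf(k, lam) for k in range(int(line) + 1))

p_under = prob_under(4.5, 4.3)
print(f"P(under 4.5) = {p_under:.3f}, P(over 4.5) = {1 - p_under:.3f}")
```

Converting those probabilities to fair moneyline odds then tells you which posted line offers value.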
Round Robin Bet Calculator
A round robin bet calculator informs you of the potential profit and peril of creating a series of multi-team parlay bets. A three-team round robin bet would consist of three two-team parlays. If one
of the three parlays wins, your initial stake is returned, while if two or more parlays win, the profit of your wager jumps. While round robin bets can feel complicated, they are a very useful type
of bet if you want to include multiple games in your slip with lower risk.
Three-team round robins tend to be easier to manage than longer round robin bets because of all the different permutations that arise with four or more wagers. Essentially, the more teams you include, the more vital a round robin payout calculator becomes. A round robin bet calculator is an essential tool for anyone who enjoys round robin bets, as it provides you with the total amount you need to wager and your potential returns in each scenario.
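As a sketch of what a round robin calculator computes, the two-team parlays can be enumerated directly. The function name and the even-money example odds are assumptions chosen to illustrate the arithmetic, not figures from the text.

```python
from itertools import combinations

def round_robin(decimal_odds, stake_per_parlay, size=2):
    """Enumerate every `size`-team parlay from a list of decimal odds.

    Returns (total_stake, payouts), where payouts maps each parlay
    (as a tuple of selection indices) to its return if every leg wins.
    """
    payouts = {}
    for combo in combinations(range(len(decimal_odds)), size):
        product = 1.0
        for i in combo:
            product *= decimal_odds[i]
        payouts[combo] = stake_per_parlay * product
    total_stake = stake_per_parlay * len(payouts)
    return total_stake, payouts

# Three even-money picks (2.00 decimal) as three two-team parlays:
total, payouts = round_robin([2.0, 2.0, 2.0], stake_per_parlay=10)
```

Here $30 is staked in total; a single winning parlay returns $40, which illustrates how one hit can roughly cover the whole stake while two or three hits push the bet into profit.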
Streak Betting Calculator
A streak betting calculator provides a rough approximation of the chances of winning a series of bets given a specific set of odds. Nobody enjoys losing streaks, but they are a natural part of
long-term sports betting. Whether you’re a novice bettor or a seasoned veteran, proper bankroll management is essential to getting through a long and tough betting season. Ideally, you should have a
large bankroll and wager 1-2% of it on each bet to avoid ruining it during big losing streaks.
A streak betting calculator will give you the chance of a losing streak throughout a set number of games. Let’s assume you want to place 100 bets in an NBA season. Your bets will all have about a 60%
chance to win (-150 odds) and a 40% chance to lose. A betting calculator for loss streaks will determine the total probability of going on a 10-game losing streak. Naturally, you can change the
parameters, like bumping the chance to lose to 45% or determining the probability of a 5-game losing streak.
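The streak probability described above can be computed exactly with a small dynamic program rather than approximated. This is an illustrative sketch (the function name is mine) using the 100-bet, 40%-loss, 10-game scenario from the text.

```python
def losing_streak_prob(n_bets, streak_len, loss_prob):
    """Exact probability of at least one run of `streak_len` straight
    losses somewhere in `n_bets` independent bets.

    states[j] = probability the sequence so far contains no such run
    and currently ends in exactly j consecutive losses (j < streak_len).
    """
    states = [1.0] + [0.0] * (streak_len - 1)
    for _ in range(n_bets):
        alive = sum(states)  # total probability of "no run yet"
        states = [alive * (1 - loss_prob)] + \
                 [states[j] * loss_prob for j in range(streak_len - 1)]
    return 1 - sum(states)

p = losing_streak_prob(100, 10, 0.40)  # scenario from the text
```

For these parameters the chance of a 10-game skid is well under 1%, but shortening the streak or raising the loss probability pushes it up quickly, which is exactly what a streak calculator lets you explore.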
Spread Bet Calculator
Against the spread betting, also known as point spread betting, is a hot commodity in the world of sports wagering. This style of betting is all about placing your bets on the difference in scores
between two teams. It’s a step above traditional moneyline betting, where all you have to do is pick the winner. With against the spread betting, the margin of victory or defeat determines whether your bet wins.
One of the major draws of against the spread betting is its ability to level the playing field for teams that may be unevenly matched in terms of skill or talent. Thanks to the addition of a point
spread, bettors have a tougher time predicting the winner, which ups the ante and can lead to some serious payouts. Of course, predicting the outcome of a game against the spread is no easy feat,
particularly for rookies. This is where against the spread betting calculators come into play. These spread bet calculators allow you to input the point spread, odds, and the amount you’re willing to
wager. Once you’ve punched in that info, the odds calculator will do the math and tell you what kind of payout you can expect based on your predictions. You can even compare payouts with a spread to
moneyline odds calculator.
By utilizing a point spread calculator, you can make more educated decisions when placing your bets. You’ll have a clearer understanding of the potential payout and can decide whether the risk is
worth it. Additionally, these betting spread odds calculators allow you to compare potential payouts for different bets, making it easier to choose the ones with the best value. All in all, against
the spread betting calculators and spread/moneyline converters are a godsend for anyone interested in sports betting. They offer a detailed look at potential payouts and can guide you towards more
informed, profitable wagers.
Sports Odds Calculator FAQ
How do you calculate odds?
In general, odds are just another way to present a certain outcome’s probability. To calculate betting odds, you convert an outcome’s probability into odds, and vice versa. You can manually calculate the probability of a bet winning by converting American or fractional odds to decimal odds and then dividing 100 by that number. However, you should ideally use a betting odds calculator to avoid making any mistakes.
If I bet $100 how much do I win?
American odds revolve around how much money you can make if you bet $100. A positive moneyline of +110 means that if you wager $100 on a single bet, you will make a profit of $110, and your total
odds payout will be $210.
For negative moneylines, things are a bit more complex but still follow the same basic logic. A moneyline of -110 indicates that you need to wager $110 to make a profit of $100. So, if you bet $100 at -110 odds, you can expect a profit of about $91 (100 × 100/110) if your bet succeeds.
If you can’t be bothered with calculations, just use an odds payout calculator to determine your exact profit.
How to determine odds in betting?
Odds in betting are different formats to present the chance of a specific event occurring. Betting odds can be presented in moneyline, fractional, or decimal odds. American money line odds will
either tell you how much you would win for every $100 wagered or how much you need to bet to win $100.
There are plenty of betting odds converters online that can help you switch between the different betting odds formats. The best ones even support conversion to probability percentages.
How to calculate betting odds payout?
An odds payout calculator will show you your profit based on which odds format you use. In general, you want to convert your odds to a decimal format and multiply the decimal odds by the amount of
money you wagered.
For American odds of +150, you’ll need to divide 150 by 100 and add 1 to the result. This gives you a decimal odds of 2.50. Then if you wagered $100, you multiply 2.50 by 100, which results in a
total payout of $250 or a profit of $150.
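The +150 walk-through above can be sketched in Python. This is a minimal illustration assuming the standard American-to-decimal conversion rules; the function names are mine.

```python
def american_to_decimal(american):
    """Convert American moneyline odds to decimal odds."""
    if american > 0:
        return 1 + american / 100   # +150 -> 2.50
    return 1 + 100 / abs(american)  # -110 -> ~1.909

def payout(stake, decimal_odds):
    """Total return (stake included) for a winning bet."""
    return stake * decimal_odds

def implied_probability(decimal_odds):
    """Implied win chance, as a percentage."""
    return 100 / decimal_odds

dec = american_to_decimal(150)  # 2.50
total = payout(100, dec)        # 250.0 total, i.e. $150 profit
```

The same helpers also recover the 40% implied probability that +150 carries, matching the divide-100-by-decimal rule mentioned earlier.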
How to calculate betting returns?
Betting returns can either be presented as your profit or your total payout if you win a bet. Different odds formats will calculate your profit (American money line odds) or your total payout amount
(decimal odds), which includes your wager.
To calculate your betting returns, just input your stake and the odds of your bet into an odds payout calculator. It will give you all the necessary information regarding how much money you’ll make.
How do I figure out the odds of winning?
To determine your chances of winning a bet, you need to calculate its implied probability. To do this, you’ll need to know the betting odds of your sportsbook and convert them into decimal odds. Then
you’ll need to divide 100 by the resulting number to define the probability of winning the bet.
Another faster option is to use a bet calculator to do your conversion, as well as figure out the exact amount of money you’ll win.
How are odds calculated in sports?
Bookmakers need to consider several parameters to provide odds for a specific event. They usually rely on statistics, advanced formulas, and their wealth of experience to determine a probability for
each outcome of the game. Afterwards, they do the odds conversion for the different formats and pick the odds for the game.
How do English odds work?
English odds, also known as fractional odds, are a traditional way of representing betting odds in the UK and Ireland. They are expressed as a fraction, such as 3/1 or 7/2. The first number represents the profit, and the second number represents the amount you need to bet to earn it. For example, if the odds are 3/1 and you bet $10, you can collect $40 (your original $10 bet plus $30 profit). If the odds are 7/2 and you bet $10, you can collect $45 (your original $10 bet plus $35 profit). The higher the first number in the fraction relative to the second, the less likely the outcome is expected to happen.
How do I calculate a sports bet payout?
To calculate a sports bet payout, you first need to understand the odds of the bet. The odds represent the likelihood of the outcome happening and are expressed as a ratio or a fraction. Different
types of odds include American, decimal, and fractional. Once you understand the odds, you can use a simple formula to calculate your potential payout. For example, with fractional odds of 5/2, you would multiply your bet amount by 5 and then divide by 2 to get your potential profit. So, a $10 bet would produce a $25 profit (10 x 5 / 2), for a total return of $35 once your stake is included. It’s important to remember that this is only a potential payout and doesn’t guarantee a win.
How do I calculate each way odds?
Calculating each-way odds is a bit more complicated than calculating regular odds. Each-way betting is common in horse racing and involves placing two equal bets, one on the horse to win the race and
another on the horse to place (usually to finish in the top two, three or four, depending on the number of runners in the race). To calculate the each-way odds, you first need to understand the place
terms, which are determined by the bookmaker. Then, you can use an each-way calculator to determine the potential payout. The calculator will take into account the place terms, the odds of the horse
to win and to place, and your stake amount to calculate your potential payout. The each-way payout is typically a fraction of the win payout.
How do bookies calculate odds?
Bookmakers use a variety of factors to calculate odds, including past performance, current form, and other relevant factors that may affect the outcome of the event. They will also take into account
the amount of money being bet on each potential outcome. Bookmakers aim to set odds that will attract an equal amount of money on both sides of the bet, so they can make a profit regardless of the outcome.
What are the rules of an accumulator bet?
An accumulator bet, also known as a parlay or multi-bet, is a type of bet that combines multiple selections into one wager. In order to win an accumulator bet, all selections in the bet must be
correct. If one selection loses, the entire bet loses. The odds of each selection are multiplied together to determine the overall odds of the accumulator bet. The potential payout for an accumulator
bet is much higher than a single bet, as the odds are multiplied together. However, the risk is also higher as the bettor must correctly predict multiple outcomes. Each sportsbook may have their own
rules regarding maximum and minimum selections allowed in an accumulator bet.
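The multiply-the-legs rule above can be sketched as follows. The example legs (-110, -110, +150) are hypothetical, chosen only to illustrate the arithmetic, and the function name is mine.

```python
def accumulator_payout(american_legs, stake):
    """Payout if every leg of a parlay/accumulator wins.

    Each leg's American odds are converted to decimal odds, and the
    decimals are multiplied together to get the combined price.
    """
    combined = 1.0
    for leg in american_legs:
        combined *= 1 + leg / 100 if leg > 0 else 1 + 100 / abs(leg)
    return stake * combined

# Two -110 legs plus a +150 leg on a $100 stake:
total = accumulator_payout([-110, -110, 150], 100)
```

For this hypothetical slip the combined decimal price is a bit over 9.1, so the $100 stake returns roughly $911 if all three legs win, which shows both the appeal and the all-or-nothing risk the answer describes.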
How do you calculate odds in football betting?
Football betting odds are typically represented as decimals, fractions, or American odds. Decimal odds show the total payout, including the initial stake, while fractional odds show the profit
relative to the stake. American odds show the amount of profit on a $100 bet. To calculate the potential payout from a football bet, multiply the stake by the odds. For example, if the odds are 1.5
and the stake is $10, the potential payout is $15 ($10 x 1.5). To calculate the profit, subtract the stake from the potential payout. In this case, the profit would be $5 ($15 – $10). Different
sportsbooks may use different odds formats, so it’s important to understand the odds format being used.
Topic: Arrays
Input: n = 3, arr[] = {1, 2, 3}
Output: 1
Explanation: If the index returned is 2, the checker prints 1,
since arr[2] = 3 is greater than its adjacent element and there
is no element after it, so it is a peak element.
No other index satisfies the peak property, so returning
any other index would print 0.
public int peakElement(int[] arr, int n) {
    // Scanning left to right, the first "descent" arr[i] > arr[i+1]
    // marks a peak, since every element before it was ascending.
    for (int i = 0; i < n - 1; i++) {
        if (arr[i] > arr[i + 1]) return i;
    }
    return n - 1; // never descended: the last element is a peak
}
Time Complexity : O(n)
Space Complexity : O(1)
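The linear scan runs in O(n). A peak can also be located in O(log n) with binary search; here is a sketch in Python (a general peak-finding variant for comparison, not part of the original solution above):

```python
def peak_element(arr):
    """Return the index of some peak (an element not smaller than its
    neighbors) in O(log n) time.

    If arr[mid] < arr[mid + 1], the values are rising at mid, so a
    peak must exist to the right (they must eventually fall or hit
    the end of the array). Otherwise a peak exists at mid or earlier.
    """
    lo, hi = 0, len(arr) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if arr[mid] < arr[mid + 1]:
            lo = mid + 1
        else:
            hi = mid
    return lo

peak_element([1, 2, 3])  # returns 2, matching the example above
```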
Thank you for being part of 200 days of DSA.
The Power of Creative Mathematical Thinking
Math That Connects Where We’re Going to Where We’ve Been: Recursion Builds Bridges Between Ideas From Across Different Math Classes.
Illustration by Robert Neubecker for Quanta Magazine
Patrick Honner in Quanta: Say you’re at a party with nine other people and everyone shakes everyone else’s hand exactly once. How many handshakes take place?
This is the “handshake problem,” and it’s one of my favorites. As a math teacher, I love it because there are so many different ways you can arrive at the solution, and the diversity and
interconnectedness of those strategies beautifully illustrate the power of creative thinking in math.
One solution goes like this: Start with each person shaking every other person’s hand. Ten people, with nine handshakes each, produce 9 × 10 = 90 total handshakes. But this counts every handshake
twice — once from each shaker’s perspective — so the actual number of handshakes is 90/2 = 45. A simple and lovely counting argument for the win!
There’s also a completely different way to solve the problem. Imagine that the guests arrive one at a time, and when they get there, they shake hands with everyone present. The first person has no
hands to shake, so in a one-person party there are zero total handshakes. Now the second person arrives and shakes hands with the first person. This adds one handshake to the total, so in a
two-person party, there are 0 + 1 = 1 total handshakes. When the third person arrives and shakes hands with the first two guests, this adds two handshakes to the total. The fourth person’s arrival
adds three handshakes to the total, and so on.
This strategy models the sequence of handshakes recursively, meaning that each term in the sequence is defined relative to those that come before it. You’re probably familiar with the Fibonacci
sequence, the most famous recursive sequence of all. It starts out 1, 1, 2, 3, 5, 8, 13, 21, and continues on with each subsequent term equal to the sum of the previous two.
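The recursive strategy above can be written out in a few lines; a minimal illustration (the function name is mine) that also checks it against the closed form n(n-1)/2 from the first argument:

```python
def handshakes(n):
    """Total handshakes at an n-person party, defined recursively:
    the nth guest to arrive adds n - 1 new handshakes."""
    if n <= 1:
        return 0
    return handshakes(n - 1) + (n - 1)

handshakes(10)  # the ten-person party from the article: 45
```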
More here.
3.6 Direct Comparison Test
(problem 1) Determine if the series converges or diverges:
$\;\displaystyle {\sum _{n=1}^\infty \frac {1}{n^3 + 1}}$
Which series should we compare this to?
$\displaystyle \sum _{n=1}^\infty \frac {1}{n^2}$ $\displaystyle \sum _{n=1}^\infty \frac {1}{n^3}$ $\displaystyle \sum _{n=1}^\infty \frac {1}{3^n}$
Which way does the comparison go?
$\displaystyle {\frac {1}{n^3 + 1} \leq \frac {1}{n^3}}$ for $n \geq 1$ $\displaystyle {\frac {1}{n^3 + 1} \geq \frac {1}{n^3}}$ for $n \geq 1$
Describe the behavior of the series $\displaystyle {\sum _{n=1}^\infty \frac {1}{n^3 + 1}:}$
Converges by DCT Diverges by DCT No Conclusion from DCT
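One way to justify the comparison, written out explicitly (this is my worked reasoning, consistent with the problem's setup):

```latex
% For n >= 1, the denominator satisfies n^3 + 1 > n^3 > 0, so
\frac{1}{n^3 + 1} \le \frac{1}{n^3} \qquad \text{for all } n \ge 1.
% The comparison series is a p-series with p = 3 > 1, hence convergent:
\sum_{n=1}^{\infty} \frac{1}{n^3} < \infty
\quad\Longrightarrow\quad
\sum_{n=1}^{\infty} \frac{1}{n^3 + 1} \text{ converges by the Direct Comparison Test.}
```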
Bearish Butterfly
Bulkowski on the Bearish Butterfly Pattern®
Revised on 6/29/18. Statistics updated on 8/28/2020.
This article describes my analysis of the bearish butterfly pattern as described by publicly available information and common sense rules to determine valid patterns. Additional rules may or may not
improve performance. I tested the pattern using only the below identification guidelines.
Important Results for the Bearish Butterfly
Overall performance rank for downward moves (1 is best): 4 out of 5 (versus other Fibonacci patterns, only)
Break even failure rate: 27%
Average drop: 13%
Percentage reversing at point D: 86%
The above numbers are based on over 1,000 perfect trades in bull markets. See the glossary for definitions.
Bearish Butterfly Identification Guidelines
Characteristic Discussion
XAB Price drops from X (see figure on the right, not drawn to scale) to valley A, then retraces up to B. The BA retrace of XA measures 78.6%.
ABC Price retraces from valley A to peak B then drops to C. Retrace BC as a function of BA should be a Fibonacci ratio between and including 38.2% to 88.6%. I list
the qualifying ratios in the chart.
BCD After peaking at B, price drops to C followed by an extension to D. The DC/BC extension measures one of the Fibonacci ratios from 161.8% to 224%.
XAD The extension AD as a percentage of XA is 127 but I allow plus or minus 3 percentage points (124% to 130%) of this to qualify.
The bearish butterfly pattern follows the ratios listed in the figure and as described in the above table. You'll need a computer running pattern finding software to locate them.
Bearish Butterfly Trading Tips
Trading Explanation
Fibonacci Use the length of the XA move to help predict the price at which the stock will turn at D. AD should be 1.27 times as long as XA.
Short Once price turns at D, short the stock. Because this pattern has such a high failure rate, I don't suggest shorting a stock only because it shows a bearish butterfly. Find other reasons
why you think the stock will drop. However, 86% will turn lower at D.
Stop Use a close above D as the stop location.
Measure After price peaks at D, price drops to A 24% of the time, to B 76% of the time, and to C 38% of the time. You can use those percentages as targets.
Bearish Butterfly Trading Example
Let's take a look at a trading example.
I show the butterfly on the daily chart of Teva Pharmaceutical. X is at 32.48-31.72, A is at 28.49-27.60, B is at 31.83-30.36, C is 30.93-30.03 and D is 33.82-32.18, high to low, respectively on the
peak or valley. Price reaches the low (L) on 9/6/2017 at 15.22, for a drop of 55% below the high at D.
The AB retrace of XA should be 78.6%. I found the high-low range at peak B includes the 78.6% value, so I count it as a valid retracement.
For the math, using the high at X and low at A, high at B and low at A gives (31.83 - 27.60)/(32.48 - 27.60) or 87%. Using the low at B changes the answer to 57%. Because the high-low range on the
price bar at peak B includes the 78.6% ratio, I accept that the price bar at B includes the correct Fibonacci retracement for the butterfly.
I use the same methodology for the other ratios: BC/BA, DC/BC. Ratio DA/XA must be within 3 percentage points of the target which it is exactly in this example. That is, DA/XA = (33.82 - 27.60)/
(32.48 - 27.60) or 127%.
I tried other methods to identify the Fibonacci turns, but this high-low range worked best and it makes intuitive sense. So that's what I used for all butterfly recognition selections.
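The high-low-range acceptance test described above can be sketched in Python, using the Teva figures quoted in the text. The function name is mine, and this is only an illustration of the methodology, not Bulkowski's actual recognition software.

```python
def retrace_range(x_high, a_low, b_high, b_low):
    """Min/max BA/XA retrace implied by bar B's high-low range,
    following the article's acceptance test for Fibonacci turns."""
    xa = x_high - a_low
    return (b_low - a_low) / xa, (b_high - a_low) / xa

# Teva example: X high 32.48, A low 27.60, B high 31.83, B low 30.36
lo, hi = retrace_range(32.48, 27.60, 31.83, 30.36)
is_valid = lo <= 0.786 <= hi  # bar B spans the 78.6% retracement
```

This reproduces the 57% to 87% range computed in the text, which brackets the required 78.6% ratio, so the bar at B qualifies.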
Would you be able to spot this as a butterfly if the labels were not attached?
Price makes a strong push downward, bottoming at L.
-- Thomas Bulkowski
© 2005-2024 by Thomas N. Bulkowski. All rights reserved.
Disclaimer: You alone are responsible for your investment decisions.
Some pattern names are registered trademarks of their respective owners.
Butterfly is a registered trademark of Scott Carney.
U.S. natural interest rate stuck at 0%: evidence and consequences | Macrosynergy
Federal Reserve research supports the view that the natural rate of interest in the U.S. has not recovered from its plunge to an unprecedented historical low of close to zero after the great
recession. This bodes for protracted problems with the zero lower bound emphasizing the ongoing importance of asset purchases and other non-conventional policy options for central bank credibility.
Laubach, Thomas, and John C. Williams (2016). “Measuring the Natural Rate of Interest Redux,” Finance and Economics Discussion Series 2016-011. Washington: Board of Governors of the Federal Reserve System.
This paper contributes to the basic understanding of the forces that perpetuate non-conventional monetary policy, as outlined in the related summary page.
Previous posts on the subject dealt with the deflationary bias at the zero lower bound (view post here) and the need for particularly easy monetary policy as a form of central bank risk management (
view post here).
The below are excerpts from the paper. Headings, links and cursive text have been added. Some acronyms and technical terms have been replaced by simplified language.
What is the natural rate of interest?
“We [define]…the natural rate as the real short-term interest rate consistent with the economy operating at its full potential once transitory shocks to aggregate supply or demand have abated.
Implicit in this definition is the absence of upward or downward pressures on the rate of price inflation relative to its trend. Our definition takes a ‘longer-run’ perspective, in that it refers to
the level of real interest rates expected to prevail, say, five to 10 years in the future, after the economy has emerged from any cyclical fluctuations and is expanding at its trend rate.”
“[The figure below] portrays a highly stylized model of the determination of the natural rate. The downward-sloping line, labelled the IS [investment-saving] curve, shows the negative relationship
between aggregate spending and the real interest rate. The vertical line indicates the level of potential GDP. At the intersection of the IS curve and the potential GDP line, real GDP equals
potential, and the real interest rate equals the natural rate of interest.”
How to estimate the natural rate of interest?
“The natural rate of interest may change over time owing to highly persistent structural shifts in aggregate supply and demand… there are myriad influences on the natural rate, including, but not
limited to, productivity growth, demographics, and the evolution of the global economy…One observation stands out from…[time series charts]: there are sizable swings in average real interest rates
that persist for decades.”
“Although [line charts with moving averages and filters] could…work well at estimating the natural rate of interest when inflation and economic activity are relatively stable, they are likely to be
unreliable during periods when this is not the case, For example, during the late 1960s and much of the 1970s, inflation trended steeply upward in the United States, which suggests that the real
funds rate was below the natural rate on average. Similarly, real interest rates were very high during the period of the Volcker disinflation of the early 1980s, when inflation fell sharply.”
“In light of these problems…we instead use a multivariate model that explicitly takes into account movements in inflation, output, and interest rates. In the Laubach-Williams (2003) model, the
natural rate of interest is implicitly defined by the absence of inflationary or deflationary pressures…Specifically, the natural rate is assumed to depend on the estimated contemporaneous trend
growth rate of potential output and a time-varying unobserved component that captures the effects of other unspecified influences on the natural rate.”
“Roughly speaking, the model…[relates the output gap to] its own lags and…the difference between the actual real interest rate and the natural rate…If the output gap turns out to be lower than
expected, the model responds by reducing the estimate of the natural rate…The output gap estimate in turn is informed by an estimated Phillips curve…In particular, if inflation turns out lower than
predicted…the level of potential output is being revised up (that is, for a given level of real GDP, the output gap is revised down).”
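The updating logic in the passage can be caricatured in a few lines. To be clear, this is a toy sketch of the verbal description, not the actual Laubach-Williams Kalman-filter estimator; the gain parameter and function name are assumptions.

```python
def update_r_star(r_star, gap_surprise, gain=0.1):
    """Toy natural-rate update in the spirit of the text: when the
    output gap comes in below what the real-rate gap predicted, the
    estimate of r* is revised down (and up for upside surprises).

    gap_surprise = actual output gap minus model-predicted gap, in
    percentage points; `gain` is an assumed learning rate.
    """
    return r_star + gain * gap_surprise

r = 2.0                      # pre-crisis natural-rate estimate (%)
for surprise in [-3.0] * 5:  # repeated downside surprises
    r = update_r_star(r, surprise)
# r has fallen from 2.0 toward zero
```

The point of the sketch is only directional: persistent negative surprises, like those after 2008, mechanically drag the estimate down, which is how the model arrives at a near-zero natural rate.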
What happened after the great recession 2008/2009?
“The ex post real fed funds rate–defined as the nominal effective federal funds rate less the percent change in the personal consumption expenditures price index over the prior year– has averaged
about 2% over the past 50 years.”
“Since then start of the Great Recession the natural rate of interest has fallen to, and remained at, historically very low levels near zero. This is in part explained by a significant decline in the
trend growth rate of the economy…We find no evidence that the natural rate has moved back up even with the economy close to fully recovered from the Great Recession. These results are robust to
alternative approaches to estimating the natural rate of interest and the output gap.”
“With core inflation remaining surprisingly stable in the face of sharp declines of real GDP below the trend implied by the pre-crisis trend growth rate of around 3 percent, the model assigned…a
large share to declines in potential output and its trend growth rate. In fact, from mid-2008 to mid-2009, the model saw the level of potential output contracting by 2¼ percent, and the estimate of
the trend growth rate declined by nearly ½ percentage point. The slow pace of GDP growth over the subsequent three years, combined with stable inflation, reduced the trend growth estimate further, to
roughly 2¼ percent.”
“With the federal funds rate close to zero from early 2009 on, and core inflation averaging around 1-1/2 percent, the implied real rate gap would have been -3-1/2 percent. Such a large negative real
rate gap would have predicted a much sharper rebound in the output gap than actually occurred. The estimate of the natural rate of interest therefore fell rapidly to ½ percent in mid-2009, and then
continued to decline to around zero by the end of 2010, cutting the implied real rate gap to about -1-1/2 percent. This represents an unprecedented decline and an historical low level of the natural
rate over the past half century for which we have estimates.”
What are the consequences for monetary policy?
“If the natural rate were to remain as low as it has been since 2008, episodes in which short-term interest rates would be constrained from below would become more frequent and long-lasting, and
unconventional policy tools may continue to play an important role in the future.”
“While the use of large-scale asset purchases appears to have been a powerful policy tool when short-term rates were constrained by the zero lower bound following the financial crisis, it is unclear
whether a permanent expansion of the central bank’s balance sheet would permanently reduce longer-term interest rates and thereby increase the natural rate of interest.”
“Central banks could aim to reduce nominal interest rates below zero, as has been done to a limited extent in several European jurisdictions. Even though a number of institutional hurdles may make it
difficult to reduce nominal interest rates to levels that might be called for in response to a major recession, negative short-term interest rates in combination with forward guidance and asset
purchases would provide central banks with a potent set of tools to respond to undesirably low inflation and economic weakness.”
Stochastic Crossover Indicator Momentum Trading Strategy
Date: 2024-04-28 11:57:14
This strategy uses the crossover signals of the Stochastic Oscillator to identify potential buying and selling opportunities. When the %K line of the Stochastic Oscillator crosses above the %D line
and the %K value is below 20, the strategy generates a buy signal. Conversely, when the %K line crosses below the %D line and the %K value is above 80, the strategy generates a sell signal. The
strategy is applied to a 5-minute time frame.
Strategy Principle
The Stochastic Oscillator consists of the %K line and the %D line. The %K line measures the position of the closing price relative to the high and low prices over a specified period. The %D line is a
moving average of the %K line, used to smooth the %K line and generate more reliable signals. When the %K line crosses the %D line, it indicates a change in price momentum, which can be interpreted
as a potential buy or sell signal. This strategy uses the crossovers of the Stochastic Oscillator to identify potential trend reversals or momentum changes. When the %K line crosses above the %D line
and the %K value is below 20 (indicating oversold conditions), the strategy generates a buy signal. Conversely, when the %K line crosses below the %D line and the %K value is above 80 (indicating
overbought conditions), the strategy generates a sell signal. This approach attempts to capture shifts in the trend before a price reversal occurs.
Strategy Advantages
1. Simplicity: The strategy is based on a widely used technical indicator and is easy to understand and implement.
2. Trend identification: By using the crossovers of the Stochastic Oscillator, the strategy can identify potential trend reversals and momentum changes.
3. Overbought/oversold signals: By combining the crossovers of the Stochastic Oscillator with overbought/oversold levels, the strategy attempts to identify extreme conditions before a price reversal occurs.
Strategy Risks
1. False signals: The Stochastic Oscillator may generate false signals, leading to unprofitable trades.
2. Lag: As a lagging indicator, the Stochastic Oscillator may generate signals after the price has already reversed.
3. Lack of trend confirmation: The strategy may generate frequent trading signals in choppy markets, resulting in overtrading and potential losses.
Strategy Optimization
1. Trend confirmation: Additional technical indicators or price action analysis can be incorporated to confirm the trend before generating trading signals. This can help filter out false signals in
choppy markets.
2. Dynamic parameters: The parameters of the Stochastic Oscillator can be dynamically adjusted based on market volatility or other market conditions to optimize the strategy’s performance.
3. Risk management: Proper stop-loss and position sizing controls can be implemented to limit potential losses and protect profits.
The Stochastic Crossover Indicator Momentum Trading Strategy uses the crossovers of the Stochastic Oscillator to identify potential buying and selling opportunities while considering the overbought/
oversold state of the asset. Although the strategy is simple and can identify trend reversals, it may also generate false signals and lack trend confirmation. By incorporating trend confirmation
indicators, dynamic parameter optimization, and risk management, the strategy’s performance can be further enhanced. However, it is essential to thoroughly test and evaluate the strategy under
different market conditions before implementation.
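The %K/%D computation and the crossover rule described above can be sketched in plain Python. The helper below mirrors the strategy's 14/3/3 defaults; the function name and any price series fed to it are illustrative, not part of the original strategy:

```python
def stochastic_signals(close, high, low, length=14, smooth_k=3, smooth_d=3):
    """Return (k, d, signals) where signals[i] is 'buy', 'sell' or None."""
    def sma(xs, n):
        # simple moving average with a growing window at the start
        return [sum(xs[max(0, i - n + 1):i + 1]) / (i - max(0, i - n + 1) + 1)
                for i in range(len(xs))]

    raw_k = []
    for i in range(len(close)):
        lo = min(low[max(0, i - length + 1):i + 1])
        hi = max(high[max(0, i - length + 1):i + 1])
        raw_k.append(100.0 * (close[i] - lo) / (hi - lo) if hi != lo else 50.0)

    k = sma(raw_k, smooth_k)
    d = sma(k, smooth_d)

    signals = [None] * len(close)
    for i in range(1, len(close)):
        crossed_up = k[i - 1] <= d[i - 1] and k[i] > d[i]
        crossed_down = k[i - 1] >= d[i - 1] and k[i] < d[i]
        if crossed_up and k[i] < 20:
            signals[i] = "buy"        # oversold + bullish crossover
        elif crossed_down and k[i] > 80:
            signals[i] = "sell"       # overbought + bearish crossover
    return k, d, signals
```

This is only the indicator logic; order management and position sizing would still live in the trading platform.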
start: 2024-03-28 00:00:00
end: 2024-04-27 00:00:00
period: 1h
basePeriod: 15m
exchanges: [{"eid":"Futures_Binance","currency":"BTC_USDT"}]
//@version=4
strategy("Stochastic Crossover Buy/Sell", shorttitle="Stochastic Crossover", overlay=true)
// Stochastic Oscillator Parameters
length = input(14, title="Stochastic Length")
smoothK = input(3, title="Stochastic %K Smoothing")
smoothD = input(3, title="Stochastic %D Smoothing")
// Calculate %K and %D
stochValue = stoch(close, high, low, length)
k = sma(stochValue, smoothK)
d = sma(k, smoothD)
// Plot Stochastic Lines
plot(k, color=color.blue, linewidth=2, title="%K")
plot(d, color=color.red, linewidth=2, title="%D")
// Stochastic Crossover Buy/Sell Signals
buySignal = crossover(k, d) and k < 20 // Buy when %K crosses above %D and %K is below 20
sellSignal = crossunder(k, d) and k > 80 // Sell when %K crosses below %D and %K is above 80
// Plot Buy/Sell Arrows
plotshape(series=buySignal, style=shape.triangleup, location=location.belowbar, color=color.green, size=size.small, title="Buy Signal")
plotshape(series=sellSignal, style=shape.triangledown, location=location.abovebar, color=color.red, size=size.small, title="Sell Signal")
// Entry and Exit Points
strategy.entry("Buy", strategy.long, when=buySignal)
strategy.close("Buy", when=sellSignal)
strategy.entry("Sell", strategy.short, when=sellSignal)
strategy.close("Sell", when=buySignal)
ECE5505 Digital Test and Verification Notes Part II
This blog serves as the 2nd and last part for the class ECE5505 Digital Test and Verification.
D Flip Flop
Delay/Data flip flop: the flip flop holds its current state until a triggering signal, such as a clock edge, switches it to a new state. A clock signal is required for the synchronous version of the D flip flop but not for the asynchronous one.
Clock  Q(n)  D(n)  Q(n+1)  State
↑      0     0     0       hold
↑      0     1     1       set
↑      1     0     0       reset
↑      1     1     1       hold
\overline{PR}  \overline{CLR}  CLK  D  Q     \overline{Q}
0              1               X    X  1     0
1              0               X    X  0     1
0              0               X    X  X     X   (invalid)
1              1               ↑    1  1     0
1              1               ↑    0  0     1
1              1               0    X  Q(n)  \overline{Q}(n)   (hold)
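The behavior in the two tables can be sketched as a small simulation. The `DFlipFlop` class below is a hypothetical helper, not from any library; it models a positive-edge-triggered D flip flop with active-low asynchronous preset and clear:

```python
class DFlipFlop:
    """Positive-edge-triggered D flip flop with active-low async PR/CLR."""

    def __init__(self):
        self.q = 0
        self._prev_clk = 0

    def step(self, clk, d, pr_n=1, clr_n=1):
        """Advance one time step; returns (Q, Q_bar)."""
        if pr_n == 0 and clr_n == 1:        # asynchronous preset: Q <- 1
            self.q = 1
        elif pr_n == 1 and clr_n == 0:      # asynchronous clear: Q <- 0
            self.q = 0
        elif pr_n == 1 and clr_n == 1:
            if self._prev_clk == 0 and clk == 1:  # rising edge: Q <- D
                self.q = d
            # otherwise hold the current state
        # pr_n == clr_n == 0 is the forbidden input combination
        self._prev_clk = clk
        return self.q, 1 - self.q
```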
Iterative Logic Array
Step 1 (time frame 0):
1️⃣ PI = PI + FF
2️⃣ PO = PO + FF (optimization: reduce flip-flop state variables)
3️⃣ PODEM for PIs
Step 2 (time frames ≥ 1):
1️⃣ Propagate to an actual PO
2️⃣ Frame by frame (optimization: reduce the number of propagation frames)
3️⃣ Test generation for frames ≥ 1, working backwards (optimization: delete illegal/unreachable states)
🛑 States could be unjustifiable; find an alternative
4️⃣ Additional constraints on TF0
✴️ Input sequence with FF states (x) to justify TF0
Step 3 (time frame −1):
1️⃣ Justify FF states at time frame 0
🛑 States could be unjustifiable; find an alternative
Deterministic Sequential ATPG
Sequential ATPG uses drivability in addition to observability and controllability; drivability measures the effort required to drive a D or -D to a gate g, analogous to controllability.
S-graph: vertices are FFs, with edges between connected FFs. If the s-graph is acyclic (no path from a vertex back to itself), the faulty state is always initializable. d_{seq} is the number of FFs on the longest path in the s-graph; a non-FF fault in an acyclic circuit needs at most d_{seq}+1 vectors.
D algorithm – Combinational ATPG in DFT (VLSI)
Extended Backward Implication^1
forward implication: if all input values are known or one is controlling value, the output can be determined.
backward implication: calculate input implication by the output value.
If output = 1, it implies all gate inputs are 1, so add the implications of setting each of these inputs to 1.
If output = 0, it implies at least one of the inputs is 0, so add the implications of setting each of these inputs to 0 and then combine them to ensure at least one input is 0. The aim is to constrain the implication rather than to conclude anything in cases where the implication for one input is 0 and for another is 1, since setting only a single input to 0 cannot guarantee the output is 0. Implications need to be propagated through gates, as in deductive fault simulation.
\underbrace{\text{FIRE}}_{\substack{\text{Fault Independent}\\ \text{REdundency Identification}}}&=
\underbrace{\{\text{S}_{a=0}\} \cap \{\text{S}_{a=1}\}}_{\substack{\color{blue}{\text{Faults that are untestable}}}} & \\[4ex]
&=\{\underbrace{\overline{\text{EXCT}_{a=0}}}_{\substack{\text{unexcitable faults}\\\text{when } a=0}}\cup\underbrace{\overline{\text{PROP}_{a=0}}}_{\substack{\text{unpropagatable faults}\\\text{when } a=0}}\}\cap
\{\underbrace{\overline{\text{EXCT}_{a=1}}}_{\substack{\text{unexcitable faults}\\\text{when } a=1}}\cup\underbrace{\overline{\text{PROP}_{a=1}}}_{\substack{\text{unpropagatable faults}\\\text{when } a=1}}\}
1. Zhao, J-K., Elizabeth M. Rudnick, and Janak H. Patel. “Static logic implication with application to redundancy identification.” In Proceedings. 15th IEEE VLSI Test Symposium (Cat. No.
97TB100125), pp. 288-293. IEEE, 1997.📁[↩]
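The FIRE identity above is plain set algebra, which a short sketch can make concrete. The helper name and fault labels below are made up for illustration:

```python
def fire_untestable(all_faults, exct_a0, prop_a0, exct_a1, prop_a1):
    """S_{a=v} = complement(EXCT_{a=v}) | complement(PROP_{a=v});
    FIRE = S_{a=0} & S_{a=1} (faults untestable for both values of a)."""
    s_a0 = (all_faults - exct_a0) | (all_faults - prop_a0)
    s_a1 = (all_faults - exct_a1) | (all_faults - prop_a1)
    return s_a0 & s_a1
```

A fault survives the intersection only if, under both a = 0 and a = 1, it is unexcitable or unpropagatable.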
Algorithm Classifications in Machine Learning
[This article was first published on Data Science Tutorials, and kindly contributed to R-bloggers.]
Algorithm Classifications in Machine Learning, There is a vast array of algorithms available in the field of machine learning that can be utilized to comprehend data.
One of two categories can be used to group these algorithms:
1. Supervised Learning Algorithms: These algorithms build a model to estimate or predict an output based on one or more inputs.
2. Unsupervised Learning Algorithms: These algorithms analyze inputs to identify patterns and relationships. There is no output that is “supervised.”
The differences between these two categories of algorithms are explained in this tutorial along with numerous examples of each.
Supervised Learning Algorithms
When we have one or more explanatory variables (X1, X2, X3,…, Xp) and a response variable (Y), and we want to create a function that defines how the explanatory variables and the response variable
relate to one another:
Y = f(X) + ε
where ε is a random error term independent of X with a mean of zero, and where f is systematic information that X provides about Y.
supervised learning algorithms often fall into one of two categories:
1. Regression: Continuous output variable (e.g. weight, height, time, etc.)
2. Classification: The output variable has a categorical nature (e.g. male or female, pass or fail, benign or malignant, etc.)
We employ supervised learning methods for two key reasons:
1. Prediction:
To anticipate the value of a response variable, we frequently use a set of explanatory variables (e.g. using square footage and number of bedrooms to predict home price)
2. Inference:
We might be interested in learning how an explanatory variable’s value affects a response variable. For instance, how much does the average home price rise when the number of bedrooms increases by one?
We may employ many techniques for estimating the function f, depending on whether our objective is inference or prediction (or a combination of both).
For instance, while linear models are simpler to read, non-linear models may provide more accurate predictions.
The most popular supervised learning algorithms are listed below.
1. Linear regression
2. Logistic regression
3. Linear discriminant analysis
4. Quadratic discriminant analysis
5. Decision trees
6. Naive Bayes
7. Support vector machines
8. Neural networks
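As a minimal sketch of the prediction setting Y = f(X) + ε, the snippet below fits the first algorithm on the list, linear regression, by ordinary least squares on synthetic data. The data-generating line (intercept 3, slope 2) is an assumption chosen so the fit can be checked:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(200, 1))               # one explanatory variable
y = 3.0 + 2.0 * X[:, 0] + rng.normal(0, 0.1, 200)   # true f: 3 + 2x, plus noise

A = np.column_stack([np.ones(len(X)), X])           # design matrix [1, x]
beta, *_ = np.linalg.lstsq(A, y, rcond=None)        # beta = [intercept, slope]
```

With low noise, `beta` recovers the coefficients of the generating line closely.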
Unsupervised Learning Algorithms
When we have a list of variables (X1, X2, X3,…, Xp), we can use an unsupervised learning technique to simply search for underlying structures or patterns in the data.
Unsupervised learning algorithms typically fall into one of two categories:
1. Clustering:
Using these kinds of algorithms, we try to identify “clusters” of related observations in a dataset.
This is frequently utilized in the retail industry when a business wants to find groups of customers with similar buying preferences so that it may develop targeted marketing campaigns that appeal to
those groups of customers.
2. Association:
We look for “rules” that can be applied to create associations using these kinds of algorithms. Retailers might, for instance, create a rule that says, “If a customer buys product X, they are very
likely to also buy product Y.”
The most popular unsupervised learning algorithms are listed below.
1. Principal component analysis
2. K-means clustering
3. K-medoids clustering
4. Hierarchical clustering
5. Apriori algorithm
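As a minimal clustering sketch, the snippet below implements Lloyd's algorithm for k-means, the second algorithm on the list. It is a toy implementation for illustration only; in practice one would reach for an existing library:

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    # initialize centers at k distinct data points
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # assign each point to its nearest center
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # move each center to the mean of its assigned points
        centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels, centers
```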
The types of machine learning algorithms are represented in the diagram below.
Further Resources:-
Because the greatest way to learn any programming language, even R, is by doing.
Random Forest Machine Learning Introduction – Data Science Tutorials
How do augmented analytics work? – Data Science Tutorials
How to Find Optimal Clusters in R? – Data Science Tutorials
Are Mispricings Long-Lasting or Short-Lived? Evidence from S & P 500 Index ETF Options
1. Introduction
Finance literature documents a substantial body of evidence suggesting the mispricing of options. Given the steep smile in the implied volatility of S & P 500 index options, out-of-the-money (OTM) options
seem to be expensive [1] [2] . For example, shorting the zero beta straddles/strangles offered a return of 3.15 percent per week [3] [4] . Also, widespread violations of stochastic dominance by
1-month S & P 500 index call options imply that any risk-averse trader can improve expected utility by writing call options net of transaction costs and bid-ask spread [5] [6] [7] . Santa-Clara and
Saretto [8] find that strategies involving short positions in options generally compensate the investor with Sharpe ratios as high as 1.69.
In spite of a large body of literature attempting to identify the mispricing of options, questions still remain: are mispriced options always mispriced until maturity? If not, how long does the
mispricing period last? How often do options move from mispriced to fairly priced and vice versa?
Answering these questions may shed light on the underlying mechanism of such mispricing. If the “mispricings” were the results of model mis-specifications, e.g. unknown risk factors, or errors in estimating parameters, they should be present for a long duration, as the flawed pricing models generate systematic pricing biases. If such “mispricings” were simply temporary market inefficiency due to market frictions or overreactions, they should be short-lived, as arbitrageurs can take advantage of such opportunities quickly. The answer of long-lasting or short-lived mispricing is also
crucial to practitioners since it determines how soon the arbitrage strategies will pay off. If prices converge towards fair value slowly, it may take too long to realize any profit.
This paper investigates the time series properties of option mispricings using high frequency bid-ask prices. After constructing option pricing bounds based on stochastic dominance, this paper
provides evidence that most violations of the stochastic dominance upper bounds of Constantinides and Perrakis [7] last no more than 10 trading hours. This study also identifies that options move in
and out of the pricing bounds frequently during the last few days before maturity. The results are robust to the different parameters and assumptions in estimating the bounds.
This paper contributes to the literature by showing that mispricings in options are mostly short-lived. This means that the observed widespread overpricing in options might be the result of temporary
inefficiency (e.g. transaction costs, overreaction, liquidity etc.) rather than a model mis-specification, such as estimation biases of the parameters, or an overlooked persistent risk factor. It
supports the option pricing bounds derived in Constantinides and Perrakis [7] .
The dataset used in this study differs from prior literatures. Most existing studies are based on the historical end-of-day mid prices of index options, retrieved from Option Metric database. The
data used in this study are unique in two ways. Firstly, this study chooses S & P 500 index ETF (SPY: NYSE) as the underlying security, as they have high liquidity and small bid-ask spread. Secondly,
the dataset in this study comes from Interactive Broker trading platform. With a real-time electronic trading platform, it provides live bid/ask quotes synchronized with AMEX, CBOE and other large
exchanges. The observations in this dataset come from the historical snapshots of the quotes every fifteen minutes. Compared to traditional Option Metrics database, the datasets contain bid/ask
quotes with high frequency and are of higher quality. Due to limitation of the database, the sample period is restricted to half a year. However, the total observations amount to over 40,000 quotes.
The paper is organized as follows. The first section presents the pricing bounds on option prices imposed by stochastic dominance as in Constantinides and Perrakis [7] and examines the underlying
assumptions. The next section describes the data and the experiment design. The empirical results are shown in Section III and Section IV checks their robustness. In the last section of this study, I
discuss the implications of the results and conclude.
2. Option Pricing Bounds Imposed by Stochastic Dominance
Constantinides and Perrakis [7] investigate the restrictions on option prices imposed by stochastic dominance. They conclude that options prices should stay within a set of bound in equilibrium.
Otherwise, any trader can increase expected utility by trading in the options, the index, and the risk-free bond - hence violates the conditions of equilibrium.
In this study, the term “mispricing” is defined as the option prices which violate the restrictions in Constantinides and Perrakis [7] .
2.1. Assumptions to Derive the Option Pricing Bounds
To derive the option pricing bounds, Constantinides and Perrakis [7] assumes that the utility-maximizing and risk-averse agents are capable to hold and trade only two representative securities in the
market, a stock index and a bond. Stock trades incur proportional transaction costs.
They search for the possible prices of the bond, stock, and derivatives at a given point such that those prices support an increasing and concave utility function. If the combination of prices fails
to support the utility function, then any trader can increase expected utility by trading in the options, the index, and the riskless asset. The violation of such bounds are called inconsistent with
stochastic dominance as it implies that at least one risk-averse agent, regardless of the form of utility function, increases expected utility by trading the options.
2.2. Pricing Bounds on Call Options
This section presents the pricing bounds on call options without proof. At any time t prior to expiration T, the upper bound on the price of a call option is given by
where ${S}_{t}$ is the underlying price at time t, K is the strike, k is the proportional transaction costs, and ${R}_{S}$ is the expected return of the underlying per period.
For the lower bound,
$c\left({S}_{t},t\right)={\left(1+\delta \right)}^{t-T}{S}_{t}-\frac{K}{{R}_{f}^{T-t}}+\frac{E\left[\mathrm{max}\left(K-{S}_{T},0\right)|{S}_{t}\right]}{{R}_{S}^{T-t}}$(2)
where ${R}_{f}$ is the gross risk-free return per period, and $\delta$ is the dividend yield.
Constantinides and Perrakis [9] also derived option pricing bounds imposed by stochastic dominance on put options. However, empirical evidence suggests that violations of the bounds on puts are sparse [6]. Consequently, this research limits its focus to violations of the option pricing bounds on call options.
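A sample-average sketch of the lower bound in Equation (2) follows. The simulated terminal prices stand in for draws from the estimated conditional distribution, and the function and argument names are illustrative:

```python
import numpy as np

def call_lower_bound(s_t, strike, s_T_draws, delta, r_f, r_s, periods):
    """Eq. (2): (1+delta)^{t-T} S_t - K / R_f^{T-t}
    + E[max(K - S_T, 0) | S_t] / R_S^{T-t}, with periods = T - t."""
    put_leg = np.maximum(strike - s_T_draws, 0.0).mean() / r_s**periods
    return (1 + delta)**(-periods) * s_t - strike / r_f**periods + put_leg
```

With zero dividend yield and unit gross returns, the expression reduces to S_t − K plus the expected put payoff, which makes the formula easy to sanity-check.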
3. Data and Methodology
This research intends to find the mispricings implied by upper/lower bounds of option prices, and to examine the time series properties of such violations. This section first presents the methodology
and dataset to construct the option pricing bounds. Then, we discuss about the criteria to identify the mispricings when both ask and bid price are present.
3.1. Estimation of Option Pricing Bounds
There are three steps involved in the empirical test: estimating the input parameters and return distribution; feeding them into Equations (1) and (2) to derive the upper/lower bounds; and comparing the bounds with market prices to determine the time series pattern of violations.
Input Parameters and Return Distribution. To calculate the option pricing bounds as stated in the previous section, the primary challenge is to estimate the conditional distribution of the underlying
index return. This paper employs several techniques to achieve the task: bootstrapping, GARCH model, and adjusted Chicago Board Options Exchange Volatility (VIX) index.
The first approach is to bootstrap from one month (22 trading days) overlapping index returns with a rolling window of six months (132 trading days), such that each day is the beginning of another
one-month return. This is a widely accepted approach in the financial investment industry.
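The overlapping-return bootstrap described above can be sketched as follows. The daily returns are simulated stand-ins for the index series; the window lengths match the text's 22 and 132 trading days:

```python
import numpy as np

rng = np.random.default_rng(42)
daily = rng.normal(0.0003, 0.01, 132)        # six months of daily log returns

# each day in the window starts another overlapping one-month (22-day) return
monthly = np.array([daily[i:i + 22].sum() for i in range(132 - 22 + 1)])

# bootstrap the conditional one-month return distribution
draws = rng.choice(monthly, size=10000, replace=True)
```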
In the second way, the conditional distribution comes from the forecast of ARMA(1,1)-GARCH(1,1) model with error terms distributed as skewed student’s t. An ARMA(m, n)-GARCH(p, q) process model the
index return as a stationary ARMA(m, n) process, and the conditional volatility as a GARCH(p, q) process. The deviation from traditional assumptions of normal distributed error terms allows for the
negative skewness and excessive kurtosis observed in actual index return.
The third estimation approach parallels the bootstraps approach, but rescales the distribution such that the volatility of expected returns matches the adjusted VIX index. This method adopts the VIX
index as the benchmark volatility because VIX reflects the market expectation of one-month ahead volatility. The rationales of the adjustment of VIX index are that VIX generally overpredicts the
realized volatility. As a result, the last approach sets the adjusted VIX as the fitted values of regressing VIX index on realized volatility^1.
The input parameters used to calculating the option pricing bounds are summaries as follows (Table 1). Risk Free Rate ( ${R}_{f}$ ) is chosen as the three-month T-bill Rate, with an average of 0.18%
during the sample period. Dividend Yield (d) parameter is retrieved from market S & P 500 Dividend Yield, with a mean of 2.14% during the sample period. The proportional transaction Cost (k) is fixed
at 0.3%. This is based on the best estimation of two senior derivatives traders with more than 10 years of experiences in an assets management firm.
Calculation of Pricing Bounds. Finally, after estimating the statistical distribution of index return, the calculation of the term $E\left[max\left({S}_{T}-K,0\right)|{S}_{t}\right]$ in equation (1)
and (2) requires Monte Carlo simulation techniques. Control variates and antithetic variates are employed in the Monte Carlo simulations to reduce the variance. The typical number of
replications is 800,000^2.
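The antithetic-variates device used in the simulation step can be sketched as follows: each normal draw z is paired with −z, preserving the mean while reducing variance. The lognormal dynamics and all parameter values below are illustrative assumptions, not the paper's estimated distribution, and the control-variate part is omitted:

```python
import numpy as np

def mc_expected_payoff(s0, strike, mu, sigma, tau, n=100_000, seed=1):
    """Antithetic-variates Monte Carlo estimate of E[max(S_T - K, 0) | S_t]."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n)
    z = np.concatenate([z, -z])                       # antithetic pairs
    s_t = s0 * np.exp((mu - 0.5 * sigma**2) * tau + sigma * np.sqrt(tau) * z)
    return np.maximum(s_t - strike, 0.0).mean()
```

For a deep in-the-money strike the estimate approaches the forward value minus the strike, which gives a quick correctness check.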
Criteria of Determining Mispricings. As this research studies the typical duration of option mispricings, it necessitates the criteria to characterize mispricings and reasonably priced options. The
definition of mispricings is straightforward. When both bid and ask prices are present, the under-pricing is defined as Ask Price < Lower Bound, and overpricing is identified if Bid Price > Upper
bound. Because the tick size on the CBOE is set at $0.01, this requires that the ask price be less than the lower bound by over $0.01 to qualify as under-pricing. A similar condition applies to overpricing.
The criteria determining the reasonably priced options are a little tricky when both bid and ask prices present. When the ask price goes below the upper bound and the bid goes beyond the lower bound,
it is unambiguous to claim such
Table 1. An example of the input parameters.
*Source: Author’s computation. This table describes an example of the input parameters to calculate the bounds on June-08-2012 at 4:00 PM. The first part of table summarizes the statistics of
estimated return distribution under three different approaches. The last part of the table provides other input parameters. The adjusted VIX approach is the same as the Bootstrap, except rescaling
the distribution to match the variance to the adjusted VIX. T-bill Rate is obtained from U.S. Department of Treasure, other data are from Interactive Broker trading platform.
option is fairly priced. However, if ask price exceeds the upper bounds while bid sits between the upper and lower bounds, it is unclear whether such option is mispriced. Although the ask price
appears to be overpriced, by shorting it, individual investors can only receive the premium equivalent to the bid price, which stands reasonably between the upper and lower bounds. Similar arguments
can also pertain to the case when ask goes above lower bound, and bid sits beneath the lower bound.
To avoid any vagueness, this paper claims an option to be reasonably priced when both its bid and ask prices stand within the bounds. The duration of mispricings, as a result, is simply the span
between the time when an option becomes mispriced for the first time, and the time when it subsequently turns back to be reasonably priced.
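The classification rule above can be written as a small helper. The $0.01 tick-size tolerance follows the text; the function itself is illustrative:

```python
TICK = 0.01  # CBOE tick size per the text

def classify_quote(bid, ask, lower, upper):
    """Classify a bid/ask pair against the stochastic-dominance bounds."""
    if bid > upper + TICK:
        return "overpriced"      # shorting at the bid still beats the bound
    if ask < lower - TICK:
        return "underpriced"
    if lower <= bid and ask <= upper:
        return "fair"            # both quotes inside the bounds
    return "ambiguous"           # e.g. ask above upper but bid inside
```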
3.2. Data
The dataset in this study comes from Interactive Broker trading platform, which contains high frequency option bid-ask prices in a realistic trading environment from January 2011 to June 2012.
Interactive Broker is one of the largest internet-based discount trading brokers in the world. With a real-time electronic trading platform, it provides live bid/ask quotes synchronized with AMEX,
CBOE and other large exchanges. The dataset in this research comes from the historical snapshots of the quotes every fifteen minutes (See Figure 1 for an example). Due to limitation of the database,
the sample period is restricted to half a year. However, the total observations amount to over 40,000 quotes.
This study choose S & P 500 index ETF (SPY: NYSE) as the underlying security, as they have high liquidity and small bid-ask spread. The sample includes
Figure 1. An snapshot example of live trading quotes. Source from Interactive Broker; 2012-06-27 at 10:02 am.
options with various maturities, spanning from three months to three days. This research then applies the option pricing bounds to each option at each time, in order to see the time series properties of the violations.
The following filters apply to the dataset. Firstly, remove all quotes smaller than $0.05 to reduce the tick size effects. Secondly, eliminate the options spanning over an ex-dividend day to address
the differences between American and European calls. Lastly, check for potential entry errors such as nonmonotonic option premium as strike increases.
3.3. Caveats
Obviously, there are many possible ways of estimating the statistical distribution of S & P 500 index returns other than the three models listed here. One caveat of the empirical “mispricing” results, as a consequence, is that the options market may simply be priced with a different probability distribution than any of the three estimated ones. Nevertheless, this may
not be a major concern. I argue that if “mispricings” would result from inappropriate estimated return distribution, the identified “mispricings” should be quite frequent and persistent. Yet, the
final results indicate the opposite. The last section will conduct further robustness checks on the issue.
Another noticeable concern is that the option pricing bounds stated in section 2 were derived specifically to price European options. Yet, the options on Index ETFs are American style. Although our
option pricing bounds would underestimate the true price, several empirical designs may ease such concern. Note that an American call is identical to a European one if there is no dividend.
Fortunately, the S & P 500 index ETF has a schedule of dividend payment, i.e. approximately every three months. In order to eliminate the effect of dividend, the sample used in this study remove all
options covering an ex-dividend date. Consequently, the resulted sample contains options with a maximum maturity of three months.
4. Empirical Results
This section describes the pattern of observed violations for the pricing bounds. The first part of the exercise identifies that more than one third of the options are mispriced, especially for
out-of-the-money options. The next exercise finds that the average duration of a mispricing lasts around 5.5 trading hours, and that prices move in and out of the bounds frequently as maturity approaches.
4.1. The Frequency of Violations
Table 2 lists the pricing bounds along with the bid/ask prices for an option with a 12 days maturity on 5th July, 2012 at 10:19AM. For each different strike prices, the last column checks for the
mispricing according to pricing bounds shown in Equation (1) and (2).
Table 2. An example of the live quotes and pricing bounds.
*Source: Data retrieved from Interactive Broker on 5th July, 2012 at 10:19 AM. The underlying (SPY) price was 136.46 at the time. The option expires on 20th July, 2012. The upper and lower bounds are calculated according to the ARMA(1,1)-GARCH(1,1) approach.
Table 3 summarizes the pattern of violations of the option pricing bounds from July 2011 to July 2012. The violations are displayed as the percentage of the total number of quotes in each moneyness range. While a majority of the quotes are reasonably priced, on average,
19.53 % of the quotes are overpriced and 6.33% of the quotes are underpriced according to the bounds. The larger proportion of violating upper bounds suggests the options have a tendency to be
costly. Similar to Czerwonko, Jackwerth, and Perrakis [5] , the results show that a majority of the identified overpricing are OTM options, regardless of the methods of estimating the bounds. This
implies that any risk-averse trader can improve expected utility by writing those “mispriced” call options net of transaction costs and bid-ask spread.
Table 3. Percentage of violations of the pricing bounds out of the total number of quotes: SPY.
*Source: Author’s computation. The table displays the percentages of bid/ask quotes violating the pricing bounds out of the total number of observed quotes under different estimation methods. The under-pricing is defined as ask price < lower bound, and overpricing is identified if bid price > upper bound. Numbers are in percentages.
An experiment not shown in Table 3 indicates that the typical violation size of the bounds is between $0.01 to $0.05 for 65% of the total violations. The violations are widespread, with a proportion
of approximately 30%, when maturity approaches (less than one week).
4.2. The Duration of Violations
This section further investigates the time series properties of violations of pricing bounds. Specifically, Table 4 illustrates the average duration of the violations under different estimation
approaches for options with different time to maturity ( $T-t$ ).
On average, the duration of a mispricing persists less than two trading days. This implies that the majority of the mispricings disappear in a short period of time, which refutes the prediction of a
persistent model misspecification. The bootstrapping method identifies a longer duration than others as it usually produces wider bounds.
Strikingly, although the violations of the lower bounds are only occasional, they are likely to be more persistent than overpricing. This suggests a possible misspecification of the lower bounds.
Table 4. Average duration of violating the pricing bounds (in trading hours).
*Source: Author’s computation. This table illustrates the average duration of the violations under different estimation approaches for options with different times to maturity (T − t). The duration of a mispricing is defined as the span between the time when an option becomes mispriced for the first time and the time when it subsequently turns back to being reasonably priced. An option is claimed to be reasonably priced when both its bid and ask prices stand within the bounds.
Moreover, as the maturity date comes closer, the duration of the mispricing periods diminishes considerably, from around 13 trading hours to 4 hours, regardless of the methodology used to estimate the bounds. These findings agree with arguments pinpointing the irrationality of investors shortly before maturity [10].
As argued before, the “mispricings” may be either the results of model misspecifications or temporary irrationality of investors. Our primary results indicate that violations of the pricing bounds
are typically short-lived. These results favor the later hypothesis, supporting the option pricing bounds derived in Constantinides and Perrakis [7] .
4.3. Discussion
This section conducts several robustness checks which may undermine the results.
In the first place, the short-lived violations may result purely from overestimation of the upper bounds or underestimation of the lower bounds. For example, the upper bounds could be so high that only
sporadic extreme market fluctuations are documented. To ease this concern, this paper manually adjusts the pricing bounds downwards. The results show that only when decreasing the bounds by as much
as $0.15 could we observe an average mispricing duration of five trading days.
In addition, the Monte Carlo simulation to calculate the pricing bounds may also lead to frequent short term violations of the bounds, as the bounds fluctuate across time simply because of the
sampling errors. To address this issue, the simulation in this paper employs a fixed seed in the random number generator. Thus, this procedure could only bias the duration of the violations upward,
as the sampling errors persist through time.
5. Conclusions
A number of studies have documented evidence suggesting the mispricing of options. Since it is hard to believe that markets remain inefficient over the long term, the observed “mispricing” might either
result from transitory market inefficiency or from model misspecifications.
After constructing option pricing bounds based on stochastic dominance, this paper examines the time series properties of option mispricings using high frequency bid-ask quotes. This study contributes to the literature by showing that most violations of the stochastic dominance upper bounds of Constantinides and Perrakis [7] last no more than 10 trading hours. The results imply that the observed widespread mispricing in options might be the result of temporary inefficiency (e.g. transaction costs, overreaction, or illiquidity) rather than a model misspecification, such as estimation biases of the parameters or an overlooked persistent risk factor.
A possible extension that could substantiate the results obtained in this paper would be to establish a high-frequency trading rule and test its profitability. As the results of this paper suggest that the mispricings are mostly short-lived, traders could profit from the fast convergence of mispriced options. Another possible future improvement would be to include high-frequency quotes for a longer sample period. I leave these extensions to future endeavors.
^1The VIX is usually higher than realized volatility. To adjust the VIX, I run a linear regression of VIX on realized volatility. This relationship is stable across time. As a result, throughout all the time periods, I use ${\text{VIX}}_{\text{adj}}=0.958\ast \text{VIX}-2.472$ as one of the inputs to model the return volatility.
^2One concern of the Monte Carlo simulation is that it may lead to frequent short term violations of the bounds, since the bounds may fluctuate across time simply because of the sampling errors. To
address this issue, the simulation in this paper employs a fixed seed in the random number generator. Thus, this procedure could only bias the duration of the violations upward, as the sampling
errors persist through time.
The introduction to probability within the GMAT prep context lays the foundational understanding necessary for tackling probability questions, emphasizing the concept's nature, its mathematical
definition, and the significance of randomness in probability calculations.
• Probability is fundamentally a ratio or fraction, representing the number of successful outcomes over the total number of possible outcomes.
• The value of a probability ranges between 0 and 1, where 0 indicates impossibility and 1 indicates certainty of an event occurring.
• Real-world probabilities usually fall between 0 and 1, reflecting the nuanced nature of most events.
• A simple example used to illustrate probability calculation is determining the likelihood of a month name containing the letter 'R', which is 2/3.
• The concept of randomness is crucial in probability, defined by the unpredictability of individual events but predictability of the overall pattern of events.
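The month-name example above can be verified by direct counting, the same ratio-of-outcomes idea stated in the first bullet:

```python
months = ["January", "February", "March", "April", "May", "June",
          "July", "August", "September", "October", "November", "December"]

# successful outcomes: month names containing the letter 'R'
with_r = [m for m in months if "r" in m.lower()]

probability = len(with_r) / len(months)
print(len(with_r), probability)  # 8 of 12 months, so 8/12 = 2/3
```

Eight of the twelve month names contain an 'R', giving 8/12 = 2/3.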
Understanding Probability Basics
The Mathematical Definition of Probability
Illustrating Probability with a Simple Example
The Significance of Randomness in Probability
Computational Geodynamics: Miscellaneous
\[ \require{color} \newcommand{\dGamma}{\mathbf{d}\boldsymbol{\Gamma}} \newcommand{\erfc}{\mbox{\rm erfc}} \newcommand{\Red}[1]{\textcolor[rgb]{0.7,0.0,0.0}{#1}} \newcommand{\Green}[1]{\textcolor[rgb]{0.0,0.7,0.0}{#1}} \newcommand{\Blue}[1]{\textcolor[rgb]{0.0,0.0,0.7}{#1}} \newcommand{\Emerald}[1]{\textcolor[rgb]{0.0,0.7,0.3}{#1}} \]
Rayleigh-Taylor Instability & Diapirism
Diapirism is the buoyant upwelling of rock which is lighter than its surroundings. This can include mantle plumes and other purely thermal phenomena, but it is often applied to compositionally distinct rock masses such as melts infiltrating the crust (in the Archean) or salt rising through denser sediments.
Salt layers may result from the evaporation of seawater. If subsequent sedimentation covers the salt, a gravitationally unstable configuration results, with heavier material (sediments) on top of lighter material (salt). The rheology of salt is distinctly non-linear and also sensitive to temperature. Once buried, the increased temperature of the salt layer causes its viscosity to decrease to the
point where instabilities can grow. Note, since there is always a strong density contrast between the two rock types, the critical Rayleigh number argument does not apply – this situation is always
unstable, but instabilities can only grow at a reasonable rate once the salt has become weak.
The geometry is outlined in the Figure above. We suppose initially that the surface is slightly perturbed with a form of
$$w_m = w_{m0} \cos kx$$
where \( k \) is the wavenumber, \( k=2\pi / \lambda \), \( \lambda \) being the wavelength of the disturbance. We assume that the magnitude of the disturbance is always much smaller than the layer thickness.
The problem is easiest to solve if we deal with the biharmonic equation for the stream function. Experience leads us to try to separate variables and look for solutions of the form
$$\psi = \left( A \sin kx + B \cos kx \right ) Y(y)$$
where the function $Y$ is to be determined. The form we have chosen for \(w_m\) in fact means $A=1,B=0$ which we can assume from now on to simplify the algebra.
Substituting the trial solution for $\psi$ into the biharmonic equation gives
$$\frac{d^4 Y}{d y^4} -2k^2 \frac{d^2 Y}{dy^2} +k^4 Y = 0$$
which has solutions of the form
$$Y = A \exp(m y)$$
where $A$ is an arbitrary constant. Substituting gives us an equation for $m$:
$$m^4 - 2 k^2 m^2 + k^4 = (m^2 - k^2)^2 = 0 \label{eq:diapaux}$$
so that
$$m = \pm k$$
Because we have degenerate eigenvalues (i.e. of the four possible solutions to the auxiliary equation (\ref{eq:diapaux}), two pairs are equal) we need to extend the form of the solution to
$$Y = (By+A) \exp(m y)$$
to give the general form of the solution in this situation to be
$$\psi = \sin kx \left ( A e ^ {- ky} + B y e ^ {- ky} + C e ^ {ky} + D y e ^ {ky} \right )$$
or, equivalently,
$$\psi = \sin kx \left ( A _ 1 \cosh ky + B _ 1 \sinh ky + C _ 1 y \cosh ky + D _ 1 y \sinh ky \right ) \label{eq:biharmsoln2}$$
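The degenerate-root fix can be sanity-checked numerically: the extra solution \( Y = y\,e^{ky} \) should satisfy \( Y'''' - 2k^2 Y'' + k^4 Y = 0 \) even though \( m = k \) is only a double root of the auxiliary equation. A finite-difference sketch (the values of \( k \), \( y \), and the step size are arbitrary choices of mine):

```python
import math

k = 1.3
f = lambda y: y * math.exp(k * y)  # trial solution Y = y e^{ky}

y0, h = 0.7, 1e-2
# central finite differences for Y'' and Y''''
d2 = (f(y0 - h) - 2 * f(y0) + f(y0 + h)) / h**2
d4 = (f(y0 - 2*h) - 4*f(y0 - h) + 6*f(y0) - 4*f(y0 + h) + f(y0 + 2*h)) / h**4

residual = d4 - 2 * k**2 * d2 + k**4 * f(y0)
print(residual)  # ~0, up to finite-difference truncation error
```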
This equation applies in each of the layers separately. We therefore need to find two sets of constants $\{A_1,B_1,C_1,D_1\}$ and $\{A_2,B_2,C_2,D_2\}$ by the application of suitable boundary conditions. These are known in terms of the velocities in each layer, $\mathbf{v}_1 = \mathbf{i} u_1 +\mathbf{j} v_1$ and $\mathbf{v}_2 = \mathbf{i} u_2 +\mathbf{j} v_2$:
\begin{align} u_1 = v_1 &= 0 \;\;\; \text{ on } \;\;\; y = -b \\
u_2 = v_2 &= 0 \;\;\; \text{ on } \;\;\; y = b \end{align}
together with a continuity condition across the interface (which we assume is imperceptibly deformed)
$$u_1 = u_2 \;\;\; \text{ and } \;\;\; v_1 = v_2 \;\;\; \text{ on } \;\;\; y = 0$$
The shear stress (\( \sigma_{xy}\) ) should also be continuous across the interface, which, if we assume equal viscosities, gives
$$\frac{\partial u_1}{\partial y} + \frac{\partial v_1}{\partial x} = \frac{\partial u_2}{\partial y} + \frac{\partial v_2}{\partial x} \;\;\; \text{on} \;\;\; y = 0$$
and, to simplify matters, if the velocity is continuous across $y=0$ then any velocity derivatives in the $x$ direction evaluated at $y=0$ will also be continuous (i.e. $\partial v_2 / \partial x = \partial v_1 / \partial x$). The expressions for velocity in terms of the solution (\ref{eq:biharmsoln2}) are
\begin{align} u = -\frac{\partial \psi}{\partial y} & = -\sin kx \left( (A_1 k + D_1 + C_1 k y) \sinh ky + (B_1 k + C_1 + D_1 ky) \cosh ky \right) \\
v = \frac{\partial \psi}{\partial x} & = k \cos kx \left( (A_1 +C_1 y)\cosh ky + (B_1 +D_1 y) \sinh ky \right) \end{align}
From here to the solution requires much tedious rearrangement, and the usual argument based on the arbitrary choice of wavenumber $k$ but we finally arrive at
\begin{multline} \psi_1 = A_1 \sin kx \cosh ky + \\
A_1 \sin kx \left[ \frac{y}{k b^2} \tanh kb \sinh ky + \left( \frac{y}{b} \cosh ky - \frac{1}{kb} \sinh ky \right) \cdot \left( \frac{1}{kb} + \frac{1}{\sinh bk \cosh bk} \right) \right] \times \\
\left[ \frac{1}{\sinh bk \cosh bk} - \frac{1}{b^2k^2} \tanh bk \right] ^{-1} \label{eq:raytays1} \end{multline}
The stream function for the lower layer is found by replacing $y$ with $-y$ in this equation. This is already a relatively nasty expression, but we haven’t finished since the constant $A_1$ remains.
This occurs because we have so far considered the form of flows which satisfy all the boundary conditions but have not yet considered what drives the flow in each layer.
To eliminate $A_1$, we have to consider the physical scales inherent in the problem itself. We are interested (primarily) in the behaviour of the interface which moves with a velocity $\partial w / \
partial t$. As we are working with small deflections of the interface,
$$\frac{\partial w}{\partial t} = \left. v \right| _ {y=0}$$
Consider what happens when the fluid above the interface is lighter than the fluid below – this situation is stable so we expect the layering to be preserved, and if the interface is disturbed the
disturbance to decay. This implies that there must be a restoring force acting on an element of fluid which is somehow displaced across the boundary at $y=0$ (Figure above).
This restoring force is due to the density difference between the displaced material and the layer in which it finds itself. The expression for the force is exactly that from Archimedes' principle, which explains how a boat can float (only in the opposite direction):
$$\left. F_2 \right|_{y=0} = \delta x\, g w (\rho _ 2 - \rho _ 1)$$
which can be expressed as a normal stress difference (assumed to apply, once again, at the boundary). The viscous component of the normal stress turns out to be zero – proven by evaluating $\partial v / \partial y$ at $y=0$ using the expression for \( \psi \) in equation (\ref{eq:raytays1}). Thus the restoring stress is purely pressure:
$$\left. P_2 \right|_{y=0} = g w (\rho_2 - \rho_1)$$
The pressure in terms of the solution (so far) for $\psi$ is found from the equation of motion in the horizontal direction (substituting the stream function formulation) and is then equated to the restoring pressure above:
$$(\rho_1-\rho_2) g w = -\frac{4 \eta k A_1}{b} \cos kx \left(\frac{1}{kb} + \frac{1}{\sinh bk \cosh bk} \right) \cdot \left( \frac{1}{\sinh bk \cosh bk} - \frac{1}{b^2k^2} \tanh bk \right)^{-1}$$
This allows us to substitute for $A_1$ in our expression for $\partial w / \partial t$ above. Since $A_1$ is independent of $t$, we can see that the solution for $w$ will be of a growing or decaying exponential form, with the growth/decay constant coming from the argument above.
$$w(t) = w_0 \exp((t-t_0)/\tau)$$
where
$$\tau = \frac{4 \eta}{(\rho_1-\rho_2) g b} \left( \frac{1}{kb} + \frac{1}{\sinh bk \cosh bk} \right) \cdot \left( \frac{1}{k^2b^2} \tanh kb - \frac{1}{\sinh kb \cosh kb} \right)^{-1}$$
So, finally, an answer – the rate at which instabilities on the interface between two layers will grow (or shrink) which depends on viscosity, layer depth and density differences, together with the
geometrical consideration of the layer thicknesses.
A stable layering results from light fluid resting on heavy fluid; a heavy fluid resting on a light fluid is always unstable (no critical Rayleigh number applies) although the growth rate can be so
small that no deformation occurs in practice. The growth rate is also dependent on wavenumber. There is a minimum in the growth time as a function of dimensional wavenumber which occurs at $k b =
2.4$, so instabilities close to this wavenumber are expected to grow first and dominate.
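The claim that growth is fastest near \( kb = 2.4 \) can be checked by minimising the dimensionless factor in the growth time \( \tau \) over \( x = kb \); a small pure-Python sketch:

```python
import math

def growth_time_factor(x):
    """Dimensionless part of tau as a function of x = k*b."""
    sc = math.sinh(x) * math.cosh(x)
    return (1.0 / x + 1.0 / sc) / (math.tanh(x) / x**2 - 1.0 / sc)

# crude grid search for the minimum growth time
xs = [0.5 + 0.001 * i for i in range(5000)]  # x = kb in (0.5, 5.5)
x_min = min(xs, key=growth_time_factor)
print(x_min)  # lands close to the quoted kb ~ 2.4
```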
Remember that this derivation is simplified for fluids of equal viscosity and layers of identical depth. Remember also that the solution is for infinitesimal deformations of the interface. If the deformation grows, the approximations of small deformation no longer hold. This gives a clue as to how difficulties dealing with the advection term of the transport equations arise. At some point it becomes impossible to obtain meaningful results without computer simulation. However, plenty of further work has already been done in this area for non-linear fluids, temperature-dependent viscosity etc., and the solutions are predictably long and tedious to read, much less solve. When the viscosity is not constant, the use of a stream function notation is not particularly helpful as the biharmonic form no longer appears. (e.g. read work by Ribe, Houseman et al.)
The methodology used here is instructive, as it can be used in a number of different applications to related areas. The equations are similar, the boundary conditions different.
Post-Glacial Rebound
In the postglacial rebound problem, consider a viscous half space with an imposed topography at $t=0$. The ice load is removed at $t=0$ and the interface relaxes back to its original flat state.
This can be studied one wavenumber at a time — computing a decay rate for each component of the topography. The initial loading is computed from the Fourier transform of the ice-bottom topography. The system is similar to that of the diapirs except that the loading is now applied to one surface rather than the interface between two fluids.
Phase Changes in the mantle
A different interface problem is that of mantle phase changes. Here a buoyancy anomaly results if the phase-change boundary is distorted. This can result from advection normal to the boundary bringing cooler or warmer material across the boundary.
The buoyancy balance argument used above can be recycled here to determine a scaling for the ability of plumes/downwellings to cross the phase boundary.
Sensitivity Kernels for Surface Observables
The solution method used for the Rayleigh Taylor problem can also be used in determining spectral Green’s functions for mantle flow in response to thermal perturbations. This is a particularly
abstract application of the identical theory.
Folding of Layered (Viscous) Medium
If a thin viscous layer is compressed from one end then it may develop buckling instabilities in which velocities grow perpendicular to the plane of the layer. If the layer is embedded between two
semi-infinite layers of viscous fluid with viscosity much smaller than the viscosity of the layer, then Biot theory tells us the wavelength of the initial buckling instability, and the rate at which
it grows.
The fold geometry evolves as
$$w = w_m \cos(kx)\, e^{t/\tau_a}$$
where
$$\tau_a = \frac{1}{\bar{P}}\left[ \frac{4 \eta_0}{k} + \frac{\eta_1 h^3 k^2}{3} \right]$$
and the fastest-growing wavenumber (the minimiser of $\tau_a$) is
$$k = \frac{1}{h}\left( \frac{6\eta_0}{\eta_1} \right)^{\frac{1}{3}}$$
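Biot's growth time for this configuration is often written in the thin-plate form \( \tau_a \propto 4\eta_0/k + \eta_1 h^3 k^2/3 \) (the form assumed in this sketch); the dominant wavenumber is then the \( k \) that minimises \( \tau_a \). A numerical check with an illustrative viscosity contrast of 1000 (my choice of values):

```python
import math

eta0, eta1, h = 1.0, 1000.0, 1.0  # illustrative: layer 1000x more viscous

def tau_a(k):
    # growth time up to the 1/P factor, assuming Biot's thin-plate form
    return 4 * eta0 / k + eta1 * h**3 * k**2 / 3

ks = [0.01 + 1e-4 * i for i in range(10000)]  # k in (0.01, 1.01)
k_min = min(ks, key=tau_a)

k_pred = (1 / h) * (6 * eta0 / eta1) ** (1 / 3)  # analytic minimiser
print(k_min, k_pred)  # both close to 0.182
```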
For large deformations we eventually must resort to numerical simulation.
Gravity Currents
Gravity currents can occur when a viscous fluid flows under its own weight as shown in the Figure above.
We assume that the fluid has constant viscosity $\eta$ and that the length of the current is considerably greater than its height. The fluid is embedded in a low-viscosity medium of density $\rho-\Delta \rho$, where $\rho$ is the density of the fluid itself.
The force balance is between buoyancy and viscosity. The assumptions of geometry allow us to simplify the Stokes equation by assuming that horizontal pressure gradients due to the surface slope drive the flow:
$$\nabla p = \eta\nabla^2 u \approx g \Delta \rho \frac{\partial h}{\partial x}$$
We assume near-zero shear stress at the top of the current, giving
$$\frac{\partial u}{\partial z} (x,h,t) = 0$$
and zero velocity at the base of the current. Hence
$$u(x,z,t) = -\frac{1}{2} \frac{g \Delta \rho}{\eta} \frac{\partial h}{\partial x}\, z(2h-z)$$
Continuity integrated over depth implies
$$\frac{\partial h}{\partial t} + \frac{\partial }{\partial x} \int_0^h u\, dz = 0$$
Combining these equations gives
$$\frac{\partial h}{\partial t} -\frac{1}{3} \frac{g \Delta \rho}{\eta} \frac{\partial }{\partial x} \left( h^3 \frac{\partial h}{\partial x} \right) = 0$$
Finally, a global constraint fixes the total amount of fluid at any given time:
$$\int_0^{x_N(t)} h(x,t)\,dx = qt^\alpha$$
The latter term is a fluid source at the origin, and $x_N(t)$ the location of the front of the current. A similarity variable can be used to transform this problem:
$$\nu = \left( \frac{1}{3} g\Delta \rho q^3 / \eta \right)^{-\frac{1}{5}} x\, t^{-(3\alpha +1) / 5}$$
giving a solution of the form
$$h(x,t) = \nu_N^{2/3} \left(3q^2 \eta / (g\Delta\rho)\right)^{1/5} t^{(2\alpha -1) / 5} \phi(\nu/\nu_N)$$
where $\nu_N$ is the value of $\nu$ at $x=x_N(t)$. Substituting into the equation for $\partial h / \partial t$ we find that $\phi(\nu/\nu_N)$ satisfies
$$\phi({\nu}/{\nu_N}) = \left[ \frac{3}{5}(3\alpha+1)\right]^{\frac{1}{3}} \left(1-\frac{\nu}{\nu_N} \right)^{\frac{1}{3}} \left[ 1 - \frac{3\alpha-4}{24(3\alpha+1)}\left(1-\frac{\nu}{\nu_N} \right) + O \left(1-\frac{\nu}{\nu_N} \right)^2 \right]$$
which has an analytic solution if $\alpha=0$ (only constant sources or sinks):
$$\phi({\nu}/{\nu_N}) = \left( \frac{3}{10}\right)^{\frac{1}{3}} \left( 1-\left(\frac{\nu}{\nu_N}\right)^2 \right)^{\frac{1}{3}}, \qquad \nu_N = \left[ \frac{1}{5} \left( \frac{3}{10}\right)^{\frac{1}{3}} \pi^{\frac{1}{2}} \Gamma (1/3) / \Gamma (5/6) \right]^{-\frac{3}{5}} = 1.411$$
For all other values of $\alpha$, numerical integration schemes must be used for \( \phi \). It is also possible to obtain solutions if axisymmetric geometry is used.
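The quoted value \( \nu_N = 1.411 \) for the \( \alpha = 0 \) current follows directly from the Gamma-function expression above, and can be reproduced with the standard library:

```python
import math

# nu_N = [ (1/5) (3/10)^{1/3} sqrt(pi) Gamma(1/3) / Gamma(5/6) ]^{-3/5}
inner = (1 / 5) * (3 / 10) ** (1 / 3) * math.sqrt(math.pi) \
        * math.gamma(1 / 3) / math.gamma(5 / 6)
nu_N = inner ** (-3 / 5)
print(round(nu_N, 3))  # 1.411
```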
Please Help. PROXY.HORST.SV 79exinjst.a9.exe Exmodula virus.
The picture displays a heap of viruses of the same kind. Although Avast can find them, the program is unable to remove them.
Even Avast customer support let me down. I tried to send the zip files from the Chest to Virus@Avast.com (I don't have a local ISP e-mail), but when you make a zip called virus.zip, Hotmail blocks the send; Gmail blocks it too. I mailed Adam Riley (riley@avast.com, Technical Support) about this and never heard anything back.
Why can't Avast auto-upload the new viruses it finds to the Avast headquarters server?
It's so annoying, and Avast can't get rid of this virus on my computer =( Anyway, back to the virus.
If you zoom in on the pictures, you can see an e-mail address, but it isn't mine; it's automatically generated by this virus. The moment you open Hotmail or Gmail, you will find Gmail already logged in with an e-mail address totally unknown to you. That's seemingly what the virus does. Removing the .exe files from the temporary directory is useless; they seem to be generated (from what I've seen) by a file called setup.exe, and on deleting that same file, it will just be generated back there the next time you restart your computer.
It goes under a wide range of number-character executables like 58exgmtxt.1.exe or 37exinjs.a9.exe.
What is this and how do I get rid of it? A kind reminder: doing a boot scan and moving them into quarantine does not help. Avast at this moment is unable to remove this virus, as it seems to be stuck in running memory and Windows will not allow the file to be deleted.
People would complain, on privacy grounds, if files from your computer were sent to Avast automatically, whether infected or false positives.
You can ‘add’ the file to Chest and send it from there.
If a virus is recurrent (coming back again and again), you should:
I mean, it would be nice if the option to upload were there (disabled by default), but you're right about the privacy issues you described.
1. Disable System Restore on Windows ME or Windows XP. System Restore cannot be disabled on Windows 9x and it's not available in Windows 2k. After the scan you can enable System Restore again.
2. Clean your temporary files. You can use CleanUp or the Windows Advanced Care features for that.
3. Schedule a boot-time scan with avast!: Start avast! > right-click the skin > Schedule a boot-time scanning. Select scanning of archives. Reboot. Another option is scanning in Safe Mode (repeatedly press F8 while booting).
Currently I'm up to step 3.
Will report back ASAP.
Hi Darketernal,
This may well be a variation on this virus:
Substitute xxexgmregxx.exe for *exmodula.exe and the clean-up instructions may work.
Or try AVG Anti-spyware or SuperAntiSpyware as Tech suggested.
Followed up to step 3, did a boot scan, to no avail (virus persisted).
• Tried Frog clean, to no avail (virus persisted).
• Did a search on exgmreg in the registry and on the entire computer; it does not exist.
Unbelievable (not joking): within 1 second after install startup, AVG gives immediate notice about 25exinjs.a9.exe and informs me that, unknown to its user, the computer is being used as a bot to attack other computers. Risk = high, name Proxy.Horst.sv.
Consequently it found another trojan, Small.edz, after scanning the local hard disks.
I was a Lavasoft spyware user, but have today stepped over to using AVG. I was already wondering why no one mentioned it; probably due to Lavasoft's inferiority compared to AVG.
Now let's see if my computer is clean again.
Right now, avgas is much better than Lavasoft for sure…
Actually it was mentioned in Tech's very first response. It's a pretty common recommendation around here.
My computer didn't come back clean; even after several AVG scans, the problem resurrects on reboot.
AVG deleted the Small.edz, but the Proxy.Horst.sv stays mighty persistent in the Temp folder, even after AVG's remove-on-reboot attempts.
I assume (maybe wrongly) that this means the virus still resides somewhere else on my computer.
Am currently downloading SUPERAntiSpyware. Will post results ASAP.
AVG did a good deal of work; a noticeable increase in computer speed could be felt when working with the computer again.
Did you follow steps 1 and 2?
Maybe you’re infected with rootkits. Try AVG antirootkit and Panda antirootkit.
Rootkit is a ‘hidden’ malware.
After the rootkit scans please post a HijackThis log.
Click here to download HJTsetup.exe
[*]Save HJTsetup.exe to your desktop.
[*]Doubleclick on the HJTsetup.exe icon on your desktop.
[*]By default it will install to C:\Program Files\Hijack This.
[*]Continue to click Next in the setup dialogue boxes until you get to the Select Addition Tasks dialogue.
[*]Put a check by Create a desktop icon then click Next again.
[*]Continue to follow the rest of the prompts from there.
[*]At the final dialogue box click Finish and it will launch Hijack This.
[*]Click on the Do a system scan and save a logfile button. It will scan and the log should open in notepad.
[*]Click on “Edit > Select All” then click on “Edit > Copy” to copy the entire contents of the log.
[*]Come back here to this thread and Paste the log in your next reply.
[*]DO NOT have Hijack This fix anything yet. Most of what it finds will be harmless or even required.
-SUPERAntiSpyware detected these infected items that AVG did not.
• At this moment I'm having trouble typing; a (program?) is stealing the focus of my cursor. Sorry, will post back later.
• http://forum.avast.com died on me.
• Lost internet connection.
After running SUPERAntiSpyware again after a reboot, my computer was, to no avail, still infected.
I was up to step 3; I thank you all for your kind support. Please note that I'm carrying out all of your recommendations as soon as possible. The scans unfortunately take some time, and because the problem persists even after the scans, it takes even more time to push through the solutions given here.
I will get a HijackThis log available for you as soon as possible, along with the results of all the other recommended solutions.
Hijackthis log.
Logfile of HijackThis v1.99.1
Scan saved at 11:53:20 PM, on 5/17/2007
Platform: Windows XP SP2 (WinNT 5.01.2600)
MSIE: Internet Explorer v7.00 (7.00.6000.16414)
Running processes:
D:\ANTI INTERNET\Avast\aswUpdSv.exe
D:\ANTI INTERNET\Avast\ashServ.exe
D:\Program Files\Java\jre1.6.0_01\bin\jusched.exe
D:\ANTI INTERNET\AVG Anti-Spyware 7.5\avgas.exe
D:\ANTI INTERNET\SUPERAntiSpyware.exe
D:\ANTI INTERNET\AVG Anti-Spyware 7.5\guard.exe
D:\ANTI INTERNET\Avast\ashMaiSv.exe
D:\ANTI INTERNET\Avast\ashWebSv.exe
D:\Program Files\Internet Explorer\iexplore.exe
D:\Program Files\Internet Explorer\iexplore.exe
D:\ANTI INTERNET\Hijackthis\HijackThis.exe
R1 - HKLM\Software\Microsoft\Internet Explorer\Main,Default_Page_URL = http://go.microsoft.com/fwlink/?LinkId=69157
R1 - HKLM\Software\Microsoft\Internet Explorer\Main,Default_Search_URL = http://go.microsoft.com/fwlink/?LinkId=54896
R1 - HKLM\Software\Microsoft\Internet Explorer\Main,Search Page = http://go.microsoft.com/fwlink/?LinkId=54896
R0 - HKLM\Software\Microsoft\Internet Explorer\Main,Start Page = http://go.microsoft.com/fwlink/?LinkId=69157
R1 - HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings,ProxyServer = 124.2.62.2:3128
R0 - HKCU\Software\Microsoft\Internet Explorer\Toolbar,LinksFolderName = Koppelingen
O2 - BHO: (no name) - {4228DD80-1480-4191-AD71-F4172DC30B73} - D:\WINDOWS\system32\pjgwfsuc.dll (file missing)
O2 - BHO: SSVHelper Class - {761497BB-D6F0-462C-B6EB-D4DAF1D92D43} - D:\Program Files\Java\jre1.6.0_01\bin\ssv.dll
O2 - BHO: (no name) - {7E853D72-626A-48EC-A868-BA8D5E23E045} - (no file)
O2 - BHO: IECatcher Class - {B930BA63-9E5A-11D3-A288-0000E80E2EDE} - G:\MASSDO~1\MDHELPER.DLL
O4 - HKLM\..\Run: [avast!] D:\ANTIIN~1\Avast\ashDisp.exe
O4 - HKLM\..\Run: [NvCplDaemon] RUNDLL32.EXE D:\WINDOWS\System32\NvCpl.dll,NvStartup
O4 - HKLM\..\Run: [NvMediaCenter] RunDLL32.exe NvMCTray.dll,NvTaskbarInit
O4 - HKLM\..\Run: [SigmatelSysTrayApp] sttray.exe
O4 - HKLM\..\Run: [KernelFaultCheck] %systemroot%\system32\dumprep 0 -k
O4 - HKLM\..\Run: [SunJavaUpdateSched] "D:\Program Files\Java\jre1.6.0_01\bin\jusched.exe"
O4 - HKLM\..\Run: [.nvsvc] D:\WINDOWS\system\smss.exe /w
O4 - HKLM\..\Run: [LVCOMSX] D:\WINDOWS\system32\LVCOMSX.EXE
O4 - HKLM\..\Run: [!AVG Anti-Spyware] "D:\ANTI INTERNET\AVG Anti-Spyware 7.5\avgas.exe" /minimized
O4 - HKCU\..\Run: [SUPERAntiSpyware] D:\ANTI INTERNET\SUPERAntiSpyware.exe
O8 - Extra context menu item: + &Mass Downloader: download this file - G:\Mass Downloader\Add_Url.htm
O8 - Extra context menu item: + Mass Downloader: download &All files - G:\Mass Downloader\Add_All.htm
O9 - Extra button: (no name) - {08B0E5C0-4FCB-11CF-AAA5-00401C608501} - D:\Program Files\Java\jre1.6.0_01\bin\ssv.dll
O9 - Extra 'Tools' menuitem: Sun Java Console - {08B0E5C0-4FCB-11CF-AAA5-00401C608501} - D:\Program Files\Java\jre1.6.0_01\bin\ssv.dll
O9 - Extra button: Mass Downloader - {0FD01980-CCCB-11D3-80D4-0000E80E2EDE} - G:\Mass Downloader\massdown.exe
O9 - Extra 'Tools' menuitem: &Mass Downloader - {0FD01980-CCCB-11D3-80D4-0000E80E2EDE} - G:\Mass Downloader\massdown.exe
O9 - Extra button: Run IMVU - {d9288080-1baa-4bc4-9cf8-a92d743db949} - D:\Documents and Settings\'\Menu Start\Programma's\IMVU\Run IMVU.lnk
O9 - Extra button: Yahoo! Messenger - {E5D12C4E-7B4F-11D3-B5C9-0050045C3C96} - G:\yahoo bah\Messenger\YahooMessenger.exe
O9 - Extra 'Tools' menuitem: Yahoo! Messenger - {E5D12C4E-7B4F-11D3-B5C9-0050045C3C96} - G:\yahoo bah\Messenger\YahooMessenger.exe
O9 - Extra button: Messenger - {FB5F1910-F110-11d2-BB9E-00C04F795683} - D:\Program Files\Messenger\msmsgs.exe
O9 - Extra 'Tools' menuitem: Windows Messenger - {FB5F1910-F110-11d2-BB9E-00C04F795683} - D:\Program Files\Messenger\msmsgs.exe
O11 - Options group: [INTERNATIONAL] International*
O16 - DPF: {17492023-C23A-453E-A040-C7C580BBF700} (Windows Genuine Advantage Validation Tool) - http://go.microsoft.com/fwlink/?linkid=39204
O16 - DPF: {4F1E5B1A-2A80-42CA-8532-2D05CB959537} (MSN Photo Upload Tool) - http://by119fd.bay119.hotmail.msn.com/resources/MsnPUpld.cab
O16 - DPF: {6414512B-B978-451D-A0D8-FCFDF33E833C} (WUWebControl Class) - http://update.microsoft.com/microsoftupdate/v6/V5Controls/en/x86/client/wuweb_site.cab?1159664718311
O16 - DPF: {6E32070A-766D-4EE6-879C-DC1FA91D2FC3} (MUWebControl Class) - http://update.microsoft.com/microsoftupdate/v6/V5Controls/en/x86/client/muweb_site.cab?1159664707218
O18 - Protocol: livecall - {828030A1-22C1-4009-854F-8E305202313F} - D:\PROGRA~1\MSNMES~1\MSGRAP~1.DLL
O18 - Protocol: msnim - {828030A1-22C1-4009-854F-8E305202313F} - D:\PROGRA~1\MSNMES~1\MSGRAP~1.DLL
O18 - Protocol: skype4com - {FFC8B962-9B40-4DFF-9458-1830C7DD7F5D} - D:\PROGRA~1\COMMON~1\Skype\SKYPE4~1.DLL
O20 - Winlogon Notify: !SASWinLogon - D:\ANTI INTERNET\SASWINLO.dll
O21 - SSODL: WPDShServiceObj - {AAA288BA-9A4C-45B0-95D7-94D524869DB5} - D:\WINDOWS\system32\WPDShServiceObj.dll
O23 - Service: avast! iAVS4 Control Service (aswUpdSv) - ALWIL Software - D:\ANTI INTERNET\Avast\aswUpdSv.exe
O23 - Service: avast! Antivirus - ALWIL Software - D:\ANTI INTERNET\Avast\ashServ.exe
O23 - Service: avast! Mail Scanner - Unknown owner - D:\ANTI INTERNET\Avast\ashMaiSv.exe" /service (file missing)
O23 - Service: avast! Web Scanner - Unknown owner - D:\ANTI INTERNET\Avast\ashWebSv.exe" /service (file missing)
O23 - Service: AVG Anti-Spyware Guard - Anti-Malware Development a.s. - D:\ANTI INTERNET\AVG Anti-Spyware 7.5\guard.exe
O23 - Service: iPod Service - Apple Inc. - D:\Program Files\iPod\bin\iPodService.exe
O23 - Service: NVIDIA Display Driver Service (NVSvc) - NVIDIA Corporation - D:\WINDOWS\System32\nvsvc32.exe
O23 - Service: avast! Mail Scanner - Unknown owner - D:\ANTI INTERNET\Avast\ashMaiSv.exe" /service (file missing)
O23 - Service: avast! Web Scanner - Unknown owner - D:\ANTI INTERNET\Avast\ashWebSv.exe" /service (file missing)
Avast service files missing? At first glance, that can't be a good thing. I put all my anti-spyware programs in my special ANTI INTERNET folder.
HJT beta 2.0 resolves the missing file as it doesn’t show the /service but you need to ignore any references to 023 entries for avast, this is a bug in the HJT 1.99.1. Hijackthis is searching for ‘C:
\Arquivos de programas\Alwil Software\Avast4\ashMaiSv.exe" /service’ (including double quotes and ‘/service’ parameter) as a file, this causes ‘file missing’, because only present is ‘C:\Arquivos de
programas\Alwil Software\Avast4\ashMaiSv.exe’.
-The 70exgmtxt.exe trojan tries to load itself into Temp, but so far AVG disallows it.
-D/l WindowsCare, see what it does.
Windows Care solved (that is, if what it reports is really true) 32,000 problems; patience is a virtue, I guess.
-Rebooted, virus is still in my system.
-d/l spyware terminator: didn't seem to do much, the virus remains.
-Did an a-squared deep scan; wonder what will show up.
-a-squared is a program that I am (even though it's free) unfortunately not satisfied with: it asks for information at start-up before you can use it (as if you're not having enough trouble with the virus already), and it traces and declares directories of stuff that I use as problems.
-At first glance the most satisfying programs were AVG + SUPERAntiSpyware + Windows Care; however, none of them were able to get fully rid of the virus.
This completes up step 1 to 5.
-The above were the downsides. On a positive note, even though the virus persisted, my system has become insanely fast. I used to experience significant lag; even my mom's laptop was faster than my high-speed hand-built computer. I can say with certainty that that is no longer the case.
It may be that something is hiding the virus or restoring it, so this might be worth checking out.
See "Anti-Rootkit: Detection, Removal & Protection": http://www.antirootkit.com/software/index.htm.
• BlackLight - It can detect rootkits like Rootkit Revealer but can also remove them. http://www.f-secure.com/blacklight/
• PANDA ROOTKIT CLEANER - Panda Rootkit Cleaner - http://research.pandasoftware.com/blogs/images/AntiRootkit.zip; also see http://research.pandasoftware.com/blogs/research/archive/2007/04/02/Panda-AntiRootkit-Released.aspx or http://www.pandasoftware.com/.
• AVG ANTI-ROOTKIT - AVG Anti-Rootkit http://free.grisoft.com/doc/avg-anti-rootkit-free/lng/us/tpl/v5.
Used Rafael's solution.
This was the sequence of actions I used to get rid of these damn files:
Check the processes of Windows Task Manager for .exe files with numbers followed by "exmodula" plus a letter, for example:
As it was written above, this name varies, in my computer I had several different files, some using "exmodulaf" and "exmodulag". End the process.
Next, go to your
C:\Documents and Settings\Rafael\Local Settings\Temp\
where "Rafael" varies according to the username on your computer. You'll find several files that follow the format described above (**exmodula*.exe). Delete them.
Now perform a search of your registry for the word "exmodula". You'll probably find references to it in the HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Services\SharedAccess\Parameters\FirewallPolicy\StandardProfile\AuthorizedApplications\List key. In this key you'll find something like this:
C:\DOCUME~1\Rafael\LOCALS~1\Temp\46exmodulag.exe:*:Enabled:Microsoft Update
What this key does is create a fake entry in Windows Firewall, under the name "Microsoft Update", for each new **exmodula*.exe file it creates. Remove this entry from the registry.
I thought this was enough, but no, those damn files kept coming back after a while!
So I ran HijackThis 1.99.1 (wonderful little program, by the way) and it found the file smss.exe (the file responsible for automatic Windows updates) running in the C:\WINDOWS\system\ folder, which is wrong. This file is responsible for generating the **exmodula*.exe files. Delete it.
NOTICE: the smss.exe file running under C:\WINDOWS\system32\ is a legal file, do not touch it!
Now search your registry for smss.exe and you'll find references to it under these keys; delete them.
Congratulations, it's done. I hope Google will find this answer; the only reference to this trojan was made here. :)
Going to do restart to see if it works
PROBLEM SOLVED! Working method = Rafael's method.
Why Avast, and all the other scanners are unable to ditch this virus is beyond me.
Honestly people, thank you so much for your effort and time
I hope Avast reads the forum and adds an option to auto-upload viruses on selection = yes.
I mean, sadly, this thread wasn't much about avast at all; it was an enormous cocktail of spybot downloads and manual overrides. But it was worth it.
You guys rock!!!
Thanks for the follow up
Errata for Data Structures & Algorithms in Kotlin 2nd Edition
Creating this topic to catch any typos and bugs in the 2nd Edition of Data Structures & Algorithms in Kotlin.
There seem to be some problems with the challenge problem for Chapter 11, that is,
implementing a function ArrayList.findIndices(value: T) where T: Comparable
which returns a range of indices satisfying this[index] == value.
// -> This will throw a StackOverflowException
// with the subroutine call
// startIndex(2, 0..3) going into an infinite recursion
// on the empty range 2..1
The problem here is with missing a check for emptiness of the range.
A second problem, with both subfunctions startIndex and endIndex on pages 221 and 222, is the check for middleIndex lying on either boundary of the list.
val arr = arrayListOf(0,1)
// → returns null where it should return 0..0
Here the first computed middleIndex 1 is already the right boundary of the list.
Then the following will falsely return null in the previous example:
// 2
if (middleIndex == 0 || middleIndex == size - 1) { // ?????
    return if (this[middleIndex] == value) {
        middleIndex
    } else {
        null // ?????
    }
}
I didn’t quite succeed in fixing every issue with the book implementation though.
There seem to be yet more issues with corner cases.
I did however find a solution which is loosely based on the approach from the following video lecture
Here’s an alternative solution:
/**
 * binaryFindFirst is only applicable on 'this' with argument prop if
 * this.map{ prop(this,it) } == [false, ... , false, true, ... , true]
 * or
 * this.map{ prop(this,it) } == [false, ... , false]
 * or
 * this.map{ prop(this,it) } == [true , ... , true]
 * In other words:
 * If there exist indices i such that prop(this, i) == true,
 * then the list 'this' is partitioned such that all indices j with
 * prop(this, j) == false lie on the left, and those indices
 * i with prop(this, i) == true lie on the right.
 */
fun <T> List<T>.binaryFindFirst(
    prop: (l: List<T>, ind: Int) -> Boolean
): Int? {
    var res: Int? = null
    var l = 0
    var r = this.lastIndex
    while (l <= r) {
        val m = l + (r - l) / 2
        if (prop(this, m)) {
            res = m
            r = m - 1
        } else {
            l = m + 1
        }
    }
    return res
}
/**
 * binaryFindLast is only applicable on 'this' with argument prop if
 * this.map{ prop(this,it) } == [true, ... , true, false, ... , false]
 * or
 * this.map{ prop(this,it) } == [false, ... , false]
 * or
 * this.map{ prop(this,it) } == [true , ... , true]
 * In other words:
 * If there exist indices i such that prop(this, i) == true,
 * then the list 'this' is partitioned such that all indices j with
 * prop(this, j) == false lie on the right, and those indices
 * i with prop(this, i) == true lie on the left.
 */
fun <T> List<T>.binaryFindLast(
    prop: (l: List<T>, ind: Int) -> Boolean
): Int? {
    var res: Int? = null
    var l = 0
    var r = this.lastIndex
    while (l <= r) {
        val m = l + (r - l) / 2
        if (prop(this, m)) {
            res = m
            l = m + 1
        } else {
            r = m - 1
        }
    }
    return res
}
/**
 * findIndices only applies to sorted lists.
 */
fun <T : Comparable<T>> List<T>.findIndices(value: T): IntRange? {
    // 1
    val left = binaryFindFirst { list, index ->
        list[index] >= value
    } ?: return null
    // 2
    if (this[left] != value) return null
    // 3
    val right = binaryFindLast { list, index ->
        list[index] <= value
    } ?: return null
    // 4
    return left..right
}
1. The list must by assumption be sorted and any indices k
such that list[k] >= value are on the right.
So binaryFindFirst is applicable.
2. By assumption the list is sorted.
Thus if the first index k such that list[k] >= value
does not satisfy list[k] == value, then list[k] > value
and there can be no index j satisfying
list[j] == value. Hence we can return null.
3. The list is by assumption sorted and all indices k such that
list[k] <= value lie on the left (if any exist).
So binaryFindLast is applicable.
4. If there exists any index k with list[k] == value,
then all indices j within left..right satisfy list[j] == value.
I guess technically for this problem we don't need to pass both list and index to the lambdas. But this way the binaryFindFirst and binaryFindLast methods are also applicable in situations where the property does not depend only on the list's value at the current index.
For instance, there is a problem from the above video around minute 14:
Assume we are given an array that was sorted from small to big and then changed by applying a circular shift; e.g. [6,7,9,15,19,2,3] is the sorted array [2,3,6,7,9,15,19] after a two-fold circular left shift.
Find the smallest element in such a rotated sorted array.
In this case we can just use our binaryFindFirst method to find the first element smaller or equal to the leftmost entry.
fun <T: Comparable<T>> List<T>.findMinInRotatedList(): Int =
binaryFindFirst { list, index -> list[index] <= list[0] }!!
Here the property depends not only on the value at the current index,
but also on the value at index 0.
[Edited for better names and readability.]
Hi, great book, massive help so far.
There seems to be a double print of the code associated with Creating a Vertex, in Chapter 19 - Graphs. The code for createVertex() is printed twice in the code section, but each with a different set
of steps.
The same thing happens in the same chapter, couple of pages further along, in the Visualize an adjacency matrix section, the toString() method is printed twice in the code section
Since I’m a new user here I cannot attach more than 1 media element but you can see this in the section mentioned above
Thanks a lot for all your efforts, this book helped a lot in everything
yes. thanks for information
I already bought the 1st edition, how different is the 2nd edition??
Is there an “errata” page that can save me another $60 ???
@jellodiil There seems to be a mistake in Chapter 5 Queues, in DoubleLinkedList.kt’s remove() method:
The code is:
fun remove(node: Node<T>): T {
    val prev = node.previous
    val next = node.next

    if (prev != null) {
        prev.next = node.previous
    } else {
        head = next
    }
    next?.previous = prev

    if (next == null) {
        tail = prev
    }

    node.previous = null
    node.next = null

    return node.value
}
However, I believe it’s meant to be the following code below. The change is inside the first if statement:
fun remove(node: Node<T>): T {
    val prev = node.previous
    val next = node.next

    if (prev != null) {
        prev.next = node.next
    } else {
        head = next
    }
    next?.previous = prev

    if (next == null) {
        tail = prev
    }

    node.previous = null
    node.next = null

    return node.value
}
If a node is removed, then the previous node’s next value should point to the removed node’s next value: prev.next = node.next Does this seem accurate? Trying to use the current implementation
produces errors if you remove elements from a Linked List and try to print out the new list.
Thanks for reporting this!
I’ve noted it for the team to have a look!
Hey guys. It seems to me that you have an error in the code on page 196 when removing a TrieNode. Adding parent = parent.parent at the bottom of the while loop (right below the current = parent statement) will do the trick.
Page 227.
Refactor suggestion:
replace this block of code
val leftChildIndex = index(element, leftChildIndex(i))
if (leftChildIndex != null) return leftChildIndex // 4
val rightChildIndex = index(element, rightChildIndex(i))
if (rightChildIndex != null) return rightChildIndex // 5
return null // 6
with this:
return index(element, leftChildIndex(i))?: index(element, rightChildIndex(i))
Page 230.
replace this:
val inverseComparator = Comparator { o1, o2 → // 2
override fun compare(o1: Int, o2: Int): Int =
with this:
val inverseComparator = Comparator { o1, o2 → // 2
Electric Potential | Brilliant Math & Science Wiki
Do you know why water flows when there is a slope? The reason turns out to be the difference in the heights of the two regions, which means there is a difference in the potential energy, causing the
water to flow so that a state of equilibrium is attained. For a detailed analysis see this.
But what is potential? Well, we have already seen its definition in our mechanics wikis, and we know that it's a form of energy.
But potential takes many different forms in physics, and this wiki is concerned with the electrical meaning of the term and its applications. As potential is a form of energy, which is a scalar quantity, calculations with it are easier than those involving forces, which are vectors. Basically, electric potential is defined as the work done per unit charge in moving a point charge from one point to another under a constant electric field, and we find the formula to be \(V=W/Q\).
Motivation decided: energy is a scalar and enables easier calculations than sticking with forces.
Gravitational Potential
Let us begin our journey into electric potential energy by relating it to gravitational potential energy, since the two are closely analogous. We
know from Newton's inverse-square law that
\[\vec{F_g}=-\dfrac{GMm}{r^2}\hat r.\]
We had also seen that the potential energy gained by an object when raised from a point \(h_A\) to \(h_B\) through a height of \(\Delta h\) was equivalent to the work done against gravity to bring
the object to that point:
\[W_g=\displaystyle\int \vec{F_g} \cdot\vec{ds}=-\displaystyle\int_{h_A}^{h_B} mg \cdot dh = -mg\Delta h.\]
Now let us try to define a function that gives us the gravitational potential difference between two points. But how do we define it? Well, we will just assume that the gravitational field at a given
point is \(\frac{F_g}{m}=g\). Then
\[V_g=-\displaystyle\int_{h_A}^{h_B} \dfrac{\vec{F_g}}{m}\cdot \vec{ds}=-\displaystyle\int_{h_A}^{h_B} \vec{g}\cdot\vec{ds}.\]
Electric Potential
Now, let us try to relate the gravitational potential to the topic of our concern. If you have read Coulomb's law, you will have come across this equation, which gives the force experienced
between two charges:
\[\vec{F_e}=\dfrac{kq_1q_2}{r^2}\hat r\]
We can also define the same function for electric potential and find the electric potential difference, where \(V_e\) is the potential difference function, which defines the negative work done in
moving a test charge from a point \(a\) to \(b\):
\[V_e=-\displaystyle\int_{a}^{b} \dfrac{\overrightarrow{F_e}}{q}\cdot \vec{dl}=-\displaystyle\int_{a}^{b} \vec{E}\cdot\vec{dl}.\]
We had previously defined the electric potential as the work done per unit charge by the electric field in moving a point charge through a certain distance. And we also know that the electric field and work are
defined as
\[ \vec{E}=\dfrac{\overrightarrow{F}}{q},\qquad W=\int_a^b \overrightarrow F\cdot \vec{dl} \implies V=\int_{a}^{b} \overrightarrow{E}\cdot\vec{dl} =\int_{a}^{b}\dfrac{\overrightarrow F}{q}\cdot\vec{dl} = \dfrac{W}{q}. \]
A 9-V battery has an electric potential difference of \(9\text{ V}\) between the positive and negative terminals. How much kinetic energy in J would an electron gain if it moved from the negative
terminal to the positive one?
Details and Assumptions:
• The charge on the electron is \(-1.6 \times 10^{-19}~\mbox{C}\).
• You may assume energy is conserved (so no drag or energy loss due to resistance for the electron).
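The answer follows directly from \(W = qV\). A one-line numerical check in plain Python (variable names are just for illustration):

```python
# Kinetic energy gained by an electron crossing a 9 V potential difference.
# W = qV, with the elementary charge magnitude e = 1.6e-19 C as given above.
e = 1.6e-19      # magnitude of the electron charge, in coulombs
V = 9.0          # potential difference, in volts

kinetic_energy = e * V   # energy gained, in joules
print(kinetic_energy)    # approximately 1.44e-18 J
```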
Relationship between Potential and E-field
There is an important relationship between electric field and potential. To understand this, make sure you have read Electric Fields and Electrostatics.
\[V_a - V_b = \int_a^b{\overrightarrow{E}\cdot\overrightarrow{dl}}\]
If the electric field \(\overrightarrow{E}\) at various points is known, we can use the above equation to calculate the potential differences between any two points. In some cases, we may need to
find the electric potential at a point rather than a potential difference. In that case, the potential is calculated as the work done in bringing a unit positive charge from infinity to the given point.
We have
\[V_a - V_{\infty} = \int_a^{\infty}{\overrightarrow{E}\cdot\overrightarrow{dl}}.\]
We know that \(V_{\infty} = 0\). So this equation can be written as
\[V_a = - \int_{\infty}^a{\overrightarrow{E}\cdot\overrightarrow{dl}}. \]
In some cases, the potential at point is given and we may need to find electric field at that point. In such cases, we have the relation
\[\overrightarrow{E} = - \overrightarrow{\nabla}V.\]
This is read \(“\overrightarrow{E}\) is the negative gradient of \(V."\) The quantity \(\overrightarrow{\nabla}V\) is called the potential gradient.
A conservative field is one that can be written as the gradient of a scalar potential \(V\) as
\[\vec{E}=\nabla V.\]
The scalar potential of the electric field is known to be \(V=\frac{kQ}{r}\). The gradient in spherical coordinates is \(\nabla = {\partial \over \partial r}\hat{r} + {1 \over r}{\partial \over \
partial \theta}\hat{\theta} + {1 \over r\sin\theta}{\partial \over \partial \varphi}\hat{\varphi}\), so
\[\nabla V ={\partial V \over \partial r}\hat{r} + {1 \over r}{\partial V \over \partial \theta}\hat{\theta} + {1 \over r\sin\theta}{\partial V \over \partial \varphi}\hat{\varphi}.\]
Evaluate each of the partial derivatives of the scalar potential equation, \(V=\frac{kQ}{r}:\)
\[ \begin{aligned} \dfrac{\partial V }{\partial r} &=- \dfrac{kQ}{r^2}\\ \dfrac{\partial V }{\partial \theta} &= 0\\ \dfrac{\partial V }{\partial \varphi} &= 0. \end{aligned} \]
Using these partial derivatives with the definition for the gradient, only the \(\hat{r}\) component survives, since the other components contain partial derivatives that are equal to 0:
\[ \begin{aligned} \vec{E} &={\partial V \over \partial r}\hat{r} + {1 \over r}{\partial V \over \partial \theta}\hat{\theta} + {1 \over r\sin\theta}{\partial V \over \partial \varphi}\hat{\varphi}\\ &=-\dfrac{kQ}{r^2}\hat{r}. \end{aligned} \]
Note: The negative sign is the reason that the relationship between the electric field and scalar potential is actually written \(\vec{E}=-\nabla V.\)
The following examples are applications of the potential function:
In the \( xy\)-plane, the electric potential at a point \((x,y) \) is given by the relation \( V(x,y) = x^{2}y^{3} + xy^{5} \). Find the electric field vector at the point \( (2,1) \).
We are given that \( V(x,y) = x^{2}y^{3} + xy^{5} \). Using the relation \( \vec{E} =-\vec{\nabla}V, \)
\[ \begin{aligned} \vec{E} &= - \dfrac{\partial}{\partial x}\left[x^{2}y^{3} + xy^{5}\right] \hat{i} - \dfrac{\partial}{\partial y}\left[x^{2}y^{3} + xy^{5}\right] \hat{j} \\ &= -\left[2xy^{3} + y^{5}\right] \hat{i} - \left[3x^{2}y^{2} + 5xy^{4}\right] \hat{j}. \end{aligned} \]
Substituting the point \( (x,y) = (2,1), \)
\[ \begin{aligned} \vec{E} &= -\left[2\cdot2\cdot(1)^{3} + (1)^{5}\right]\hat{i} - \left[3\cdot(2)^{2}\cdot(1)^{2} + 5\cdot(2)\cdot(1)^{4}\right]\hat{j} \\ &= -5\hat{i} - 22\hat{j} \\ \left|\vec{E}\right| &= \sqrt{25+484} = \sqrt{509} \\ \Rightarrow \vec{E} &= \sqrt{509}\,\hat{r}, \end{aligned} \]
where \( \hat{r} = -\dfrac{5}{\sqrt{509}}\hat{i} -\dfrac{22}{\sqrt{509}}\hat{j}. \)
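The analytic gradient can be double-checked numerically with central finite differences. A sketch in plain Python (the step size `h` is an arbitrary small number, not from the text):

```python
# Verify E = -grad(V) for V(x, y) = x^2*y^3 + x*y^5 at the point (2, 1),
# using central finite differences to approximate the partial derivatives.
def V(x, y):
    return x**2 * y**3 + x * y**5

def E_field(x, y, h=1e-6):
    Ex = -(V(x + h, y) - V(x - h, y)) / (2 * h)
    Ey = -(V(x, y + h) - V(x, y - h)) / (2 * h)
    return Ex, Ey

Ex, Ey = E_field(2, 1)
print(round(Ex), round(Ey))  # -5 -22, matching the analytic result above
```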
In three dimensional space, the electric field at a certain point is given by
\[\vec{E} = \dfrac{5}{y}\hat{i} - 5 \dfrac{x}{y^{2}}\hat{j} + 8\hat{k}. \]
If \( V_{a}\) and \(V_{b} \) are the electric potentials at the points \( A=(1,1,1) \) and \( B=(2,2,2), \) respectively, find \( V_{b} - V_{a} \) given that the potential at the point \( (0,1,0)
\) is 5 volts.
Using the relation \( V(x,y,z) = \displaystyle \int \vec{E}\cdot\vec{dl} + c, \)
\[ \begin{aligned} V(x,y,z) &= \int \left[\dfrac{5}{y}\hat{i} - 5 \dfrac{x}{y^{2}}\hat{j} + 8\hat{k}\right]\cdot\left[dx\,\hat{i} + dy\,\hat{j} + dz\,\hat{k}\right] + c \\ &= \int \left[5\dfrac{dx}{y} - 5\dfrac{x}{y^{2}}\,dy + 8\,dz \right] + c \\ &= \int 5\,\dfrac{y\,dx - x\,dy}{y^{2}} + 8z + c \\ &= \int 5\,d\!\left(\dfrac{x}{y}\right) + 8z + c \\ &= 5\dfrac{x}{y} + 8z + c\text{ (volts)}. \end{aligned} \]
Given that \( V(0,1,0) = 5 \): \(5 = 0 + 0 + c \implies c = 5, \) so
\[ \begin{aligned} V(x,y,z) &= 5\dfrac{x}{y} + 8z + 5 \\ V_{a} &= V(1,1,1) = 5\cdot\dfrac{1}{1} + 8(1) + 5 = 18 \\ V_{b} &= V(2,2,2) = 5\cdot\dfrac{2}{2} + 8(2) + 5 = 26 \\ V_{b} - V_{a} &= 26 - 18 = 8 \text{ (volts)}. \end{aligned} \]
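As a quick sanity check, the recovered potential \(V(x,y,z) = 5x/y + 8z + 5\) can be evaluated directly (plain Python, values from the example):

```python
# Potential recovered in the example: V(x, y, z) = 5x/y + 8z + c,
# with c = 5 fixed by the condition V(0, 1, 0) = 5.
def V(x, y, z, c=5):
    return 5 * x / y + 8 * z + c

Va = V(1, 1, 1)   # 18
Vb = V(2, 2, 2)   # 26
print(Vb - Va)    # 8.0 volts, as derived above
```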
Finding Electric Potential in Different Conditions
Potential due to a Point Charge
Let's compute the potential at any point A located at a distance of \(r_a\) from the point charge Q.
Using our relation \( \displaystyle V_a = - \int_{\infty}^a{\overrightarrow{E}\cdot \overrightarrow{dl}}, \) we can easily solve it.
\(\overrightarrow{E}\) for a point charge is given by \(\displaystyle \overrightarrow{E} = \frac{kQ}{r^2}\hat r\). Plugging it into the equation \( \displaystyle V_a = - \int_{\infty}^a{\overrightarrow{E}\cdot \overrightarrow{dl}}, \) we get
\[\boxed{V_a = \dfrac{kQ}{r_a}}.\]
Potential between two charges
Potential inside conductors
Potential due to a dipole
Potential with Spherical Symmetry
To solve problems related to this topic, we will use Newton's Shell theorem. Although it is for gravitational fields, we can apply it here as well.
Potential inside and outside charged shell:
For potential in the case of a spherical shell, i.e. a spherical charge configuration with charge uniformly distributed over the surface of the body, we will consider 2 cases. The first case will
discuss the potential due to the shell on and outside of it, whereas the second case will discuss it in the shell.
Consider a spherical shell of radius \(R\), with charge \(Q\) distributed uniformly on its surface with charge density \(\sigma\). Now, according to the shell theorem, this entire charge can
be considered at the center of this configuration, i.e. point 'O'. So, we have successfully converted our complex charged body into a simple configuration for which Coulomb's law can easily
be applied.
□ Case 1: According to Coulomb's law, the electric field due to a point charge at distance \(r\) from it is given by \[\vec{E}=\dfrac { 1 }{ 4\pi { \varepsilon }_{ 0 } } \dfrac { Q }{ { r }^{ 2 } }\hat r,\] and hence its potential, taking \(V(\infty)=0\), is \[V(r)=-\displaystyle\int_\infty^r\overrightarrow{ E }\cdot \vec{dr}= -\displaystyle\int_\infty^r \dfrac{1}{4\pi\varepsilon_0} \dfrac{Q}{r^{2}}\, dr =\dfrac { 1 }{ 4\pi\varepsilon_0}\dfrac Qr.\] And since the shell theorem holds true for our configuration, the electric potential in this case is given by the same expression, \[V(r)=\dfrac { 1 }{ 4\pi \varepsilon _{ 0 } } \dfrac { Q }{ r }, \] where \(r\) acts as the distance from the center of the sphere to the point at which the electric potential is to be calculated. This can also be written as \[V(r)=\dfrac { 1 }{ 4\pi { \varepsilon }_{ 0 } } \dfrac { \sigma (4\pi { R }^{ 2 }) }{ r } =\dfrac { 1 }{ \varepsilon _{ 0 } } \dfrac { \sigma { R }^{ 2 } }{ r },\] where \(\sigma\) is the surface charge density of the shell.
□ Case 2: For the second case, we will begin by proving a little fact that brings us the famous result that the electric field inside a spherical shell is zero. Consider a hypothetical spherical surface inside the shell, of radius \(r_{0}\). Since this hypothetical sphere does not enclose any charge, by Gauss's law the electric flux through it, and hence the electric field on it, is zero. This holds for every such hypothetical surface inside the shell, which gives us our final conclusion that "the electric field inside a charged spherical shell is 0."
Using this result: since the electric field inside the shell is zero and the potential difference between two points is given by \(\int { E\cdot dr } \), the potential inside the spherical shell turns out to be a constant, equal to the potential of the surface of the shell, i.e. \[{ V }_{\text{Inside}}={ V }_{\text{Surface}}=\dfrac { 1 }{ 4\pi { \varepsilon }_{ 0} } \dfrac { Q }{ R }. \]
Potential inside and outside charged sphere:
Let us try to divide this into \(3\) cases. First, let's calculate the potential outside the sphere; second, let's calculate the potential on the surface; finally, let's calculate the potential
inside the sphere. Let us have a sphere of radius \(R\) with a uniform charge density \(\sigma\) carrying a charge \(Q\).
To calculate the potential at any point outside the conductor, we will figure out the potential difference between any two arbitrary points \(a\) and \(b\) situated at distances \(r_a\) and \(r_b
\) \((r_a<r_b),\) and then we will assume that one of them is at infinity. So the potential difference between the two points will be
\[ \begin{aligned} V_a - V_b &= \displaystyle\int_a^b\vec{E}\cdot \overrightarrow{dl}\\ &=\displaystyle\int_{r_a}^{r_b}\dfrac{Q}{4\pi\epsilon_0r^2}\,dr\\ &=-\dfrac{Q}{4\pi\epsilon_0}\left. \dfrac 1r\right|_{r_a}^{r_b}\\ &=\dfrac{Q}{4\pi\epsilon_0}\left(\dfrac{1}{r_a}-\dfrac{1}{r_b}\right). \end{aligned} \]
Now, let us assume \(r_b\rightarrow \infty,\) then \(V_b=0\). Therefore the potential at any point outside the conductor will be
\[\boxed{V_{\text{Outside}}=\dfrac{Q}{4\pi\epsilon_0}\dfrac 1{r_a}}.\]
Now, we can deduce the formula for the second case, i.e. for the potential on the surface of the sphere, by substituting \(r_a=R\), and we will arrive at this equation
\[\boxed{V_{\text{Surface}}=\dfrac{Q}{4\pi\epsilon_0}\dfrac 1R}.\]
Finally, we know that the electric field inside the sphere is zero because the charge is always collected at the surface, so \(\vec E =0\) and hence the potential inside a uniformly charged solid
sphere is
\[ \begin{aligned} V_{\text{Any point inside}} - V_{\text{Surface}} &= \int_{\text{Any point inside}}^{\text{Surface}}{\overrightarrow{E}\cdot\overrightarrow{dl}} = 0 \\ \Rightarrow V_{\text{Any point inside}} &= V_{\text{Surface}} = \dfrac{Q}{4\pi\epsilon_0}\dfrac 1R. \end{aligned} \]
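The three cases combine into one piecewise potential profile: constant inside, falling as \(1/r\) outside, and continuous at the surface. A small sketch in Python (the numeric values of \(Q\) and \(R\) below are made-up illustrations, not from the text):

```python
# Potential of a sphere whose charge sits on the surface:
# V = kQ/R for r <= R (constant inside), V = kQ/r outside.
k = 8.99e9       # Coulomb constant, N*m^2/C^2
Q = 1e-9         # total charge: 1 nC (illustrative)
R = 0.10         # sphere radius: 10 cm (illustrative)

def potential(r):
    return k * Q / R if r <= R else k * Q / r

# Constant inside, continuous at the surface, 1/r falloff outside.
print(potential(0.05), potential(0.10), potential(0.20))
```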
The above graph shows the electric potential inside and around a charged sphere as a function of \(r,\) the distance from the center of the sphere. If the radius of the sphere is \(R_0,\) which of the following statements are correct?

a) The charged sphere is charged positively.

b) The intensity of the electric field inside the charged sphere is zero.

c) The intensity of the electric field at \(r=2R_0\) is twice as large as that at \(r=4R_0.\)

Options: a) only; a) and b); a) and c); c) only.
We have all seen lightning, that spark in the sky when the weather isn't too nice. But trust me when I say that it is not as boring as it seems! We know from the previously discussed
topics that two bodies at different potentials will always cause charge to flow from the higher-potential region to the lower one. Well, that is all lightning is!
During a storm, or in humid weather, the air between the clouds, and between the clouds and the ground, gets partially ionized, i.e. it allows charge to flow through it, unlike the neutral air we
have in dry weather. A potential difference developed between the two surfaces then drives a flow of charge, and an electric discharge is produced in the form of a zig-zag bolt. This sudden
discharge occurs because both bodies carry a very large amount of charge, which makes them very unstable, and they come to equilibrium through this process.
Now, the basic question that arises is, "Why zig-zag?"
The reason is simple: the path of least resistance is what every single charge in the lightning strike seeks. Moreover, the atmosphere varies in humidity, temperature,
pressure and more as we move through it. This causes fluctuations in its resistance, so the path is never straight; rather, it is one of the millions of possible paths that a
lightning surge could have taken!
Read Capacitors and Series and Parallel Capacitors for information on its formulas.
Basically, a capacitor is an instrument that stores electric charge; it does so using a simple principle. Let's see how a capacitor works. A cap (as we call it) just has two metal plates
inside; they act as the charge storage for it.
These metal plates, when connected to the two terminals of a battery, start accumulating the respective charges, i.e. the plate connected to the positive terminal accumulates
positive charge and the plate connected to the negative terminal accumulates negative charge. This creates a potential difference between the two plates, and hence the cap acts as a temporary
charge store.
Let us now formulate some useful formulae. Suppose that we have a capacitor with two plates of equal area \(A\), that each plate stores a charge of \(\pm Q\) with uniform charge density \(\sigma\), and that the plates are separated by a distance \(d\) with a dielectric between them of permittivity \(\epsilon_0\). We can now say that the voltage between the plates is
\[ \begin{aligned} V &=\int_0^d E\, dl =\int_0^d \dfrac{\sigma}{\epsilon_0}\,dl =\dfrac{Qd}{\epsilon_0A},\\ \text{but } V&=Ed\\ \Rightarrow Ed&=\dfrac{Qd}{\epsilon_0A}\\ E&=\dfrac{Q}{\epsilon_0A}. \end{aligned} \]
\[\dfrac{Q}{\epsilon_0A}=\dfrac{V}{d}\implies Q=\dfrac{\epsilon_0A}d V \implies \boxed{Q=CV}.\]
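A quick numerical illustration of \(C = \epsilon_0 A/d\) and \(Q = CV\) (all numeric values below are made-up sample inputs, not from the text):

```python
# Parallel-plate capacitor: C = eps0 * A / d, and the stored charge Q = C * V.
eps0 = 8.854e-12   # vacuum permittivity, F/m
A = 0.01           # plate area, m^2 (10 cm x 10 cm, illustrative)
d = 1e-3           # plate separation, m (illustrative)
V = 12.0           # applied voltage, V (illustrative)

C = eps0 * A / d   # capacitance in farads
Q = C * V          # stored charge in coulombs
print(C, Q)
```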
Equipotential surfaces and shielding apparatuses:
Well, you can easily decipher from the word equipotential surfaces that we are going to discuss surfaces which have equal potential.
A dipole consists of two point charges \(+Q\) and \(-Q\) a distance of \(d\) apart. Place the first charge at the origin and the second at \(r=-d\). The potential is simply the sum of the
potential for each charge:
\[\phi=\frac {1}{4\pi\epsilon_0}\left(\frac {Q}{r}-\frac {Q}{|r+d|}\right).\]
More precisely, the electric field is just the sum of the electric fields made by the two point charges, and it is not spherically symmetric. The leading-order contribution is governed by the combination \(p=Qd\), known as the electric dipole moment; it points from the negative charge to the positive. The dipole electric field is \(E=-\nabla\phi=\frac {1}{4\pi\epsilon_0}\left(\frac{3(\vec p\cdot\hat{r})\hat{r}-\vec p}{r^3}\right)+\cdots\). Since \(\phi\) is constant on a conductor, the surface of the conductor must be an equipotential. This implies that \(E=-\nabla\phi\) is perpendicular to the surface: any component of the electric field tangential to the surface would make the surface charges move. The sign of the force on a charge depends on where it sits in space: in some regions the force is attractive, in others repulsive.
|
{"url":"https://brilliant.org/wiki/electric-potential/","timestamp":"2024-11-10T02:44:45Z","content_type":"text/html","content_length":"77972","record_id":"<urn:uuid:b05f8ad5-bbeb-4b3a-b5fb-6ff793501229>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.3/warc/CC-MAIN-20241110005602-20241110035602-00824.warc.gz"}
|
Quantum Superposition | Brilliant Math & Science Wiki
Trying to understand what "both dead and alive" means ...
Before studying quantum mechanics, the idea that something can be two things at the same time looks quite weird. Let's try to convince ourselves that it's not.
Let's imagine we have a box full of balls of different colors. We cannot see inside the box, but we can pull out a ball and see its color. Not being able to see all the balls inside, we do not know which color the ball we pull out is going to be until we see it. But we know for sure that if we see a red ball, it must have been a red ball even when it was inside the box. If we want to make a prediction about the color of the next ball, we can define the probability as the ratio between the number of extracted balls of that color and the total number of balls extracted:
\[P_{\text{red ball}}=\frac{\text{Number of red balls extracted}}{\text{Number of balls extracted}}.\]
This is how classical probability works.
Let's now imagine that we have an LCD ball, just one, inside the box. When we put the ball in the box, where we can't see it, a device switches on and starts to randomly cycle through patterns of the allowed colors, even showing several together. When we pull the ball out, the device detects that we grabbed it, and not only does it stop mixing colors, it also settles the ball on a single one of the allowed colors. This is how quantum probability works.
From our point of view, there is no difference between the two systems, because when we watch a ball, we see a color, and we can measure the probability of each color to be seen. Nevertheless, there
is a deeper difference.
While we could in principle break the classical box and count every single ball to know the probability, we cannot break our LCD ball, because the probability of seeing a color is not given by our
ignorance of the details of our system (aka the number of the balls of each color) but it is an a priori probability, which is intrinsic of the ball.
Why would a cat be both "dead and alive", then? Because its life or death depends on a system whose laws are described by quantum mechanics. The system, just like the LCD ball, can be in a superposition of several states at the same time, hence both switched on and off, black and white, or dead and alive.
Note that the cat being simultaneously dead and alive is only a paradox: quantum superpositions are not observed in systems much larger than a handful of atoms, because interaction with the environment destroys them.
Luckily and unluckily for the cat.
|
{"url":"https://brilliant.org/wiki/quantum-superposition/","timestamp":"2024-11-05T15:30:47Z","content_type":"text/html","content_length":"42468","record_id":"<urn:uuid:99e70521-d1c2-4ab0-a53e-e65793016fdd>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00542.warc.gz"}
|
B37: Maxwell’s Equations
In this chapter, the plan is to summarize much of what we know about electricity and magnetism in a manner similar to the way in which James Clerk Maxwell summarized what was known about electricity
and magnetism near the end of the nineteenth century. Maxwell not only organized and summarized what was known, but he added to the knowledge. From his work, we have a set of equations known as
Maxwell’s Equations. His work culminated in the discovery that light is electromagnetic waves.
In building up to a presentation of Maxwell’s Equations, I first want to revisit ideas we encountered in chapter 20 and I want to start that revisit by introducing an easy way of relating the
direction in which light is traveling to the directions of the electric and magnetic fields that are the light.
Recall the idea that a charged particle moving in a stationary magnetic field
experiences a force given by
\[\vec{F}=q\vec{v}_p \times \vec{B} \nonumber \]
This force, by the way, is called the Lorentz Force. For the case depicted above, by the right-hand rule for the cross product of two vectors, this force would be directed out of the page.
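The right-hand rule for this force can be verified directly with a cross product. A minimal sketch with unit values (the numbers are illustrative, not from the text):

```python
import numpy as np

def lorentz_force(q, v, B):
    """Magnetic part of the Lorentz force, F = q v x B."""
    return q * np.cross(v, B)

# A positive charge moving in +x through a field pointing in +y:
# with x rightward and y upward on the page, +z is "out of the page".
F = lorentz_force(1.0, np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0]))
assert np.allclose(F, [0.0, 0.0, 1.0])
```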
Viewing the exact same situation from the reference frame in which the charged particle is at rest we see a magnetic field moving sideways (with velocity \(\vec{v}=-\vec{v}_p\) ) through the
particle. Since we have changed nothing but our viewpoint, the particle is experiencing the same force.
We introduce a “middleman” by adopting the attitude that the moving magnetic field doesn’t really exert a force on the charged particle, rather it causes an electric field which does that. For the
force to be accounted for by this middleman electric field, the latter must be in the direction of the force. The existence of light indicates that the electric field is caused to exist whether or
not there is a charged particle for it to exert a force on.
The bottom line is that wherever you have a magnetic field vector moving sideways through space you have an electric field vector, and, the direction of the velocity of the magnetic field vector is
consistent with
\[\mbox{direction of} \, \vec{v}=\mbox{direction of} \, \vec{E}\times \vec{B}. \nonumber \]
You arrive at the same result for the case of an electric field moving sideways through space. (Recall that in chapter 20, we discussed the fact that an electric field moving sideways through space
causes a magnetic field.)
The purpose of this brief review of material from chapter 20 was to arrive at the result \(\mbox{direction of} \, \vec{v}=\mbox{direction of} \, \vec{E}\times \vec{B}\). This direction relation will come in handy in our discussion of two of the four equations known as Maxwell’s Equations.
One of Maxwell’s Equations is called Faraday’s Law. It brings together a couple of things we have already talked about, namely, the idea that a changing number of magnetic field lines through a loop
or a coil induces a current in that loop or coil, and, the idea that a magnetic field vector that is moving sideways through a point in space causes an electric field to exist at that point in space.
The former is a manifestation of the latter. For instance, suppose you have an increasing number of downward directed magnetic field lines through a horizontal loop. The idea is that for the number
of magnetic field lines through the loop to be increasing, there must be magnetic field lines moving sideways through the conducting material of the loop (to get inside the perimeter of the loop).
This causes an electric field in the conducting material of the loop which in turn pushes on the charged particles of the conducting material of the loop and thus results in a current in the loop. We
can discuss the production of the electric field at points in space occupied by the conducting loop even if the conducting loop is not there. If we consider an imaginary loop in its place, the
magnetic field lines moving through it to the interior of the loop still produce an electric field in the loop; there are simply no charges for that field to push around the loop.
Suppose we have an increasing number of downward-directed magnetic field lines through an imaginary loop. Viewed from above the situation appears as:
The big idea here is that you can’t have an increasing number of downward-directed magnetic field lines through the region encircled by the imaginary loop without having, either, downward directed
magnetic field lines moving transversely and inward through the loop into the region encircled by the loop, or, upward-directed magnetic field lines moving transversely and outward through the loop
out of the region encircled by the loop. Either way you have magnetic field lines cutting through the loop and with each magnetic field cutting through the loop there has to be an associated electric
field with a component tangent to the loop. Our technical expression for the “number of magnetic field lines through the loop” is the magnetic flux, given, in the case of a uniform (but time-varying)
magnetic field by
\[\Phi_B=\vec{B}\cdot\vec{A} \nonumber \]
where \(\vec{A}\) is the area vector of the region encircled by the loop: its magnitude is the area and its direction is normal to the plane of the loop.
Faraday’s Law, as it appears in Maxwell’s Equations, is a relation between the rate of change of the magnetic flux through the loop and the electric field (produced by this changing flux) in the
loop. To arrive at it, we consider an infinitesimal segment \(dl\) of the loop and the infinitesimal contribution to the rate of change of the magnetic flux through the loop resulting from magnetic
field lines moving through that segment \(dl\) into the region encircled by the loop.
If the magnetic field depicted above is moving sideways toward the interior of the loop with a speed \(v=\frac{dx}{dt}\) then all the magnetic field lines in the region of area \(A=dl\space dx\),
will, in time \(dt\), move leftward a distance \(dx\). That is, they will all move from outside the loop to inside the loop creating a change of flux, in time \(dt\), of
\[d\phi_B=B\space dA \nonumber \]
\[d\phi_B=B\space dl\space dx \nonumber \]
Now, if I divide both sides of this equation by the time \(dt\) in which the change occurs, we have
\[\frac{d\phi_B}{dt}=B\space dl \frac{dx}{dt} \nonumber \]
which I can write as
\[\dot{\phi}_B=B\space dl\space v \nonumber \]
\[\dot{\phi}_B=v\space B\space dl \nonumber \]
For the case at hand, looking at the diagram, we see that \(\vec{B}\) and \(\vec{v}\) are at right angles to each other so the magnitude of \(\vec{v}\times \vec{B}\) is just \(vB\). In that case,
since \(\vec{E}=-\vec{v}\times \vec{B}\) (from equation 20-1 with \(-\vec{v}\) in place of \(\vec{v}_p\)), we have \(E=vB\). Replacing the product \(vB\) appearing on the right side of equation 37-1
\((\dot{\phi}_B=vB\space dl)\) yields \(\dot{\phi}_B=E\,dl\), which I copy here:
\[\dot{\phi}_B=Edl \nonumber \]
We can generalize this to the case where the velocity vector \(\vec{v}\) is not perpendicular to the infinitesimal loop segment in which case \(\vec{E}\) is not along \(\vec{dl}\). In that case the
component of \(\vec{E}\) that is along \(\vec{dl}\), times the length \(dl\) itself, is just \(\vec{E}\cdot \vec{dl}\) and our equation becomes
\[\dot{\phi}_B=-\vec{E} \cdot \vec{dl} \nonumber \]
In this expression, the direction of \(\vec{dl}\) is determined once one decides on which of the two directions in which a magnetic field line can extend through the region enclosed by the loop is
defined to make a positive contribution to the flux through the loop. The direction of \(\vec{dl}\) is then the one which relates the sense in which \(\vec{dl}\) points around the loop, to the
positive direction for magnetic field lines through the loop, by the right hand rule for something curly something straight. With this convention the minus sign is needed to make the dot product have
the same sign as the sign of the ongoing change in flux. Consider for instance the case depicted in the diagram:
We are looking at a horizontal loop from above. Downward is depicted as into the page. Calling downward the positive direction for flux makes clockwise, as viewed from above, the positive sense for
the \(\vec{dl}\)’s in the loop, meaning the \(\vec{dl}\) on the right side of the loop is pointing toward the bottom of the page (as depicted). For a downward-directed magnetic field moving leftward
into the loop, \(\vec{E}\) must be directed toward the top of the page (from \(\mbox{direction of}\space \vec{v}=\mbox{direction of}\space \vec{E}\times \vec{B}\)). Since \(\vec{E}\) is in the
opposite direction to that of \(\vec{dl}\), \(\vec{E} \cdot \vec{dl}\) must be negative. But movement of downward-directed magnetic field lines into the region encircled by the loop, what with
downward being considered the positive direction for flux, means a positive rate of change of flux. The left side of \(\dot{\phi}_B=-\vec{E}\cdot \vec{dl}\) is thus positive. With \(\vec{E}\cdot \vec
{dl}\) being negative, we need the minus sign in front of it to make the right side positive too. Now \(\dot{\phi}_B\) is the rate of change of magnetic flux through the region encircled by the loop
due to the magnetic field lines that are entering that region through the one infinitesimal \(\vec{dl}\) that we have been considering. There is a \(\dot{\phi}_B\) for each infinitesimal \(\vec{dl}\)
making up the loop. Thus there are an infinite number of them. Call the infinite sum of all the \(\dot{\phi}_B\)'s \(\dot{\Phi}_B\) and our equation becomes:
\[\dot{\Phi}_B=-\oint \vec{E}\cdot \vec{dl} \nonumber \]
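The bookkeeping above can be sanity-checked numerically in the simplest case: a spatially uniform, time-varying field through a fixed loop, where the loop integral of \(\vec{E}\cdot\vec{dl}\) (the EMF) equals minus the rate of change of flux. The sketch below uses made-up numbers and compares a finite-difference derivative of the flux against the analytic result:

```python
import numpy as np

B0, w, R = 0.2, 50.0, 0.1        # field amplitude (T), angular freq (rad/s), loop radius (m)
A = np.pi * R**2                 # loop area

def flux(t):
    """Phi_B = B(t) * A for a uniform field B(t) = B0 sin(w t) along the loop axis."""
    return B0 * np.sin(w * t) * A

t, h = 0.003, 1e-7
emf_numeric = -(flux(t + h) - flux(t - h)) / (2 * h)   # EMF = -dPhi/dt, central difference
emf_analytic = -B0 * w * np.cos(w * t) * A
assert abs(emf_numeric - emf_analytic) < 1e-6
```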
|
{"url":"https://phys.libretexts.org/Bookshelves/University_Physics/Calculus-Based_Physics_(Schnick)/Volume_B%3A_Electricity_Magnetism_and_Optics/B37%3A_Maxwells_Equations","timestamp":"2024-11-09T10:50:30Z","content_type":"text/html","content_length":"141162","record_id":"<urn:uuid:be293450-15fb-4d72-b2f3-2a38a2fe07f4>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00597.warc.gz"}
|
H in Baseball: The Most Important Statistic?
Baseball fans know that the H in baseball statistics stands for hits. But what they might not know is that the H is the most important statistic in baseball. Find out why.
H in baseball: the most important statistic?
Is H in baseball the most important statistic? It depends on how you look at it. If you’re looking at offense, then yes, H is the most important stat. But if you’re looking at pitching, then ERA is
more important.
What is H in baseball?
For a pitcher, H counts the hits he allows; as a rate statistic it is usually quoted per nine innings pitched (H/9). The higher the rate, the more hits the pitcher allows. A good H/9 is around 7.5, which means the pitcher allows about seven and a half hits per nine innings pitched. A high H/9 indicates that the pitcher is not preventing hitters from making solid contact with the ball, and is therefore more likely to give up runs.
How important is H in baseball?
In baseball, hits (H) are a measure of a batter’s success rate. They are used to determine batting average and on-base percentage. However, some experts believe that hits are not the best measure of a batter’s ability. In fact, they argue that the most important statistic in baseball is walks plus hits per inning pitched (WHIP).
WHIP measures how many baserunners a pitcher allows per inning through walks and hits; a low WHIP means batters rarely reach base against him. It is a better measure of a pitcher’s true effectiveness than either ERA or strikeouts alone.
So, how important is H in baseball? While it is certainly one of the more important statistics, it is not the be-all and end-all. A pitchers’ WHIP is a better overall measure of their contribution to
their team.
Why is H in baseball the most important statistic?
In baseball, the batting average (H/AB) is the most common statistic used to measure a batter’s performance. However, sabermetrics, the empirical study of baseball, has shown that there are other statistics that are more indicative of a batter’s ability to get on base and score runs. These statistics are on-base percentage (OBP) and slugging percentage (SLG).
While batting average is still widely used and recognized, OBP and SLG are now considered by many to be better measures of a batter’s offensive production. In general, a player with a high OBP is
more likely to reach base and a player with a high SLG is more likely to hit for power.
There are a number of reasons why H may not be the best statistic to use when evaluating a hitter. For one thing, it doesn’t take into account walks or hit by pitches, both of which are important
ways for batters to get on base. Additionally, batting average doesn’t account for extra-base hits (doubles, triples, home runs), so it can’t properly measure a player’s power potential.
OBP and SLG do not have these same limitations and thus provide a more accurate assessment of a hitter’s offensive value. For these reasons, OBP and SLG are now considered by many to be the most
important statistics in baseball.
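The rate statistics discussed above all come from simple ratios of counting stats. A rough sketch using the standard definitions (the sample numbers are invented, and the OBP formula here is slightly simplified):

```python
def avg(h, ab):
    """Batting average: hits per at-bat."""
    return h / ab

def obp(h, bb, hbp, ab, sf=0):
    """On-base percentage: times on base per plate appearance (simplified)."""
    return (h + bb + hbp) / (ab + bb + hbp + sf)

def slg(singles, doubles, triples, hr, ab):
    """Slugging percentage: total bases per at-bat."""
    return (singles + 2 * doubles + 3 * triples + 4 * hr) / ab

def whip(bb, h, ip):
    """Walks plus hits per inning pitched (a pitching rate stat)."""
    return (bb + h) / ip

# A hitter with 150 hits (100 singles, 30 doubles, 5 triples, 15 HR),
# 60 walks and 5 hit-by-pitches in 500 at-bats:
ba = avg(150, 500)                              # 0.300
ops = obp(150, 60, 5, 500) + slg(100, 30, 5, 15, 500)
```

Note how a batter with lots of walks or extra-base hits can post a modest batting average yet a strong OBP or SLG, which is exactly the limitation of H discussed above.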
What does H in baseball stand for?
H in baseball is an important statistic because it represents how many hits a player gets. A hit is when the batter safely reaches first base without being called out. The more hits a player has, the
more likely they are to score runs and help their team win.
How is H in baseball calculated?
H in baseball is the abbreviation for hits. A hit (abbreviated “H”) is credited to a batter when the batter safely reaches first base after hitting the ball into fair territory, without the benefit of an error or a fielder’s choice.
H itself is just a count. Dividing the number of hits by the total number of at bats gives the batting average, so a batter with 10 hits in 50 at bats has a .200 average. Hits are important because they’re one of the ways a team can score runs.
What is the difference between H and HR in baseball?
Home runs (HR) are the most exciting plays in baseball and they usually have a big impact on the game. However, hits (H) are actually a more important statistic, because they’re more likely to lead to runs scored.
Here’s a quick explanation of the difference between hits and home runs:
A hit is any time a batter safely reaches first base on a batted ball into fair territory (reaching on a defensive error or a fielder’s choice does not count). Hits are important because they’re one of the ways that batters can reach base, and ultimately score runs.
A home run is when a batter hits the ball over the outfield fence in fair territory. This usually results in multiple runs being scored (the batter and any runners who were on base when he hit the ball), so it’s a very important play. However, because home runs are rarer than hits, they don’t have as much of an impact on the game overall.
How do H and HR affect a team’s chances of winning?
It’s no secret that home runs are a big Art of Baseball But what you may not know is that hits (abbreviated “H”) are actually a more important statistic when it comes to winning. That’s because a
team’s batting average (H/AB) is a better predictor of runs scored than its home run total. In other words, a team that gets a lot of hits is more likely to scoreruns and win games than a team that
hits a lot of Home Runs
There are a few reasons for this. First, hits are more consistent from year to year than home runs A hitter who had 40 HR in 2017 is likely to hit around 30 in 2018, but his batting average will be
more stable. Second, hits are easier to come by than home runs Even the best power hitters only hit homers about 10% of the time they come to the plate, while the average hitter gets a hit 30% of the
time. Finally, hits are more valuable than home runs because they typically result in runners moving up base, which makes it easier to score runs.
So if you’re looking for the most important statistic in baseball, don’t forget about H!
What is the relationship between H and batting average?
You’ve probably heard of the stat H, but you may not know exactly what it is or how it’s used. H is short for hits, and it’s one of the most important stats in baseball. A player’s batting average is
calculated by dividing their H by their total number of at-bats. This number gives you a good idea of how often a player gets a hit when they’re up to bat.
Generally speaking, the higher a player’s batting average is, the better they are at hitting the ball. However, there are a few other factors that come into play when determining how good a hitter is. For example, someone with a high batting average may not be as valuable to their team if they don’t hit for power or don’t get on base very often.
So, while H is certainly an important stat, it’s not the only thing you should look at when evaluating a player’s hitting ability.
How do H and batting average affect a team’s chances of winning?
In baseball, the batting average (BA) is a measure of a player’s success at getting hits. A hit is a batted ball on which the batter safely reaches first base without the benefit of a defensive misplay such as an error or a fielder’s choice (those do not count as hits). A batter’s batting average is calculated by dividing the number of hits by the total number of at bats.
While a high batting average is generally desirable for a hitter, there are other factors that can affect a team’s chances of winning. One of those factors is the number of home runs (HR) hit by the team. Home runs are important because they can score multiple runs and change the outcome of a game.
According to the sabermetric school of analysis founded by baseball analyst Bill James, the most important offensive statistic is not batting average but on-base plus slugging (OPS): the sum of on-base percentage, whose numerator counts hits, walks, and hit-by-pitches (H + BB + HBP), and slugging percentage, which weights extra-base hits. OPS thus takes into account both a hitter’s ability to reach base and his ability to hit for power.
In general, teams with a high OPS win more games than teams with a low OPS. However, OPS alone does not tell us everything we need to know about a team’s chances of winning. For example, a team with a high OPS but weak starting pitching is not likely to win many games. Similarly, a team with a low OPS but an ace pitcher may still have a good chance of winning.
So, while OPS is certainly important, it is not the only factor that determines a team’s chances of winning. Other important factors include pitching, defense, and luck.
|
{"url":"https://sportsdaynow.com/h-in-baseball/","timestamp":"2024-11-03T16:07:53Z","content_type":"text/html","content_length":"110535","record_id":"<urn:uuid:40b5c002-aca0-4a6e-afa3-ddc417f71842>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00531.warc.gz"}
|
Ignoring Case In A Comparison In Excel - ExcelAdept
Key Takeaway:
• Excel allows users to compare data in cells using case-sensitive and case-insensitive methods. By ignoring case in a comparison, users can save time and effort in data analysis.
• The EXACT function in Excel can be used to compare two cells and determine if they are exactly the same, with or without considering case. This function is useful in situations where precision is required.
• The LOWER and UPPER functions in Excel can be used to convert text to lowercase or uppercase, respectively, allowing for easier comparison of data that may have inconsistent capitalization.
Have you ever needed to compare text in Excel, but couldn’t find a way to ignore case? Don’t worry – this article will show you how to do it quickly and easily! With these simple steps, you can feel
confident that your comparison is accurate and complete.
Basic comparison in Excel
Comparing data in Excel can be tricky. Learn the difference between case sensitive and insensitive data. This knowledge will help you sort, filter, or analyze data with more precision. To do a
case-sensitive comparison and a case-insensitive comparison, understand Excel’s capabilities. Master this and you’ll be golden!
Performing a case-sensitive comparison
When conducting a case-sensitive comparison, it is important to ensure that each character is unique and taken into account. Upper and lowercase letters may seem identical, but Excel recognizes them
as distinct characters. This distinction can affect the accuracy of comparisons and lead to errors in data analysis.
To understand the significance of this distinction, consider a table comparing names in uppercase and lowercase letters. In this table, there are two columns: one with names in all uppercase letters,
and the other with names in all lowercase letters. The data shows that these two columns contain different values due to case-sensitivity.
Uppercase Names Lowercase Names
JOHN john
STEVE steve
LUCAS lucas
To perform a case-sensitive comparison accurately, use the EXACT function (and TRIM to eliminate unnecessary spaces). With these functions applied correctly, Excel will recognize that “JOHN” and “john” are not equivalent–an essential step for accurate data analysis.
Pro Tip: Always pay attention to character cases while working with Excel spreadsheets. The difference between upper and lowercase letters may seem small, but it can have a significant impact on data
accuracy. You don’t need to be ‘case-sensitive’ about your Excel comparisons – this guide will show you how to loosen up and ignore the upper/lower-case divide.
Performing a case-insensitive comparison
To compare data without considering the case, you can perform a case-insensitive comparison.
Here’s a simple 5-step guide to performing a comparison in Excel that ignores case:
1. Select the cells or columns containing the data you want to compare.
2. In the Home tab, click on the Conditional Formatting drop-down menu and select “New Rule.”
3. Choose “Use a formula to determine which cells to format.”
4. In the formula box, type “=EXACT(LOWER(A1),LOWER(B1))” (modify A1 and B1 as necessary).
5. Select your desired formatting option and click on “OK” to apply it.
It’s important to note that when using this method of comparison, both the cells being compared should have similar data types.
One additional tip is to use VLOOKUP with an exact match. This will help you find an exact match in a table by ignoring case.
By utilizing these methods, you can avoid errors caused by inconsistent capitalization while comparing data in Excel.
Excel doesn’t care if you’re shouting or whispering, ignoring case is the key to a successful comparison.
Ignoring case in a comparison in Excel
Ignoring case in an Excel comparison? Try the EXACT function! Or, use the LOWER or UPPER functions. Simple!
Using the EXACT function
To compare text strings in Excel, the EXACT function can be used. It returns the Boolean value TRUE if both texts match exactly, including case, and FALSE if they do not. To ignore case, wrap both inputs in LOWER, as in =EXACT(LOWER(A1),LOWER(B1)). The EXACT function is useful when the text strings are unknown and cannot be manually modified for consistency.
On its own, EXACT treats different cases as different text: comparing “apple” and “AppLE” returns FALSE. Combining EXACT with LOWER ensures that all letter cases are treated equally and eliminates errors caused by overlooking casing differences.
There are other functions like UPPER and LOWER that change letter case quickly. But these functions only transform text one input at a time, whereas EXACT performs the comparison itself once the inputs have been normalized.
The EXACT function also supports any type of character, including symbols or numerals, making it versatile for all applications.
It is documented on the Microsoft support website that the syntax of this function is =EXACT(text1, text2).
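For readers working outside Excel, the same two strategies exist in most programming languages: an exact, case-sensitive comparison, and one where both inputs are normalized first. A rough Python analogue of the EXACT/LOWER pattern (the function names here are made up for illustration):

```python
def exact(a: str, b: str) -> bool:
    """Case-sensitive comparison, analogous to Excel's EXACT."""
    return a == b

def loose(a: str, b: str) -> bool:
    """Case-insensitive comparison; casefold() also handles non-ASCII
    letters that lower() can miss (e.g. the German eszett)."""
    return a.casefold() == b.casefold()

assert not exact("JOHN", "john")   # distinct when case matters
assert loose("JOHN", "john")       # equal when case is ignored
assert loose("straße", "STRASSE")  # casefold maps ß to "ss"
```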
If you’re feeling UPPER CASE-y, use LOWER function to bring your Excel comparisons down to a lowercase level.
Using the LOWER function
Using the lower function is an effective approach to ignore case in a comparison in Excel. Follow this simple 3-step guide to using the lower function.
1. Highlight the range of cells you want to compare.
2. Go to the “Formulas” tab and select “Text.” Select “LOWER,” and then click the cell you want to change the case of. Press “Enter.”
3. In a separate cell, create your comparison formula as usual. However, instead of selecting cells with caps, use cells with lowercase letters created from Step 2.
It’s important to note that using the lower function does not modify original data in any way. This means that while it is possible to use the formula for sorting and filtering purposes, it should
not be used when the actual data needs changing.
Pro Tip: Use CONCATENATE or ‘&’ functions together with LOWER when dealing with multiple columns or strings.
Make your words SHOUT with the UPPER function in Excel – because sometimes lower case just won’t cut it.
Sub-Heading: Using the UPPER function
Integrating Excel Functions to Overlook Case While Comparing
Comparing strings with different cases directly in Excel can give confusing results. The UPPER function converts text to uppercase, so comparing the converted values disregards the original case.
A 3-Step Guide to using UPPER:
1. Keep the strings to compare in their own columns (say, columns A and B).
2. In two helper columns, enter =UPPER(A2) and =UPPER(B2); this converts all values to uppercase for comparison purposes.
3. Insert an IF statement in a further column that checks whether the two helper cells match, e.g., =IF(C2=D2,"Match","No Match").
Employing these methods lets you compare strings regardless of how their characters are cased. Excel's built-in functions handle these comparisons without complications or intricate coding.
It’s worth noting that an equivalent of the UPPER function is included in most programming languages and databases, such as MySQL and Oracle.
Five Facts About Ignoring Case in a Comparison in Excel:
• ✅ Ignoring case is useful when comparing text in Excel that may not have consistent capitalization. (Source: Microsoft Excel Help)
• ✅ Using the “LOWER” function in Excel is a common way to ignore case when comparing text. (Source: Excel Easy)
• ✅ Ignoring case can also be achieved by converting all text to uppercase or lowercase before comparing. (Source: Ablebits)
• ✅ Ignoring case can be helpful in sorting data alphabetically in Excel. (Source: Excel Campus)
• ✅ Ignoring case is not recommended when comparing alphanumeric values, as it can lead to inaccurate results. (Source: Stack Overflow)
FAQs about Ignoring Case In A Comparison In Excel
What does it mean to ignore case in a comparison in Excel?
When comparing text in Excel, you may want to ignore the case of the text in order to get more accurate results. Ignoring case means that uppercase letters and lowercase letters are treated as the
same character.
How do I ignore case in a comparison in Excel?
The simplest way is the = operator (for example, =A1=B1), which treats "apple" and "APPLE" as the same. Note that the EXACT function is case-sensitive; to use it while still ignoring case, normalize both arguments first: =EXACT(LOWER(A1),LOWER(B1)).
Can I ignore case in a comparison for an entire column?
Yes, you can ignore case in a comparison for an entire column by entering a formula such as =LOWER(A1)="searchtext" (with the search text typed in lowercase) in the first row and then copying it down the entire column.
Is it possible to ignore case in a pivot table in Excel?
Pivot tables in Excel already treat text labels case-insensitively: items that differ only in letter case are grouped together. If you want the output to show consistent casing, normalize the source column with LOWER or UPPER before building the pivot table.
Are there any drawbacks to ignoring case in a comparison in Excel?
There are no inherent drawbacks to ignoring case in a comparison in Excel, but you may need to be careful when comparing text that looks the same but has different meanings, or text that contains
non-alphanumeric characters.
Can I ignore case in a comparison when using VLOOKUP or HLOOKUP?
Yes. VLOOKUP and HLOOKUP are already case-insensitive when matching the lookup value. If your reference table stores values in a particular case and you want to normalize your input anyway, wrap it in UPPER or LOWER, for example =VLOOKUP(UPPER(A1), table_array, 2, 0).
How to create a gold league bot on CodinGame
An implementation of Ultimate Tic Tac Toe using Monte Carlo Tree Search
The CodinGame platform has a dedicated area for programming game bots. First, you have to choose the game to play. Then it provides the code template for the initial context and the moves of the
opponent player. For each turn, you should calculate and provide your best moves in less than 100ms. The opponent can be the league AI implementation or another developer implementation. This way,
your bot can compete with other users' bots and advance through leagues: Wood, Bronze, Silver, Gold, and Legend. By competing against others, you can be quite sure how robust your algorithm is.
You can code in any of the 26 programming languages available. The cool part is that every game comes with built-in visual simulation so it's easier to debug and notice anomalies in the algorithm.
Ultimate Tic Tac Toe
Before this, I didn't know about the extended version of the well-known game. Well, I would say it's on another level of complexity to just play it, not to mention to code it.
The rules are well explained here.
It wouldn't be fair to paste the entire source code, nor will I explain the algorithms in detail when there is already enough information available. But I will mention in the bibliography the best sources that helped me. Additionally, I will provide all the details that made a difference in the final implementation, as well as the attempts that failed.
The Bronze league
You probably have already implemented the classic game a few times. It's not way harder for the extended version. You can keep it simple and implement it however you want. I started to do it in an
object-oriented fashion and had to manage the following entities: MiniBoard, MainBoard, Player, and Game. State of a MiniBoard can be an array of 9 int/char entries and MainBoard can be an array of 9
MiniBoard items. Obvious, right?!
Logic can be straightforward: prioritize 3 in a row winning/defending a MiniBoard, or choose one of the best moves: center and corners. Hope for the best, and your bot can advance to the Bronze
league with minimum effort.
The challenge
No matter how well you play this game, you cannot hardcode all possible (best) moves. Compared with the basic game, the Ultimate version simply has too many possible board states, and it's not obvious when you should decline to win a MiniBoard just to have a better chance of winning the game 5 moves later.
Minimax algorithm?
It can be a good option if you are comfortable trusting some heuristic methods that rate the next possible 5-10 moves. The algorithm would then choose the best one: minimize the opponent's chances and maximize yours. The problem is that you should already know some strategies and tricks to correctly evaluate the state of a board; only then is it easy to code the evaluation heuristics.
Monte Carlo Tree Search
MCTS uses random game playouts to the end, instead of the usual static evaluation function. The expected-outcome model is shown to be precise, accurate, easily estimable, efficiently calculable, and domain-independent.
Simply put: let the algorithm simulate a lot of games, and over time it will choose better and better moves. The big advantage is that you don't have to be a domain expert, to know how to win the
game. You only need to code the rules and the expected outcome.
Following different articles, you will get to have an implementation like this:
static int WinScore = 10;

public Point FindNextMove(MainBoard mainBoard)
{
    Node rootNode = new Node(mainBoard);
    var end = DateTime.Now.AddMilliseconds(90);
    while (DateTime.Now < end)
        ExploreBestNodes(rootNode); // run MCTS iterations until the 90 ms budget expires
    Node winnerNode = rootNode.GetChildWithMaxScore();
    return winnerNode.State.Board.LastMove;
}

public void ExploreBestNodes(Node rootNode)
{
    // Selection: descend the tree picking the child with the best UCT value
    Node promisingNode = SelectPromisingNode(rootNode);
    // Expansion: create children for the legal moves (this call was lost in the original listing)
    if (promisingNode.State.Board.Status == MainBoardStatus.Playable)
        ExpandNode(promisingNode);
    Node nodeToExplore = promisingNode;
    if (promisingNode.Childs.Count > 0)
        nodeToExplore = promisingNode.GetRandomChildNode();
    // Simulation and backpropagation
    MainBoardStatus playoutResult = SimulateRandomPlayout(nodeToExplore);
    BackPropogation(nodeToExplore, playoutResult);
}

private void BackPropogation(Node nodeToExplore, MainBoardStatus boardStatus)
{
    Node tempNode = nodeToExplore;
    while (tempNode != null)
    {
        tempNode.State.VisitCount++; // visit counts feed the UCT denominator (implied by GetUctValue)
        if (boardStatus == MainBoardStatus.WonByP1)
            tempNode.State.Score += WinScore;
        tempNode = tempNode.Parent;
    }
}

private MainBoardStatus SimulateRandomPlayout(Node node)
{
    Node tempNode = new Node(node);
    State tempState = tempNode.State;
    var boardStatus = tempState.Board.Status;
    // An immediate opponent win: poison the parent so selection avoids it from now on
    if (boardStatus == MainBoardStatus.WonByP2)
    {
        tempNode.Parent.State.Score = int.MinValue;
        return boardStatus;
    }
    int rounds = 100;
    while (boardStatus == MainBoardStatus.Playable && rounds > 0)
    {
        tempState.RandomPlay(); // play one random legal move (this call was lost in the original listing)
        boardStatus = tempState.Board.Status;
        rounds--;
    }
    return boardStatus;
}

private static double GetUctValue(int totalVisits, double score, int visits)
{
    if (visits == 0)
        return int.MaxValue;
    // exploitation (average score) + exploration term, with c = 1.41 ≈ sqrt(2)
    return (score / visits) + 1.41 * Math.Sqrt(Math.Log(totalVisits) / visits);
}
The Silver league
You should advance to the Silver league. MCTS and UCT (Upper Confidence bounds applied to Trees) work like magic! From now on, your bot can become smart.
Precalculated moves before the first round
Only for the first round, there is a limit of 1000ms instead of 100ms. That means more available time for simulations, and therefore better moves. I tried to calculate and cache those moves at the start to gain an advantage, but after a lot of tests it didn't seem to matter much. I suppose that's because the impact of the first 3-5 moves is minimal considering there are 50 more. I rolled the change back.
Optimize, optimize, optimize
Using MCTS means that you should squeeze in as much computational time as possible. You want to have a lot of simulated playouts for UCT value to converge to better results. Don't underestimate 10%
improvement as, over many rounds, your bot would play exponentially better.
Having more simulations does help. But it's way better to let the bot play games with higher chances of winning. Here are a few tips and tricks:
• When you start the game, play a hardcoded best position. I chose (4, 4), centered MiniBoard, left-up corner square.
• When a MiniBoard is (almost) empty choose to play first the best positions: center and corners.
• When you have to choose the current MiniBoard (the next board is full or already won), evaluate the states of all the others.
• Pay attention to custom logic. Takes more time, reduces simulations, and may introduce a lot of bias. It happened to me, and I had to delete most of the custom code. Keep it simple and add
improvements only after being individually tested. Don't forget that the power of MCTS is in random playouts.
Test different parameters
To get the best results, parameters need to be correctly adjusted.
• Find the right balance of the exploitation vs exploration weights in the UCT value. I tested WinScore in the range [1, 100] and the exploration constant from sqrt(2) to 2. For me, WinScore=6 and c=sqrt(2) worked best.
• Most games finish in 50-60 moves. It is a waste of time to let the bot simulate much longer games. Remember that the bot must win as fast as possible, not play every available square.
int rounds = 70 - node.State.Board.CurrentRound;
• Teach the bot to work smarter, not harder. Give a small penalty if the game is won with more rounds.
double score = WinScore;
tempNode.State.Score += score;
score *= .99;
Use bits for game state
Probably you would think to represent your board state as an array of 9 integers: 0 - unassigned, 1 - won by X, 2 - won by O. But instead, you can use just one number! A C# integer has 32 bits, so each bit can record 2 states: 0 - unassigned, 1 - won. You can encode any kind of information you want.
This is an old trick to get the best performance in terms of computing and memory management. Bitwise operators are very fast. Additionally, loops and checks are dramatically improved. I chose the following encoding:
• The first bit is 0 if the board is still playable; otherwise it is set to 1.
• The next 9 bits are for the X player and the following 9 bits for the O player. So if X won the board by holding the second row, it would look like:
int board = 0b_1_000111000_000000000;
So how do we test whether a player won the main diagonal?
int currentPlayer = 0; // or 1
int winMask = 0b_100010001; // or loop through all winning masks
int playerAdjustedMask = winMask << currentPlayer * 9; // toggle bits mask shifting based on current player
var result = board & playerAdjustedMask;
return result == playerAdjustedMask; // magic!
How to test for a draw?
return ((board | (board >> 9)) & 511) == 511; // Check if positions of both players are set knowing that 511 is 111111111 in binary
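Since the C# fragments above are hard to run in isolation, here is a short Python transcription of the same bit tricks, useful as a sanity check. The bit layout and the win masks follow the encoding described above; treat it as an illustrative sketch, not production code.

```python
# Assumed layout: bit 18 = "finished" flag, one player's 9 squares at bits 9-17,
# the other player's at bits 0-8.
WIN_MASKS = [
    0b111000000, 0b000111000, 0b000000111,  # rows
    0b100100100, 0b010010010, 0b001001001,  # columns
    0b100010001, 0b001010100,               # diagonals
]

def player_won(board, player):
    """player is 0 or 1; its squares occupy bits player*9 .. player*9 + 8."""
    for mask in WIN_MASKS:
        shifted = mask << (player * 9)   # shift the mask into the player's bit range
        if board & shifted == shifted:   # all three squares of the line are set
            return True
    return False

def is_draw(board):
    # overlay both players' 9-bit fields; 511 is 111111111 in binary
    return ((board | (board >> 9)) & 511) == 511

board = 0b1_000111000_000000000  # the player at bits 9-17 holds the middle row
print(player_won(board, 1))  # True
print(player_won(board, 0))  # False
print(is_draw(board))        # False
```

The same masks shifted by `player * 9` reproduce the `winMask << currentPlayer * 9` trick from the C# snippet, and `is_draw` is a literal transcription of the draw check above.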
The Gold league
If you follow all the tips, you should arrive in Gold league. Here I stopped my journey as it took me way too many days. It was both annoying and fun. You know, standard programmer feelings on daily
tasks. I learnt a lot of stuff and I'm very happy that I created a game bot that can actually beat me on this.
Did you create a similar bot? What other improvements did you make? What did you learn?
In This Topic
You can change the type of the FlexChart control depending on your requirement. Chart type can be changed by setting the ChartType property of the FlexChart control. In this case, if multiple series
are added to the FlexChart, all of them are of the same chart type. To know how to add multiple series and to set a different ChartType for each series, see Mixed charts. FlexChart supports various
chart types including Line and LineSymbol chart, Area chart, Bar and Column chart, Bubble chart, Scatter chart, Candlestick chart, etc.
In Code
C# Copy Code
chart.ChartType = ChartType.LineSymbols;
In XAML
XAML Copy Code
<c1:FlexChart x:Name="chart" ChartType="LineSymbols" ItemsSource="{Binding Data}" BindingX="Name">
<c1:ChartSeries x:Name="Sales2014" Binding="Sales" ></c1:ChartSeries>
</c1:FlexChart>
Line and LineSymbol chart
A Line chart draws each series as connected points of data, similar to area chart except that the area below the connected points is not filled. The series can be drawn independently or stacked. It
is the most effective way of denoting changes in value between different groups of data. A LineSymbol chart is similar to line chart except that it represents data points using symbols.
These charts are commonly used to show trends and performance over time.
Line Chart LineSymbol Chart
Area chart
An Area chart draws each series as connected points of data and the area below the connected points is filled with color to denote volume. Each new series is drawn on top of the preceding series. The
series can either be drawn independently or stacked.
These charts are commonly used to show trends between associated attributes over time.
Bar and Column chart
A Bar chart or a Column chart represents each series in the form of bars of the same color and width, whose length is determined by its value. Each new series is plotted in the form of bars next to
the bars of the preceding series. When the bars are arranged horizontally, the chart is called a bar chart and when the bars are arranged vertically, the chart is called column chart. Bar charts and
Column charts can be either grouped or stacked.
These charts are commonly used to visually represent data that is grouped into discrete categories, for example age groups, months, etc.
Bubble chart
A Bubble chart represents three dimensions of data. The X and Y values denote two of the data dimensions. The third dimension is denoted by the size of the bubble.
These charts are used to compare entities based on their relative positions on the axis as well as their size.
Scatter chart
A Scatter chart represents a series in the form of points plotted using their X and Y axis coordinates. The X and Y axis coordinates are combined into single data points and displayed in uneven
intervals or clusters.
These charts are commonly used to determine the variation in data point density with varying x and y coordinates.
Candlestick chart
A Candlestick chart is a financial chart that shows the opening, closing, high and low prices of a given stock. It is a special type of HiLoOpenClose chart that is used to show the relationship
between open and close as well as high and low. Candle chart uses price data (high, low, open, and close values) and it includes a thick candle-like body that uses the color and size of the body to
reveal additional information about the relationship between the open and close values. For example, long transparent candles show buying pressure and long filled candles show selling pressure.
Elements of a Candlestick chart
The Candlestick chart is made up of the following elements: candle, wick, and tail.
• Candle: The candle or the body (the solid bar between the opening and closing values) represents the change in stock price from opening to closing.
• Wick and Tail: The thin lines, wick and tail, above and below the candle depict the high/low range.
• Hollow Body: A hollow candle or transparent candle indicates a rising stock price (close was higher than open). In a hollow candle, the bottom of the body represents the opening price and the top
of the body represents the closing price.
• Filled Body: A filled candle indicates a falling stock price (open was higher than close). In a filled candle the top of the body represents the opening price and the bottom of the body
represents the closing price.
In a Candlestick there are five values for each data point in the series.
• x: Determines the date position along the x axis.
• high: Determines the highest price for the day, and plots it as the top of the candle along the y axis.
• low: Determines the lowest price for the day, and plots it as the bottom of the candle along the y axis.
• open: Determines the opening price for the day.
• close: Determines the closing price for the day.
The following image shows a candlestick chart displaying stock prices.
High Low Open Close chart
HiLoOpenClose are financial charts that combine four independent values to supply high, low, open and close data for a point in a series. In addition to showing the high and low value of a stock, the
Y2 and Y3 array elements represent the stock's opening and closing price respectively.
Histogram Chart
Histogram chart plots the frequency distribution of data against the defined class intervals or bins. These bins are created by dividing the raw data values into a series of consecutive and
non-overlapping intervals. Based on the number of values falling in a particular bin, frequencies are then plotted as rectangular columns against continuous x-axis.
These charts are commonly used for visualizing distribution of numerical data over a continuous, or a certain period of time.
Spline and SplineSymbol chart
A Spline chart is a combination of line and area charts. It draws a fitted curve through each data point and its series can be drawn independently or stacked. It is the most effective way of
representing data that uses curve fittings to show difference of values. A SplineSymbol chart is similar to Spline chart except that it represents data points using symbols.
These charts are commonly used to show trends and performance over time, such as product life-cycle.
Spline Chart SplineSymbol Chart
Step Chart
Step charts use horizontal and vertical lines to present data that show sudden changes along y-axis by discrete amount. These charts help display changes that are sudden and irregular but stay
constant till the next change. Step charts enable judging trends in data along with the duration for which the trend remained constant.
Consider a use case where you want to visualize and compare weekly sales and units downloaded of a software. As both of these values vary with discrete amounts, you can use step chart to visualize
them. As shown in the image below, apart from depicting the change in sales these charts also show the exact time of change and the duration for which sales were constant. Moreover, you can easily
identify the magnitude of respective changes by simply looking at the chart.
FlexChart supports Step chart, StepSymbols chart, and StepArea or filled step chart. The following table gives detailed explanation of these chart types.
Step Chart: Similar to the Line chart, except that the Line chart uses the shortest distance to connect consecutive data points, while the Step chart connects them with horizontal and vertical lines. These horizontal and vertical lines give the chart its step-like appearance. While line charts depict change and its trend, Step charts also help in judging the magnitude and the intermittent pattern of the change.
StepSymbols Chart: Combines the Step chart and the Scatter chart. FlexChart plots data points by using symbols and connects those data points with horizontal and vertical step lines. Here, the data points are marked using symbols and therefore help mark the beginning of an intermittent change.
StepArea Chart: Combines the Step chart and the Area chart. It is similar to the Area chart, with a difference in the manner in which data points are connected: FlexChart plots the data points using horizontal and vertical step lines, and then fills the area between the x-axis and the step lines. These charts are based on Step charts and are commonly used to compare discrete and intermittent changes between two or more quantities. When multiple series are stacked, related data points of the series appear stacked one above the other. For example, the number of units downloaded and the sales of a software for a particular time duration can be easily compared, as shown in the image.
SplineArea chart
SplineArea charts are spline charts that display the area below the spline filled with color. SplineArea chart is similar to Area chart as both the charts show area, except that SplineArea chart uses
splines and Area chart uses lines to connect data points.
SplineArea Chart
See Also
Quantum Many-body Physics
Our research in quantum many-body physics spans the following areas:
1. Spin chains
In this context, we study the phase diagram of the bilinear-biquadratic model with a uniaxial field (single-ion anisotropy). This model is characterized by a rich phase diagram, shown in figure,
containing different phases with distinctive features.
For example the system can be in a dimerized phase in which neighboring spins tend to pair in singlets and translational invariance is broken. A negative uniaxial field can also stabilize the
nematic (XY-antiferromagnetic) and Neel phases in which rotational invariance is broken. For positive values of the uniaxial field the system is in a trivial gapped phase, the Large D phase,
where the spins tend to the 0 component state. Finally, the model allows the celebrated Haldane phase, a topological gapped phase, characterized by a string order parameter, and containing the
AKLT point. Such models can now be realized with ultracold atoms in optical lattices. In the section Quantum gases and ultracold atoms we discuss how to detect these phase using quantum
polarization spectroscopy.
Further reading:
G. De Chiara, M. Lewenstein, A. Sanpera, Bilinear-biquadratic spin-1 chain undergoing quadratic Zeeman effect
It is now well established that the entanglement between the two halves of a spin chain gives a lot of information about the ground state of the system. For example, at critical points, it is known
that the entanglement, as measured by the von Neumann entropy of the reduced density matrix of one of the two blocks, scales logarithmically with the size of the block. The prefactor is given by
the central charge of the corresponding conformal field theory describing the critical point.
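For concreteness, for a single block of $\ell$ contiguous spins in an infinite critical chain this scaling is usually written as (a standard conformal-field-theory result, stated here for reference; the additive constant $c_1$ is non-universal):

```latex
S(\ell) = \frac{c}{3} \ln \ell + c_1
```

For a finite chain of length $L$ with periodic boundary conditions, $\ell$ is replaced by the chord length $(L/\pi)\sin(\pi \ell/L)$.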
Another useful tool is the so called entanglement spectrum which can be related to the spectrum of the reduced density matrix of one of the two halves of a spin chain. It has been demonstrated by
Pollmann et al., Phys. Rev. B 81, 064439 (2010), that the Haldane phase is uniquely characterised by a evenly degenerate entanglement spectrum, i.e. the eigenvalues of the reduced density matrix
come always in even multiplets. We discovered that we can use the difference of the first two eigenvalues, called the Schmidt gap, to characterise the approaching of the Haldane phase from other
phases. The Schmidt gap shows scaling behaviour in analogy to order parameters with the correct critical exponents, when going from the Neel to the Haldane phase. Similar results hold for the
Ising model.
Further reading:
G. De Chiara, L. Lepori, M. Lewenstein, A. Sanpera, Entanglement Spectrum, Critical Exponents, and Order Parameters in Quantum Spin Chains, Phys. Rev. Lett. 109, 237208 (2012)
2. Entanglement generation in harmonic lattices
Entanglement can be created between two non-interacting particles if they are both interacting with a common environment. Here we consider two non-interacting harmonic oscillators interacting with an environment formed by a one-dimensional array of harmonic oscillators.
We show that entanglement can be generated even when the two particles are far apart and even when the environment is initially at thermal equilibrium.
Further reading:
A. Wolf, G. De Chiara, E. Kajari, E. Lutz and G. Morigi, Entangling two distant oscillators with a quantum reservoir, Europhys. Lett. 95, 60008 (2011).
3. Coulomb crystals
We study dynamical crossing of structural phase transitions in quasi-one-dimensional ion crystals. If the ions are confined to a plane, the crystal makes a transition from a
linear to a planar zigzag structure (see below, left). When the ions are allowed to move in all three dimensions then a helical structure (somehow similar to DNA) can appear (see below, right).
We study the statistics of the number of twists as a function of the speed in crossing the transition.
We study quantum superpositions of quantum states of ion crystals corresponding to different motional states (see figure below).
In particular we have proposed the way to detect superpositions of different crystalline structures with a Ramsey-type experiment using the internal state of one of the ions.
Further reading:
G. De Chiara, T. Calarco, S. Fishman, G. Morigi, Phys. Rev. A 78, 043414 (2008)
J. D. Baltrusch, C. Cormick, G. De Chiara, T. Calarco, G. Morigi, Phys. Rev. A 84, 063821 (2011).
4. Non-Markovianity in spin chains
We study the dynamics of a qubit coupled to a spin environment via an energy-exchange mechanism. We show the existence of a point, in the parameter space of the
system, where the qubit dynamics is effectively Markovian and that such a point separates two regions with completely different dynamical behaviors. Indeed, our study demonstrates that the qubit
evolution can in principle be tuned from a perfectly forgetful one to a deep non-Markovian regime where the qubit is strongly affected by the dynamical back-action of the environmental spins. By
means of quantum process tomography, we provide a complete and intuitive characterization of the qubit channel.
Further reading:
T. J. G. Apollaro, C. Di Franco, F. Plastina, M. Paternostro, Phys. Rev. A 83, 032103 (2011).
Nobel pricing
iPredict is running markets on the Economics Nobel prize announcement coming up on Monday.
Two things worth noting:
1. The contract on each contender pays $1 whether the candidate gets a sole or a joint prize
2. The set of contracts obviously does not span the space: it's entirely possible that none of the listed contenders wins the prize and it's instead awarded to someone like Kiyotaki & Moore or
three random econometricians not including Deaton.
What does rational pricing in this kind of market look like? If a set of contracts spans the space and has only one winner, then the prices have to sum to one. If the set of contracts doesn't span
the space and has only one winner, then the prices ought to sum to one minus the probability of "something else". Let's say the probability of "something else" here is 10%. I don't think it crazy to
think there's be a prize awarded that includes none of the contenders listed. I don't think the probability would be much higher than 10%, but neither do I think it would be much lower.
So the sum of prices ought be around $0.9. Except that we can have multiple winners. If Thaler wins with Shiller, then the sum of payouts is $2. A Thaler/Shiller/Fehr prize would have it at $3. A
Hart & Moore prize would have it at $1 as Moore isn't listed, but a Hart & Tirole prize would have it at $2.
How can we think about rational prices in this world, in terms of looking for arbitrage opportunities if the sum of prices is out of whack?
The sum of prices ought to be equal to:
• The sum of the probability of each as a single winner times $1, plus
• The sum of the probability of each as a joint winner with someone else on the list times another dollar for each in such a set.
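The two bullets above fold into one expected-value calculation: each scenario contributes its probability times the number of *listed* winners it contains. As an illustration, here is a minimal sketch in Python; the scenario list and all the probabilities are entirely made up, only the accounting matters.

```python
# Hypothetical outcome distribution over prize scenarios.
# Every listed winner's contract pays $1, so the fair sum of prices is
# sum over scenarios of P(scenario) * (number of listed winners in it).
scenarios = {
    ("Thaler",): 0.15,
    ("Thaler", "Shiller"): 0.20,     # joint prize, two listed winners -> pays $2
    ("Weitzman", "Nordhaus"): 0.15,  # joint prize, two listed winners -> pays $2
    ("Hart", "Tirole"): 0.10,
    ("Deaton",): 0.30,
    (): 0.10,                        # "none of the above"
}

# sanity check: the scenarios exhaust the space
assert abs(sum(scenarios.values()) - 1.0) < 1e-9

expected_price_sum = sum(p * len(winners) for winners, p in scenarios.items())
print(expected_price_sum)  # 1.35 with these made-up numbers
```

With these particular numbers, joint prizes push the fair sum of prices well above $1 even though 10% of the mass goes to unlisted winners, which is exactly the effect discussed below.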
The sum of current market prices is $1.92. Is that too high or too low if you think the probability of "none of the above" is 10%? I think it's rather too high: it's basically priced in that there is
very likely to be a joint prize to two of the people on the list.
Now, I don't know the probabilities of joint versus single prizes. But here's another way of looking at it. Suppose you think that Shiller might get the prize, but that he's really unlikely to get it
without Thaler. There's a decent chance of a Thaler prize without Shiller, but it's hard to imagine a Shiller prize without Thaler. We can then figure that the probability of Shiller winning is some
fraction of the probability of Thaler winning. Drop Shiller out of the set of prices entirely and think only about a Thaler-only prize. Same for Nordhaus: we can imagine a Weitzman prize without
Nordhaus, but not Nordhaus without Weitzman, though a joint prize is there most likely. The Nordhaus price ought then be lower than Weitzman and ought be ignored.
Get the set of all "most likely" guys for single prizes, or the part of pairs that is most likely. The prices of those singles ought to sum to one, or one minus "something else".
The sum of Thaler (drop Shiller and Fehr), Weitzman (drop Nordhaus), Hart (drop Tirole), Deaton, Posner (drop Peltzman, Tullock), Grossman, Dixit, Barro and Fama should then not be greater than one
unless you can imagine a joint Thaler/Weitzman prize, or a Hart/Deaton prize, or a Thaler/Dixit prize. The odds of those odd combos is sufficiently low to be ignored.
If we take the sum above, we get $1.16 where it ought to be around $0.9 if we think that none of the guys whose prices we've kept are going to win in combination with other guys whose names we've
kept. Of course, this biases things in favour of the sum of prices being low - Peltzman and Tullock are low probability chances, but I'm certainly not convinced that they couldn't get it without
Posner. And there's a really good shot of Tirole getting a solo prize rather than joint with Hart. But I want to make it hard to show that the sum of prices is too high because I don't want to make a
mistake in shorting a set of contracts that seems overpriced.
The best shot at an oddball combo? Fama/Shiller for finance. It would give Fama his due but take the sting off of "efficient markets" stuff by throwing Shiller in. Like the Hayek prize that was joint
with Myrdal, except that Shiller is more deserving than Myrdal was. If that's a serious risk, then the sum of prices on the contracts still oughtn't be much affected as Fama is in there and Shiller's
already dropped; I'd just previously been wrong in thinking Fama had zero chance. He has a chance, with Shiller.
If the sum of prices is $1.16 and ought to be around $0.90, then I ought to short the set of (Thaler, Weitzman, Hart, Deaton, Posner, Grossman, Dixit, Barro, Fama) considerably, then short Shiller to
keep his price relative to Thaler in line with the prices ex ante, Tirole relative to Hart, Nordhaus relative to Weitzman, and so on. The whole thing resolves Monday night, so capital isn't tied up
all that long. Hmm.
|
{"url":"https://offsettingbehaviour.blogspot.com/2010/10/nobel-pricing.html","timestamp":"2024-11-06T23:57:00Z","content_type":"application/xhtml+xml","content_length":"125735","record_id":"<urn:uuid:622f93bc-872c-4746-b0ce-b86b4e1c4733>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00052.warc.gz"}
|
Quadratic Equation Calculator
Quadratic Equation Calculator
Solve quadratic equations step by step
The calculator will solve the quadratic equation step by step either by completing the square or using the quadratic formula. It will find both the real and the imaginary (complex) roots.
Related calculator: Discriminant Calculator
Your input: solve the quadratic equation $$$2 x^{2} + 5 x - 3 = 0$$$ by using quadratic formula.
The standard quadratic equation has the form $$$ax^2+bx+c=0$$$.
In our case, $$$a=2$$$, $$$b=5$$$, $$$c=-3$$$.
Now, find the discriminant using the formula $$$D=b^2-4ac$$$: $$$D=5^2-4\cdot 2 \cdot \left(-3\right)=49$$$.
Find the roots of the equation using the formulas $$$x_1=\frac{-b-\sqrt{D}}{2a}$$$ and $$$x_2=\frac{-b+\sqrt{D}}{2a}$$$
$$$x_1=\frac{-5-\sqrt{49}}{2\cdot 2}=-3$$$ and $$$x_2=\frac{-5+\sqrt{49}}{2\cdot 2}=\frac{1}{2}$$$
Answer: $$$x_1=-3$$$; $$$x_2=\frac{1}{2}$$$
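The same steps can be reproduced in a short script (a sketch illustrating the quadratic formula, not part of the original calculator):

```python
import math

# Solve 2x^2 + 5x - 3 = 0 by the quadratic formula
a, b, c = 2, 5, -3
D = b**2 - 4 * a * c                  # discriminant: 49
x1 = (-b - math.sqrt(D)) / (2 * a)    # -3.0
x2 = (-b + math.sqrt(D)) / (2 * a)    # 0.5
```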
|
{"url":"https://www.emathhelp.net/calculators/algebra-1/quadratic-equation-calculator-solver/?f=2%2Ax%5E2+%2B+5%2Ax+-+3","timestamp":"2024-11-14T17:22:12Z","content_type":"application/xhtml+xml","content_length":"19822","record_id":"<urn:uuid:c08b2062-a11a-4235-9983-62ec653f67eb>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00398.warc.gz"}
|
Math and Trigonometry Functions in Excel [21 Examples]
The math and trigonometry functions are one of the most used functions in Microsoft Excel. In this article, I will discuss the most useful 21 math and trigonometry functions in Microsoft Excel.
What are Math and Trigonometry Functions in Excel?
Math and trigonometry functions in Excel are built-in tools that enable users to perform a wide range of mathematical and trigonometric calculations directly within spreadsheet cells. These functions
facilitate numeric operations, making Excel a powerful tool for data analysis and manipulation. Math functions include basic operations such as addition, subtraction, multiplication, and division,
while trigonometry functions involve calculations related to angles and geometric relationships, offering functionalities like sine, cosine, tangent, and more. In Excel, users can leverage these
functions by entering specific formulas, enhancing efficiency in handling numerical data, and supporting a diverse array of applications, from finance to scientific research.
21 Examples of Math and Trigonometry Functions in Excel
In this guide, we’ll delve into practical examples that showcase the versatility and utility of these functions, empowering you to wield Excel as a potent tool for mathematical analysis and data manipulation.
Example 1: Use the SUM Function
The SUM function in Excel returns the summation of the given values inside the function. It accepts numbers, ranges, cell references, or a combination of any of the three. Here is the syntax to use
the SUM function:
=SUM(number1, [number2], ...)
You can see the stored result with the SUM function in the next image.
Example 2: Apply the SUMIF Function
The SUMIF function in Excel calculates the sum of the values based on a single condition. The syntax for the SUMIF function in Excel:
=SUMIF(range, criteria, [sum_range])
Here are examples of storing the output using the SUMIF function in Excel.
Example 3: Apply the SUMIFS Function
The SUMIFS function in Excel calculates the sum of the values based on multiple conditions.
Below is the syntax for the SUMIFS function
=SUMIFS(sum_range, criteria_range1, criteria1, [criteria_range2, criteria2], ...)
Example 4: Utilize the SUMPRODUCT Function
The SUMPRODUCT function in Excel multiplies the corresponding values of the given arrays and then sums up all the products.
See the syntax with the SUMPRODUCT function in Excel:
=SUMPRODUCT(array1, [array2], [array3], ...)
You can see the result with the SUMPRODUCT function resulting with the sum of products of quantity with the price for each item.
Example 5: Use the ABS Function
The ABS function in Excel removes the sign from a number. It calculates the absolute value of any given number.
Syntax for the ABS function:
=ABS(number)
In the output column, you can see the absolute value of each number in the input column, calculated with the ABS function.
Example 6: Implement the BASE Function
The BASE function in Excel converts a number of a certain base into a text version of another base.
Syntax for BASE function:
=BASE(Number, Radix, [Min_length])
Example 7: Employ the CEILING Function
The CEILING function in Excel is used to round up a numerical value to the nearest integer or the nearest multiple of significance.
The syntax with the CEILING function:
=CEILING(number, significance)
The result shows each number rounded up to the nearest multiple of significance in the output column, including a syntax error for invalid input.
Example 8: Use the COS Function
The COS function in Excel calculates the cosine of an angle. The unit of the angle must be in radians.
If you have angles in degrees, multiply the angle by PI()/180 to convert it into radians. Alternatively, you can use the RADIANS function to turn angles in degrees into angles in radians.
The syntax for this function is below:
=COS(number)
In the output column, you can see the cosine of each angle given in radians.
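As a quick sanity check outside Excel, the same degree-to-radian conversion can be mirrored in Python (an illustration only, not an Excel feature):

```python
import math

deg = 60
rad = deg * math.pi / 180   # same conversion as angle * PI()/180 in Excel
cos_value = math.cos(rad)   # ~0.5, matching =COS(RADIANS(60))
```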
Example 9: Application of FLOOR Function
The FLOOR function in Excel is used to round down a numerical value to the nearest multiple of significance.
Syntax with the FLOOR function:
=FLOOR(number, significance)
In this output, you can see two types of syntax errors along with the values rounded down to the nearest multiple of significance in Excel.
Example 10: Utilize the INT Function
The INT function in Excel rounds down a numerical value to the nearest integer number. It removes the fraction part of a decimal fraction number. Thus, the output becomes a rounded-down version of
the input number.
Syntax of the INT function:
=INT(number)
You can see the application of the INT function showing the nearest integer number of the input numbers.
Example 11: Use the MOD Function
The MOD function in Excel calculates the remainder when a numerical value (dividend) is divided by another one (divisor).
See the syntax for the MOD function:
=MOD(number, divisor)
The output shows the remainder using the MOD function for numbers with the divisor as input.
Example 12: Apply the MROUND Function
The MROUND function in Excel rounds off a numerical value to the nearest multiple of a given number. It rounds up or down depending on the distance of the nearest multiple.
Syntax using the MROUND function:
=MROUND(number, multiple)
See the screenshot. It shows each input rounded to the nearest multiple of the given number.
Example 13: Execute the RAND Function
The RAND function in Excel generates a random positive number in the range 0 ≤ x < 1.
Syntax with the RAND function:
=RAND()
The output shows random numbers with the RAND function.
Example 14: Employ the RANDBETWEEN Function
The RANDBETWEEN function in Excel generates a random integer number between a top value and a bottom value.
The syntax for the above function is,
=RANDBETWEEN(bottom, top)
The result shows random integers between 5 and 50 generated in Excel with the RANDBETWEEN function.
Example 15: Apply the ROUND Function
The ROUND function in Excel is used to round off the numerical values. It can round off numbers to specific values. It can also round off the decimal fraction numbers to a specific number of places
after the decimal point.
Syntax with ROUND function:
=ROUND(number, num_digits)
The image displays the given numbers rounded to the specified number of digits.
Example 16: Use the ROUNDDOWN Function
The ROUNDDOWN function in Excel is used to round down a numerical value to a specific decimal place.
Syntax of the ROUNDDOWN function:
=ROUNDDOWN(number, num_digits)
The output column shows the ROUNDDOWN function applied to different examples in Excel.
Example 17: Employ the ROUNDUP Function
The ROUNDUP function in Excel is used to round up a numerical value to a specific decimal place.
Syntax using the ROUNDUP function,
=ROUNDUP(number, num_digits)
This result shows the output of rounded up in the desired decimal places in Excel.
Example 18: Utilize the SIN Function
The SIN function in Excel calculates the sine of an angle. The unit of the angle must be in radians.
If you have angles in degrees, multiply the angle by PI()/180 to convert it into radians. Alternatively, you can use the RADIANS function to turn angles in degrees into angles in radians.
Use the syntax to apply the SIN function:
=SIN(number)
The output stores the result of the SIN function in Excel.
Example 19: Implementation of SQRT Function
The SQRT function in Excel calculates the positive square root of a positive input number.
SQRT syntax:
=SQRT(number)
This image shows the output of the square root of the positive numbers but displays an error for negative numbers in Excel.
Example 20: Use the TRUNC Function
The TRUNC function in Excel is used to remove specific digits from the fraction part of a number.
Syntax with TRUNC function,
=TRUNC(number, [num_digits])
The function removes the specified digits and keeps the rest of the number, as shown in the following image.
Example 21: Apply the SUBTOTAL Function
The SUBTOTAL function in Excel can calculate summation, average, count, maximum, minimum, multiplication, etc. within a specified sub-range. Its characteristics are controlled by its first argument
named function_num. The value of the function_num can be either 1-11 or 101-111.
The syntax for the SUBTOTAL function:
=SUBTOTAL(function_num, ref1, [ref2], ...)
In this section, the SUBTOTAL function stores the result according to the chosen function_num in Excel.
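A rough Python analogue of the function_num dispatch may make the idea concrete (a sketch covering only a few of the 1-11 codes; in Excel, the 101-111 variants additionally ignore hidden rows):

```python
import statistics

# Map a few SUBTOTAL function_num codes to their operations
FUNCS = {
    1: statistics.mean,  # AVERAGE
    4: max,              # MAX
    5: min,              # MIN
    9: sum,              # SUM
}

def subtotal(function_num, values):
    return FUNCS[function_num](values)

subtotal(9, [10, 20, 30])  # 60
```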
One of the most used function categories in Microsoft Excel is the math and trigonometry function category. Here, I tried to discuss the most used 21 functions of the category. I hope this blog will
help you learn all these functions. Thanks.
Frequently Asked Questions
Can Excel solve functions?
Yes, Excel can solve functions by using built-in mathematical and trigonometric functions. Users can input formulas into cells to perform various calculations, such as addition, subtraction,
multiplication, division, and more. Excel’s functions enable users to automate complex mathematical tasks, making it a powerful tool for data analysis and manipulation. Additionally, Excel supports
solving equations and performing advanced calculations, allowing users to streamline their workflow and derive valuable insights from their data.
How do you show math in Excel?
To show math in Excel, you can use mathematical functions and operators within cells. Enter a formula by starting with an equal sign (=) followed by the desired mathematical expression. For example,
to add numbers in cells A1 and B1, use the formula =A1+B1. Excel supports various functions (e.g., SUM, AVERAGE) and operators (+, -, *, /) for diverse mathematical tasks. Ensure correct cell
references and use parentheses to control the order of operations. Excel dynamically updates results as data changes, providing a flexible and efficient way to showcase mathematical calculations
within your spreadsheet.
|
{"url":"https://excelgraduate.com/math-and-trigonometry-functions-in-excel/","timestamp":"2024-11-08T12:18:12Z","content_type":"text/html","content_length":"167326","record_id":"<urn:uuid:480dfa88-6889-4dfd-8acb-c05258ad17b8>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00653.warc.gz"}
|
Grade 7 Motion and Time Worksheets - WorkSheets Buddy
Grade 7 Motion and Time Worksheets
A. Answer the following questions in short:
1. Classify the following as motion along a straight line, circular or oscillatory motion:
(i) Motion of your hands while running.
(ii) Motion of a horse pulling a cart on a straight road.
(iii) Motion of a child in a merry-go-round.
(iv) Motion of a child on a see-saw.
(v) Motion of the hammer of an electric bell.
(vi) Motion of a train on a straight bridge.
2. A simple pendulum takes 32 seconds to complete 20 oscillations. What is the time period of the pendulum?
3. The distance between two stations is 240 km. A train takes 4 hours to cover this distance. Calculate the speed of the train.
4. The odometer of a car reads 57321.0 km when the clock shows the time 08:30 AM. What is the distance moved by the car, if at 08:50 AM, the odometer reading has changed to 57336.0 km? Calculate the
speed of the car in km/min during this time. Express the speed in km/h also.
5. Salma takes 15 minutes from her house to reach her school on a bicycle. If the bicycle has a speed of 2 m/s, calculate the distance between her house and the school.
6. Show the shape of the distance-time graph for the motion in the following cases:
(i) A car moving with a constant speed.
(ii) A car parked on a side road.
7. Suppose the two photographs, shown in fig. 13.1 and fig. 13.2 (in NCERT textbook) had been taken at an interval of 10 seconds. If a distance of 100 metres is shown by 1 cm in these photographs,
calculate the speed of the blue car.
8. Adjoining figure shows the distancetime graph for the motion of two vehicles A and B. Which one of them is moving faster?
9. Why are standard units used in measurements?
10. How time was measured when pendulum clocks were not available?
11. Why do you think accurate measurements of time became possible much after accurate measurement of length and mass?
B. Which of the following are not correct?
(i) The basic unit of time is second.
(ii) Every object moves with a constant speed.
(iii) Distances between two cities are measured in kilometres.
(iv) The time period of a given pendulum is not constant.
(v) The speed of a train is expressed in m/h.
C. Tick (✓) the correct option:
1. Which of the following relations is correct ?
(a) Speed = Distance × Time
(b) Speed = $\frac{\text { Distance }}{\text { Time }}$
(c) km/h = $\frac{\text { Time }}{\text { Distance }}$
(d) m/s = $\frac{1}{\text { Distance } \times \text { Time }}$
2. The basic unit of speed is:
(a) km/min
(b) m/min
(c) km/h
(d) m/s
3. A car moves with a speed of 40 km/h for 15 minutes and then with a speed of 60 km/h for the next 15 minutes. The total distance covered by the car is:
(a) 100 km
(b) 25 km
(c) 15 km
(d) 10 km
4. Which of the following distance-time graphs shows a truck moving with speed which is not constant?
D. Fill in the blanks:
1. Speed is a ………………. quantity.
2. The distance covered by a vehicle is measured by ………………. .
3. A device which measures time by the flow of sand is ……………… .
4. If the distance-time graph is a curved line then its speed is ………………… .
E. Match the following:
‘A’ │ ‘B’
1. Periodic motion │ a. Twenty-four hours
2. Time │ b. Non-uniform speed
3. Pendulum │ c. Gap between two events
4. Solar day │ d. Movement of earth on its axis
5. Zig-zag graph │ e. A heavy mass suspended from string
F. Identify the type of motion in each case:
G. Speeds of some living organisms are given in the table given below. You can calculate the speeds in m/s yourself:
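For anyone checking their work, the numeric questions in part A can be verified with a few lines of arithmetic (the values below are taken straight from the question text):

```python
# Q2: time period = total time / number of oscillations
period = 32 / 20                              # 1.6 s

# Q3: speed = distance / time
train_speed = 240 / 4                         # 60 km/h

# Q4: odometer difference over 20 minutes
speed_km_per_min = (57336.0 - 57321.0) / 20   # 0.75 km/min
speed_km_per_h = speed_km_per_min * 60        # 45 km/h

# Q5: distance = speed * time (2 m/s for 15 minutes)
distance_m = 2 * 15 * 60                      # 1800 m
```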
|
{"url":"https://www.worksheetsbuddy.com/grade-7-motion-and-time-worksheets/","timestamp":"2024-11-14T11:59:17Z","content_type":"text/html","content_length":"156723","record_id":"<urn:uuid:17797312-b47d-4b5c-81f7-9b2336fd2c6c>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00173.warc.gz"}
|
YBC 7289
The famous 'root(2)' tablet from the Yale Babylonian Collection.
This tablet is a round school tablet of unknown provenance from the Old Babylonian period. It has a picture of a square with both diagonals drawn in. On one side of the square is written the number
30, along one of the diagonals is the number 1,24,51,10 and below it is 42,25,35.
It is easy to see that 30 times 1,24,51,10 is 42,25,35 (or, recalling that the reciprocal of 30 is 2, that 42,25,35 times 2 is 1,24,51,10). From the positioning of the numbers, the natural
interpretation to make is that a square with side of length 30 (or 1/2) has diagonal of length 42,25,35. This means that the number 1,24,51,10 must be the 'coefficient of the diagonal of a square'
and, indeed we do have an Old Babylonian coefficient list that has this number (see MCT text Ue (YBC 7243)).
We know that the ratio of the side to diagonal in a square is 1 to the square root of 2. Since root(2) is irrational, it cannot be expressed as a finite sexagesimal number, so 1,24,51,10 can only be
approximate. In fact, the square of 1,24,51,10 is 1,59,59,59,38,1,40, a remarkably good approximation to 2.
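The quality of the approximation is easy to verify by converting the sexagesimal digits to decimal (a modern check, not part of the tablet's own mathematics):

```python
# Sexagesimal digits of the Babylonian value 1;24,51,10
digits = [1, 24, 51, 10]
value = sum(d / 60**i for i, d in enumerate(digits))
# value is about 1.4142130, while sqrt(2) is about 1.4142136,
# so value squared falls short of 2 by only a couple of millionths
```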
Why choose a side of 30 in a school exercise on diagonals? Because it is the only choice that makes the diagonal of the square (42,25,35) equal to the reciprocal of the coefficient (1,24,51,10). That
is, 1,24,51,10 is root(2) and 42,25,35 is 1/root(2). Given the importance the Babylonians attached to reciprocals, this can hardly be a coincidence. It is a nice exercise in algebra to see why you
want a square of side 1/2 rather than 1.
For a recent analysis of how Mesopotamian scribes might have determined the approximation used, see Fowler, D.H. and Robson, E.R. (1998). 'Square root approximations in Old Babylonian mathematics:
YBC 7289 in context.' Historia Mathematica 25, 366-378.
Some excellent large-size photographs of the tablet along with some commentary and analysis are available on Bill Casselman's website. I encourage you to visit there.
I thank Professor W.W. Hallo for permission to use the image of the tablet and Professor A. Aaboe for permission to reproduce his copy.
Go to Mesopotamian Mathematics.
Last modified: 18 September 2006
Duncan J. Melville
|
{"url":"https://myslu.stlawu.edu/~dmel/mesomath/tablets/YBC7289.html","timestamp":"2024-11-09T20:27:10Z","content_type":"text/html","content_length":"3686","record_id":"<urn:uuid:50977324-abc3-4ff5-bb43-2844b7957de1>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00078.warc.gz"}
|
How to test multiple variables for equality against a single value?
Testing Multiple Variables for Equality Against a Single Value: A Guide for Efficient Code
Often in programming, you need to check if multiple variables hold the same value. This is common when validating user input, comparing data, or controlling program flow. While you could write
individual if statements for each variable, a more efficient and elegant solution involves testing all variables against a single value simultaneously.
Let's illustrate this with a simple scenario: Imagine you're building a game where players need to guess a secret number. You have three variables (guess1, guess2, guess3) representing player
guesses, and a variable secret_number storing the correct answer. We want to determine if any of the guesses match the secret number.
Here's a naive way to achieve this using multiple if statements:
guess1 = 5
guess2 = 7
guess3 = 3
secret_number = 5

if guess1 == secret_number:
    print("Guess 1 is correct!")
elif guess2 == secret_number:
    print("Guess 2 is correct!")
elif guess3 == secret_number:
    print("Guess 3 is correct!")
else:
    print("No guess is correct.")
This works, but it's repetitive and becomes cumbersome with more variables. A more elegant approach is to leverage the power of logical operators (and, or).
The Power of Logical Operators
Let's rewrite our code using a single if statement with logical operators:
guess1 = 5
guess2 = 7
guess3 = 3
secret_number = 5

if guess1 == secret_number or guess2 == secret_number or guess3 == secret_number:
    print("At least one guess is correct!")
else:
    print("No guess is correct.")
Here, we use the or operator to check if at least one of the conditions (guess1 == secret_number, guess2 == secret_number, guess3 == secret_number) is True. This significantly reduces the code
complexity and enhances readability.
Extending the Concept: Checking for Exact Matches
What if we want to determine if all three guesses match the secret number? We can use the and operator in this case.
guess1 = 5
guess2 = 5
guess3 = 5
secret_number = 5

if guess1 == secret_number and guess2 == secret_number and guess3 == secret_number:
    print("All guesses are correct!")
else:
    print("Not all guesses are correct.")
Here, the if statement only evaluates to True if all the conditions are True.
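For larger numbers of variables, Python's membership test and the built-in any()/all() functions express the same checks without chaining operators (a common idiom, though not covered in the article above):

```python
guess1, guess2, guess3 = 5, 7, 3
secret_number = 5

# Membership test: True if at least one variable equals the value
at_least_one = secret_number in (guess1, guess2, guess3)

# any()/all() generalize to an arbitrary list of variables
guesses = [guess1, guess2, guess3]
any_match = any(g == secret_number for g in guesses)   # True
all_match = all(g == secret_number for g in guesses)   # False
```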
Adapting to Different Scenarios
The principles demonstrated above apply to various programming languages and scenarios. For example, you might want to check if multiple user inputs satisfy certain criteria.
Example: Imagine you're validating a user's password. You want to ensure that it's at least 8 characters long and contains at least one uppercase letter, one lowercase letter, and one digit. You
could use logical operators to check these conditions simultaneously.
Key takeaway: Using logical operators to test multiple variables against a single value is a more efficient and readable approach than using multiple if statements. It makes your code cleaner, easier
to understand, and adaptable to complex scenarios.
|
{"url":"https://laganvalleydup.co.uk/post/how-to-test-multiple-variables-for-equality-against-a","timestamp":"2024-11-07T10:01:14Z","content_type":"text/html","content_length":"83230","record_id":"<urn:uuid:3045cc5b-5c62-4ad4-8355-55c4c83733d9>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00868.warc.gz"}
|
Students will use distance and perimeter on the coordinate plane to determine which fence company offers the best price for fencing a yard. Depending on the level of the class, students can use
the Pythagorean Theorem or the distance formula.
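The core computation students perform can be sketched as follows (the coordinates here are hypothetical, not taken from the activity itself):

```python
import math

# Hypothetical endpoints of one side of the fence on the coordinate plane
a = (1, 2)
b = (4, 6)

# Distance formula, equivalent to the Pythagorean theorem:
# sqrt((x2 - x1)^2 + (y2 - y1)^2)
dx, dy = b[0] - a[0], b[1] - a[1]
side_length = math.hypot(dx, dy)   # sqrt(3**2 + 4**2) = 5.0
```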
Remix of:
Middle School, High School
Grade 8, Grade 9
Material Type:
Activity/Lab, Assessment, Formative Assessment, Homework/Assignment
Date Added:
Media Format:
Downloadable docs, Text/HTML
PATRICIA NORTON Evaluation
NC.Math.8.G.8: Superior (3)
This activity uses the first standard in order to plot points on a coordinate plane to identify the fencing of the house. Use of the Pythagorean theorem allows students to find the distance between
the trees in order to find perimeter.
PATRICIA NORTON Evaluation
Usability: Strong (2)
This resource is fairly easy to understand. Some students may need hints of a compass rose to be reminded how to plot the points to represent the fence. As well, they may require a hint of using
Pythagorean theorem in order to find the distance between the trees.
PATRICIA NORTON Evaluation
Appropriateness: Superior (3)
I think this activity is very appropriate for real world examples for students. It is a real life situation to know how to find the perimeter for fencing around a house and use that information to
calculate pricing.
Kathleen Rogers Evaluation
Purpose: Very Weak (0)
This resource does not have valuable technology features, but can be remixed to include different technological features to enhance content and instruction.
Kathleen Rogers Evaluation
Reliability: Superior (3)
This resource is reliable on all platforms since it is a word document and doesn't have any technological features. However, it can be remixed to include technological features that can be reliable
on all platforms.
Achieve OER
Average Score (3 Points Possible)
Degree of Alignment: 3 (1 user)
Focus: N/A
Engagement: N/A
Evaluation: N/A
Accuracy: N/A
Adequacy: N/A
Appropriateness: 3 (1 user)
Purpose: 0 (1 user)
Reliability: 3 (1 user)
Accessibility: N/A
Motivation: N/A
Clarity: N/A
Usability: 2 (1 user)
|
{"url":"https://goopennc.oercommons.org/authoring/1346-fencing","timestamp":"2024-11-04T13:33:09Z","content_type":"text/html","content_length":"61831","record_id":"<urn:uuid:180bc705-75c3-47b5-afb8-2ffd3a3f75d6>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00747.warc.gz"}
|
Our Additional Mathematics lessons are an advanced course that extends mathematical concepts beyond the standard curriculum. Students delve into topics such as calculus and matrices, enhancing their
analytical and problem-solving skills. Algebraic manipulations and functions are explored in depth, providing a foundation for higher-level mathematics. The curriculum emphasizes applications in
real-world scenarios, fostering a practical understanding of mathematical concepts. Trigonometry is extended to include identities and applications, while statistics and probability are covered to
provide a well-rounded mathematical education. Additional Mathematics prepares students for further studies in STEM fields, enhancing their quantitative abilities.
|
{"url":"https://lms.aiseasy.com/courses/s3-a-math-momentum-coaching-2/","timestamp":"2024-11-09T09:04:56Z","content_type":"text/html","content_length":"133143","record_id":"<urn:uuid:a253cd96-4e1c-419a-b9e7-37150e2f66b5>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00707.warc.gz"}
|