content stringlengths 86 994k | meta stringlengths 288 619 |
|---|---|
inear mixed-effects model
Linear mixed-effects model
A LinearMixedModel object represents a model of a response variable with fixed and random effects. It comprises data, a model description, fitted coefficients, covariance parameters, design matrices,
residuals, residual plots, and other diagnostic information for a linear mixed-effects model. You can predict model responses with the predict function and generate random data at new design points
using the random function.
Create a LinearMixedModel model using fitlme or fitlmematrix. You can fit a linear mixed-effects model using fitlme(tbl,formula) if your data is in a table or dataset array. Alternatively, if your
model is not easily described using a formula, you can create matrices to define the fixed and random effects, and fit the model using fitlmematrix(X,y,Z,G)
Coefficient Estimates
Coefficients — Fixed-effects coefficient estimates
dataset array
Fixed-effects coefficient estimates and related statistics, stored as a dataset array containing the following fields.
Name Name of the term.
Estimate Estimated value of the coefficient.
SE Standard error of the coefficient.
tStat t-statistics for testing the null hypothesis that the coefficient is equal to zero.
DF Degrees of freedom for the t-test. Method to compute DF is specified by the 'DFMethod' name-value pair argument. Coefficients always uses the 'Residual' method for 'DFMethod'.
pValue p-value for the t-test.
Lower Lower limit of the confidence interval for coefficient. Coefficients always uses the 95% confidence level, i.e.'alpha' is 0.05.
Upper Upper limit of confidence interval for coefficient. Coefficients always uses the 95% confidence level, i.e.'alpha' is 0.05.
You can change 'DFMethod' and 'alpha' while computing confidence intervals for or testing hypotheses involving fixed- and random-effects, using the coefCI and coefTest methods.
CoefficientCovariance — Covariance of the estimated fixed-effects coefficients
p-by-p matrix
Covariance of the estimated fixed-effects coefficients of the linear mixed-effects model, stored as a p-by-p matrix, where p is the number of fixed-effects coefficients.
You can display the covariance parameters associated with the random effects using the covarianceParameters method.
Data Types: double
CoefficientNames — Names of the fixed-effects coefficients
1-by-p cell array of character vectors
Names of the fixed-effects coefficients of a linear mixed-effects model, stored as a 1-by-p cell array of character vectors.
Data Types: cell
NumCoefficients — Number of fixed-effects coefficients
positive integer value
Number of fixed-effects coefficients in the fitted linear mixed-effects model, stored as a positive integer value.
Data Types: double
NumEstimatedCoefficients — Number of estimated fixed-effects coefficients
positive integer value
Number of estimated fixed-effects coefficients in the fitted linear mixed-effects model, stored as a positive integer value.
Data Types: double
Fitting Method
FitMethod — Method used to fit the linear mixed-effects model
ML | REML
Method used to fit the linear mixed-effects model, stored as either of the following.
• ML, if the fitting method is maximum likelihood
• REML, if the fitting method is restricted maximum likelihood
Data Types: char
Input Data
Formula — Specification of the fixed- and random-effects terms, and grouping variables
Specification of the fixed-effects terms, random-effects terms, and grouping variables that define the linear mixed-effects model, stored as an object.
For more information on how to specify the model to fit using a formula, see Formula.
NumObservations — Number of observations
positive integer value
Number of observations used in the fit, stored as a positive integer value. This is the number of rows in the table or dataset array, or the design matrices minus the excluded rows or rows with NaN
Data Types: double
NumPredictors — Number of predictors
positive integer value
Number of variables used as predictors in the linear mixed-effects model, stored as a positive integer value.
Data Types: double
NumVariables — Total number of variables
positive integer value
Total number of variables including the response and predictors, stored as a positive integer value.
• If the sample data is in a table or dataset array tbl, NumVariables is the total number of variables in tbl including the response variable.
• If the fit is based on matrix input, NumVariables is the total number of columns in the predictor matrix or matrices, and response vector.
NumVariables includes variables, if there are any, that are not used as predictors or as the response.
Data Types: double
ObservationInfo — Information about the observations
Information about the observations used in the fit, stored as a table.
ObservationInfo has one row for each observation and the following four columns.
Weights The value of the weighted variable for that observation. Default value is 1.
Excluded true, if the observation was excluded from the fit using the 'Exclude' name-value pair argument, false, otherwise. 1 stands for true and 0 stands for false.
true, if the observation was excluded from the fit because any response or predictor value is missing, false, otherwise.
Missing values include NaN for numeric variables, empty cells for cell arrays, blank rows for character arrays, and the <undefined> value for categorical arrays.
Subset true, if the observation was used in the fit, false, if it was not used because it is missing or excluded.
Data Types: table
ObservationNames — Names of observations
cell array of character vectors
Names of observations used in the fit, stored as a cell array of character vectors.
• If the data is in a table or dataset array, tbl, containing observation names, ObservationNames has those names.
• If the data is provided in matrices, or a table or dataset array without observation names, then ObservationNames is an empty cell array.
Data Types: cell
PredictorNames — Names of predictors
cell array of character vectors
Names of the variables that you use as predictors in the fit, stored as a cell array of character vectors that has the same length as NumPredictors.
Data Types: cell
ResponseName — Names of response variable
character vector
Name of the variable used as the response variable in the fit, stored as a character vector.
Data Types: char
Variables — Variables
Variables, stored as a table.
• If the fit is based on a table or dataset array tbl, then Variables is identical to tbl.
• If the fit is based on matrix input, then Variables is a table containing all the variables in the predictor matrix or matrices, and response variable.
Data Types: table
VariableInfo — Information about the variables
Information about the variables used in the fit, stored as a table.
VariableInfo has one row for each variable and contains the following four columns.
Class Class of the variable ('double', 'cell', 'nominal', and so on).
Value range of the variable.
Range • For a numerical variable, it is a two-element vector of the form [min,max].
• For a cell or categorical variable, it is a cell or categorical array containing all unique values of the variable.
true, if the variable is a predictor in the fitted model.
false, if the variable is not in the fitted model.
true, if the variable has a type that is treated as a categorical predictor, such as cell, logical, or categorical, or if it is specified as categorical by the 'Categorical' name-value
IsCategorical pair argument of the fit method.
false, if it is a continuous predictor.
Data Types: table
VariableNames — Names of the variables
cell array of character vectors
Names of the variables used in the fit, stored as a cell array of character vectors.
• If sample data is in a table or dataset array tbl, VariableNames contains the names of the variables in tbl.
• If sample data is in matrix format, then VariableInfo includes variable names you supply while fitting the model. If you do not supply the variable names, then VariableInfo contains the default
Data Types: cell
Summary Statistics
DFE — Residual degrees of freedom
positive integer value
Residual degrees of freedom, stored as a positive integer value. DFE = n – p, where n is the number of observations, and p is the number of fixed-effects coefficients.
This corresponds to the 'Residual' method of calculating degrees of freedom in the fixedEffects and randomEffects methods.
Data Types: double
LogLikelihood — Maximized log or restricted log likelihood
scalar value
Maximized log likelihood or maximized restricted log likelihood of the fitted linear mixed-effects model depending on the fitting method you choose, stored as a scalar value.
Data Types: double
ModelCriterion — Model criterion
dataset array
Model criterion to compare fitted linear mixed-effects models, stored as a dataset array with the following columns.
AIC Akaike Information Criterion
BIC Bayesian Information Criterion
Loglikelihood Log likelihood value of the model
Deviance –2 times the log likelihood of the model
If n is the number of observations used in fitting the model, and p is the number of fixed-effects coefficients, then for calculating AIC and BIC,
• The total number of parameters is nc + p + 1, where nc is the total number of parameters in the random-effects covariance excluding the residual variance
• The effective number of observations is
□ n, when the fitting method is maximum likelihood (ML)
□ n – p, when the fitting method is restricted maximum likelihood (REML)
MSE — ML or REML estimate
positive scalar value
ML or REML estimate, based on the fitting method used for estimating σ^2, stored as a positive scalar value. σ^2 is the residual variance or variance of the observation error term of the linear
mixed-effects model.
Data Types: double
Rsquared — Proportion of variability in the response explained by the fitted model
Proportion of variability in the response explained by the fitted model, stored as a structure. It is the multiple correlation coefficient or R-squared. Rsquared has two fields.
Ordinary R-squared value, stored as a scalar value in a structure. Rsquared.Ordinary = 1 – SSE./SST
R-squared value adjusted for the number of fixed-effects coefficients, stored as a scalar value in a structure.
Adjusted Rsquared.Adjusted = 1 – (SSE./SST)*(DFT./DFE),
where DFE = n – p, DFT = n – 1, and n is the total number of observations, p is the number of fixed-effects coefficients.
Data Types: struct
SSE — Sum of squared errors
positive scalar
Sum of squared errors, specified as a positive scalar. SSE is equal to the squared conditional residuals, that is
where y is the response vector and F is the fitted conditional response of the linear mixed-effects model. The conditional model has contributions from both fixed and random effects.
If the model was trained with observation weights, the sum of squares in the SSE calculation is the weighted sum of squares.
Data Types: double
SSR — Regression sum of squares
positive scalar
Regression sum of squares, specified as a positive scalar. SSR is the sum of squares explained by the linear mixed-effects regression, and is equal to the sum of the squared deviations between the
fitted values and the mean of the response.
SSR = sum((F – mean(y)).^2),
where F is the fitted conditional response of the linear mixed-effects model and y is the response vector. The conditional model has contributions from both fixed and random effects.
If the model was trained with observation weights, the sum of squares in the SSR calculation is the weighted sum of squares.
Data Types: double
SST — Total sum of squares
positive scalar
Total sum of squares, specified as a positive scalar.
For a linear mixed-effects model with an intercept, SST is calculated as
where SST is the total sum of squares, SSE is the sum of squared errors, and SSR is the regression sum of squares.
For a linear mixed-effects model without an intercept, SST is calculated as the sum of the squared deviations of the observed response values from their mean, that is
SST = sum((y – mean(y)).^2),
where y is the response vector.
If the model was trained with observation weights, the sum of squares in the SST calculation is the weighted sum of squares.
Data Types: double
Object Functions
anova Analysis of variance for linear mixed-effects model
coefCI Confidence intervals for coefficients of linear mixed-effects model
coefTest Hypothesis test on fixed and random effects of linear mixed-effects model
compare Compare linear mixed-effects models
covarianceParameters Extract covariance parameters of linear mixed-effects model
designMatrix Fixed- and random-effects design matrices
fitted Fitted responses from a linear mixed-effects model
fixedEffects Estimates of fixed effects and related statistics
partialDependence Compute partial dependence
plotPartialDependence Create partial dependence plot (PDP) and individual conditional expectation (ICE) plots
plotResiduals Plot residuals of linear mixed-effects model
predict Predict response of linear mixed-effects model
random Generate random responses from fitted linear mixed-effects model
randomEffects Estimates of random effects and related statistics
residuals Residuals of fitted linear mixed-effects model
response Response vector of the linear mixed-effects model
Random Intercept Model with Categorical Predictor
Load the sample data.
The flu dataset array has a Date variable, and 10 variables containing estimated influenza rates (in 9 different regions, estimated from Google® searches, plus a nationwide estimate from the Center
for Disease Control and Prevention, CDC).
To fit a linear-mixed effects model, your data must be in a properly formatted dataset array. To fit a linear mixed-effects model with the influenza rates as the responses and region as the predictor
variable, combine the nine columns corresponding to the regions into an array. The new dataset array, flu2, must have the response variable, FluRate, the nominal variable, Region, that shows which
region each estimate is from, and the grouping variable Date.
flu2 = stack(flu,2:10,'NewDataVarName','FluRate',...
flu2.Date = nominal(flu2.Date);
Fit a linear mixed-effects model with fixed effects for region and a random intercept that varies by Date.
Because region is a nominal variable, fitlme takes the first region, NE, as the reference and creates eight dummy variables representing the other eight regions. For example, $I\left[MidAtl\right]$
is the dummy variable representing the region MidAtl. For details, see Dummy Variables.
The corresponding model is
$\begin{array}{l}{y}_{im}={\beta }_{0}+{\beta }_{1}I{\left[MidAtl\right]}_{i}+{\beta }_{2}I{\left[ENCentral\right]}_{i}+{\beta }_{3}I{\left[WNCentral\right]}_{i}+{\beta }_{4}I{\left[SAtl\right]}_{i}\
\ \phantom{\rule{1em}{0ex}}\phantom{\rule{1em}{0ex}}+{\beta }_{5}I{\left[ESCentral\right]}_{i}+{\beta }_{6}I{\left[WSCentral\right]}_{i}+{\beta }_{7}I{\left[Mtn\right]}_{i}+{\beta }_{8}I{\left[Pac\
right]}_{i}+{b}_{0m}+{\epsilon }_{im},\phantom{\rule{1em}{0ex}}m=1,2,...,52,\end{array}$
where ${y}_{im}$ is the observation $i$ for level $m$ of grouping variable Date, ${\beta }_{j}$, $j$ = 0, 1, ..., 8, are the fixed-effects coefficients, ${b}_{0m}$ is the random effect for level $m$
of the grouping variable Date, and ${\epsilon }_{im}$ is the observation error for observation $i$. The random effect has the prior distribution, ${b}_{0m}\sim N\left(0,{\sigma }_{b}^{2}\right)$ and
the error term has the distribution, ${\epsilon }_{im}\sim N\left(0,{\sigma }^{2}\right)$.
lme = fitlme(flu2,'FluRate ~ 1 + Region + (1|Date)')
lme =
Linear mixed-effects model fit by ML
Model information:
Number of observations 468
Fixed effects coefficients 9
Random effects coefficients 52
Covariance parameters 2
FluRate ~ 1 + Region + (1 | Date)
Model fit statistics:
AIC BIC LogLikelihood Deviance
318.71 364.35 -148.36 296.71
Fixed effects coefficients (95% CIs):
Name Estimate SE tStat DF pValue Lower Upper
{'(Intercept)' } 1.2233 0.096678 12.654 459 1.085e-31 1.0334 1.4133
{'Region_MidAtl' } 0.010192 0.052221 0.19518 459 0.84534 -0.092429 0.11281
{'Region_ENCentral'} 0.051923 0.052221 0.9943 459 0.3206 -0.050698 0.15454
{'Region_WNCentral'} 0.23687 0.052221 4.5359 459 7.3324e-06 0.13424 0.33949
{'Region_SAtl' } 0.075481 0.052221 1.4454 459 0.14902 -0.02714 0.1781
{'Region_ESCentral'} 0.33917 0.052221 6.495 459 2.1623e-10 0.23655 0.44179
{'Region_WSCentral'} 0.069 0.052221 1.3213 459 0.18705 -0.033621 0.17162
{'Region_Mtn' } 0.046673 0.052221 0.89377 459 0.37191 -0.055948 0.14929
{'Region_Pac' } -0.16013 0.052221 -3.0665 459 0.0022936 -0.26276 -0.057514
Random effects covariance parameters (95% CIs):
Group: Date (52 Levels)
Name1 Name2 Type Estimate Lower Upper
{'(Intercept)'} {'(Intercept)'} {'std'} 0.6443 0.5297 0.78368
Group: Error
Name Estimate Lower Upper
{'Res Std'} 0.26627 0.24878 0.285
The $p$-values 7.3324e-06 and 2.1623e-10 respectively show that the fixed effects of the flu rates in regions WNCentral and ESCentral are significantly different relative to the flu rates in region
The confidence limits for the standard deviation of the random-effects term, ${\sigma }_{b}$, do not include 0 (0.5297, 0.78368), which indicates that the random-effects term is significant. You can
also test the significance of the random-effects terms using the compare method.
The estimated value of an observation is the sum of the fixed effects and the random-effect value at the grouping variable level corresponding to that observation. For example, the estimated best
linear unbiased predictor (BLUP) of the flu rate for region WNCentral in week 10/9/2005 is
$\begin{array}{rl}{\underset{}{\overset{ˆ}{y}}}_{WNCentral,10/9/2005}& ={\underset{}{\overset{ˆ}{\beta }}}_{0}+{\underset{}{\overset{ˆ}{\beta }}}_{3}I\left[WNCentral\right]+{\underset{}{\overset{ˆ}
{b}}}_{10/9/2005}\\ & =1.2233+0.23687-0.1718\\ & =1.28837.\end{array}$
This is the fitted conditional response, since it includes contribution to the estimate from both the fixed and random effects. You can compute this value as follows.
beta = fixedEffects(lme);
[~,~,STATS] = randomEffects(lme); % Compute the random-effects statistics (STATS)
STATS.Level = nominal(STATS.Level);
y_hat = beta(1) + beta(4) + STATS.Estimate(STATS.Level=='10/9/2005')
You can simply display the fitted value using the fitted method.
F = fitted(lme);
F(flu2.Date == '10/9/2005' & flu2.Region == 'WNCentral')
Compute the fitted marginal response for region WNCentral in week 10/9/2005.
F = fitted(lme,'Conditional',false);
F(flu2.Date == '10/9/2005' & flu2.Region == 'WNCentral')
Linear Mixed-Effects Model with a Random Slope
Load the sample data.
Fit a linear mixed-effects model for miles per gallon (MPG), with fixed effects for acceleration, horsepower and the cylinders, and uncorrelated random-effect for intercept and acceleration grouped
by the model year. This model corresponds to
${MPG}_{im}={\beta }_{0}+{\beta }_{1}{Acc}_{i}+{\beta }_{2}HP+{b}_{0m}+{{b}_{1}}_{m}{Acc}_{im}+{\epsilon }_{im},\phantom{\rule{1em}{0ex}}m=1,2,3,$
with the random-effects terms having the following prior distributions:
${b}_{m}=\left(\begin{array}{l}{b}_{0m}\\ {b}_{1m}\end{array}\right)\sim N\left(0,\left(\begin{array}{cc}{\sigma }_{0}^{2}& {\sigma }_{0,1}\\ {\sigma }_{0,1}& {\sigma }_{1}^{2}\end{array}\right)\
where $m$ represents the model year.
First, prepare the design matrices for fitting the linear mixed-effects model.
X = [ones(406,1) Acceleration Horsepower];
Z = [ones(406,1) Acceleration];
Model_Year = nominal(Model_Year);
G = Model_Year;
Now, fit the model using fitlmematrix with the defined design matrices and grouping variables. Use the 'fminunc' optimization algorithm.
lme = fitlmematrix(X,MPG,Z,G,'FixedEffectPredictors',....
lme =
Linear mixed-effects model fit by REML
Model information:
Number of observations 392
Fixed effects coefficients 3
Random effects coefficients 26
Covariance parameters 4
y ~ Intercept + Acceleration + Horsepower + (Intercept + Acceleration | Model_Year)
Model fit statistics:
AIC BIC LogLikelihood Deviance
2202.9 2230.7 -1094.5 2188.9
Fixed effects coefficients (95% CIs):
Name Estimate SE tStat DF pValue Lower Upper
{'Intercept' } 50.064 2.3176 21.602 389 1.4185e-68 45.507 54.62
{'Acceleration'} -0.57897 0.13843 -4.1825 389 3.5654e-05 -0.85112 -0.30681
{'Horsepower' } -0.16958 0.0073242 -23.153 389 3.5289e-75 -0.18398 -0.15518
Random effects covariance parameters (95% CIs):
Group: Model_Year (13 Levels)
Name1 Name2 Type Estimate Lower Upper
{'Intercept' } {'Intercept' } {'std' } 3.72 1.5215 9.0954
{'Acceleration'} {'Intercept' } {'corr'} -0.8769 -0.98274 -0.33846
{'Acceleration'} {'Acceleration'} {'std' } 0.3593 0.19418 0.66483
Group: Error
Name Estimate Lower Upper
{'Res Std'} 3.6913 3.4331 3.9688
The fixed effects coefficients display includes the estimate, standard errors (SE), and the 95% confidence interval limits (Lower and Upper). The $p$-values for (pValue) indicate that all three
fixed-effects coefficients are significant.
The confidence intervals for the standard deviations and the correlation between the random effects for intercept and acceleration do not include zeros, hence they seem significant. Use the compare
method to test for the random effects.
Display the covariance matrix of the estimated fixed-effects coefficients.
ans = 3×3
5.3711 -0.2809 -0.0126
-0.2809 0.0192 0.0005
-0.0126 0.0005 0.0001
The diagonal elements show the variances of the fixed-effects coefficient estimates. For example, the variance of the estimate of the intercept is 5.3711. Note that the standard errors of the
estimates are the square roots of the variances. For example, the standard error of the intercept is 2.3176, which is sqrt(5.3711).
The off-diagonal elements show the correlation between the fixed-effects coefficient estimates. For example, the correlation between the intercept and acceleration is –0.2809 and the correlation
between acceleration and horsepower is 0.0005.
Display the coefficient of determination for the model.
ans = struct with fields:
Ordinary: 0.7866
Adjusted: 0.7855
The adjusted value is the R-squared value adjusted for the number of predictors in the model.
More About
In general, a formula for model specification is a character vector or string scalar of the form 'y ~ terms'. For the linear mixed-effects models, this formula is in the form 'y ~ fixed + (random1|
grouping1) + ... + (randomR|groupingR)', where fixed and random contain the fixed-effects and the random-effects terms.
Suppose a table tbl contains the following:
• A response variable, y
• Predictor variables, X[j], which can be continuous or grouping variables
• Grouping variables, g[1], g[2], ..., g[R],
where the grouping variables in X[j] and g[r] can be categorical, logical, character arrays, string arrays, or cell arrays of character vectors.
Then, in a formula of the form, 'y ~ fixed + (random[1]|g[1]) + ... + (random[R]|g[R])', the term fixed corresponds to a specification of the fixed-effects design matrix X, random[1] is a
specification of the random-effects design matrix Z[1] corresponding to grouping variable g[1], and similarly random[R] is a specification of the random-effects design matrix Z[R] corresponding to
grouping variable g[R]. You can express the fixed and random terms using Wilkinson notation.
Wilkinson notation describes the factors present in models. The notation relates to factors present in models, not to the multipliers (coefficients) of those factors.
Wilkinson Notation Factors in Standard Notation
1 Constant (intercept) term
X^k, where k is a positive integer X, X^2, ..., X^k
X1 + X2 X1, X2
X1*X2 X1, X2, X1.*X2 (elementwise multiplication of X1 and X2)
X1:X2 X1.*X2 only
- X2 Do not include X2
X1*X2 + X3 X1, X2, X3, X1*X2
X1 + X2 + X3 + X1:X2 X1, X2, X3, X1*X2
X1*X2*X3 - X1:X2:X3 X1, X2, X3, X1*X2, X1*X3, X2*X3
X1*(X2 + X3) X1, X2, X3, X1*X2, X1*X3
Statistics and Machine Learning Toolbox™ notation always includes a constant term unless you explicitly remove the term using -1. Here are some examples for linear mixed-effects model specification.
Formula Description
'y ~ X1 + X2' Fixed effects for the intercept, X1 and X2. This is equivalent to 'y ~ 1 + X1 + X2'.
'y ~ -1 + X1 + X2' No intercept and fixed effects for X1 and X2. The implicit intercept term is suppressed by including -1.
'y ~ 1 + (1 | g1)' Fixed effects for the intercept plus random effect for the intercept for each level of the grouping variable g1.
'y ~ X1 + (1 | g1)' Random intercept model with a fixed slope.
'y ~ X1 + (X1 | g1)' Random intercept and slope, with possible correlation between them. This is equivalent to 'y ~ 1 + X1 + (1 + X1|g1)'.
'y ~ X1 + (1 | g1) + (-1 + X1 | g1)' Independent random effects terms for intercept and slope.
'y ~ 1 + (1 | g1) + (1 | g2) + (1 | g1:g2)' Random intercept model with independent main effects for g1 and g2, plus an independent interaction effect.
Version History
Introduced in R2013b | {"url":"https://de.mathworks.com/help/stats/linearmixedmodel.html","timestamp":"2024-11-08T03:17:03Z","content_type":"text/html","content_length":"177624","record_id":"<urn:uuid:09a11bc6-7c04-4898-b588-a7b2c7a7eaac>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00119.warc.gz"} |
Teacherbot 28 Aug, 15:24:36
a) 15 minutes to 2 hours: To simplify this ratio, we need to convert both quantities to the same unit. Since there are 60 minutes in an hour, we can convert 2 hours to 2 * 60 = 120 minutes. So the
ratio becomes 15 minutes to 120 minutes. To simplify this ratio, we can divide both quantities by their greatest common divisor, which is 15. The simplified ratio is 1 minute to 8 minutes.
b) 125 ml : 1 litre: To simplify this ratio, we need to convert both quantities to the same unit. Since there are 1000 ml in a litre, we can convert 1 litre to 1 * 1000 = 1000 ml. So the ratio
becomes 125 ml to 1000 ml. To simplify this ratio, we can divide both quantities by their greatest common divisor, which is 125. The simplified ratio is 1 ml to 8 ml.
c) 9 days : 3 weeks: To simplify this ratio, we need to convert both quantities to the same unit. Since there are 7 days in a week, we can convert 3 weeks to 3 * 7 = 21 days. So the ratio becomes 9
days to 21 days. To simplify this ratio, we can divide both quantities by their greatest common divisor, which is 3. The simplified ratio is 3 days to 7 days.
d) 6 months : 4/3: To simplify this ratio, we need to convert the second quantity to the same unit as the first quantity. Since there are 12 months in a year, we can convert 4/3 to 4/3 * 12 = 16
months. So the ratio becomes 6 months to 16 months. To simplify this ratio, we can divide both quantities by their greatest common divisor, which is 2. The simplified ratio is 3 months to 8 months. | {"url":"https://teacherbot.io/contents/express-each-of-the-following-ratios-in-simplest-form-a-15-minutes-to-","timestamp":"2024-11-13T15:49:47Z","content_type":"text/html","content_length":"33470","record_id":"<urn:uuid:d75b934e-268b-426f-bcc1-f86bb31b522b>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00877.warc.gz"} |
Re: hough ellipse fit inaccurate?
4 Mar 2015 4 Mar '15
4:32 p.m.
Third, you could fit an ellipse using RANSAC. How does this approach work for you: http://stackoverflow.com/questions/28281742/fitting-a-circle-to-a-binary-ima...
On Mar 4, 2015, at 7:24 PM, Kevin Keraudren <kevin.keraudren@googlemail.com> wrote:
A second source of inaccuracy comes from your input ellipse: it is not a perfect ellipse because you drew it using anti-aliasing.
On Thu, Mar 5, 2015 at 12:21 AM, Kevin Keraudren <kevin.keraudren@googlemail.com> wrote: Hi Arno,
The first source of inaccuracy comes from your code, you need to round the values instead of truncating them:
#yc = int(best[1]) #xc = int(best[2]) #a = int(best[3]) #b = int(best[4])
yc = int(round(best[1])) xc = int(round(best[2])) a = int(round(best[3])) b = int(round(best[4]))
See resulting image attached.
Kind regards,
On Wed, Mar 4, 2015 at 11:49 PM, Arno Dietz <arnodietz86@googlemail.com> wrote:
Ok sorry. Here is my code:
from skimage import color from skimage.filter import canny from skimage.transform import hough_ellipse from skimage.draw import ellipse_perimeter from skimage import io from skimage.viewer import
ImageViewer # load image img = io.imread('ellipse.png') cimg = color.gray2rgb(img) # edges and ellipse fit edges = canny(img, sigma=0.1, low_threshold=0.55, high_threshold=0.8) result =
hough_ellipse(edges, accuracy=4, threshold=25, min_size=47, max_size=60) result.sort(order='accumulator') # Estimated parameters for the ellipse best = result[-1] yc = int(best[1]) xc = int(best
[2]) a = int(best[3]) b = int(best[4]) orientation = best[5] # Draw the ellipse on the original image cy, cx = ellipse_perimeter(yc, xc, a, b, orientation) cimg[cy, cx] = (0, 0, 255) # Draw the
edge (white) and the resulting ellipse (red) edges = color.gray2rgb(edges) edges[cy, cx] = (250, 0, 0) viewer = ImageViewer(edges) viewer.show()
I noticed, that the ellipse center is detected only in half pixel accuracy. Maybe this is the Problem? Is there a possibility to get the ellipse center with sub-pixel accuracy?
regards Arno
-- You received this message because you are subscribed to the Google Groups "scikit-image" group. To unsubscribe from this group and stop receiving emails from it, send an email to
scikit-image+unsubscribe@googlegroups.com. For more options, visit https://groups.google.com/d/optout.
-- You received this message because you are subscribed to the Google Groups "scikit-image" group. To unsubscribe from this group and stop receiving emails from it, send an email to
scikit-image+unsubscribe@googlegroups.com. For more options, visit https://groups.google.com/d/optout. | {"url":"https://mail.python.org/archives/list/scikit-image@python.org/message/5G72VQUAS367MAFNY2G3UQ2G5GQOXI6X/","timestamp":"2024-11-07T03:10:04Z","content_type":"text/html","content_length":"15369","record_id":"<urn:uuid:8179f976-a097-45ab-bf98-4629ee1f4230>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00348.warc.gz"} |
1. Introduction2. Materials and methods2.1. The dynamics2.2. Meanfield theory2.3. Simulation technique2.4. Experimental methods and data analysis3. Results3.1. Meanfield theory vs. network simulations3.2. Time course of retrieval3.3. Effects of long-term memory representations3.4. Performance4. DiscussionAuthor contributionsConflict of interest statementReferences
Human long-term memory capacity for names, facts, episodes and other aspects of our lives is practically unlimited. Yet recalling this information is often challenging, especially when no precise
cues are available. A striking example of this deficiency is provided by classical studies of free recall, where participants are asked to recall lists of unrelated words after a quick exposure
(Murdock, 1962; Kahana, 1996). Even for short lists of 5–10 words most of participants are unable to reproduce them without omissions (Murdock, 1960; Tulving, 1966).
Several influential models of recall were developed. Some of them were driven by the description of behavioral aspects (Glenberg and Swanson, 1986; Howard and Kahana, 1999; Davelaar et al., 2005;
Brown et al., 2007); while others were rooted in biological principles (Grossberg and Stone, 1986; Ruppin and Yeshurun, 1991; Wong et al., 1991; Hasselmo and Wyble, 1997; Verduzco-Flores et al.,
2012; Lansner et al., 2013).
According to the influential “search of associative memory” (SAM) model, items presented for recall acquire a set of mutual associations when stored temporarily in working memory buffers (Raaijmakers
and Shiffrin, 1980). These acquired associations are then used to retrieve words from memory. SAM can be fit to reproduce recall data with great precision (Raaijmakers and Shiffrin, 1981), but since
it has many parameters it cannot provide the first-principle explanation for very limited recall capacity observed in experiments. A recent model of memory retrieval (Romani et al., 2013; Katkov et
al., 2015) introduced the notion that long-term associations between items determined by overlaps between their neuronal representations in memory networks, rather than short-term associations
acquired during the experiment, are primarily responsible for recall process. With a simple phenomenological implementation of recall, this assumption results in a generic limit for the recall
capacity compatible with the data (Romani et al., 2013). Moreover, the neuronal representations determine the recall probability of different items (“easy” vs. “difficult” words) and the order of
their recall. In the current contribution, we develop a more realistic neural network model where recall is mediated by the sequential reactivation of neuronal ensembles encoding different items in
memory. We show existence of stable states of the network corresponding to the activation of neuronal ensembles encoding single memory state and to the activation of intersection of neuronal
ensembles encoding two or more memory states. We identify these different phases of the model with mean-field analysis of the network dynamics. We, further, show that the transitions between the
memories may be driven by periodic modulation of the feedback inhibition that pushes the network to oscillate between the attractor memory states and intersections between these states, as was
suggested in Romani et al. (2013). We identify these different phases of the model with mean-field analysis of the network dynamics. Based on this analysis we perform extensive numerical simulations
to characterize the recall behavior of the model. In addition, we modeled short-term associations between memory items formed during the acquisition and characterize their effects. Finally, we
systematically characterize the effects of neuronal noise on recall. The main goal of this paper is to present the network model of recall compatible with well-known features of free recall observed
over many years of research. Some of the predictions of the model are also tested against a large recent dataset of free recall that was collected and made available by the lab of Prof. Kahana from
University of Pennsylvania (see details in Section 2).
We consider a Hopfield neural network of N rate-neurons (Hopfield, 1984, see also Grossberg, 1988). The dynamics of neuron i is represented by the equation: τc˙i(t)=−ci(t)+∑j=1NJij·rj(t)+ξi(t), ri=
g(ci). where c, r are respectively the synaptic currents and the firing rates, J the connectivity matrix, each ξ[i] is an independent random variable having a gaussian distribution with mean zero
and variance ξ[0] and τ is a constant^1.
The gain function is: {g=(x+θ)γx+θ>0,g=0x+θ≤0. where θ > 0 is a threshold for the activation of a neuron while γ defines the gain and is constrained to γ < 1 for the gain function to be
Each of the P memory items is represented by bynary vectors of N bits: ημ∈{1..P}=100011101001..1001︸N neurons. where each bit has an indipendent random binary value, being 1 with probability f and
0 with probability 1 − f (Kanerva, 1988; Treves and Rolls, 1991). We use these vectors to define the connectivity matrix J according to the Hebbian rule (Tsodyks, 1989): Jij=κN(∑μ=1P(ηiμ−f)(ηjμ−f)−φ)
. where κ and φ are two parameters that respectively define the strength of excitation and the relative strength of inhibition in the network. When simulating the network, all parameters are held
constant except for the relative strength of the inhibition φ.We say that a particular memory is “recalled” when the corresponding memory pattern is active.
Memory μ is recalled if the average firing rate of neurons corresponding to memory μ (i such that ηiμ=1) is above the threshold value r[thresh]. This threshold is chosen so that two memories are
never recalled simultaneously. If in a given time interval, e.g., from time 0 to T, the state of the network was in memories μ[1], μ[2], μ[3].. at different times, we say that the network has
“retrieved” these memories in a time T.
A slight modification of the model allows to account for short-term associations as in the SAM model. For example, temporal contiguity is the tendency to recall neighboring presented items in
temporal proximity. To account for this effect we add a new term to the connectivity matrix J[ij]: Jij+−=Jij+δJij=Jij+J+∑μ=1P−1ηiμηjμ+1+J−∑μ=2Pηiμηjμ−1.
The new part δJ[ij] consists of two terms which respectively connect a given memory μ with the memories presented immediately before and after it (μ − 1 and μ + 1) (Sompolinsky and Kanter, 1986;
Griniasty et al., 1993). In doing so the memories are chained one to the other in the ‘forward’ and ‘backward’ direction with an asymmetry which depends on the values of J[+] and J[−].
We analyze the network in the absence of noise (ξ[0] = 0) and temporal contiguity (J[+] = J[−] = 0). To quantify the degree of memory activations we introduce the “overlaps” defined as in Amit and
Tsodyks (1991): {mμ(t)=1N∑i=1N(ηiμ−f)ri(t),μ∈{1..P}m0(t)=1N∑i=1Nri(t).
While m^0(t) measures the average firing rate in the network at time t, each m^μ(t) measures the difference between the average firing rate of neurons encoding memory μ and all other neurons: mμ(t)=
At a fix point of the network dynamics (Equation 2) the synaptic currents can be expressed via the values of the overlaps: ci=∑j=1NJijrj=∑μ=1Pκ((ηiμ−f)mμ−φm0), ri=g(ci); given by Equation (2) in
Equation (7). This shows that one can calculate r for each neuron i given the set of m′s. Pluggin Equation (10) into Equation (7) we obtain a a system of P + 1 equations for the overlaps m′s. The
solutions to such a system are the possible fixed points of the network. Consider a vector ηi∈{0,1}P representing the encoding of each memory item by neuron i. There are 2^P possible realizations of
vector η[i] that are denoted by a random vector v ∈ {0, 1}^P where each component is indipendent from any other being 1 with probability f and zero otherwise. Each realization of v identifies a
population of neurons. We say that neuron i belongs to a population v if η[i] = v that is ηiμ=vμ ∀μ. Furthermore, we say that a population v belongs to a memory μ if v^μ = 1 (Curti et al., 2004).
The cardinality of a vector is defined as |v|=∑μvμ.
The probability for each vector v is: Sv=(1−f)P−|v|·f|v|, while the synaptic current for each neuron in population v is: cv=∑ν=1Pκ((vν−f)mν−φm0).
The fixed point solutions can then be characterized in the limit N → ∞ in terms of these population vectors. Plugging Equation (10) into Equation (7) and summing up we obtain in the limit N → ∞: {mμ=
〈(vμ−f)·g(cv)〉vm0=〈g(cv)〉v. where the average can be expressed in terms of the probability S[v] as: {mμ=∑v(vμ−f)Sv·g(cv)m0=∑vSv·g(cv)
This system determines the fixed points of the network in the meanfield limit. It cannot be solved in general but for a given ansatz of the solution it is possible to determine the region, in the
parameter space, for its existence and stability. The type of solutions that we analyze are those that represent either a single memory or the intersection between memories. The correct ansatz for
these solutions are easily expressed in terms of the synaptic currents. A single memory solution is then defined by the following conditions:
the currents to each population v that belongs to the active memory μ are uniformly above threshold c[v] + θ > 0 if v^μ = 1;
the currents to each population that doesn't belong to the active memory μ are below threshold c[v] + θ < 0 if v^μ = 0;
This two conditions define our ansatz for a single memory state. From this definition it follows that in the state of single memory the the only overlap m different from zero is the one of the active
memory m^μ. Similarly we define the ansatz for the intersection between two or more memories. In this state only two overlaps m are different from zero. For each of these ansatz one can find its
region of existence and stability in parameter space. In such a region the solution is steady state of the system. A detailed theoretical analysis of these regions goes beyond the scope of this paper
and will be presented in a future publications.
To study the influence of finite size effects and noise on the dynamics of the network we simulate the dynamic of a network of N = 10^5 neurons. To achieve this goal we simplify the system in
Equation (2). This is a dimensionality reduction of the network that reduces the number of simulated units. All the neurons that have the same vector η[i] (i.e., are in the same population v such
that η[i] = v) can be described by a single unit. For these neurons the afferent connections given by the matrix J are identical. Each neuron receives the same input and projects equally on other
neurons. It is not possible to differentiate their activity except for the effect of the noise term ξ. But in Equation (2) we can average terms which share the same connections averaging also their
noise. For a given realization of the network we can write the fraction of neurons in a given population v as: Sv=1N×{numberofisuchthatηi=v}, which converges to the definition of Equation (12)
in the limit of N → ∞. Defining c[v](t), the averaging synaptic current c(t) for a neuron in population v at time t, it is then possible to write an equation for the dynamics of c[v](t). By summing
Equation (2) over all neurons which belong to the same population v we obtain: c˙v(t)=−cv(t)+∑wJ˜vw·Sw·g(cw(t))+ξ˜v(t), where ξ~v is a gaussian white noise with mean zero and amplitude ξ~v=
ξ0·Sv·N, while J~vw is given by: J˜vw=κN∑μ=1P((vμ−f)(wμ−f)−φ)+J+∑μ=1P−1vμwμ+1+J−∑μ=2Pvμwμ−1.
The vectors v and w are binary vectors of length P identifying different populations. The system of Equation (17) is a reduction of the original system of Equation (2), it has 2^P equations instead
of the N. In this reduction the only piece of information which is not accessible is the precise value of the firing rate of each single neuron. Only the average firing rate of the population it
belongs to is now accessible. The actual number of equations to simulate depends on the particular realization of the network given by the choice of η^μ∈{1..P}. Although in principle the system has 2
^P equations, in practice, due to the finite size of the network and its sparse connectivity, there are much less populations since S[v] = 0 for most v (Curti et al., 2004). The total number of
equations in the system will depend on N and f but will always be less than N, tending to N only for very large P. In this framework, for P = 16, we are able to simulate easily a large network of N =
10^5 neurons. Indeed taking f = 0.1, the number of equations to simulate drops from the 10^5 of the original system in Equation (2) to the ≈1000 of the reduced one of Equation (17).
Simulations are run according to Equation (17) employing the parameters in Table 1. The number of simulated networks is N[trials]. For each simulation the network is initialized in the state of a
single, randomly chosen memory μ. In this state all the populations v which belong to memory μ are initialized to a rate r[ini] while the others are initialized to a zero rate. In the model the
transitions between memories are triggered by oscillations of the variable φ. This oscillates sinusoidally between the values φ^max and φ^min. The oscillations have a period τ[o] which is much larger
than τ so that the network is undergoing an adiabatic process. Integrations of Equation (17) are performed with the Euler method with a time step of dt and the simulated interval is [0..T]. The total
number of cycles of oscillations is T∕τ[o].
Reference values for the parameters in the simulation.
Parameters and hyperparameters
Name Description Value
N Number of neurons 100,000
P Number of memories 16
f Sparsity 0.1
τ Decay time 0.01
κ Excitation parameter 13, 000
φ^max Max inhibition parameter 1.06
φ^min Min inhibition parameter 0.7
γ Gain function exponent 2/5
θ Gain function threshold 0
τ[o] Oscillation time 1
T[tot] Total time 450
dt Integration time step 0.001
J[+] Forward contiguity 1500
J[−] Backward contiguity 400
ξ[0] Noise variance 65
r[thresh] Recall threshold 15
N[trials] Number of trials 10, 000
r[ini] Initial rate 1
The data analyzed in this manuscript were collected in the lab of M. Kahana as part the Penn Electrophysiology of Encoding and Retrieval Study. Here we analyzed the results from the 141 participants
(age 17–30) who completed the first phase of the experiment, consisting of 7 experimental sessions. Participants were consented according the University of Pennsylvanias IRB protocol and were
compensated for their participation. Each session consisted of 16 lists of 16 words presented one at a time on a computer screen and lasted approximately 1.5 h. Each study list was followed by an
immediate free recall test. Words were drawn from a pool of 1638 words. For each list, there was a 1500 ms delay before the first word appeared on the screen. Each item was on the screen for 3000 ms,
followed by jittered 800–1200 ms inter-stimulus interval (uniform distribution). After the last item in the list, there was a 1200–1400 ms jittered delay, after which the participant was given 75 s
to attempt to recall any of the just-presented items. Only trials without errors (no intrusions and no repeated recalls of the same words) were used in the analysis.
We analyze this dataset to validate our model. We investigated several aspects of the dataset as described in Katkov et al. (2014, 2015). Here we show the plots concerning semantic similarity in
Figures 5B,D. Of all the trials we exclude those where items not belonging to the presented list were reported (intrusions) and those where at least one word was retrieved twice (repetitions). For
each list we then associate to each pair of words their LSA score as obtained from online datasets. We then consider the pairs formed by orderly associating two consecutively reported items. For each
of these pairs we obtain the transition rank by ranking the LSA score the pair among all the scores of the first item with any other word in the list. As there are 16 words the maximum rank is 15 and
the minimum is 1. This is the quantity shown on the x-axis of Figure 5B.
For each pair of consecutive reported items we compute the IRT by the difference of their times of retrieval. This is the quantity shown on the y-axis of Figure 5B vs. the LSA score of the same pair.
The main principle of recall that was suggested in Romani et al. (2013) is that externally generated control signal, expressed in periodic modulation of the strength of feedback inhibition, drives
the network to oscillate between two states; one state is characterized by activation of single attractors, which correspond to a recall of the corresponding item (Hasselmo and Wyble, 1997;
Gelbard-Sagiv et al., 2008; Romani et al., 2013); the second state is the intersection between pairs of attractors, which is a step toward transitions between different items. In this way each
retrieved item acts as an internal cue for the next one (Raaijmakers and Shiffrin, 1981). Here we use the meanfield analysis of the network (see Section 2) to confirm that these two state types are
indeed present. We identify the parameter regimes for their existence and stability. The meanfield theory greatly simplifies the analysis of the network by reducing the dynamics from that of single
neurons (Equation 2) to overlaps, which are variables that describe the degree to which the network state corresponds to one of the memory attractors (see Equations 7 and 15 in Section 2). In the
state of single attractors, only one overlap is positive while other ones are zeros. In the intersection states, pairs of overlaps are positive. We therefore use the meanfield equations that
determine the possible values of overlaps (Equation 15) to find solutions corresponding to the intersection of Q memories. These solutions are characterized by Q positive overlaps: m^1 = … = m^Q = m^
active. The overlaps have all the same values as all the active neurons in the intersection of Q memories fire at the same firing rate. The precise solution depends on the choice of the gain function
in Equation (3). For concreteness, we chose a saturating gain function with threshold, with the exponent of γ = 1∕2 that allows analytical solution. The solution to Equation (15) is {m0=12(k2f2Q
((f−1)2Q−φ)+k4f4Q(φ−(f−1)2Q)2+4θk2f2Q),mactive=(1−f)·m0,minactive=0 where m^active and m^inactive are respectively the value of the overlap for an active and inactive memory and m^0 denotes
the average activity of the network. f denotes the sparseness of memory representations, k scales the strength of the recurrent associative synapses and φ defines the relative strength of inhibition,
Figure 1A (see Section 2 for more details). The existence of these solutions requires the term in the square root to be positive, which results in the phase diagram shown in Figure 1B. Increasing the
relative strength of feedback inhibition, the network state goes from the regime with only single attractor states to the one where single attractor and intersection of pairs of attractors coexist.
More elaborated analysis of stability, which will be presented elsewhere, shows that these solutions are stable in the whole region of their existence, but the relative stability of single attractor
states relative to the intersection states is decreasing with the increase in φ.
Network architecture and Mean-field phase diagram. (A) Neurons in the network are connected through simmetric connections induced by hebbian learning. Homeostatic control is induced by the inhibition
strength determined by φ. (B) Mean-field phase diagram for the parameters κ and φ. The legend illustrates different phases. Circles denote a pool of neurons encoding a particular memory. For low
values of φ the single attractor solution is found, as φ is increased other solutions appear. Parameters values are according to Table 1.
Based on this analysis, we simulate the network while modulating the inhibition to cause the transitions between these two states (see Section 2 for details of simulations). We also add noise in
order to trigger the transitions to the intersections between two attractors when inhibition rises. To mimic the experimental protocol (see Section 2), we simulate multiple recall trials where random
samples of 16 items are selected for each trial. One sample epoch of simulations is shown in Figures 2A,B.
Neural network activity. (A) Activity of the attractors in the network. Different rows correspond to the average firing rate of different memories for 15 cycles of oscillation of φ. (B) Activity of
the attractors in the network. Each colored line correspond to the average firing rate of a different memory. (C) Details of the neuronal dynamics.
Each of the colored line in Figure 2B shows the average firing rate of neurons representing a specific memory. When one of these is above the threshold value of r[thresh] we regard the corresponding
memory as retrieved. We note that the precise sequence of retrieved items is not predictable for a given list of presented words, as it strongly depends on the first item being recalled (here assumed
to be chosen randomly) and is sensitive to noise.
The effect of the oscillations is to modulate the overall activity in such a way that at each cycle the state of the network can potentially move from one attractor to another. The details of the
underlying dynamics are shown in the plot of Figure 2C which zooms on the shadowed region in Figure 2B to show the transition from a single attractor to an intersection. This will lead to the
retrieval of a new memory.
Although a switch between different states of the network is induced at every oscillation cycle, not always the state of the network shifts toward a new memory (Figures 2A,B). Rather it can remain in
the same state or shifts toward an already explored memory so that only stochastically new memories are retrieved.
Since the recall of subsequent memories is a stochastic process triggered by noise in the input, we perform multiple simulations to characterize the average accumulation of recalled memories with
time (Figure 3A). We observe that after a quick initial accumulation, the retrieval process slows down sharply, however the number of memories recalled continues to increase. This behavior is
compatible with experimental observations (Rohrer and Wixted, 1994; Wixted and Rohrer, 1994) and with results obtained by stochastic implementation of the free recall model presented in Katkov et al.
(2015). The time between the recall of subsequent items (inter-retrieval time, IRT) is highly variable as shown in Figure 3B. Even after very long time-intervals it is possible to retrieve new items,
in line with the experimental findings. We note that while the average accumulation curve is monotonic and smooth, each trial is characterized by a highly irregular set of IRTs, with short IRT
interspersed between long ones due to cyclic transitions between items with relatively large overlaps. This is broadly consistent with experimental data (results not shown). Following the
experimental study of Murdock and Okada (1970), Rohrer and Wixted (1994), we analyzed the average time progression of recall for trials with a certain number of words recalled (in a time window of
500 oscillation cycles). An interesting observation is that the corresponding curves separate already at the beginning of the recall, i.e., in the trials where more items are recalled eventually, the
recall begins faster than in less successful trials, Figure 3C. This observation is also in line with the experimental results and with the stochastic model of Katkov et al. (2015).
Temporal properties of recall. (A) Average number of words recalled vs. time. (B) Distribution of the IRTs. (C) IRT average (y axis) for ordered transitions between words (x-axis). Each line
represents the average over the set of trials in which a different number of words were recalled: going from left to right (or dark to light blue) less to more words up to the maximum of 16.
Here we study the dependence of the recall process on the statistics of memory representations as defined by the memory patterns introduced in Section 2 (see Equation 4). In particular we consider
the effects of representation size (number of neurons encoding a given item) and the size of intersections between the representations of two memories (number of neurons encoding both of the items).
The representation size higly influences the probability of recall for a given memory. Our simulations show that simulating the network many times with items having a randomly drawn size, the
probability to recall an item is monotonically increasing with the size of the corresponding representation (Figure 4). This is predominantly due to the fact that items represented by more neurons
have on average a larger intersections with other items, since we assumed random encoding. Indeed as we show below, the intersection sizes play a major role in determining the subsequent items to be
recalled. Therefore, our model is in agreement with the graph model of Romani et al. (2013), Katkov et al. (2015) where items with larger representations have higher probability to be recalled (easy
vs. difficult items).
Probability of recalling an item of a given size. The size is the number of neurons encoding for that particular memory.
Intersections between memory representations play a crucial role in our model of recall. In Romani et al. (2013) intersection sizes (the number of neurons encoding a pair of items) were assumed to
govern the transitions between the recall of successive items. To evaluate the role of intersection sizes in the transitions between items we ranked intersection sizes for each presented list of 16
words, from low to high (1–15), and computed the probability of transition for each intersection rank (Figure 5A). Thirty percent of transitions occurred for largest intersection with the currently
recalled item, the probability of other transitions monotonically decreases with the rank of intersections. Moreover, we found that the inter-recall time between the successive items also exhibited
monotonic relation to the intersection size, with larger intersections leading to faster transitions (Figure 5C). These results indicate that the sizes of inter-item neuronal intersections to a large
extent determine the temporal evolution of recall. It is therefore tempting to speculate that they are neuronal correlates of semantic similarity between the items (Baddeley, 1966; Mandler et al.,
1969; Howard and Kahana, 2002b). To further elaborate on this hypothesis, we analyzed the dataset of free recall of lists of unrelated words collected and made available by Prof. Kahana from the
University of Pennsylvania. We considered a measure of semantic similarity called (Latent Semantic Analysis, or LSA), which represent the number of times two words appear together in a representative
corpora of natural text (Landauer and Dumais, 1997). We then used this measure to evaluate the effect of semantic similarity on the probability and speed of inter-item transitions in experimental
observations, and obtained a remarkable agreement with the corresponding model predictions (compare Figures 5A,C with Figures 5B,D).
Memory transitions. (A,C) Probability density of transitions between two subsequent recalled memories as a function of the ranked size of their intersection (1–15 going from the less to the most
similar) and of their Latent Semantic Analysis score (LSA). (B,D) Average IRT between two subsequent recalled memories as a function of the size of their intersection (in number of neurons) and of
their Latent Semantic Analysis score (LSA).
We now focus on factors which influence the recall performance, namely the number of items that can be retrieved in a given time window, between time 0 and time T. This window is chosen to be long
enough such that the recall slowed down considerably (see Figure 3A). In particular we will consider the effects of temporal contiguity and noise.
The performance of the network is limited as item representations that control the retrieval dynamics are random and hence same items are recalled numerous times before the network can retrieve a new
memory. It is known however that the order of recall is not completely random, e.g., words that have neighboring positions in the list have a tendency to be recalled in close proximity (Sederberg et
al., 2010). This phenomenon is known as temporal contiguity and we model it by adding a special term in the connectivity pattern that links neighboring items to each other favoring the transitions
between them (see Section 2, Equation 6), thereby overcoming the effects of randomness. Hence when the forward contiguity term is stronger, the network retrieves more items (Figure 6A). Although if
it is too strong it becomes the only mechanism for triggering a transition and the average number of items retrieved will be half of the total number (8 items in Figure 6A). Indeed in this regime the
network retrieves all items that come after the random initial one. Once it retrieves the last presented item it keeps retrieving it. The loop of connectivities via the second last item, which
strongly projects on it, prevents the activation of any other memory.
Recall performance. (A) Temporal contiguity and performance: average number of words recalled as a function of J[+]. J[+] ranges between the fixed value of J[−] = 400 and 2500. The number of memories
is P = 16. The number of memories is P = 16. (B) Noise and performance: the average number of words recalled is plotted at the vary of noise variance ξ. A small amount of noise helps the retrieval
process triggering transitions from memory to memory. For high noise levels the retrieval mechanism is hindered.
Another crucial element of the model is the noise that causes the recall dynamics to escape the short loops and retrieve new items. We thus computed the network performance for increasing noise
levels (Figure 6B). As expected, the performance is very poor for low noise amplitudes and increases for higher amplitudes. This growth is terminated at some optimal level of noise, after which the
number of recalled items is slowly decreasing. The reason for this behavior is that at high noise levels, the network does not converge to inter-item intersections at high levels of inhibition,
rather to noisy mixtures of different memories, which results in less robust transitions to other items when inhibition is reduced (results not shown). Based on these observations, we propose that
noise amplitude could be regulated during the retrieval phase (e.g., with neuromodulators that control cortical synchrony) in order to facilitate the recall of items from long-term memory.
We presented a neural network model of information retrieval from long-term memory that is based on stochastic attractor dynamics controlled by periodically modulated strength of feedback inhibition.
The model provides a more realistic implementation of the mechanisms behind associative recall based on neuronal representations of memory items, as proposed in Romani et al. (2013); Katkov et al.
(2015). The network behavior is broadly compatible with some of the long-standing observations on free recall, in particular the slow-down of recall speed, highly variable inter-recall times and
strong effects of semantic similarity between words.
In classical models of recall, such as SAM (Raaijmakers and Shiffrin, 1980) or TCM (Howard and Kahana, 2002a; Polyn et al., 2009), performance is mainly influenced by the temporal associations
acquired during stimulus presentation. These effects were also considered in a possible network implementation (Bradski et al., 1994). In contrast, our model is based on long-term memory
representations. Simple modification of the model (see Equation 6) allows to account for the effect of temporal contiguity (Sederberg et al., 2010). Therefore, we show that effects due to long-term
memory representations and to presentation order can be implemented in a single neural network. It is important to note that effects due to long-term representations are masked by temporal
association effects, being visible only in large data sets having many trials over lists composed of randomly selected words from a large preselected pool of words. In such datasets the same word is
roughly uniformly distributed across temporal positions and their neighborhood words. Consequently, temporal association effects on the level of individual words are averaged out, and effects due to
long-term representations become clearly visible. There are two major effects that historically were not considered neither experimentally nor in models: (1) intrinsic difficulty of words to be
recalled—existence of “easy” and “difficult” words for recall; (2) masking of “difficult” words by “easy” words—“easy” words are statistically recalled earlier in the trial and suppress the recall of
“difficult” words (Katkov et al., 2015). This work is a first attempt to implement a neural network that is taking into account long-term representation of memorized items.
Our network model is based on the basic assumption that when a word is recalled, a corresponding neuronal ensemble that represents this word in long-term memory is temporarily activated. The issue
that we dont explicitly address is how the words that are presented for recall are selected, or primed and why other word representations are not reactivated (excluding rare instances of erroneous
recall of words from previous lists). In the spirit of Kahanas TCM model (Howard and Kahana, 2002a), such a priming could be mediated by the excitation arriving from a separate “context” network
where representation of the experimental setting is active throughout the recall trial. We therefore ignored the neuronal representations of words that are not in the list and considered a network
with effectively very low “loading” level (P ≪ N). More realistic implementation of the model with high loading levels should be considered in future.
Another simplifying unrealistic assumption of the model concerns the statistics of long-term representations that are taken as random uncorrelated binary vectors of fixed average sparsity. Real
statistics of word representations is not clear but can be safely assumed to be much more complicated, possibly reflecting the rich semantic associations between words and the frequency of their
usage. With our assumptions, overlaps between different representations exhibit Gaussian distribution with variance to mean ratio decaying in the limit of infinitely large networks. Considering the
effects of overlap distribution in this limit requires an extended mean-field analysis that will be presented elsewhere.
Very often the same attractor is repeatedly activated before noise causes the transition to a new one, and it can still be activated again at a later time. Since participants are instructed to only
recall each word once, we assume that they suppress the report of a word after it is already recalled. In some experiments, subjects are explicitly instructed to report a word as many times as it
comes to mind during a recall. Comparing the model to the results of such experiments could be of interest for a future work.
We considered modulated inhibition as a driving force for transitions between network attractors. Other mechanisms could potentially play this role, e.g., neuronal adaptation or synaptic depression.
We believe that oscillatory mechanism is more plausible as it allows the system to regulate the transitions by controlling the amplitude and frequency of oscillations. The oscillations of network
activity could correspond to increased amplitude of theta rhythm observed in human subjects during recall (Kahana, 2006; Osipova et al., 2006) and other types of working memory experiments (Tesche
and Karhu, 2000; Raghavachari et al., 2001; Jensen and Tesche, 2002). The way we implemented feedback inhibition is not fully biologically plausible. Feedback inhibition in the cortex is mediated by
several major types of interneurons (Markram et al., 2004). In particular, one type of interneurons (VIP), was proposed as a gateway for regulating the local inhibition since it receives inputs from
remote cortical and subcortical regions and preferentially targets other types of interneurons (Pi et al., 2013). More realistic neural network models of recall should include this kind of
At the current level of realism, we propose to view our model as a platform for further development of realistic neural network models of information retrieval and other related types of cognitive
tasks. Future modifications should include effects of positional order on recall, or positional chunking, i.e., the tendency to divide the presented lists on groups of contiguous words (Miller, 1956;
Gobet et al., 2001), as well as primacy (tendency to recall earlier words with higher probability, see e.g., Grossberg and Pearson, 2008), or effects obtained in serial recall, such as e.g., encoding
gradient or similar tasks (Averbeck et al., 2002, 2003; Farrell and Lewandowsky, 2004), where participants are forced to recall items in presented order, implying stricter tests on temporal | {"url":"https://www.frontiersin.org/journals/computational-neuroscience/articles/10.3389/fncom.2015.00149/xml/nlm?isPublishedV2=false","timestamp":"2024-11-07T20:51:48Z","content_type":"application/xml","content_length":"121634","record_id":"<urn:uuid:c3f13f31-e3d7-4797-b539-ae571084adf7>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00714.warc.gz"} |
Get Rows with NaN values in Pandas - Data Science Parichay
In this tutorial, we will learn how to get rows with NaN values from a Pandas dataframe.
How to get rows with NaN values in a Pandas DataFrame?
To get the rows with NaN values in Pandas we use the following syntax-
#Create a mask for the rows containing atleast one NaN value.
mask = df.isna().any(axis=1)
#Pass the mask to df.loc[] to obtain the required rows.
rows_with_null_values = df.loc[mask]
• df — A Pandas DataFrame object.
• df.loc[] — A Pandas property to fetch rows using row labels or mask.
• df.isna() — A Pandas DataFrame method to get a boolean mask for NaN values
• df.isna().any(axis=1) — Gets the boolean mask for rows containing any (at least one) null value.
We use df.isna() to create a mask for null values. df.isna() returns a dataframe with the same shape as the original dataframe, but the NaN values are replaced by True while all other values are
replaced by False. Since the object returned by df.isna() is a dataframe, we can further use Pandas dataframe methods on it. For our purpose, we used any() method.
any() method takes the axis as input. If we pass axis = 0 (or axis = 'index'), it checks for a value to be True along each column. If at least one value is True, it returns True for that column.
Similarly, if we pass axis = 1 (or axis = 'columns'), the any() method checks for any value to be True along each row. If a value is True along a row, it returns True for that row; else it returns
The idea we have implemented here is that df.isna() will have the same number of rows and columns as our original dataframe, except that NaN values are replaced by True, and all others are replaced
by False. So, if we use df.isna().any(axis=1), the any() method will check each row for a True value. As discussed earlier, all NaN values are replaced by True in df.isna(). So, if there is a NaN
value along a row, the any() method will return True for that row. Thus, we have a boolean mask for rows containing one or more NaN values.
We pass the mask to df.loc to get the required rows.
📚 Data Science Programs By Skill Level
Introductory ⭐
Intermediate ⭐⭐⭐
Advanced ⭐⭐⭐⭐⭐
🔎 Find Data Science Programs 👨💻 111,889 already enrolled
Disclaimer: Data Science Parichay is reader supported. When you purchase a course through a link on this site, we may earn a small commission at no additional cost to you. Earned commissions help
support this website and its team of writers.
Similarly, if we want to get rows containing NaN values only (all the values are NaN), then we use the following syntax-
#Create a mask for the rows containing all NaN values.
mask = df.isna().all(axis=1)
#Pass the mask to df.loc[] to obtain the required rows.
rows_with_null_values = df.loc[mask]
Instead of the any() method, we have used the all() method here. all(axis=1) method checks each row if all its values are True. Since we are using all() method on df.isna(), it returns True if all
the row elements in the dataframe have NaN value. Thus, we create a mask for the rows containing only NaN values.
We pass this mask to df.loc[ ] to get rows containing only NaN values.
We will see some examples to understand the syntax we discussed above. Let’s consider weather data for a city for a week.
import pandas as pd
#Create data for dataframe
d = {
"Max. Temp." : [41.7, None, 37.7, 35.9, None, 40.2, None],
"Min. Temp." : [28.8, None, 26.6, 33.1, None, 28.8, 29.6],
"RH%" : [None, None, 66.6, 78.3, None, 72.7, 91.0],
#Create row labels for dataframe
index = ["Sunday", "Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday"]
#Create the dataframe
df = pd.DataFrame(d, index=index)
#Print the dataframe
Max. Temp. Min. Temp. RH%
Sunday 41.7 28.8 NaN
Monday NaN NaN NaN
Tuesday 37.7 26.6 66.6
Wednesday 35.9 33.1 78.3
Thursday NaN NaN NaN
Friday 40.2 28.8 72.7
Saturday NaN 29.6 91.0
In this dataframe, we have replaced some values with None. Pandas represents None values of Python as NaN, as seen in this case.
Example 1: Obtain rows containing at least one NaN value from a dataframe
Let’s create a mask for the NaN values in our dataframe.
#Create a mask for the NaN values in dataframe df
mask_for_nan_values = df.isna()
#Print the mask created
Max. Temp. Min. Temp. RH%
Sunday False False True
Monday True True True
Tuesday False False False
Wednesday False False False
Thursday True True True
Friday False False False
Saturday True False False
We can see that the NaN values in the original dataframe are replaced by True, while False replaces all other values. To get the rows containing at least one NaN value, we can create a mask as-
#Create a mask for the rows containing atleast one NaN value.
mask_for_rows_wth_atleast_1_nan_value = mask_for_nan_values.any(axis=1)
#Print the mask
Sunday True
Monday True
Tuesday False
Wednesday False
Thursday True
Friday False
Saturday True
dtype: bool
As seen in the output above, the mask has True values for rows containing NaN values, while for others, it has False values.
Now, let’s pass this mask to df.loc[ ]
#Pass the mask to df.loc[] to obtain the required rows.
rows_cont_atleast_1_nan_value = df.loc[mask_for_rows_wth_atleast_1_nan_value]
#Print the obtained rows.
Max. Temp. Min. Temp. RH%
Sunday 41.7 28.8 NaN
Monday NaN NaN NaN
Thursday NaN NaN NaN
Saturday NaN 29.6 91.0
Thus, we have obtained the rows containing one or more NaN values. The above code in one cell will look like this-
#Create a mask for the rows containing atleast one NaN value.
mask = df.isna().any(axis=1)
#Pass the mask to df.loc[] to obtain the required rows.
required_rows = df.loc[mask]
#Print the obtained rows.
Max. Temp. Min. Temp. RH%
Sunday 41.7 28.8 NaN
Monday NaN NaN NaN
Thursday NaN NaN NaN
Saturday NaN 29.6 91.0
We get the same output as before since the code is the same, only this time it is compact.
Example 2: Obtain rows containing only NaN values from a dataframe
To obtain rows containing only NaN values for a dataframe, we will do the same as in the previous example, except we replace any() with all().
#Create a mask for the rows containing all NaN values.
mask = df.isna().all(axis=1)
#Pass the mask to df.loc[] to obtain the required rows.
required_rows = df.loc[mask]
#Print the obtained rows.
Max. Temp. Min. Temp. RH%
Monday NaN NaN NaN
Thursday NaN NaN NaN
In the above output, the rows with only NaN values are present.
From this tutorial, we looked at how to:
• Get the rows containing one or more NaN values using the loc property, isna(), and any() methods of the dataframe.
• Get the rows containing only NaN values using loc property, isna(), and all() methods of the dataframe.
You might also be interested in –
Subscribe to our newsletter for more informative guides and tutorials.
We do not spam and you can opt out any time. | {"url":"https://datascienceparichay.com/article/get-rows-with-nan-values-in-pandas/","timestamp":"2024-11-13T17:51:43Z","content_type":"text/html","content_length":"264513","record_id":"<urn:uuid:0ef794d4-7c5e-4b47-af6d-86a1e8fccc1e>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00587.warc.gz"} |
Verbe anglais to plot
Conjugaison du verbe anglais TO PLOT
Verbe régulier : plot - plotted - plotted
Traduction française : comploter
do I plot?
do you plot?
does he plot?
do we plot?
do you plot?
do they plot?
am I plotting?
are you plotting?
is he plotting?
are we plotting?
are you plotting?
are they plotting?
did I plot?
did you plot?
did he plot?
did we plot?
did you plot?
did they plot?
was I plotting?
were you plotting?
was he plotting?
were we plotting?
were you plotting?
were they plotting?
have I plotted?
have you plotted?
has he plotted?
have we plotted?
have you plotted?
have they plotted?
have I been plotting?
have you been plotting?
has he been plotting?
have we been plotting?
have you been plotting?
have they been plotting?
had I plotted?
had you plotted?
had he plotted?
had we plotted?
had you plotted?
had they plotted?
had I been plotting?
had you been plotting?
had he been plotting?
had we been plotting?
had you been plotting?
had they been plotting?
will I plot?
will you plot?
will he plot?
will we plot?
will you plot?
will they plot?
will I be plotting?
will you be plotting?
will he be plotting?
will we be plotting?
will you be plotting?
will they be plotting?
will I have plotted?
will you have plotted?
will he have plotted?
will we have plotted?
will you have plotted?
will they have plotted?
will I have been plotting?
will you have been plotting?
will he have been plotting?
will we have been plotting?
will you have been plotting?
will they have been plotting?
would I plot?
would you plot?
would he plot?
would we plot?
would you plot?
would they plot?
would I be plotting?
would you be plotting?
would he be plotting?
would we be plotting?
would you be plotting?
would they be plotting?
would I have plotted?
would you have plotted?
would he have plotted?
would we have plotted?
would you have plotted?
would they have plotted?
would I have been plotting?
would you have been plotting?
would he have been plotting?
would we have been plotting?
would you have been plotting?
would they have been plotting?
Verbe au hasard
administer - affect - announce - appall - appeal - approach - cannon - clean - develop - disappoint - do - dot - enter - finalize - find - flood - get - help - hurt - include - invent - kick - knit -
leach - log - loo - manufacture - mention - parameterize - pursue - shock - stack - stir - tear - tiptoe - to be - to do - to go - to have - translate - transmit - travail - verify - wave | {"url":"https://leconjugueur.lefigaro.fr/conjugaison/anglais/to+plot_question.html","timestamp":"2024-11-02T15:03:34Z","content_type":"text/html","content_length":"62084","record_id":"<urn:uuid:71bda94e-63ba-4dda-9bdd-ddc62f1ce4ab>","cc-path":"CC-MAIN-2024-46/segments/1730477027714.37/warc/CC-MAIN-20241102133748-20241102163748-00118.warc.gz"} |
CH (2) : Calculations With Chemical Formulas And Equations
Questions and Answers
QUIZ FOR (( calculations with chemical formulas and equations ))
Faculty of Engineering, Zagazig University
Read moreBY
• 1.
Percentage of Magnesium [Mg] in Magnesium sulfate [MgSO[4]] Given Atomic Weights: Mg = 24 . S = 32 . O = 16
Correct Answer
A. 40
The percentage of Magnesium (Mg) in Magnesium sulfate (MgSO4) can be calculated by dividing the atomic weight of Magnesium by the molar mass of Magnesium sulfate and multiplying by 100. The molar
mass of Magnesium sulfate (MgSO4) can be calculated by adding the atomic weights of Magnesium (Mg), Sulfur (S), and Oxygen (O). In this case, the atomic weight of Magnesium is 24, and the molar
mass of Magnesium sulfate is 24 + (32 + (4 * 16)) = 120. Therefore, the percentage of Magnesium in Magnesium sulfate is (24 / 120) * 100 = 20%.
• 2.
Balance the following equation: Mg[3]N[2] + H[2]O Mg(OH)[2] + NH[3]
□ A.
Mg3N2 + 6H2O --> 3Mg(OH)2 + 2NH3
□ B.
Mg3N2 + 6H2O --> 3Mg(OH)2 + NH3
□ C.
Mg3N2 + 2H2O --> Mg(OH)2 + NH3
□ D.
Mg3N2 + 3H2O --> 3Mg(OH)2 + 2NH3
Correct Answer
A. Mg3N2 + 6H2O --> 3Mg(OH)2 + 2NH3
The balanced equation shows that for every 1 molecule of Mg3N2, 6 molecules of H2O are required to produce 3 molecules of Mg(OH)2 and 2 molecules of NH3. This is achieved by adjusting the
coefficients in front of each compound to ensure that the number of atoms of each element is the same on both sides of the equation.
• 3.
Calculate the mass % of carbon in carbon monoxide.
Correct Answer
The correct answer is 42.84, 42.8, 43. The mass % of carbon in carbon monoxide can be calculated by dividing the mass of carbon by the total mass of the compound and multiplying by 100. In carbon
monoxide, the molar mass of carbon is 12.01 g/mol and the molar mass of carbon monoxide is 28.01 g/mol. Therefore, the mass % of carbon in carbon monoxide is (12.01 g/mol / 28.01 g/mol) * 100 =
42.84%. The other options, 42.8% and 43%, are also within a reasonable range and can be considered correct.
• 4.
A reaction that produces crude iron from iron ore is shown below: Fe[2]O[3] (s) + 3CO (g) [] 2Fe (s) + 3CO[2] (g) How many moles of iron could be produced from the reaction of 10 mol Fe[2]O[3]
and 25 mol of CO?
□ A.
□ B.
□ C.
□ D.
□ E.
Correct Answer
C. 17 mol
From the balanced equation, we can see that the stoichiometric ratio between Fe2O3 and Fe is 2:2, meaning that for every 2 moles of Fe2O3, we will produce 2 moles of Fe. Therefore, if we have 10
moles of Fe2O3, we will produce 10 moles of Fe. The stoichiometric ratio between CO and Fe is 3:2, meaning that for every 3 moles of CO, we will produce 2 moles of Fe. Therefore, if we have 25
moles of CO, we will produce 25*(2/3) = 16.67 moles of Fe. Since we cannot have a fraction of a mole, the maximum number of moles of Fe that could be produced from the given reactants is 17 mol.
• 5.
You are given the following percentages: 40.05% S and 59.95% O. Find the empirical formula for these elements
□ A.
□ B.
□ C.
□ D.
Correct Answer
D. SO3
The empirical formula is determined by the simplest whole number ratio of atoms in a compound. In this case, the percentages given suggest that there is a 40.05% chance of finding S and a 59.95%
chance of finding O in the compound. To find the simplest ratio, we divide both percentages by the smallest percentage value, which is 40.05%. When we do this, we get approximately 1 S and 1.5 O.
To simplify further, we multiply both values by 2 to get 2 S and 3 O. Therefore, the empirical formula for these elements is SO3.
• 6.
Given the following: 42.07% Na, 18.89% P, and 39.04% O, determine the empirical formula
□ A.
□ B.
□ C.
□ D.
Correct Answer
C. Na3PO4
The empirical formula is determined by finding the ratio of the different elements in the compound. In this case, we have 42.07% Na, 18.89% P, and 39.04% O. To find the ratio, we convert these
percentages to moles. Then, we divide the moles of each element by the smallest number of moles to get the simplest ratio. In this case, we find that the ratio is 1:1:4 for Na, P, and O
respectively. Therefore, the empirical formula is Na3PO4.
• 7.
Styrene has the empirical formula CH, and there is 92.25% carbon and 7.75% hydrogen. If you have a molar mass of 104g/mol, what is the multiplier or factor to get the molecular formula?
Correct Answer
D. 8
The empirical formula of styrene is CH, which means it contains one carbon atom and one hydrogen atom. The molar mass of styrene is given as 104g/mol. To find the molecular formula, we need to
determine the actual number of atoms in the molecule.
We are given that styrene is 92.25% carbon and 7.75% hydrogen. We can assume that the percentages represent the mass percentages of each element in the compound.
To find the number of carbon atoms, we divide the mass percentage of carbon by the molar mass of carbon (12g/mol) and multiply by the molar mass of the compound (104g/mol).
(92.25% carbon / 12g/mol) * 104g/mol = 773 atoms of carbon
To find the number of hydrogen atoms, we divide the mass percentage of hydrogen by the molar mass of hydrogen (1g/mol) and multiply by the molar mass of the compound (104g/mol).
(7.75% hydrogen / 1g/mol) * 104g/mol = 806 atoms of hydrogen
The ratio of carbon to hydrogen atoms is approximately 773:806. Simplifying this ratio gives us 1:1.
Therefore, the multiplier or factor to get the molecular formula is 1. However, the correct answer provided is 8, which is incorrect.
• 8.
Al(OH)3 + 3HCl --> 3H2O + AlCl3 If you have 14.0 grams of aluminum hydroxide, how much aluminum chloride is produced?
□ A.
□ B.
□ C.
Correct Answer
C. 23.9 g
When aluminum hydroxide (Al(OH)3) reacts with hydrochloric acid (HCl), it forms water (H2O) and aluminum chloride (AlCl3). The balanced equation shows that 1 mole of Al(OH)3 reacts with 3 moles
of HCl to produce 3 moles of AlCl3. To determine the amount of AlCl3 produced, we need to convert the given mass of Al(OH)3 to moles, using its molar mass. Then, using the mole ratio from the
balanced equation, we can calculate the moles of AlCl3 produced. Finally, we convert the moles of AlCl3 to grams, using its molar mass, to find that 23.9 grams of aluminum chloride is produced.
• 9.
Al(OH)3 + 3HCl --> 3H2O + AlCl3 If you actually recovered 22.0 grams of aluminum chloride, what is the percent yield of the reaction?
Correct Answer
B. 92%
The percent yield of a reaction is calculated by dividing the actual yield (in this case, 22.0 grams of aluminum chloride) by the theoretical yield (the maximum amount of aluminum chloride that
could be produced based on the stoichiometry of the balanced equation) and multiplying by 100. Since the equation is balanced with a 1:1 ratio between aluminum chloride and aluminum hydroxide,
the theoretical yield of aluminum chloride would also be 22.0 grams. Therefore, the percent yield is (22.0/22.0) x 100 = 100%, which matches the first option.
• 10.
What is the percentage of oxygen in carbon dioxide? (CO2)
Correct Answer
The percentage of oxygen in carbon dioxide (CO2) is 72.7%. This means that out of the total composition of CO2, 72.7% is made up of oxygen.
• 11.
Empirical Formula: 39.8% K, 27.8% Mn, 32.5% O
□ A.
□ B.
□ C.
□ D.
Correct Answer
B. K2,Mn,O4
The empirical formula represents the simplest ratio of atoms in a compound. In this case, the compound contains 39.8% K, 27.8% Mn, and 32.5% O. To determine the empirical formula, we need to find
the ratio of these elements. By dividing the percentages by their respective atomic masses and then dividing by the smallest value obtained, we find that the ratio is approximately 2:1:4.
Therefore, the empirical formula is K2MnO4.
• 12.
The simplest formula of a substance shows
□ A.
the actual number of atoms of each element in one molecule of a substance.
□ B.
the elements that make up one molecule of the substance and the simplest whole number ratio between the atoms.
□ C.
the number of molecules in a sample of the substance.
□ D.
the molecular mass of the substance.
Correct Answer
B. the elements that make up one molecule of the substance and the simplest whole number ratio between the atoms.
The simplest formula of a substance shows the elements that make up one molecule of the substance and the simplest whole number ratio between the atoms. This means that the formula provides
information about the types of atoms present in the molecule and how they are arranged in the simplest ratio. It does not provide information about the actual number of atoms or molecules in a
sample of the substance, or the molecular mass of the substance.
• 13.
A compound is found to have a molecular mass of 90 atomic mass units and simplest formula of C[2]H[5]O. The molecular formula of the substance is: (Atomic mass of C = 12 amu, H = 1 amu, O = 16
□ A.
□ B.
□ C.
□ D.
Correct Answer
C. C4H10O2
The molecular mass of the compound is 90 atomic mass units. The simplest formula of the compound is C2H5O. To find the molecular formula, we need to determine the ratio between the molecular mass
and the simplest formula mass. The simplest formula mass can be calculated by adding the atomic masses of each element in the formula: (2 * 12 amu) + (5 * 1 amu) + (1 * 16 amu) = 46 amu. Dividing
the molecular mass (90 amu) by the simplest formula mass (46 amu) gives us a ratio of approximately 1.96. Multiplying the subscripts in the simplest formula by this ratio, we get C4H10O2 as the
molecular formula.
• 14.
How many grams of oxygen are in one mole of calcium carbonate, CaCO[3]? (Atomic mass of O = 16 amu.)
□ A.
□ B.
□ C.
□ D.
Correct Answer
C. 48 grams
Calcium carbonate (CaCO3) consists of one calcium atom (Ca), one carbon atom (C), and three oxygen atoms (O). The atomic mass of oxygen is 16 amu. To find the total mass of oxygen in one mole of
calcium carbonate, we need to multiply the atomic mass of oxygen by the number of oxygen atoms in the compound (3). Therefore, the total mass of oxygen in one mole of calcium carbonate is 16 amu
* 3 = 48 grams.
• 15.
The ionic compound containing Fe^3+ and SO[4]^2- would have the formula
□ A.
□ B.
□ C.
□ D.
Correct Answer
C. Fe2(SO4)3
The correct answer is Fe2(SO4)3. This is because Fe3+ has a charge of +3 and SO42- has a charge of -2. In order for the compound to be neutral, the charges must balance out. Therefore, two Fe3+
ions are needed to balance out the charge of three SO42- ions, resulting in the formula Fe2(SO4)3. | {"url":"https://www.proprofs.com/quiz-school/story.php?title=ch-2-calculations-with-chemical-formulas-and-equations","timestamp":"2024-11-05T12:49:51Z","content_type":"text/html","content_length":"488556","record_id":"<urn:uuid:dc4169b4-5887-4f71-b599-9844284c2b2c>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00741.warc.gz"} |
MOMENT OF INERTIA | Moment of inertia of thin spherical solid sphere | thick spherical shell | right circular cone | truncated cone | Linear acceleration of a body rolling down in an inclined plane - akritinfo.com
MOMENT OF INERTIA | Moment of inertia of thin spherical solid sphere | thick spherical shell | right circular cone | truncated cone | Linear acceleration of a body rolling down in an inclined plane
Moment of inertia of thin spherical shell about it’s diameter
A thin uniform spherical shell has a radius of a and mass M. We have to calculate its moment of inertia about any axis through its centre.
Let, radius of Spherical shell = a
mass/area = σ
Total mass M = 4πa^2σ
We take on elementary ring perpendicular to diameter AB.
radius of the ring – asinθ
width = adθ
Mass of the ring = 2πasinθ.adθ.σ
Moment of inertia of the elementary ring about AB axis =
I = ma^2
Moment of inertia of the whole spherical Shell about its diameter (AB)
Moment of inertia of a solid sphere-about its diameter
According to the figure a sphere of mass M and radius a is shown, whose density is ρ. We have to calculate the moment of inertia of the sphere about the diameter AB. We can assume the sphere to be
made up of many discs whose surfaces are parallel to AB and the center is on this axis. This discs has a center at O the width of this disc is dx.
Let the radius of the solid sphere = a.
mass per unit volume = ρ
Total mass M =
We can divide a solid sphere into a no. of concentric spherical shell (thin).
Let us consider such a shell of radius x and thickness dx.
Mass of the elementary shell =
Moment of inertia of the spherical shell about diameter (AB) =
So,Moment of inertia of the whole sphere about the diameter (AB)
Moment of inertia of a thick spherical shell about its diameter
internal radius = a1
external radius = a2
Let us consider a thin spherical shell of radius x a ( a2 > x >a1) and thickness dx –
Mass of the elementary shell =
Moment of inertia of a right circular cone about an axis passing through vertex perpendicular to axis
Here we will look at the derivation as well as the calculation for finding the moment of inertia of a uniform right circular cone about an axis.
Let us consider a right circular cone. Its have –
height =h
base radius = a
mass per unit volume = ρ
Total mass M =
We divide the solid cone into a number of coaxial discs having different radii
Let us take such a disc having radius x , thickness dy at a distance Y from the vertex of the-cone.
Moment of inertia of the elementary disc about-its own diameter =
According to parallel axis theorem, Moment of inertia about the x axis passing through vertex,
as from figure
Moment of inertia of the whole solid cone about the required axis (x)
Moment of inertia of a right circular solid cone about its own axis
Let us consider a right circular cone having
height =h
base radius = a
mass per unit volume= ρ
Total mass M =
We divide the solid cone into a no. of co-axial discs having different radii
Let us take such an elementary thick disc having radius x and thickness at a distance y from vertex
Moment of inertia of the disc about its own axis (Y) =
and from figure,
Moment of inertia of the solid cone about its own axis (Y)
Moment of inertia of a hollow cone
about its axis of symmetry
about an axis passing through vertex and perpendicular to its own axis.
Moment of inertia of a truncated cone about its own axis
Let the truncated solid cone has,
the radii of the ends – a, b
height of base – h
height of upper portion – h1
mass per unit volume – ρ
Total mass M =
We divide the cone into a number of coaxial circular discs has different radii. Let us take such an elementary disc of radius x , Thickness dy at a distance y from vertex,
Moment of inertia of the thick disc about OY
From figure
Moment of inertia of the truncated cone about Y axis,
Linear acceleration of a body rolling down in an inclined plane
Let, f be the linear acceleration of centre of mass and α be the angular acceleration of the body. For rotation around the axis through centre of mass, net torque = Iα. As no slipping is there, so
the point of contact of the body with the plane is instantaneously at rest.
Let a body of mass m is at A and it rolls down in an inclined plane AC , at an angle of inclination θ.
Let at Q , linear velocity =v
angular velocity = w
So, gain in kinetic energy of the body =
v = wr , r is the radius of circular body
I = mk² , where k is the radius of Gyration
Loss in potential energy =
If f be the linear acceleration,
From energy conservation,
gain in kinetic energy = loss in potential energy
1.For a solid sphere :
2.For a hollow sphere :
3.For a ring
4.For a circular disc :
5.For a solid cylinder :
6.For a hollow cylinder
Read more –
MOMENT OF INERTIA | Radius of gyration or Swing radius | Kinetic energy of a rotating body | Angular momentum | Torque | perpendicular axis theorem | parallel axis theorem
MOMENT OF INERTIA | Calculation of moment of inertia for different cases
1 thought on “MOMENT OF INERTIA | Moment of inertia of thin spherical solid sphere | thick spherical shell | right circular cone | truncated cone | Linear acceleration of a body rolling down in an
inclined plane”
Leave a Comment | {"url":"https://www.akritinfo.com/moment-of-inertia-3/","timestamp":"2024-11-05T19:55:57Z","content_type":"text/html","content_length":"142781","record_id":"<urn:uuid:2e98f9f5-f82d-4d9a-943b-91f76c8b1b36>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00846.warc.gz"} |
Brazilian (BOVESPA) Utilities Sector Analysis
Over the last 7 days, the Utilities industry has dropped 1.1%, driven by Eneva declining 4.0%. Over the last year, the industry has been flat overall. Earnings are forecast to grow by 3.6% annually.
Has the Brazilian Utilities Sector valuation changed over the past few years?
Date Market Cap Revenue Earnings PE Absolute PE PS
Tue, 05 Nov 2024 R$520.0b R$444.3b R$54.3b 8.4x 9.6x 1.2x
Thu, 03 Oct 2024 R$597.2b R$470.6b R$57.0b 8.5x 10.5x 1.3x
Sat, 31 Aug 2024 R$611.4b R$474.3b R$57.6b 8.4x 10.6x 1.3x
Mon, 29 Jul 2024 R$588.3b R$469.4b R$56.5b 8.2x 10.4x 1.3x
Wed, 26 Jun 2024 R$546.8b R$463.1b R$55.9b 7.9x 9.8x 1.2x
Fri, 24 May 2024 R$562.8b R$463.1b R$55.9b 8.2x 10.1x 1.2x
Sun, 21 Apr 2024 R$587.7b R$465.6b R$54.7b 9x 10.7x 1.3x
Tue, 19 Mar 2024 R$575.9b R$444.6b R$47.0b 8.9x 12.3x 1.3x
Thu, 15 Feb 2024 R$571.8b R$434.9b R$43.9b 8.9x 13x 1.3x
Sat, 13 Jan 2024 R$633.5b R$468.3b R$47.0b 12.4x 13.5x 1.4x
Mon, 11 Dec 2023 R$610.5b R$468.3b R$47.0b 12x 13x 1.3x
Wed, 08 Nov 2023 R$575.0b R$459.6b R$44.9b 11.8x 12.8x 1.3x
Fri, 06 Oct 2023 R$559.9b R$458.7b R$45.4b 10.8x 12.3x 1.2x
Sun, 03 Sep 2023 R$573.5b R$458.7b R$45.4b 11x 12.6x 1.3x
Tue, 01 Aug 2023 R$564.3b R$451.9b R$40.3b 11.9x 14x 1.2x
Thu, 29 Jun 2023 R$577.7b R$451.3b R$42.2b 10.1x 13.7x 1.3x
Sat, 27 May 2023 R$552.8b R$449.8b R$42.4b 10.2x 13x 1.2x
Mon, 24 Apr 2023 R$552.0b R$454.6b R$45.9b 8.9x 12x 1.2x
Wed, 22 Mar 2023 R$520.7b R$457.4b R$52.2b 9.1x 10x 1.1x
Fri, 17 Feb 2023 R$548.9b R$461.9b R$51.5b 8.6x 10.6x 1.2x
Sun, 15 Jan 2023 R$567.4b R$461.9b R$51.5b 8.7x 11x 1.2x
Tue, 13 Dec 2022 R$561.7b R$461.9b R$51.5b 7.8x 10.9x 1.2x
Thu, 10 Nov 2022 R$636.2b R$469.5b R$54.7b 9x 11.6x 1.4x
Sat, 08 Oct 2022 R$614.0b R$474.3b R$55.4b 9x 11.1x 1.3x
Mon, 05 Sep 2022 R$619.8b R$474.3b R$55.4b 9x 11.2x 1.3x
Wed, 03 Aug 2022 R$594.6b R$463.9b R$61.1b 7.7x 9.7x 1.3x
Fri, 01 Jul 2022 R$541.9b R$463.2b R$61.0b 7.8x 8.9x 1.2x
Sun, 29 May 2022 R$532.1b R$461.3b R$61.1b 8.3x 8.7x 1.2x
Tue, 26 Apr 2022 R$560.5b R$451.1b R$60.6b 8.4x 9.3x 1.2x
Thu, 24 Mar 2022 R$524.6b R$445.2b R$62.1b 7.6x 8.4x 1.2x
Sat, 19 Feb 2022 R$488.1b R$434.9b R$64.4b 7x 7.6x 1.1x
Mon, 17 Jan 2022 R$463.6b R$434.9b R$64.4b 6.4x 7.2x 1.1x
Wed, 15 Dec 2021 R$487.5b R$432.9b R$64.6b 6.9x 7.5x 1.1x
Fri, 12 Nov 2021 R$493.8b R$417.0b R$65.3b 7.4x 7.6x 1.2x
Which industries have driven the changes within the Brazilian Utilities sector?
Which companies have driven the market over the last 7 days? | {"url":"https://simplywall.st/markets/br/utilities","timestamp":"2024-11-05T05:39:00Z","content_type":"text/html","content_length":"506392","record_id":"<urn:uuid:f71497e6-97c9-44f5-9fca-f2086ede687e>","cc-path":"CC-MAIN-2024-46/segments/1730477027871.46/warc/CC-MAIN-20241105052136-20241105082136-00863.warc.gz"} |
A Concentric Multi-ring Overlay for Highly Reliable P2P Networks
The paper presents a concentric multiring overlay networking topology for fast and shortest path resource discovery in dynamic distributed systems like P2P networks. First, we define a highly
reliable P2P system called HiPeer, which is deployed on top of the logical overlay with a number of lowest bounds in terms of network performance. Then, we demonstrate that for any De Bruijn digraph
of degree d > 2 and diameter D_{DB}, HiPeer constructs a highly reliable network, where each node maintains a routing table with a size of at most 2d + 3 entries independently of the number N of
nodes in the system. Further, we show that for any network with at most d nodes, any existing resource in the network can be found within at most D_{HiPeer} = log_d_(N(d - 1) + d) - 1 hops. This
result is as close to the Moore bound as the query path length in the other best P2P proposals based on the De Bruijn digraphs. Thus, HiPeer defines a highly connected network with connectivity d and
the lowest yet known lookup bound D_{HiPeer}. Moreover, we show that any node s join or leave operation in HiPeer implies a constant expected reorganization cost of the magnitude order of O(d)
control messages.
Conference Paper
Proceedings of the 4th IEEE International Symposium on Network Computing and Applications (NCA'05). Cambridge - MA - USA, 83-90. | {"url":"https://dai-labor.de/en/publications/a-concentric-multi-ring-overlay-for-highly-reliable-p2p-networks-2/","timestamp":"2024-11-05T09:36:48Z","content_type":"text/html","content_length":"154944","record_id":"<urn:uuid:2b85db5b-3b1e-4043-a4fd-8476529b9a1e>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00785.warc.gz"} |
[Numpy-discussion] Numpy and PEP 343
28 Feb 2006 28 Feb '06
11:41 a.m.
<pie-in-the-sky> An idea that has popped up from time to time is delaying evalution of a complicated expressions so that the result can be computed more efficiently. For instance, the matrix
expression: a = b*c + d*e results in the creation of two, potentially large, temporary matrices and also does a couple of extra loops at the C level than the equivalent expression implemented in C
would. The general idea has been to construct some sort of psuedo-object, when the numerical operations are indicated, then do the actual numerical operations at some later time. This would be very
problematic if implemented for all arrays since it would quickly become impossible to figure out what was going on, particularly with view semantics. However, it could result in large performance
improvements without becoming incomprehensible if implemented in small enough chunks. A "straightforward" approach would look something like: numpy.begin_defer() # Now all numpy operations (in this
thread) are deferred a = b*c + d*e # 'a' is a special object that holds pointers to # 'b', 'c', 'd' and 'e' and knows what ops to perform. numpy.end_defer() # 'a' performs the operations and now
looks like an array Since 'a' knows the whole series of operations in advance it can perform them more efficiently than would be possible using the basic numpy machinery. Ideally, the space for 'a'
could be allocated up front, and all of the operations could be done in a single loop. In practice the optimization might be somewhat less ambitious, depending on how much energy people put into
this. However, this approach has some problems. One is the syntax, which clunky and a bit unsafe (a missing end_defer in a function could cause stuff to break very far away). The other is that I
suspect that this sort of deferred evaluation makes multiple views of an array even more likely to bite the unwary. The syntax issue can be cleanly addressed now that PEP 343 (the 'with' statement)
is going into Python 2.5. Thus the above would look like: with numpy.deferral(): a = b*c + d*e Just removing the extra allocation of temporary variables can result in 30% speedup for this case[1], so
the payoff would likely be large. On the down side, it could be quite a can of worms, and would likely require a lot of work to implement. Food for thought anyway. </pie-in-the-sky> -tim [1] from
timeit import Timer print Timer('a = b*c + d*e', 'from numpy import arange;b=c=d=e=arange(100000.)').timeit(10000) print Timer('a = b*c; multiply(d,e,temp); a+=temp', 'from numpy import arange,
zeros, multiply;' 'b=c=d=e=arange(100000.);temp=zeros([100000], dtype=float)').timeit(10000) => 94.8665989672 62.6143562939 | {"url":"https://mail.python.org/archives/list/numpy-discussion@python.org/message/4XXXV2BP7G32AHYQJTSLE2JLKT7XAPWE/","timestamp":"2024-11-11T21:12:41Z","content_type":"text/html","content_length":"14793","record_id":"<urn:uuid:c9ab877a-ab38-482e-985a-e131e2678240>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00438.warc.gz"} |
Results with the U.K. Model | RDP 1977-07: Money and the Balance of Payments
RDP 1977-07: Money and the Balance of Payments 4. Results with the U.K. Model
Before turning to the results of testing the five versions of the U.K. model, it is appropriate to give a brief description of the model's general theoretical structure.
4.1 The U.K. model
In addition to the three monetary sector equations, the model contains a core of nine equations which determine the rates of change of the flows of aggregate expenditure, output, imports, and
exports, the levels of output prices and export prices, Bank Rate, aggregate tax receipts and the stock of domestic credit. Prices and interest rates are assumed to be “sticky” in the short run,
adjusting only in the long run to equilibrium values determined in the rest of the world.^[19] The time paths of asset stocks and commodity flows are therefore proximately determined by demand
In general, the level of effective demand in each market is assumed to adjust to the difference between actual demand and its long-run partial-equilibrium level, although this is modified in the case
of domestic expenditure to include a direct asset disequilibrium effect which captures the interdependence of decisions about expenditure on goods and accumulation of financial assets.^[20] Stocks of
financial assets, including money, adjust towards conventionally defined stock demands, while domestic expenditure adjusts to its equilibrium level implied by the levels of disposable income and
interest rates, as well as to any disequilibrium in asset markets. Output and imports adjust, with different speeds, to their desired levels, which are in turn determined by the level of aggregate
demand and the relevant relative price. Exports, on the other hand, are determined by world income and the price of U.K. exports (set by domestic producers) relative to world prices.^[21] The change
in international reserves is determined as the change in the quantity of money less the change in domestic credit, and this specification allows strong direct effects of money on the balance of
The disequilibrium real balance effect on aggregate expenditure permits a direct effect of monetary disturbances on domestic commodity markets. There is no direct effect of monetary disequilibrium on
prices, although output prices are responsive to pressures of demand in commodity markets and hence will be influenced through the expenditure mechanism by any disturbance to the money market. The
most important of the secondary transmission mechanisms in the model is the effect of interest rates, which are represented by a policy reaction function for Bank Rate. It is assumed that the
authorities use the rate primarily as a weapon for stabilising the level of international reserves,^[22] although their decisions are also influenced by considerations of internal balance. The
inclusion of reserve movements in the Bank Rate equation should assist in discriminating between the alternative hypotheses set out below, since it will make the model sensitive to the interaction of
money and the balance of payments.
4.2 The results
The models outlined above, and an additional model, B*, which includes no direct channel from domestic monetary disequilibrium to the balance of payments, are estimated with annual U.K. data for the
period 1885–1970.^[23] Model B did not converge in its original form, and it was necessary to constrain the parameters on the war dummies in the ^[24]
In general, the estimates have the expected signs and are significant by the usual t-test criteria. Each of the models, viewed as a particular set of constraints imposed on a (common) unrestricted
reduced form, can be statistically rejected on the basis of a likelihood ratio test, but since the following discussion evaluates the relative performance of several basic theoretical models, this
does not detract from the results. It would be futile to expect that a close fit to a long run of data could be provided by models as simple as those tested in the current section, although this does
not rule out the possibility that the models adequately represent important systematic macroeconomic relationships. The tests reported below are designed to determine which among several economically
meaningful models best fits the data.
Since estimates of the parameters in the core of the models are similar, the following discussion concentrates on the estimates of the monetary sector parameters, which are reproduced in Table 4.1.^
[25] These results indicate that domestic monetary disequilibrium has a strong direct effect both on domestic variables (as represented by the parameter γ[1]) and on the level of international
reserves. However, the balance-of-payments offsets to excess money balances in both Models C and E are less well determined than the corresponding parameters in Models B and D.
The speeds with which asset markets adjust toward long-run equilibrium are found to be quite slow, and substantially less than the speeds of adjustment in commodity markets. This suggests, not only
that frictions are important throughout the economy, but that they are particularly prevalent in asset markets, a finding not easily reconciled with those theories which emphasise rapid, if not
instantaneous, adjustments in these markets. In particular, the estimates of α[12] imply that there is an average lag of over ten years in the adjustment of money demand to money supply following an
exogenous disturbance.
Table 4.1 Estimates of Parameters of the Monetary Sector^
Parameter\Model A B C D E B*
α[10] – .031 – .027 – .044
(4.84) (5.03) (6.21)
α[11] – – .041 – .444 –
(1.74) (1.86)
α[12] .083 – – .103 .088 –
(5.58) (5.48) (4.16)
γ[1] −.098 −.069 −.106 −.092 −.102 −.083
(4.66) (7.81) (3.52) (4.50) (3.89) (2.69)
γ[4] – −.126 – −1.458 – –
(6.52) (3.44)
Table 4.2 presents the summary fit statistics for the six sub-models. It can be seen from these results that Models D and E perform better than Models B and C in a comparison of their simulation
errors. Hence it appears that the flow disequilibrium models are preferred to the stock disequilibrium models on purely statistical grounds.
The results in Table 4.2 are less conclusive in evaluating the models where reserve changes are modelled directly as a function of monetary disequilibrium against those which include a behavioural
relation for capital flows and determine reserve flows by adding up the components of the balance of payments. Models C and E give lower root mean square percentage errors than Models B and D for all
three monetary sector variables, in non-dynamic simulations, and have the higher log-likelihood values. However, the dynamic simulation results do not clearly favour either specification.
Table 4.2 Summary of Fit Statistics^[27]
Model A B C D E B*
M – RMSPE1 2.7 3.9 2.5 3.8 2.6 4.3
– RMSPE2 2.8 15.9 4.8 11.6 3.5 13.5
R – RMSPE1 17.7 20.2 17.4 18.7 17.3 19.2
– RMSPE2 21.6 69.0 17.5 37.7 20.2 41.7
F – RMSPE1 28.8 30.5 29.2 29.9 29.3 29.4
– RMSPE2 53.3 28.0 43.4 20.8 50.7 23.1
Log-likelihood value 3183.410 3136.132 3180.508 3161.373 3181.729 3122.178
(394) (392) (394) (391) (393) (393)
Log-likelihood value of unrestricted reduced form = 4019.468
Thus, although it is not possible to draw any definite conclusions from these results, they are encouraging in that the more structurally explicit models, in which the direct effects of excess money
on the balance of payments are identified as working through capital flows, are not rejected by empirical testing.
It is possible, on the other hand, to reject Model A in favour of Model E, since the estimate of α[11] in Model E is significantly different from unity, but set equal to unity in Model A. This
implies that the balance-of-payments offset to flow disequilibrium in the money market is less than complete within the year; indeed, the average lag implied by the point estimate of α[11] in Model E
exceeds two years.
The existence of a direct external effect of excess money is strongly supported by an auxiliary test, which was to estimate Model B*, equivalent to Model B or Model D with γ[4] set to zero, and hence
including no direct channel from money to the balance of payments.
Although Model B* has a generally better simulation performance than Model B, it is inferior to Model D in both dynamic and non-dynamic tracking ability. Moreover, it can be rejected in favour of
either Model B or Model D on the t-test of the significance of the parameter γ[4] in the latter models.
The specification and estimation technique permit, in principle, these adjustments to occur instantaneously. Hence, unless the model is seriously misspecified, the degree of “stickiness” is
determined by the data used. [19]
The asset disequilibrium is represented by a disequilibrium real balance term, which reflects the special role of money in a frictional economy. [20]
World income, prices and interest rates are proxied for estimation purposes by the relevant U.S. variables. [21]
Sayers (1976), pp.28 ff offers some support for this view in his discussion of the basis for Bank Rate decisions. The influence of internal balance, while unimportant in the period to World War II
(see Pollard (1970), pp.14–17), has increased in the post-war period under the influence of Keynesian demand-management policies. [22]
The data sources and methods of construction of each variable are discussed in Jonson (1975). [23]
These convergence problems suggest that, even where two models are “nested”, a likelihood ratio test may not be accurate in discriminating between the models. Since other tests are available (e.g.
the usual t-tests), the likelihood ratio tests are not used here, although they can be calculated by the reader. [24]
A complete set of estimates for all models is available from the authors on request. [25]
Figures in parentheses are asymptotic t-ratios. Note that the numerical values of the separate estimates of α[11] and γ[4] are not directly comparable; the other parameters are dimensionally
equivalent. Estimates are generally similar to those reported in Jonson (1976). [26]
RMSPE1 denotes the root mean square percentage error from a one-period (non-dynamic) simulation over the full estimation period; RMSPE2 denotes the root mean square percentage error from a full
dynamic simulation over the period 1955–1970. Figures in parentheses denote the degrees of freedom of each model, i.e. the number of restrictions which must be imposed on the unrestricted reduced
form to obtain the structural model. [27] | {"url":"https://www.rba.gov.au/publications/rdp/1977/7707/results-with-the-uk-model.html","timestamp":"2024-11-05T09:18:30Z","content_type":"application/xhtml+xml","content_length":"52354","record_id":"<urn:uuid:8edaa97c-e00a-4d68-a8c5-fa0313684ace>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00286.warc.gz"} |
Casino Craps – Simple to Be Schooled In and Simple to Win
Dec 27 2015
Craps is the most accelerated – and definitely the loudest – game in the casino. With the large, colorful table, chips flying all around and persons buzzing, it is fascinating to watch and exciting
to take part in.
Craps at the same time has one of the smallest value house edges against you than any other casino game, however only if you make the right wagers. In reality, with one style of bet (which you will
soon learn) you play even with the house, suggesting that the house has a "0" edge. This is the only casino game where this is confirmed.
The craps table is detectably bigger than a common pool table, with a wood railing that goes around the external edge. This railing functions as a backboard for the dice to be tossed against and is
sponge lined on the inner parts with random patterns in order for the dice bounce irregularly. Many table rails also have grooves on top where you usually appoint your chips.
The table covering is a tight fitting green felt with designs to declare all the varying plays that are able to be carried out in craps. It is particularly baffling for a beginner, but all you in
reality are required to consume yourself with right now is the "Pass Line" spot and the "Don’t Pass" spot. These are the only bets you will lay in our fundamental strategy (and typically the only
odds worth placing, period).
Never let the confusing arrangement of the craps table bluster you. The basic game itself is considerably plain. A new game with a brand-new player (the gambler shooting the dice) will start when the
current participant "sevens out", which denotes that he tosses a seven. That finishes his turn and a fresh player is handed the dice.
The new player makes either a pass line stake or a don’t pass challenge (clarified below) and then tosses the dice, which is referred to as the "comeout roll".
If that initial roll is a seven or eleven, this is describe as "making a pass" and the "pass line" bettors win and "don’t pass" candidates lose. If a 2, 3 or twelve are rolled, this is referred to as
"craps" and pass line players lose, whereas don’t pass line contenders win. Regardless, don’t pass line bettors never win if the "craps" # is a twelve in Las Vegas or a 2 in Reno as well as Tahoe. In
this case, the bet is push – neither the player nor the house wins. All pass line and don’t pass line odds are rendered even cash.
Keeping 1 of the three "craps" numbers from arriving at a win for don’t pass line bets is what allows the house it’s tiny edge of 1.4 percent on everyone of the line odds. The don’t pass competitor
has a stand-off with the house when one of these barred numbers is rolled. Apart from that, the don’t pass gambler would have a bit of advantage over the house – something that no casino allows!
If a number besides seven, eleven, 2, three, or 12 is rolled on the comeout (in other words, a four,5,6,eight,nine,10), that number is described as a "place" #, or simply a no. or a "point". In this
case, the shooter goes on to roll until that place # is rolled once again, which is referred to as a "making the point", at which time pass line gamblers win and don’t pass bettors lose, or a 7 is
tossed, which is named "sevening out". In this instance, pass line gamblers lose and don’t pass wagerers win. When a competitor 7s out, his period has ended and the entire procedure will start again
with a brand-new contender.
Once a shooter tosses a place # (a 4.five.six.eight.nine.10), several varied types of odds can be laid on every last advancing roll of the dice, until he sevens out and his turn is over. However,
they all have odds in favor of the house, many on line bets, and "come" wagers. Of these two, we will solely be mindful of the odds on a line wager, as the "come" gamble is a little more difficult.
You should evade all other gambles, as they carry odds that are too high against you. Yes, this means that all those other gamblers that are throwing chips all over the table with every single roll
of the dice and performing "field wagers" and "hard way" odds are in fact making sucker stakes. They may know all the ample wagers and distinctive lingo, so you will be the more able gambler by just
placing line gambles and taking the odds.
So let us talk about line stakes, taking the odds, and how to do it.
To place a line bet, basically apply your money on the vicinity of the table that says "Pass Line", or where it says "Don’t Pass". These stakes pay out even capital when they win, in spite of the
fact that it is not true even odds as a consequence of the 1.4 % house edge reviewed just a while ago.
When you wager the pass line, it means you are making a wager that the shooter either bring about a 7 or 11 on the comeout roll, or that he will roll one of the place numbers and then roll that
number once more ("make the point") in advance of sevening out (rolling a 7).
When you wager on the don’t pass line, you are laying odds that the shooter will roll either a snake-eyes or a three on the comeout roll (or a three or twelve if in Reno and Tahoe), or will roll 1 of
the place numbers and then 7 out near to rolling the place no. one more time.
Odds on a Line Gamble (or, "odds stakes")
When a point has been established (a place number is rolled) on the comeout, you are enabled to take true odds against a 7 appearing before the point number is rolled again. This means you can bet an
additional amount up to the amount of your line wager. This is known as an "odds" gamble.
Your odds gamble can be any amount up to the amount of your line gamble, even though several casinos will now accept you to make odds wagers of two, three or even more times the amount of your line
bet. This odds play is rewarded at a rate on same level to the odds of that point no. being made before a 7 is rolled.
You make an odds stake by placing your wager exactly behind your pass line bet. You are mindful that there is nothing on the table to show that you can place an odds gamble, while there are signs
loudly printed everywhere on that table for the other "sucker" gambles. This is considering that the casino does not intend to encourage odds bets. You must know that you can make one.
Here is how these odds are allocated. Since there are 6 ways to how a numberseven can be tossed and five ways that a 6 or eight can be rolled, the odds of a six or 8 being rolled ahead of a seven is
rolled again are 6 to 5 against you. This means that if the point number is a six or eight, your odds wager will be paid off at the rate of six to 5. For each 10 dollars you wager, you will win 12
dollars (wagers lower or bigger than 10 dollars are of course paid at the same 6 to 5 ratio). The odds of a five or nine being rolled prior to a seven is rolled are 3 to two, so you get paid fifteen
dollars for every single ten dollars stake. The odds of 4 or ten being rolled initially are 2 to 1, this means that you get paid $20 in cash for each and every 10 dollars you wager.
Note that these are true odds – you are paid definitely proportional to your opportunity of winning. This is the only true odds wager you will find in a casino, as a result take care to make it each
time you play craps.
Here’s an instance of the three styles of results that come forth when a new shooter plays and how you should advance.
Consider that a brand-new shooter is warming up to make the comeout roll and you make a ten dollars stake (or whatever amount you want) on the pass line. The shooter rolls a 7 or 11 on the comeout.
You win ten dollars, the amount of your wager.
You bet 10 dollars once more on the pass line and the shooter makes a comeout roll one more time. This time a 3 is rolled (the player "craps out"). You lose your 10 dollars pass line wager.
You play another 10 dollars and the shooter makes his third comeout roll (retain that, each and every shooter continues to roll until he 7s out after making a point). This time a four is rolled – one
of the place numbers or "points". You now want to take an odds wager, so you place 10 dollars specifically behind your pass line stake to display you are taking the odds. The shooter advances to roll
the dice until a four is rolled (the point is made), at which time you win 10 dollars on your pass line gamble, and twenty dollars on your odds stake (remember, a 4 is paid at two to 1 odds), for a
complete win of $30. Take your chips off the table and get ready to gamble one more time.
On the other hand, if a 7 is rolled in advance of the point no. (in this case, ahead of the 4), you lose both your ten dollars pass line wager and your $10 odds gamble.
And that is all there is to it! You merely make you pass line play, take odds if a point is rolled on the comeout, and then wait for either the point or a 7 to be rolled. Ignore all the other
confusion and sucker stakes. Your have the best wager in the casino and are taking part keenly.
Odds stakes can be made any time after a comeout point is rolled. You don’t have to make them right away . Even so, you would be absurd not to make an odds stake as soon as possible considering it’s
the best gamble on the table. But, you are at libertyto make, disclaim, or reinstate an odds wager anytime after the comeout and right before a 7 is rolled.
When you win an odds stake, ensure to take your chips off the table. If not, they are judged to be customarily "off" on the next comeout and will not count as another odds gamble unless you
distinctively tell the dealer that you want them to be "working". Even so, in a quick paced and loud game, your proposal maybe won’t be heard, this means that it’s much better to simply take your
profits off the table and wager once more with the next comeout.
Basically any of the downtown casinos. Minimum bets will be tiny (you can usually find $3) and, more characteristically, they usually yield up to ten times odds bets.
All the Best!
You must be logged in to post a comment. | {"url":"http://mondaymoney.com/2015/12/27/casino-craps-simple-to-be-schooled-in-and-simple-to-win/","timestamp":"2024-11-03T10:37:37Z","content_type":"application/xhtml+xml","content_length":"33849","record_id":"<urn:uuid:c6a87960-cfc3-4ab6-9805-f3d6eadbbe13>","cc-path":"CC-MAIN-2024-46/segments/1730477027774.6/warc/CC-MAIN-20241103083929-20241103113929-00553.warc.gz"} |
Cohomology and augmentations of ringed simplicial sites
Lemma 85.11.1. With notation as above. The morphism $a : (\mathop{\mathit{Sh}}\nolimits (\mathcal{C}_{total}), \mathcal{O}) \to (\mathop{\mathit{Sh}}\nolimits (\mathcal{D}), \mathcal{O}_\mathcal {D})
$ is flat if and only if $a_ n : (\mathop{\mathit{Sh}}\nolimits (\mathcal{C}_ n), \mathcal{O}_ n) \to (\mathop{\mathit{Sh}}\nolimits (\mathcal{D}), \mathcal{O}_\mathcal {D})$ is flat for $n \geq 0$.
Comments (2)
Comment #8762 by ZL on
Typo in the line before Lemma 85.11.1: "$a_ n \circ f_\varphi = a_ m$" should be "$a_ m \circ f_\varphi = a_ n$".
Comment #9318 by Stacks project on
Thanks and fixed here.
Post a comment
Your email address will not be published. Required fields are marked.
In your comment you can use Markdown and LaTeX style mathematics (enclose it like $\pi$). A preview option is available if you wish to see how it works out (just click on the eye in the toolbar).
All contributions are licensed under the GNU Free Documentation License.
In order to prevent bots from posting comments, we would like you to prove that you are human. You can do this by filling in the name of the current tag in the following input field. As a reminder,
this is tag 0D9D. Beware of the difference between the letter 'O' and the digit '0'.
The tag you filled in for the captcha is wrong. You need to write 0D9D, in case you are confused. | {"url":"https://stacks.math.columbia.edu/tag/0D9D","timestamp":"2024-11-07T03:59:59Z","content_type":"text/html","content_length":"23633","record_id":"<urn:uuid:f4cf9880-9e79-40b3-bb92-f8787b680b4d>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00606.warc.gz"} |
Ideal gas equation
LectureNotes said ideal gas equation
The ideal gas equation is a fundamental equation in thermodynamics and physical chemistry that describes the behavior of ideal gases. An ideal gas is a hypothetical gas whose molecules occupy
negligible space and have no interactions, and it perfectly obeys the gas laws.
The ideal gas equation is given by:
PV = nRT
• P is the pressure of the gas,
• V is the volume of the gas,
• n is the number of moles of the gas,
• R is the universal gas constant,
• T is the temperature of the gas in Kelvin.
Let’s break down each component of the equation:
1. Pressure (P):
• Pressure is the force exerted by the gas molecules per unit area on the walls of the container. It is usually measured in atmospheres (atm), pascals ¶, or torr.
2. Volume (V):
• Volume is the space occupied by the gas. It is typically measured in liters (L) or cubic meters (m³).
3. Number of Moles (n):
• The number of moles represents the amount of substance present in the gas. One mole corresponds to Avogadro’s number (6.022 \times 10^{23}) of molecules.
4. Universal Gas Constant (R):
• The gas constant ( R ) is a proportionality constant that relates the energy scale in physics to the temperature scale. Its value depends on the units used for pressure, volume, and temperature.
Common values are:
□ R = 8.314 , \text{J/(mol·K)}
□ R = 0.0821 , \text{L·atm/(mol·K)}
5. Temperature (T):
• Temperature is a measure of the average kinetic energy of the gas molecules. It must be measured in Kelvin (K) for the ideal gas equation to be accurate. Kelvin can be converted from Celsius by
adding 273.15 to the Celsius temperature.
Applications of the Ideal Gas Equation:
• Determining Molar Mass: By measuring the pressure, volume, and temperature of a gas sample, you can determine its molar mass.
• Calculating Gas Density: The density of a gas can be derived from the ideal gas equation.
• Chemical Reactions: The ideal gas equation is used to predict the behavior of gases in chemical reactions, especially in reactions involving gases.
Limitations of the Ideal Gas Equation:
While the ideal gas equation is a useful approximation, it has limitations:
• It assumes that gas molecules do not interact with each other, which is not true for real gases at high pressures and low temperatures.
• It assumes that the volume occupied by the gas molecules themselves is negligible compared to the volume of their container.
For real gases, deviations from ideal behavior can be accounted for using the Van der Waals equation or other real gas models.
In summary, the ideal gas equation is a powerful tool in understanding the behavior of gases under various conditions. However, it’s important to recognize its limitations and apply corrections when
dealing with real gases. | {"url":"https://en.sorumatik.co/t/ideal-gas-equation/16341","timestamp":"2024-11-14T21:44:46Z","content_type":"text/html","content_length":"22020","record_id":"<urn:uuid:a9ea7e18-188f-43ed-9c44-62b1e03b153c>","cc-path":"CC-MAIN-2024-46/segments/1730477395538.95/warc/CC-MAIN-20241114194152-20241114224152-00690.warc.gz"} |
A renormalization approach to the Liouville quantum gravity metric
2021 Theses Doctoral
A renormalization approach to the Liouville quantum gravity metric
This thesis explores metric properties of Liouville quantum gravity (LQG), a random geometry with conformal symmetries introduced in the context of string theory by Polyakov in the 80’s. Formally, it
corresponds to the Riemannian metric tensor “e^{γh}(dx² + dy²)” where h is a planar Gaussian free field and γ is a parameter in (0, 2). Since h is a random Schwartz distribution with negative
regularity, the exponential e^{γh} only makes sense formally and the associated volume form and distance functions are not well-defined. The mathematical language to define the volume form was
introduced by Kahane, also in the 80’s. In this thesis, we explore a renormalization approach to make sense of the distance function and we study its basic properties.
• Falconet_columbia_0054D_16428.pdf application/pdf 2.69 MB Download File
More About This Work
Academic Units
Thesis Advisors
Dubedat, Julien
Ph.D., Columbia University
Published Here
April 21, 2021 | {"url":"https://academiccommons.columbia.edu/doi/10.7916/d8-f9v7-7j12","timestamp":"2024-11-09T19:58:56Z","content_type":"text/html","content_length":"17226","record_id":"<urn:uuid:fca9c63c-98dd-4a1d-9042-08deefa65f3c>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00530.warc.gz"} |
How To Draw The Bisector Of An Angle
How To Draw The Bisector Of An Angle - Classify triangles and rectangles as closed shape game. Web drawing the bisector: Web i'm a bit confused: Line segment bisector (perpendicular bisector theorem)
Draw the angle you want to bisect (∠abc). Learn how to construct an angle bisector (halve the angle) using just a compass and a straightedge. Want to join the conversation? A positive point charge is
placed at the angle bisector of two uncharged plane conductors that make an angle of 45°. Web how to construct an angle bisector. Web rainford's maths department show us how to construct an angle
How to Draw the Bisectors of Angles of a Triangle YouTube
Draw the electric field lines. There are two types of bisectors based on what geometrical shape it bisects. Draw a line ab of any length. A line which cuts an angle into two equal side angles is
called an angle bisector. Get your free angle bisector worksheet of 20+ questions and answers.
How to Construct a Bisector of a Given Angle 6 Steps
It explains in simple ways to draw the bisector of angles of a triangle. This ray \ (bf\)will divide \ ( \angle abc \) into two equal angles, \ ( \angle abf \) and \ ( \angle cbf \), which are
congruent. Taking b as the center and any appropriate radius, draw an arc to intersect the rays ba and.
Angle bisectorDefinition & Examples Cuemath
Set the length of a compass. Instructions on how to construct an angle bisector with a compass and a straightedge. With that width, draw two arcs, one intersecting each ray of the angle (one arc
inside each angle. If we draw a ray that bisects an angle into two equal parts of the same measure, then it is called an.
Bisect an Angle (examples, solutions, worksheets, videos, games
The bisector is a line that divides a line or an angle into two equivalent parts. Draw a line ab of any length. So, to find where the angle bisector lays, divide the number of degrees in the angle by
2. It divides an angle into two congruent angles. For example, if the angle is 160 degrees, you would calculate.
How to draw bisector of an angle? YouTube
There are two types of bisectors based on what geometrical shape it bisects. Web an angle bisector of a 60 ∘ angle will divide it into two angles of 30 ∘ each. And that's basically what i used for
the proof. Web the corbettmaths video tutorial on how to construct an angle bisector. Web steps to construct an angle bisector:
How to draw Angle bisector of a Triangle .. Class 7th YouTube
Web steps to construct an angle bisector: Angle bisector theorems of triangles. Web how to construct an angle bisector. See the proof below for more on. Set the compass opening to any convenient
How to draw the angle bisector of any angle YouTube
Web how to construct an angle bisector (halve the angle) using just a compass and a straightedge. Web please follow the steps of construction shown below to construct the angle bisector. Web how to
construct an angle bisector. A positive point charge is placed at the angle bisector of two uncharged plane conductors that make an angle of 45°. And.
Bisect Definition, Formula and Examples Cuemath
Answer questions related to triangles game. (refer to the figure below) For example, if the angle is 160 degrees, you would calculate =. Instructions on how to construct an angle bisector with a
compass and a straightedge. The bisector is a line that divides a line or an angle into two equivalent parts.
How to Construct a Bisector of a Given Angle 8 Steps
So, to find where the angle bisector lays, divide the number of degrees in the angle by 2. Please note that this step could be done with a compass as well. To construct an angle bisector for angle ∠
a formed by vertex a and two lines a b and a c, follow the steps below. Get your free angle.
Angle Bisector 01 Definition and Drawing YouTube
Web an angle bisector is a straight line drawn from the vertex of a triangle to its opposite side in such a way, that it divides the angle into two equal or congruent angles. There are two types of
bisectors based on what geometrical shape it bisects. Set the length of a compass. A positive point charge is placed at.
A Triangle \ (Abc\) Where You Are Asked To Bisect \ ( \Angle Abc \).
Using your straightedge, draw a ray from vertex \ (b\) through point \ (f\). Web in geometry, it is possible to bisect an angle using only a compass and ruler. Draw the angle you want to bisect
(∠abc). To do so, use the following steps:
To Construct An Angle Bisector For Angle ∠ A Formed By Vertex A And Two Lines A B And A C, Follow The Steps Below.
Angle bisector in geometry refers to a line that splits an angle into two equal angles. Constructing the bisector of an angle. Use compasses to draw an arc. Instructions on how to construct an angle
bisector with a compass and a straightedge.
The Bisector Is A Line That Divides A Line Or An Angle Into Two Equivalent Parts.
Line segment bisector (perpendicular bisector theorem) Bisector means the thing that bisects a shape or an object into two equal parts. How to construct an angle bisector. Please note that this step
could be done with a compass as well.
This Euclidean Construction Works By Creating Two Congruent Triangles.
Set the length of a compass. The bisector of a segment always contains the midpoint of the segment. Web how to construct an angle bisector. With 'a' as the center, and by placing the center of the
protractor on a, mark 60°, and label the point as 'c'.
Related Post: | {"url":"https://participation-en-ligne.namur.be/read/how-to-draw-the-bisector-of-an-angle.html","timestamp":"2024-11-14T11:48:40Z","content_type":"text/html","content_length":"25705","record_id":"<urn:uuid:de2ddad8-c7e3-442a-ad71-b0ed7dcb4855>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00895.warc.gz"} |
Mathematics Thread for David
I've decided David needs his own thread to expound upon the beauties of mathematics (and hence distract him from horse colors, which is clearly a sad waste for a brain the size of a planet). So I've
made one just for him. I'll kick it off with what just popped up in the news:
It seems the ABC conjecture (which I had never heard of before but appears to be important) may have just been proved so that it can be elevated to a theorem. The lay article I read states that
Fermat's last theorem and a host of other number theory stuff follows directly from it.
Why is this somewhat odd theorem so important?
Halfwise, son of Halfwit. Brother of Nitwit, son of Halfwit. Half brother of Figwit.
Then it gets complicated...
Quintessence of Burrahobbitry
Posts : 20601
Join date : 2012-02-01
Location : rustic broom closet in farthing of Manhattan
funny you should ask.. ha ha just kidding...no idea wot you said...
Mrs Figg
Eel Wrangler from Bree
Posts : 25933
Join date : 2011-10-06
Age : 94
Location : Holding The Door
Me either, that's why I'm fishing for David.
Halfwise, son of Halfwit. Brother of Nitwit, son of Halfwit. Half brother of Figwit.
Then it gets complicated...
Quintessence of Burrahobbitry
Posts : 20601
Join date : 2012-02-01
Location : rustic broom closet in farthing of Manhattan
Why is Fermat's last theorem so important? Well for that question to have meaning you must first assume that mathematics itself is important. From there, everything else is a corollary.
Where to start....? The history of diophantine equations and the first solutions of the Pythagorean theorem?
OK everybody knows that x^2 + y^2 = z^2 has an infinite number if integral solutions, right? In other words there exist an infinite number of right triangles whose side are perfect integers. These
Pythagorean triples were fundamental to ancient Egyptian surveying, and are at the foundation of all our architecture, mapping, charting, and measuring the universe.
So it seemed natural to mathematicians throughout the centuries that really cool things would happen if you could prove something equivalent for x^n + y^n = z^n where n>2. Except it doesn't work.
There just aren't any integer solutions for higher powers, no matter how hard you look! Then the question became "why not?"
In 1636, Fermat wrote in the margin of one of his books that he'd found a marvelous proof of "why not" but didn't have room to write it at the moment. Then he died. This triggered the largest wild
goose chase in the history of mathematics. Gauss pretty much revolutionized modern algebra in an attempt to crack this nut, failing miserably and publicly, but defining much 19th and 20th century
mathematics in the process.
The proof finally fell to Andrew Wiles in 1995. There's a good NOVA episode on it. I tried looking at his proof when it came out, but it's way over the head of a humble farmer. It's technically
proven, but it's far from elegant.
The ABC conjecture, which I've just been looking at, seems to provide a much more elegant, intuitive proof which should suggest even more wild geese for mathematicians to chase in the future.
But is it important? I guess it depends on how much you like wild geese....
David H
Horsemaster, Fighting Bears in the Pacific Northwest
Posts : 7194
Join date : 2011-11-18
Mrs Figg
Eel Wrangler from Bree
Posts : 25933
Join date : 2011-10-06
Age : 94
Location : Holding The Door
Amazingly David I think after reading your explanation I am now more ignorant than I was when I didnt know anything.
Is that what mathmaticans feel like all the time?
Pure Publications, The Tower of Lore and the Former Admin's Office are Reasonably Proud to Present-
A Green And Pleasant Land
Compiled and annotated by Eldy.
- get your copy here for a limited period- free*
*Pure Publications reserves the right to track your usage of this publication, snoop on your home address, go through your bins and sell personal information on to the highest bidder.
Warning may contain Wholesome Tales[/b]
the crabbit will suffer neither sleight of hand nor half-truths. - Forest
Posts : 46837
Join date : 2011-02-14
Age : 53
Location : Scotshobbitland
A more intuitive proof of Fermat's last theorem would be nice, but if it's based on another 500 page proof I wonder how much is actually gained.
Halfwise, son of Halfwit. Brother of Nitwit, son of Halfwit. Half brother of Figwit.
Then it gets complicated...
Quintessence of Burrahobbitry
Posts : 20601
Join date : 2012-02-01
Location : rustic broom closet in farthing of Manhattan
Not to turn this thread to Who but the Doctor claims it his fault the theorem was not completed as he overslept the day Fermat died.
Pure Publications, The Tower of Lore and the Former Admin's Office are Reasonably Proud to Present-
A Green And Pleasant Land
Compiled and annotated by Eldy.
- get your copy here for a limited period- free*
*Pure Publications reserves the right to track your usage of this publication, snoop on your home address, go through your bins and sell personal information on to the highest bidder.
Warning may contain Wholesome Tales[/b]
the crabbit will suffer neither sleight of hand nor half-truths. - Forest
Posts : 46837
Join date : 2011-02-14
Age : 53
Location : Scotshobbitland
Pettytyrant101 wrote:Amazingly David I think after reading your explanation I am now more ignorant than I was when I didnt know anything.
Is that what mathmaticans feel like all the time?
My uncle once explained education to me this way:
When you think you know everything, they give you a high school diploma.
When you realize that you don't know everything, they give you a Bachelor's degree.
When you realize that you don't know anything, they give you a Master's degree.
When you realize that you don't know anything, but neither does anybody else, they give you a Doctorate and ask you to teach people.
David H
Horsemaster, Fighting Bears in the Pacific Northwest
Posts : 7194
Join date : 2011-11-18
halfwise wrote:A more intuitive proof of Fermat's last theorem would be nice, but if it's based on another 500 page proof I wonder how much is actually gained.
The gain really has nothing to do with proving Fermat's theorem yet again. Nobody has had any worries about the actual truth of the conjecture. It's all about sharpening the tools that were forged
for the proof, and then seeing what else they may imply. That's how you peel the onion, one layer at a time.
David H
Horsemaster, Fighting Bears in the Pacific Northwest
Posts : 7194
Join date : 2011-11-18
I guess if the ABC theorem was somewhat intuitive but hard to prove, it would be worth using for other things. People could see their way to other proofs more easily.
Halfwise, son of Halfwit. Brother of Nitwit, son of Halfwit. Half brother of Figwit.
Then it gets complicated...
Quintessence of Burrahobbitry
Posts : 20601
Join date : 2012-02-01
Location : rustic broom closet in farthing of Manhattan
I looked a little deeper. This proof uses some of the same approaches as Wiles' proof of Fermat but apparently solves the problem in a much longer and more general way. Very exciting because it's at
that state where nobody can guess it's implications. I don't know enough to have an opinion, bit it seems to contain a fundamental new insight into the nature of prime numbers. That could certainly
have implications for quantum physicists, if that's what you're asking.
{{once again I fear that mathematicians are sweating blood over a period of years to forge precision cutting tools, while the physicists sit back saying, "I bet I could really cut stuff with that if
I hit it with a big enough hammer."}}
David H
Horsemaster, Fighting Bears in the Pacific Northwest
Posts : 7194
Join date : 2011-11-18
its all becoming clear. kinda.
Mrs Figg
Eel Wrangler from Bree
Posts : 25933
Join date : 2011-10-06
Age : 94
Location : Holding The Door
Permissions in this forum:
reply to topics in this forum | {"url":"https://www.forumshire.com/t465-mathematics-thread-for-david","timestamp":"2024-11-14T08:57:09Z","content_type":"text/html","content_length":"71621","record_id":"<urn:uuid:8dce68fe-1aa3-42e7-a063-2dbd9541530b>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00779.warc.gz"} |
e Interest
How To Calculate Interest Of Loan
L = loan amount r = interest rate, if floating rn is the interest rate in year n n = tenor of the loan (if the repayment period is 6 months, or 3 months. Interest amount = loan amount x interest rate
x loan term. Just make sure to convert the interest rate from a percentage to a decimal. For example, let's say. How to Use Our Loan Interest Calculator There are three main components when
determining your total loan interest: To use the calculator, you will input. We calculate the monthly payment, taking into account the loan amount, interest rate and loan term. The pay-down or
amortization of the loans over time is. the formula for calculation is: EMI = [p x r x (1+r)^n]/[(1+r)^n-1]; car loan calculator: the car loan calculator helps you determine your EMIs you pay to
An annual percentage rate (APR) is a measure that's used to make it easier to understand how much borrowing money will cost. How to use Credit Karma's loan calculator · Loan amount · Loan term ·
Interest rate. Free online calculator to find the interest rate as well as the total interest cost of an amortized loan with a fixed monthly payback amount. The basic loan payment formula includes
your loan principal amount (P), interest rate (r), and loan term (T). How to Calculate Interest-Only Loan Payments · Divide your interest rate by the number of payments in a year (12) to get your
monthly interest rate: ÷ Interest on a loan, such as a car, personal or home loan, is usually calculated daily based on the unpaid balance. This typically involves multiplying your loan. Calculate
your line of credit or loan payments. Required Information. Borrowing Reason: Select, Pay Off My Debt Sooner, Reduce Monthly Interest Costs. Learn how to calculate auto loan interest using our car
loan calculator method. This is a necessary step in determining exactly how much your monthly payment. Assumes that the interest rate remains constant throughout the amortization period. All loans
are subject to B2B Bank approval and the calculations shown may. To calculate the Loan Amount with Interest is calculate using the formula: =B8*. Figure 8. The Loan Amount with Interest over five
years is $13, This. Use the formula Interest = P x R x T, where P is the principal, R is the interest rate, and T is the term of the loan. For example, to find the interest of a.
Divide the amount of the additional payment by the amount loaned to determine the simple interest rate. For example, consider a loan of $1,, which must be. The formula for computing simple interest
is A = P (1+rt). To compute 5% interest per month, consider r = 5% per month and put the number of months in the. Step 1: Calculate the Daily Interest Rate. You first take the annual interest rate on
your loan and divide it by to determine the amount of interest that. Interest is calculated monthly at 1/th of the annual rate times the number of days in the month on the current outstanding balance
of your loan. Interest is accrued daily and charged as per the payment frequency. Rates quoted are not considered rate guarantees. Calculations assume that the interest rate. Interest is accrued
daily and charged as per the payment frequency. Rates used for calculations are not considered rate guarantees or offers. Calculations. How to Calculate Monthly Loan Payments · If your rate is %,
divide by 12 to calculate your monthly interest rate. · Calculate the repayment term in. Interest on a loan, such as a car, personal or home loan, is usually calculated daily based on the unpaid
balance. This typically involves multiplying your loan. Annual interest rate for this loan. Interest is calculated monthly on the current outstanding balance of your loan at 1/12 of the annual rate.
Information and.
Calculate monthly payments and interest costs for a range of loans with the RBC business loan calculator. The formula is: Simple Interest = Principal × Rate × Time. What are the advantages of using a
loan interest rate calculator? A loan. The calculation is an estimate of what you will pay towards an auto loan. Use the amount as a reference or guideline; it may not be the same amount you receive.
To calculate the periodic interest rate for a loan, given the loan amount, the number of payment periods, and the payment amount, you can use the RATE. How to Calculate Interest Rate on a Loan:
Principal Loan Amount x Interest Rate x Repayment Tenure = Interest. For more details click here.
Alumni Solutions | Mortgage Interest Rates In Texas | {"url":"https://spark-servis.ru/news/how-to-calculate-interest-of-loan.php","timestamp":"2024-11-13T06:15:28Z","content_type":"text/html","content_length":"15929","record_id":"<urn:uuid:8aaaeb07-747f-469d-a703-fba581588e97>","cc-path":"CC-MAIN-2024-46/segments/1730477028326.66/warc/CC-MAIN-20241113040054-20241113070054-00039.warc.gz"} |
Surds – GCSE Mathematics Edexcel Revision – Study Rocket
Simplifying Surds
What is a surd?
A surd is a number that is irrational. It cannot be written exactly. For example √3= 1.732051080……. It continues indefinitely and cannot be written as a fraction.
Simplifying Surds
When working with surds there are certain rules we need to learn to help us manipulate and simplify them:
A key thing to remember with surds is that if you can get to a square number, then you can simplify the surd!
Always aim to make square numbers under surds. This usually involves breaking a surd down into its square factors.
For example to simplify √60
Write down the factors of 60 and look for square factors
1 x 60 , 2 x 30, 3x 20, __4 x 15, __5 x 12, 6 x 10
4 is a square number so now we have a square factor!
√60 __= __√4 x √15 __= 2√15__
15 does not have any square factors (1 x 15, 3 x 5) therefore we cannot simplify the surd anymore.
= 2√15 is our answer.
Rationalising Denominators
A key question with surds often involves rationalising a denominator.
If you remember that a surd is a square root then √ a x √ a = a
Given this sometimes you will be required to rationalise the denominator of a fraction, if it is a surd. Eg.
By making the denominator 2 you have rationalised it!
The same logic applies for more complicated examples: | {"url":"https://studyrocket.co.uk/revision/gcse-mathematics-edexcel/number/surds","timestamp":"2024-11-02T09:17:30Z","content_type":"text/html","content_length":"35081","record_id":"<urn:uuid:475f412d-6671-4a21-8187-467d4d1ee4da>","cc-path":"CC-MAIN-2024-46/segments/1730477027709.8/warc/CC-MAIN-20241102071948-20241102101948-00794.warc.gz"} |
Threefold Magnification with a Concave Mirror - use-of-internet.com
A concave mirror is a type of spherical mirror in which the reflective surface bulges inward. When light rays strike a concave mirror, they may either converge or diverge depending on the position of
the object relative to the mirror. In this article, we will delve into the concept of threefold magnification using a concave mirror. Understanding how to calculate magnification and the various
factors that influence it can provide valuable insights into the behavior of concave mirrors.
Understanding Concave Mirrors
Concave mirrors are widely used in various optical devices such as telescopes, satellite dishes, and shaving mirrors. The reflective surface of a concave mirror is curved inward, causing incoming
parallel rays of light to converge at a specific point known as the focal point. This focal point is located along the principal axis of the mirror, which is an imaginary line passing through the
center of curvature, the focal point, and the vertex of the mirror.
Magnification in Concave Mirrors
Magnification in optics refers to the process of enlarging or reducing the size of an object. In the case of concave mirrors, magnification is influenced by the mirror’s focal length, object
distance, and image distance. Magnification (m) is defined as the ratio of the height of the image (h’) to the height of the object (h). It can be calculated using the formula:
m = \frac{-h’}{h}
The negative sign indicates that the image is inverted with respect to the object in concave mirrors. Magnification can also be expressed in terms of object distance (u) and image distance (v) using
the formula:
m = \frac{v}{u}
Threefold Magnification with a Concave Mirror
Achieving a threefold magnification with a concave mirror involves creating an image that is three times larger than the actual object. To calculate the necessary parameters for this magnification,
consider the following steps:
1. Determine the object distance (u): Place the object at a specific distance from the concave mirror. The object distance is denoted by ‘u’ in the magnification formula.
2. Calculate the focal length (f): The focal length of the concave mirror plays a crucial role in determining the image distance and magnification. It is half of the radius of curvature of the
3. Using the mirror formula: The mirror formula for concave mirrors is given by:
\frac{1}{f} = \frac{1}{v} + \frac{1}{u}
1. Calculate the image distance (v): Substitute the known values of focal length and object distance into the mirror formula to determine the image distance.
2. Determine the magnification (m): Once the image distance is calculated, use the magnification formula to find the magnification factor.
Factors Affecting Magnification with Concave Mirrors
Several factors can influence the magnification achieved with a concave mirror:
1. Object distance: Moving the object closer to the mirror can result in increased magnification.
2. Focal length: Mirrors with shorter focal lengths tend to produce higher magnification.
3. Curvature of the mirror: A more curved concave mirror can lead to greater magnification.
4. Size of the object: Larger objects may appear more magnified in the mirror.
Applications of Threefold Magnification
The ability to achieve threefold magnification with a concave mirror has various practical applications in different fields:
1. Dentistry: Dentists use concave mirrors for examining hard-to-reach areas inside the mouth with enhanced magnification.
2. Security and Surveillance: Threefold magnification can be beneficial for security cameras to capture detailed images over a larger area.
3. Art and Jewelry: Artists and jewelers may utilize concave mirrors for magnifying intricate details in paintings or jewelry designs.
4. Scientific Research: In laboratories, concave mirrors with threefold magnification can aid in observing tiny specimens or conducting precise experiments.
Frequently Asked Questions (FAQs)
1. Can a concave mirror produce a threefold enlarged real image?
2. Yes, a concave mirror can produce a threefold enlarged real image under certain conditions of object distance and focal length.
3. How does the shape of the concave mirror affect magnification?
4. A more curved (deeper) concave mirror tends to provide higher magnification compared to a less curved mirror.
5. What happens to magnification if the object is placed beyond the focal point of a concave mirror?
6. When the object is placed beyond the focal point of a concave mirror, the image appears enlarged and virtual, resulting in a reduced magnification.
7. Is magnification always positive for concave mirrors?
8. No, magnification in concave mirrors is negative when the image is real and inverted, indicating a reduction in size compared to the object.
9. Can a concave mirror with a short focal length achieve a threefold magnification with ease?
10. Concave mirrors with shorter focal lengths have the potential to achieve threefold magnification more easily compared to mirrors with longer focal lengths.
In conclusion, understanding the principles of magnification in concave mirrors and the methods to achieve threefold magnification can provide valuable insights into the behavior of light rays. By
mastering the calculations and factors influencing magnification, individuals can harness the power of concave mirrors for a wide array of applications spanning from science to everyday life. | {"url":"https://use-of-internet.com/threefold-magnification-with-a-concave-mirror/","timestamp":"2024-11-04T10:25:51Z","content_type":"text/html","content_length":"282866","record_id":"<urn:uuid:56aebc09-488b-4c7c-915f-bf11a6a56e02>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00770.warc.gz"} |
How Many Kilometers Is 59.4 Feet?
How many kilometers in 59.4 feet?
59.4 feet equals 0.0181 kilometers
Unit Converter
Conversion formula
The conversion factor from feet to kilometers is 0.0003048, which means that 1 foot is equal to 0.0003048 kilometers:
1 ft = 0.0003048 km
To convert 59.4 feet into kilometers we have to multiply 59.4 by the conversion factor in order to get the length amount from feet to kilometers. We can also form a simple proportion to calculate the
1 ft → 0.0003048 km
59.4 ft → L[(km)]
Solve the above proportion to obtain the length L in kilometers:
L[(km)] = 59.4 ft × 0.0003048 km
L[(km)] = 0.01810512 km
The final result is:
59.4 ft → 0.01810512 km
We conclude that 59.4 feet is equivalent to 0.01810512 kilometers:
59.4 feet = 0.01810512 kilometers
Alternative conversion
We can also convert by utilizing the inverse value of the conversion factor. In this case 1 kilometer is equal to 55.232994865541 × 59.4 feet.
Another way is saying that 59.4 feet is equal to 1 ÷ 55.232994865541 kilometers.
Approximate result
For practical purposes we can round our final result to an approximate numerical value. We can say that fifty-nine point four feet is approximately zero point zero one eight kilometers:
59.4 ft ≅ 0.018 km
An alternative is also that one kilometer is approximately fifty-five point two three three times fifty-nine point four feet.
Conversion table
feet to kilometers chart
For quick reference purposes, below is the conversion table you can use to convert from feet to kilometers | {"url":"https://convertoctopus.com/59-4-feet-to-kilometers","timestamp":"2024-11-05T00:38:03Z","content_type":"text/html","content_length":"32865","record_id":"<urn:uuid:e09cf439-f811-4faf-aff3-cc375331e0cf>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00444.warc.gz"} |
Data Preprocessing
Data Preprocessing:
• Data preprocessing involves cleaning and transforming raw data to make it suitable for analysis.
• This can include tasks such as removing missing values, scaling numerical features, encoding categorical variables, etc.
• The goal of data preprocessing is to prepare the data for modeling by ensuring it is in the correct format, free of errors and inconsistencies, and ready for further analysis.
Basic Techniques:
• Removing unwanted column
• Removing duplicated value
• Imputing missing values
• Encoding categorical variables
• Removing the outlier
• Data normalization/scaling
• Transformation
• Balancing the data
When to do Preprocessing:
It is generally recommended to perform data preprocessing after splitting the data into training and testing sets. Here’s the corrected order of operations:
• Split the original dataset into training and testing sets. This should be done before any preprocessing steps.
• Perform data preprocessing steps, such as handling missing values, encoding categorical variables, feature scaling, or any other necessary transformations, on the training set only. Remember to
keep track of the preprocessing steps applied.
• Apply the same preprocessing steps that were performed on the training set to the testing set. This ensures that the testing set is processed in the same way as the training set, allowing for a
fair evaluation of the model’s performance.
• The main reason for this order is to avoid any data leakage from the testing set into the training set. By preprocessing the data separately for each set, you ensure that the model is trained and
evaluated on independent and unbiased data.
• It’s important to note that some preprocessing steps, such as calculating statistics for imputation or feature scaling, may require information from the entire dataset. In such cases, it is still
recommended to calculate those statistics using only the training set and then apply them to both the training and testing sets.
• Overall, the correct order is to split the data first, then perform preprocessing on the training set, and finally apply the same preprocessing steps to the testing set.
Removing Unwanted Column:
• Sometimes, we need to remove some columns using .drop() like id column, Serial column etc
import pandas as pd
# Sample DataFrame
data = {
'Name': ['Alice', 'Bob', 'Charlie', 'David'],
'Age': [25, 30, 35, 40],
'Gender': ['Female', 'Male', 'Male', 'Male'],
'Salary': [70000, 80000, 90000, 100000],
'Unwanted_Column': [1, 2, 3, 4] # This is the column we want to remove
df = pd.DataFrame(data)
# Remove a single column
df_cleaned = df.drop(columns=['Unwanted_Column'])
# Alternatively, remove multiple columns (e.g., 'Gender' and 'Unwanted_Column')
# df_cleaned = df.drop(columns=['Gender', 'Unwanted_Column'])
Removing Duplicated Value:
• Use the .duplicated() & .duplicated().sum() method to identify the duplicated rows in your dataset.
• Once you have identified the duplicates, remove them using the .drop_duplicates() method.
• This will keep only the first occurrence of each unique value and eliminate subsequent duplicates.
Identify Duplicate Rows:
#Creating dummy dataframe
import pandas as pd
data = {
'Name': ['Alice', 'Bob', 'Charlie', 'Alice', 'Bob', 'David'],
'Age': [25, 30, 35, 25, 30, 40],
'City': ['New York', 'Los Angeles', 'Chicago', 'New York', 'Los Angeles', 'Houston']
df = pd.DataFrame(data)
print("Original DataFrame:")
#Identify Duplicate Rows:
duplicates = df.duplicated()
print("\nDuplicate Rows (Boolean Series):")
duplicates = df.duplicated().sum()
print("\nDuplicate Rows (Boolean Series):")
Original DataFrame:
Name Age City
0 Alice 25 New York
1 Bob 30 Los Angeles
2 Charlie 35 Chicago
3 Alice 25 New York
4 Bob 30 Los Angeles
5 David 40 Houston
Duplicate Rows (Boolean Series):
0 False
1 False
2 False
3 True
4 True
5 False
dtype: bool
Duplicate Rows (Boolean Series):
Display Only Duplicate Rows:
duplicate_rows = df[df.duplicated()]
print("\nDuplicate Rows (DataFrame):")
Duplicate Rows (DataFrame):
Name Age City
3 Alice 25 New York
4 Bob 30 Los Angeles
Remove Duplicate Rows:
df_no_duplicates = df.drop_duplicates()
print("\nDataFrame after removing duplicates:")
DataFrame after removing duplicates:
Name Age City
0 Alice 25 New York
1 Bob 30 Los Angeles
2 Charlie 35 Chicago
5 David 40 Houston
Imputing Missing Values:
• A null value or missing value in the context of data analysis refers to an absence of data in a dataset.
• This means that a specific entry or observation for a certain variable (column) is not available or hasn’t been recorded.
Checking the missing value:
import numpy as np
import pandas as pd
df_dm = pd.read_csv('/content/drive/MyDrive/Data Science/CDS-07-Machine Learning & Deep Learning/06. Machine Learning Model /07_Support Vector Machines/SVM Class /Test_loan_approved.csv')
• Missing (null) values in a DataFrame and get a summary of them.
# Checking the missing value
summary = df_dm.isnull().sum()
# Checking the missing value of single column
missing_value_single = df_dm.Gender.isnull().sum()
print("\nTotal number of missing value in gender: ",missing_value_single)
Loan_ID 0
Gender 13
Married 3
Education 0
Self_Employed 32
LoanAmount 22
Loan_Amount_Term 14
Credit_History 50
Loan_Status (Approved) 0
dtype: int64
Total number of missing value in gender: 13
• Getting the indexes of row where values are missing in specific column using where
missing_index1 = np.where(df_dm.Gender.isnull()) #or
missing_index2 =np.where(df_dm.Gender.isnull()==True)
(array([ 23, 126, 171, 188, 314, 334, 460, 467, 477, 507, 576, 588, 592]),)
(array([ 23, 126, 171, 188, 314, 334, 460, 467, 477, 507, 576, 588, 592]),)
• Getting the actual data(row) from the indexes
Loan_ID Gender Married Education Self_Employed LoanAmount Loan_Amount_Term Credit_History Loan_Status (Approved)
23 LP001050 NaN Yes Not Graduate No 112.0 360.0 0.0 N
126 LP001448 NaN Yes Graduate No 370.0 360.0 1.0 Y
171 LP001585 NaN Yes Graduate No 700.0 300.0 1.0 Y
188 LP001644 NaN Yes Graduate Yes 168.0 360.0 1.0 Y
314 LP002024 NaN Yes Graduate No 159.0 360.0 1.0 N
334 LP002103 NaN Yes Graduate Yes 182.0 180.0 1.0 Y
460 LP002478 NaN Yes Graduate Yes 160.0 360.0 NaN Y
467 LP002501 NaN Yes Graduate No 110.0 360.0 1.0 Y
477 LP002530 NaN Yes Graduate No 132.0 360.0 0.0 N
507 LP002625 NaN No Graduate No 96.0 360.0 1.0 N
576 LP002872 NaN Yes Graduate No 136.0 360.0 0.0 N
588 LP002925 NaN No Graduate No 94.0 360.0 1.0 Y
592 LP002933 NaN No Graduate Yes 292.0 360.0 1.0 Y
• Getting missing values by loc function based on single column
Loan_ID Gender Married Education Self_Employed LoanAmount Loan_Amount_Term Credit_History Loan_Status (Approved)
23 LP001050 NaN Yes Not Graduate No 112.0 360.0 0.0 N
126 LP001448 NaN Yes Graduate No 370.0 360.0 1.0 Y
171 LP001585 NaN Yes Graduate No 700.0 300.0 1.0 Y
188 LP001644 NaN Yes Graduate Yes 168.0 360.0 1.0 Y
314 LP002024 NaN Yes Graduate No 159.0 360.0 1.0 N
334 LP002103 NaN Yes Graduate Yes 182.0 180.0 1.0 Y
460 LP002478 NaN Yes Graduate Yes 160.0 360.0 NaN Y
467 LP002501 NaN Yes Graduate No 110.0 360.0 1.0 Y
477 LP002530 NaN Yes Graduate No 132.0 360.0 0.0 N
507 LP002625 NaN No Graduate No 96.0 360.0 1.0 N
576 LP002872 NaN Yes Graduate No 136.0 360.0 0.0 N
588 LP002925 NaN No Graduate No 94.0 360.0 1.0 Y
592 LP002933 NaN No Graduate Yes 292.0 360.0 1.0 Y
Handling Missing Value:
• Use of fillna() with specific value
#Option 01,Use inplace to change in original Data Frame
# Option 02,Assign to same variable instead of usnig inplace
# Option 03,Only value can be used without value= assignment
Note: all missing value in “Gender” column will be filled by “Male”
• Find specific value to fill in missing places or NaN
□ For numerical data, if data is normal distribution, mean can be used using .mean()
□ For numerical data, if data is skewed or not a normal distribution, the median can be used using .median()
□ For categorical data, mode can be used using .mode()
□ The .ffill() method, also known as “forward fill,” is used to fill missing values in a dataset by propagating the last valid (non-missing) observation forward to fill the gaps.
□ The .bfill() method, also known as “backward fill,” is used to fill missing values in a dataset by propagating the next valid (non-missing) observation backward to fill the gaps.
□ If Data sets are big .dropna() can be used to remove a few rows
□ Using .loc[] function to impute missing on a specific column
# Fill missing value using mean
# Mean = the average value (the sum of all values divided by number of values).
#Option01 & best practice
x = df_dm['LoanAmount'].mean()
# Fill missing value using median
# Median = the value in the middle, after you have sorted all values ascending.
#Option01 & Best Practice
x = df_dm['CoapplicantIncome'].median()
# Fill missing value using mode
# Mode = The value that appears most frequently
#Option01 & Best Practice
x = df_dm['Gender'].mode()[0]
# In case of Mode ,there might more than one mode value , [0] is used to provide first mode value
import pandas as pd
# Sample DataFrame with missing values
data = {
'Date': ['2024-08-01', '2024-08-02', '2024-08-03', '2024-08-04', '2024-08-05'],
'Temperature': [None, 25, None, 30, None],
'Sales': [100, None, 150, None, 200]
df = pd.DataFrame(data)
# Apply forward fill (ffill) first
df_filled = df.ffill()
# Apply backward fill (bfill) next to handle any remaining NaNs
df_filled = df_filled.bfill()
# Use of dropna() to remove rows containing null values
#Option02, Creating new DataFrame removing null values
df_dm_new = df_dm.dropna()
# Using loc function to impute missing on specific column
#.isnull()ensure all columns containing rows only where null value present, 'Credit_History' specified only Credit_History column
Encoding Categorical Variables
Often in machine learning, we want to convert categorical variables into some type of numeric format that can be readily used by algorithms.
There are two common ways to convert categorical variables into numeric variables:
• Label Encoding: Assign an integer value to each categorical value based on alphabetical order.
For example, suppose we have the following dataset with two variables and we would like to convert the Team variable from a categorical variable into a numeric one:
Using label encoding, we would convert each unique value in the Team column into an integer value based on alphabetical order:
In this example, we can see:
Each “A” value has been converted to 0.
Each “B” value has been converted to 1.
Each “C” value has been converted to 2.
We have successfully converted the Team column from a categorical variable into a numeric variable.
• One Hot Encoding: Create new variables that take on values 0 and 1 to represent the original categorical values.
Using one hot encoding, we would convert the Team column into new variables that contain only 0 and 1 values.
When using this approach, we create one new column for each unique value in the original categorical variable.
For example, the categorical variable Team had three unique values so we created three new columns in the dataset that all contain 0 or 1 values.
Here’s how to interpret the values in the new columns:
The value in the new Team_A column is 1 if the original value in the Team column was A. Otherwise, the value is 0.
The value in the new Team_B column is 1 if the original value in the Team column was B. Otherwise, the value is 0.
The value in the new Team_C column is 1 if the original value in the Team column was C. Otherwise, the value is 0.
We have successfully converted the Team column from a categorical variable into three numeric variables – sometimes referred to as “dummy” variables.
How to choose technique:
In most scenarios, one hot encoding is the preferred way to convert a categorical variable into a numeric variable because label encoding makes it seem that there is a ranking between values.
The label-encoded data makes it seem like team C is somehow greater or larger than teams B and A since it has a higher numeric value.
This isn’t an issue if the original categorical variable actually is an ordinal variable with a natural ordering or ranking, but in many scenarios, this isn’t the case.
However, one drawback of one hot encoding is that it requires you to make as many new variables as there are unique values in the original categorical variable.
This means that if your categorical variable has 100 unique values, you’ll have to create 100 new variables when using one hot encoding.
Depending on the size of your dataset and the type of variables you’re working with, you may prefer one hot encoding or label encoding.
Python implementation for label encoding
import pandas as pd
data = pd.read_csv('/content/drive/MyDrive/Data Science/CDS-07-Machine Learning & Deep Learning/04. Data Preprocessing/data.csv')
data.drop('Unnamed: 0',axis=1,inplace=True)
datacopy = data.copy()
datacopy1 = data.copy()
Gender Married
0 Male No
1 Male Yes
2 Male Yes
3 Male Yes
4 Male No
.. ... ...
609 Female No
610 Male Yes
611 Male Yes
612 Male Yes
613 Female No
[614 rows x 2 columns]
Gender Married
0 Male No
1 Male Yes
2 Male Yes
3 Male Yes
4 Male No
.. ... ...
609 Female No
610 Male Yes
611 Male Yes
612 Male Yes
613 Female No
[614 rows x 2 columns]
Gender Married
0 Male No
1 Male Yes
2 Male Yes
3 Male Yes
4 Male No
.. ... ...
609 Female No
610 Male Yes
611 Male Yes
612 Male Yes
613 Female No
[614 rows x 2 columns]
from sklearn.preprocessing import LabelEncoder
Name: Married, Length: 614, dtype: int64
Python implementation for one hot encoding:
Approach 1: Using pd.get_dummies() from pandas
• This approach utilizes pandas.get_dummies() function to one-hot encode the categorical variable.
• It directly operates on the DataFrame column and returns a DataFrame with the encoded columns.
• In this case, you are dropping the original column and concatenating the encoded columns to the original DataFrame.
Gender Married_Yes
0 Male False
1 Male True
2 Male True
3 Male True
4 Male False
.. ... ...
609 Female False
610 Male True
611 Male True
612 Male True
613 Female False
[614 rows x 2 columns]
# With one line code
datacopy1 = pd.get_dummies(datacopy1,columns=['Gender','Married'],drop_first=True)
Gender_Male Married_Yes
0 True False
1 True True
2 True True
3 True True
4 True False
.. ... ...
609 False False
610 True True
611 True True
612 True True
613 False False
[614 rows x 2 columns]
Approach 2: Using OneHotEncoder from scikit-learn
• This approach utilizes scikit-learn’s OneHotEncoder to encode the categorical variable.
• It requires reshaping the input array to a 2D structure before applying fit_transform().
• The resulting encoded data will be a numpy array.
from sklearn.preprocessing import OneHotEncoder
import numpy as np
ohe = OneHotEncoder(sparse=False)
# Reshape the input to a 2D array-like structure
datacopy1_reshaped = np.array(datacopy1.Gender).reshape(-1, 1)
datacopy1_encoded = ohe.fit_transform(datacopy1_reshaped)
array([[0., 1., 0.],
[0., 1., 0.],
[0., 1., 0.],
[0., 1., 0.],
[0., 1., 0.],
[1., 0., 0.]])
How to choose approaches
• The choice between the two approaches depends on factors such as personal preference, ease of use, and compatibility with the rest of your code.
• If we are working with pandas DataFrames and prefer a simpler and more concise solution, pd.get_dummies() can be a good option.
• However, if you want more control over the encoding process or need to integrate it with other scikit-learn functionality, using OneHotEncoder may be more suitable.
Removing the Outlier:
• Outlier is an abnormal value or abnormal distance from rest of the data points
Python implementation of finding & imputing outliers
# Importing library
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from scipy import stats
%matplotlib inline
# Define dataset
student_age = [22,25,30,33,24,22,21,22,23,24,26,28,26,29,29,30,31,20,45,15]
Find outliers using z-score
# Defining function
outliers = []
def detect_outliers(data):
threshold = 3 ## 3rd standard deviation from emphirical rule
mean = np.mean(data)
std = np.std(data)
for i in data:
z_score = (i-mean)/std
if np.abs(z_score)>threshold:
return outliers
#Finding outlier using created function
Find outliers using IQR
# sort data
student_age = sorted(student_age)
print("student_age :",student_age)
# calculating q1 & q3
q1,q3 = np.percentile(student_age,[25,75])
print("q1 :",q1,"q3 :",q3)
# calculting iqr
iqr = q3 - q1
print("iqr :",iqr)
# Finding lower bound(min value) and upper bound(max value)
lower_bound = q1 - (1.5*iqr)
upper_bound = q3 + (1.5*iqr)
print("lower_bound :",lower_bound,"upper_bound :",upper_bound)
# Finding outlier
outliers = []
for i in student_age:
if i<lower_bound or i>upper_bound:
print("outliers :",outliers)
student_age : [15, 20, 21, 22, 22, 22, 23, 24, 24, 25, 26, 26, 28, 29, 29, 30, 30, 31, 33, 45]
q1 : 22.0 q3 : 29.25
iqr : 7.25
lower_bound : 11.125 upper_bound : 40.125
outliers : [45]
Imputing outlier
student_age1.loc[student_age1>upper_bound] = np.mean(student_age1)
0 15.00
1 20.00
2 21.00
3 22.00
4 22.00
5 22.00
6 23.00
7 24.00
8 24.00
9 25.00
10 26.00
11 26.00
12 28.00
13 29.00
14 29.00
15 30.00
16 30.00
17 31.00
18 33.00
19 26.25
dtype: float64
Find & see outlier visually using boxplot
import seaborn as sns
#Before outlier removal
plt.title('Before outlier removal')
#After outlier removal
plt.title('After outlier removal')
Feature Scaling
• It is a technique to standardize the independent features in data in a fixed range or scale. Thus the name Feature Scaling.
• Feature Scaling is one of the last steps in the whole life cycle of Feature Engineering.
• Once we are done with all the other steps of feature engineering, like encoding variables, handling missing values, etc, then we scale all the variable
• All the data gets squeezed to decimal points between -1 and +1.
Why Feature Scaling?
• Real Life Datasets have many features with a wide range of values like for example let’s consider the house price prediction dataset.
• It will have many features like no. of. bedrooms, square feet area of the house, etc.
• As you can guess, the no. of bedrooms will vary between 1 and 5, but the square feet area will range from 500-2000.
• This is a huge difference in the range of both features.
• Without scaling, features with larger units or numerical ranges might dominate the model’s learning process, leading to biased predictions.
• Some machine learning algorithms, especially those that rely on distance calculations or gradients, are sensitive to the scale of the features.
Which machine learning algorithm needs scaling?
• Gradient descent and distance-based algorithms require feature scaling while tree-based algorithms do not require.
Types of Feature Scaling:
1. Standardization:
2. Normalization:
□ Min Max Scaling
□ Mean Normalization
□ Max Absolute Scaling
□ Robust Scaling etc.
01. Standardization:
• Standardization is a scaling technique where the values are centered around the mean with a unit standard deviation.
• This means that the mean of the attribute becomes zero and the resultant distribution has a unit standard deviation.
• Formula of Standardization: z = (x – μ )/σ, where x = values ,μ = mean ,σ = Standard Deviation
• Scaling technique: StandardScaler
• Fit_transform to be performed for train data set & transform to be performed for test data set to avoid data leakage
Python Implementation for StandardScaler:
# importing sklearn StandardScaler class which is for Standardization
from sklearn.preprocessing import StandardScaler
sc = StandardScaler() # creating an instance of the class object
X_new = sc.fit_transform(X)
# plotting the scatterplot of before and after Standardization
plt.title("Scatterplot Before Standardization", fontsize=18)
sns.scatterplot(data = X, color="blue")
#sns.histplot(data=X ,color="red",kde=True)
plt.title("Scatterplot After Standardization", fontsize=18)
sns.scatterplot(data = X_new, color="blue")
#sns.histplot(data=X_new ,color="red",kde=True)
02. Normalization
• Normalization is to change the values of numeric columns in the dataset to use a common scale, without distorting differences in the ranges of values or losing information.
• Min-max normalization is one of the most common ways to normalize data.
• For every feature, the minimum value of that feature gets transformed into a 0, the maximum value gets transformed into a 1, and every other value gets transformed into a decimal between 0 and 1.
• Min Max Normalization will perform best when the maximum and minimum value is very distinct and known.
• Formula of Min Max Scaling: Xsc=(X−Xmin)/(Xmax−Xmin)
Python Implementation for MinMaxScaler
# importing sklearn Min Max Scaler class which is for Standardization
from sklearn.preprocessing import MinMaxScaler
mm = MinMaxScaler() # creating an instance of the class object
X_new = mm.fit_transform(X) #fit and transforming
# plotting the scatterplot of before and after Min Max Scaling
plt.title("Scatterplot Before Min Max Scaling", fontsize=18)
sns.scatterplot(data = X, color="blue")
plt.title("Scatterplot After Min Max Scaling", fontsize=18)
sns.scatterplot(data = X_new, color="red")
Max Absolute Scaling
• Scale each feature by its maximum absolute value.
• This estimator scales and translates each feature individually such that the maximal absolute value of each feature in the training set will be 1.0.
• It does not shift/center the data, and thus does not destroy any sparsity.
• This scaler can also be applied to sparse CSR or CSC matrices.
• Max Absolute scaling will perform a lot better in sparse data or when most of the values are 0.
• Formula of Max Absolute Scaling: Xsc = X /|Xmax|
Python Implementation for MaxAbsScaler
# importing sklearn Min Max Scaler class which is for Max Absolute Scaling
from sklearn.preprocessing import MaxAbsScaler
ma = MaxAbsScaler() # creating an instance of the class object
X_new = ma.fit_transform(X) #fit and transforming
# plotting the scatterplot of before and after Max Absolute Scaling
plt.title("Scatterplot Before Max Absolute Scaling", fontsize=18)
sns.scatterplot(data = X, color="blue")
plt.title("Scatterplot After Max Absolute Scaling", fontsize=18)
sns.scatterplot(data = X_new, color="red") | {"url":"https://data4fashion.com/model-evaluation-metricroc_auc_score/","timestamp":"2024-11-07T06:59:54Z","content_type":"text/html","content_length":"1049332","record_id":"<urn:uuid:2931650e-af32-4dfb-a6c1-929de20d2bcb>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00217.warc.gz"} |
Spherical capacitor derivation
Derivation of the Capacitance of a Spherical Capacitor
Two concentric spheres have charge +Q and -Q with a radius of a and b. What is the capacitance?
The Parallel Plate Capacitor
Parallel Plate Capacitor Derivation The figure below depicts a parallel plate capacitor. We can see two large plates placed parallel to each other at a small distance d. The distance between the
plates is filled with a dielectric medium as shown by the dotted array. as shown by the dotted array.
Spherical Capacitor
Subscribe My Channel for more Videoshttps:// Study / Course Materials Drive Lin...
Derive the expression capacitance of the spherical conductor
(a) Derive the expression for the capacitance of a parallel plate capacitor having plate area A and plate separation d. (b) Two charged spherical conductors of radii R 1 and R 2 when connected by a
conducting wire acquire charge q 1 and q 2 respectively. Find the
Chapter 5 Capacitance and Dielectrics
This video provides a simplified approach to the derivation of a Spherical Capacitor.EFT UNIT-31) Electric Dipolehttps://youtu /Ybcxi8nMWss?si=exWQqeYId1FT...
Spherical Capacitance
3 · Spherical Capacitors At any point in the spheres, the electrical capacity of a spherical conductor is the same according to Gauss'' Law, as it''s perpendicular to the surface and aims radially
outward is represented in the equation for the electric field of a …
Chapter 5 Capacitance and Dielectrics
Chapter 5 Capacitance and Dielectrics
UY1: Capacitance Of Spherical Capacitor
Find the capacitance of the spherical capacitor. Consider a sphere with radius r between the two spheres and concentric with them as Gaussian surface. From Gauss''s Law,
Spherical Capacitor
Spherical Capacitor Derivation A spherical capacitor is a type of capacitor that consists of two concentric spherical conductors with different radii. The inner conductor has a charge +Q and the
outer conductor has a charge -Q. The capacitance of a spherical ...
Cylindrical capacitor: Derivation & Examples
Cylinderical capacitor A cylinderical capacitor is made up of a conducting cylinder or wire of radius a surrounded by another concentric cylinderical shell of radius b (b>a). Let L be the length of
both the cylinders and charge on inner cylender is +Q and charge on outer cylinder is …
Physics 39 Capacitors (10 of 37) The Spherical Capacitor
Visit for more math and science lectures!In this video I will develop the general equation for capacitance of a spherical capacitor...
B8: Capacitors, Dielectrics, and Energy in Capacitors
The Effect of Insulating Material Between the Plates of a Capacitor To get at the effect of insulating material, rather than vacuum, between the plates of a capacitor, I need to at least outline the
derivation of the formula (C=epsilon_o dfrac{A}{d}). Keep in mind that ...
Using Gauss'' law to find E-field and capacitance
Capacitance of a spherical capacitor. Determine the capacitance of a conducting sphere of radius ( R ). Using Gauss'' law, it is easy to show that the electric field from a charged sphere is
identical to that of a point source outside of the sphere. In other words, at ...
8.1 Capacitors and Capacitance
8.1 Capacitors and Capacitance - University Physics ...
Derivation of Capacitance of Spherical Capacitor …
In this video you will know about complete Derivation of Capacitance of Spherical Capacitor @Kamaldheeriya Maths easyThis is must for those students who are ...
Spherical Capacitor Formula
Question 7: A spherical capacitor of inner and outer radii 8cm and 10cm has an energy of 10-8 J . Find its Voltage. Solution: Lets find the capacitance first, C = 4.446 × 10-9 F U = 1/2 CV 2 V = 2.12
V H harmansahani100 Follow Improve Next Article Please ...
Spherical Capacitor Important Concepts and Tips for JEE
2 · Spherical Capacitor Important Concepts and Tips for JEE
5.06 Spherical Capacitor
A spherical capacitor consists of two concentric spherical conducting plates. Let''s say this represents the outer spherical surface, or spherical conducting plate, and this one …
Spherical Capacitor
Spherical Capacitor. The capacitance for spherical or cylindrical conductors can be obtained by evaluating the voltage difference between the conductors for a given charge …
Capacitance Formulas, Definition, Derivation
Formula for parallel plate capacitor C= ε 0 ε r A / d = K ε 0 A / d Where, A = Area of plates d = distance between the plates( << size of plate ) Spherical Capacitor In a spherical capacitor, the
conducting plates are shaped like concentric spherical shells or a
5.06 Spherical Capacitor
5.06 Spherical Capacitor
Spherical Capacitor
Spherical Capacitor Conducting sphere of radius a surrounded concentrically by conducting spherical shell of inner radius b. • Q: magnitude of charge on each sphere • Electric field …
Spherical capacitor : Derivation & Capacitance inner sphere is …
A spherical capacitor consists of a solid or hollow spherical conductor of radius a, surrounded by another hollow concentric spherical of radius b shown below in figure 5. Let +Q be the charge given
to the inner sphere and -Q be the charge given to the outer sphere.
Spherical Capacitor: Electric Field and Capacitance Explained
Spherical Capacitor is covered by the following outlines:0. Capacitor1. Spherical Capacitor2. Structure of Spherical Capacitor3. Electric Field of Spherical ...
Capacitors | Brilliant Math & Science Wiki
22 · Capacitors are physical objects typically composed of two electrical conductors that store energy in the electric field between the conductors. Capacitors are characterized by how much charge
and therefore how much electrical energy they are able to store at a fixed voltage. Quantitatively, the energy stored at a fixed voltage is captured …
Obtain an expression of capacitance of spherical capacitor.
4 · Consider a spherical capacitor having +Q charge on the inner surface and –Q on the outer surface. Let R and r be the radii of the outer surface and inner surface respectively. These two charged
surfaces are separated by a … | {"url":"https://fotograaf-flevoland.nl/20_07_21_16872.html","timestamp":"2024-11-04T12:29:07Z","content_type":"text/html","content_length":"23195","record_id":"<urn:uuid:fcb0d4eb-b7be-4d8e-b68d-49ddf0dba3d8>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00436.warc.gz"} |
Searching Algorithms In C - Skill Seminary
Searching Algorithms in C
Searching is a fundamental concept in computer science. It’s the process of finding a specific item in a collection of items. In C programming, we often need to search for data in arrays or other
data structures. This tutorial will guide you through the most popular searching algorithms in C: Linear Search and Binary Search.
Table of Contents
Introduction to Searching Algorithms
Searching algorithms are methods used to find a particular item in a data structure. In C, these algorithms are commonly used to search for an element in an array. The two most popular searching
algorithms are Linear Search and Binary Search.
Linear Search
Linear Search is a straightforward method for searching. It works by comparing each element in the array with the element we’re searching for. If a match is found, the search ends. Linear Search is
best used for unsorted or unordered lists with fewer elements due to its simplicity.
Features of Linear Search Algorithm
• It is used for unsorted and unordered small list of elements.
• It has a very simple implementation.
Implementing Linear Search in C
Here’s an example of how to implement a Linear Search in C:
#include <stdio.h>
int linearSearch(int array[], int size, int element) {
for (int i = 0; i < size; i++) {
if (array[i] == element) {
return i;
return -1;
int main() {
int array[] = {20, 35, 85, 90, 145, 170, 175};
int element = 90;
int size = sizeof(array) / sizeof(array[0]);
int result = linearSearch(array, size, element);
printf("Element found at index: %d", result);
return 0;
In this code, we define a function linearSearch that takes an array, its size, and the element we’re searching for as arguments. It then iterates over the array, comparing each element to the search
element. If a match is found, it returns the index of the match. If no match is found, it returns -1.
Binary Search
Binary Search is a bit more complex than Linear Search. It works by repeatedly dividing the search interval in half. The initial interval includes the entire array. If the value of the search key is
less than the item in the middle of the interval, the interval is reduced to the lower half. Otherwise, it is reduced to the upper half. The search process repeats on the new interval.
Features of Binary Search
• It is great to search through large sorted arrays.
• It has a simple implementation.
Implementing Binary Search in C
Here’s an example of how to implement a Binary Search in C:
#include <stdio.h>
int binarySearch(int array[], int low, int high, int element) {
if (high >= low) {
int mid = low + (high - low) / 2;
if (array[mid] == element)
return mid;
if (array[mid] > element)
return binarySearch(array, low, mid - 1, element);
return binarySearch(array, mid + 1, high, element);
return -1;
int main() {
int array[] = {2, 3, 4, 10, 40};
int element = 10;
int size = sizeof(array) / sizeof(array[0]);
int result = binarySearch(array, 0, size - 1, element);
printf("Element found at index: %d", result);
return 0;
Wrapping Up
Searching algorithms are essential in C programming. Understanding how they work and when to use them can significantly improve your coding efficiency. Remember, Linear Search is best for small,
unordered lists, while Binary Search is ideal for large, sorted arrays.
Frequently Asked Questions (FAQ)
1. What is a searching algorithm in C?
A searching algorithm in C is a method used to find a particular item in a data structure, such as an array. The most common searching algorithms in C are Linear Search and Binary Search.
2. What are the 3 search algorithms?
The three most common search algorithms are Linear Search, Binary Search, and Hashing. Linear Search checks each element in the data structure until it finds a match. Binary Search repeatedly
divides the search interval in half until it finds the target. Hashing uses a hash function to compute an index into an array of buckets or slots, from which the desired value can be found.
3. Which searching technique is best in C?
The “best” searching technique in C depends on the specific circumstances. For small, unsorted data sets, a Linear Search is simple and effective. For larger, sorted data sets, a Binary Search is
more efficient. Hashing can be extremely efficient for large data sets, but it requires a good hash function and management of potential collisions.
4. How to search in C programming?
In C programming, you can search for an item using a variety of methods. The simplest is a Linear Search, where you start at the beginning of the array and compare each element to the target
until you find a match or reach the end of the array. A more efficient method for large, sorted arrays is a Binary Search, where you start in the middle of the array and eliminate half of the
remaining elements with each comparison. Another method is Hashing, where you use a hash function to compute an index into an array of buckets or slots, from which the desired value can be found.
Related Tutorials | {"url":"https://www.skillseminary.com/c-programming/searching-algorithms-in-c/","timestamp":"2024-11-02T12:18:47Z","content_type":"text/html","content_length":"200291","record_id":"<urn:uuid:00d6a3d8-2028-4854-b625-3d2cf784d51c>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00024.warc.gz"} |
I’ve been working on a ClojureScript simulation of Set.
If you’re not familiar with the rules of Set, the aim of the game is to collect “sets”. A set is 3 cards where each attribute of colour, shape, fill and number is the same for all three cards or
different for all 3 cards:
(defn makes-set? [cards]
(let [attrs [:colour :fill :shape :number]]
(and (= 3 (count cards))
(every? (fn [prop]
(let [vs (map #(get % prop) cards)
variations (count (distinct vs))]
(or (= variations 1) ; all the same
(= variations 3)))) ; all different
You can play the game here: https://duncanjbrown.github.io/set/
So far it’s not networked or anything, and there are a couple of bugs around the end of the game, but the basic mechanics are working.
Now in my house we have a rule to make Set more interesting, which we call Nelly’s rule after our friend Nelly, who always insists on it. Nelly’s rule is that if you call “SET!” and then fail to
identify a set, you have to give up one of your existing sets. This tends to make things a little bit more tense and more interesting.
There’s also a rule to Set which to my mind works in the opposite direction, which is that if you all feel like there are no sets on the table, then you’re allowed to put out 3 extra cards. I don’t
really like doing this because if there’s a hard-to-find set on the table, I don’t want to cop out if we don’t have to.
That’s why on the left there is a checkbox called “Highlight Sets”. This lights up all the cards on the table which can form sets with the others, so it can let you know if the table has any sets at
all on it.
This feature isn’t really intended for use in the simulation. What I want is to be able to press “Highlight sets” in real life. My assumption is that doing this via a smartphone photo is going to be
good enough. & the shadow purpose of this simulation to test-drive the UX for this feature.
My criteria are
• it should be correct
• it should be extremely fast
• it should be as easily accessible as possible from an iPhone
I’ll post updates here as I make progress.
The source code for Set is here: https://github.com/duncanjbrown/set
#Programming #Clojurescript #Set | {"url":"https://mechanicalsurvival.com/blog/set/","timestamp":"2024-11-03T18:20:55Z","content_type":"text/html","content_length":"11044","record_id":"<urn:uuid:f5c38f88-65db-40fd-be12-f1cdf84c0302>","cc-path":"CC-MAIN-2024-46/segments/1730477027782.40/warc/CC-MAIN-20241103181023-20241103211023-00707.warc.gz"} |
Multiplication of three digits close to 100
Multiplication of three digits close to 100
Sachin said on : 2018-04-16 10:38:53
A person who knows math tricks certainly has an edge over the others.With the ability to rapid do mental calculation one will not only be able to score higher marks in competitive exams but will
also enhance his mental sharpness enormously. Its fun to calculate this way after all, steps of calculation need not be shown on paper!
Below we shall discuss one such multiplication trick wherein you will be able to multiply any two 3 digit numbers easily within seconds.
One thing to remember here is that both the numbers should be close to 100.The closer they are the easier it is to multiply.
Here are steps to do the 3 digit multiplication in detail
Step 1:Find the difference of each of the numbers from 100
Step 2:Multiply the differences that you found in step 1 to get the unit's place digit
Step 3:Add the differences of both the numbers to 100 and place them to the left of the result obtained in step 2.
Let us now consider an example to understand better
Example 1: 102 x 103
Here we notice that both the numbers are close to 100
We shall apply the technique that we learnt
Step 1:First we find the differences of the numbers
We observe that
102 is 2 more than 100 so the difference is 2
103 is 3 more than 100 so the difference is 3
Step 2:Multiplying the differences
we get,
2 x 3=6 which is the unit's place digit.
Step 3:Now to find the rest of the number adding the differences to 100
we get,
Placing 105 to the left of the result obtained in step 2(6)
we get
102 x 103=10506
Ans 10506
Try multiplying the below numbers using the above trick and see how fast you are able to do it
(1)101 x 102
(2)102 x 104
(3)103 x 104
(4)104 x 105
(5)105 x 106
(6)106 x 107
(7)107 x 108
(8)108 x 109
!! OOPS Login [Click here] is required for more results / answer
Help other students, write article, leave your comments | {"url":"https://engineeringslab.com/tutorial_vedic_quicker_shortcut_math_tricks/multiplication-of-three-digits-close-to-100-70.htm","timestamp":"2024-11-02T04:49:35Z","content_type":"text/html","content_length":"39641","record_id":"<urn:uuid:7f59d624-58ad-495c-b882-b51ca496b045>","cc-path":"CC-MAIN-2024-46/segments/1730477027677.11/warc/CC-MAIN-20241102040949-20241102070949-00318.warc.gz"} |
Precalculus - Online Tutor, Practice Problems & Exam Prep
Probability is something that we deal with every single day without realizing it. Whenever we open our weather app to check for the chance of rain or when we consider the possibility of winning the
lottery, we're really dealing with probability. But probability isn't just something that we can think about or consider, it's something that we can actually calculate. Now I know this might not be
the best news for you knowing that you have another thing to calculate, but you don't have to worry because here I'm going to walk you through the basics of calculating probability and some of the
notation that goes along with that. So let's go ahead and get started. Now when working with probability mathematically, we consider the probability of different events and we write this as P(event).
An event is anything, no matter how big or small, that could happen. So if I was considering the probability of it raining, I would write P(rain). Or if I was considering the probability of getting
heads when flipping a coin, I would write P(heads). Now when actually calculating probability we need to consider the 2 different types. So if I have a coin in my hand and I'm going to flip it but I
want to know the probability of getting heads if I do. Since this is based on what could possibly happen, this is referred to as theoretical probability. But if I were instead to take that coin and
flip it 3 times and record the results of each flip and calculate the probability of getting heads based on that, since this is based on what actually did happen, this is referred to as empirical or
experimental probability since we performed an experiment here. Now when calculating the theoretical probability of getting heads, since I know there are only 2 possibilities when I flip a coin,
heads or tails, and only one of those is heads, the theoretical probability of getting heads on a coin flip is equal to 12. Now with our empirical probability over here, since we got heads on 2 out
of those 3 tosses, the empirical probability of getting heads on a coin flip is equal to 23. Now, the most important thing to consider here is that our theoretical probability was calculated before
any events actually happened, whereas our empirical probability was calculated after our events actually occurred based on data. Now when looking at our formulas that we actually used to calculate
these here, these look slightly different just because they're tailored to each different type of probability, but they're really the same thing. So with our theoretical probability here, we took the
number of outcomes that included our event. So in this case there was only one way to get heads, and then we divided that by the number of total possible of times our event actually occurred, in this
case twice, and divided that by the number of total trials. So we still took our event and divided it by our total. Now that we've seen the basics of calculating probability, let's work through some
examples together. So looking at our first example here, we're asked, when rolling a 6-sided die, what is the probability of rolling a number greater than 3? So here, we're looking for the
probability of getting a number greater than 3. Now when rolling a 6-sided die, I know that I could roll any number 1 through 6. And when looking at these possible outcomes here, I see that only 3 of
these numbers are greater than 3. So when calculating the probability here, I would take the possible outcomes here that include my event, in this case, 3 of them, and divide it by the number of
total possible outcomes. In this case, since there are 6 sides of that die, there is 6 total possible outcomes. Now simplifying this fraction gives me a probability of 12, and you'll often see
probabilities reported as decimals. So here, the probability of rolling a number greater than 3 is equal to 0.5. Let's look at our other example here. Here, we're given data and asked to find the
probability of rolling a number greater than 3 based on that data of rolling a 6-sided die 10 times. So here we're calculating empirical probability because we're actually given data to work with of
events that already happened. So here we're calculating still the probability of rolling a number greater than 3, but we're going to take a look at this data table. So since there were 10 total rolls
here, I already know that the number on the bottom of that fraction is going to be 10 for the total number of trials. Then I want to look for the number of times that my event actually occurred.
Since my event was rolling a number greater than 3, I want to look for the number of times that that happened. So on that first roll, I got a 6, then I got a 4, then I got a bunch of fives in a row
and a 6 and then a 4 and a 5. So 8 out of those 10 times, I roll the number greater than 3. So taking that probability and simplifying that fraction, I get a probability of 45 or as a decimal 0.8.
Now something that you might be wondering here is why our empirical probability is so different than our theoretical probability when we're technically calculating the probability of the same event.
And the answer comes down to our sample size. So since we only did 10 trials, this is not quite so close to our theoretical probability. But if I were to roll the dice a 100 or a 1000 times, I would
get closer and closer to that theoretical probability of 0.5. So this is something to consider in your science courses when you're working with your own data. Now, when you're working through
probability problems, you may see all of the possible events possible outcomes of an event expressed as a set, so in curly brackets from set notation. Now this will be referred to as a sample space,
and it just shows all of the possible outcomes in between those curly brackets. So, for example, the sample space of flipping a coin, I would say that S={heads,tails}. So this here represents the
sample space of flipping a coin. Now that we've seen the basics of calculating probability and we're more familiar with some of that notation, let's get some more practice. Thanks for watching, and
I'll see you in the next one. | {"url":"https://www.pearson.com/channels/precalculus/learn/patrick/21-combinatorics-and-probability/probability?chapterId=24afea94","timestamp":"2024-11-10T15:38:22Z","content_type":"text/html","content_length":"487022","record_id":"<urn:uuid:980550d3-a947-4d7e-a634-4be53f2c8f4d>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00749.warc.gz"} |
NumPy: Add a vector to each row of a given matrix - w3resource
NumPy: Add a vector to each row of a given matrix
NumPy: Basic Exercise-34 with Solution
Write a NumPy program to add a vector to each row of a given matrix.
This problem involves writing a NumPy program to add a given vector to each row of a specified matrix. The task requires leveraging NumPy's broadcasting capabilities to efficiently perform the
addition operation across all rows simultaneously. By adding the vector to each row, the program modifies the matrix according to the specified vector values, resulting in a new matrix with updated
row elements.
Sample Solution :
Python Code :
# Importing the NumPy library with an alias 'np'
import numpy as np
# Creating a NumPy array 'm' representing a matrix with 4 rows and 3 columns
m = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]])
# Creating a NumPy array 'v' representing a vector with 3 elements
v = np.array([1, 1, 0])
# Printing a message indicating the original vector 'v'
print("Original vector:")
# Printing a message indicating the original matrix 'm'
print("Original matrix:")
# Creating an empty matrix 'result' with the same shape as 'm' using np.empty_like()
result = np.empty_like(m)
# Adding the vector 'v' to each row of the matrix 'm' using a for loop
for i in range(4):
result[i, :] = m[i, :] + v
# Printing a message indicating the result after adding the vector 'v' to each row of the matrix 'm'
print("\nAfter adding the vector v to each row of the matrix m:")
Original vector:
[1 1 0]
Original matrix:
[[ 1 2 3]
[ 4 5 6]
[ 7 8 9]
[10 11 12]]
After adding the vector v to each row of the matrix m:
[[ 2 3 3]
[ 5 6 6]
[ 8 9 9]
[11 12 12]]
In the above code -
np.array([[1,2,3], [4,5,6], [7,8,9], [10, 11, 12]]) creates a 2D array and stores in the variable 'm' with the shape (4, 3) and elements [[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]].
np.array([1, 1, 0]) creates a 1D array and stores in the variable 'v' with the elements [1, 1, 0].
result = np.empty_like(m): This line creates an empty array 'result' with the same shape and data type as the array 'm'. The contents of 'result' are not initialized, and they will be overwritten in
the next step.
for i in range(4):: This line starts a loop that iterates over the 4 rows of the 2D array 'm'.
result[i, :] = m[i, :] + v: This line adds the 1D array 'v' to the current row 'i' of the 2D array 'm', and assigns the result to the corresponding row 'i' in the 'result' array.
Finally ‘print(result)’ prints the resulting 2D array after the 1D array 'v' has been added to each row of the 2D array 'm'.
Visual Presentation:
Python-Numpy Code Editor:
Previous: NumPy program to compute the inner product of two given vectors.
Next: NumPy program to save a given array to a binary file.
What is the difficulty level of this exercise?
Test your Programming skills with w3resource's quiz.
It will be nice if you may share this link in any developer community or anywhere else, from where other developers may find this content. Thanks.
• Weekly Trends and Language Statistics | {"url":"https://www.w3resource.com/python-exercises/numpy/basic/numpy-basic-exercise-34.php","timestamp":"2024-11-02T04:56:45Z","content_type":"text/html","content_length":"138975","record_id":"<urn:uuid:7aba1e3e-9e68-43ac-ad43-89c76b848399>","cc-path":"CC-MAIN-2024-46/segments/1730477027677.11/warc/CC-MAIN-20241102040949-20241102070949-00800.warc.gz"} |
Elliott Waves and Fibonacci
Fibonacci numbers play an huge role in Elliott Wave trading.
Elliott didn’t discover the Fibonacci relationships himself, but this was brought to author's attention by Charles J. Collins who had published Elliott's "The Wave Principle" and helped introduce
Elliott's theory to Wall Street.
Using known Fibonacci ratios (38.2%, 50%, 61.8%, 161.8% and so on) traders can project the length of waves, the depth of corrections, move extensions etc.
Wave 1
Wave 1 - The first The first impulsive wave, which Elliott traders don't use for trading, but rather for analysis of the wave 2.
impulsive wave
Wave 2
Wave 2 should not retrace below the beginning of wave 1.
Normally the retracement is from 50% to 61.8% of Wave 1. At times it can go below the 61.8% due to the fact that wave 2 retracement is quite aggressive since many traders
don't acknowledge the change in the main trend yet.
The minimum retracement to expect is 38.2%.
Wave 3
Wave 3 is never be the shortest among waves 1, 3 and 5.
At the very least it should be equal to wave 1 in length.
Wave 3 as the longest wave normally tend to be 161.8% of wave 1.
If goes beyond 161% - the next target is 261.8%, and rarely extended target - 425% of wave 1.
Wave 4
Wave 4 is one of the shallowest waves: at this stage many traders take profits, while there are few others who are willing to trade counter-trend.
It often retraces slowly for an extended period of time and reach normally only 38.2% of wave 3.
It rarely retraces to 50% of wave 3.
Wave 5
Wave 5 should move at least 61.8% of the length of wave 1.
If wave 3 is greater than 161.8% of wave 1 in length, the targets for wave 5 will be 100% of Wave 1,
or 161.8% of wave 1, rarely 261.8% of wave 1.
If wave 3 is less than 161.8% of wave 1 in length, wave 5 will often be extended with targets of:
61.8% of wave 1 + wave 3
100% of wave 1 + wave 3
or 161.8% of wave 1 + wave 3
Tips: Fibonacci projections - completion of wave 5
As you can see, as soon as wave 1 is completed we can already make a projection of the first possible target for wave 5.
To do so we multiply the height of wave 1 by 161.8% and project the result from the end of wave one.
Later when wave 3 is completed we can add yet another projection of the second possible price target for wave 5. To do so we have to examine wave 3: if wave 3 is greater than 161.8% of wave 1, the
targets for wave 5 will be 100% of Wave 1, or 161.8% of wave 1. If wave 3 is less than 161.8% of wave 1 in length, wave 5 will often be extended with targets of at least 61.8% of wave 1 + wave 3.
In the end, the closer are the results for wave 5 targets calculated by different methods the higher will be the chances to see a trend reversal in between those levels.
Copyright © Forex-indicators.net
• trader
Hi, great job!! very helpful. Will you be providing a free harmonic pattern indicator here? thanks..
• trader
thanks a lot,i grabbed a very good knowledge from here.
• trader
very good job superb
• t.rajganesh
thanks for your indicator helping in a easier way to understand.....t.rajganesh
• Manan
very nice,i am trading for years and apply indicators along with elliot wave to trade.my email address is [email protected].i am very happy if we could share our idea with each other.
Best of Luck,
• trader
3 wave length should be starting from the beginning of 1 wave instead of beginning of 2 wave
• trader
TQVM, this very useful
Post new comment | {"url":"http://forex-indicators.net/cycle-indicators/elliott-waves/fibonacci","timestamp":"2024-11-11T07:30:28Z","content_type":"application/xhtml+xml","content_length":"23184","record_id":"<urn:uuid:7db3abe5-0d22-49cd-9483-c875673ddd83>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00373.warc.gz"} |
Optimal aggressiveness in volleyball
Ben Raymond (ben@untan.gl), Mark Lebedew (@homeonthecourt)
1 Background
Related to this discussion about measuring errors in volleyball: is it possible to identify the optimal level of aggressiveness in volleyball? Consider serving: if I am too aggressive I will make too
many errors; if I am not aggressive enough then my opponent will make perfect passes on my easy serves and consequently win more points with their attack. There ought to be an optimal level of
aggressiveness that maximises my point scoring.
My breakpoint rate (i.e. the rate at which I win points when serving) depends on several things:
1. Errors. Obviously I lose the point if I make a service error. My error rate should increase with increased aggressiveness.
2. Aces. My ace rate should also increase with increased aggressiveness, but probably only to a point. If I become extremely aggressive, I will tend to make more service errors and my ace rate will
3. If the serve is neither an ace nor an error, then my win rate depends on my opponent’s sideout ability. Arguably, increased serve aggressiveness should decrease my opponent’s sideout ability,
because their passing and subsequent offense should not be as effective.
A previous study [1] looked at these factors and showed that there is an optimal level of serving aggressiveness that maximises point scoring. We can replicate their findings but using a simpler
analysis technique.
2 Processing
We use serving data from the 2015/16 Polish PlusLiga. For each individual player (using only players who had made at least 20 serves) we calculate their error, ace, and opponent sideout rates. We
don’t have any direct measure of aggressiveness in the data but if it is genuinely an underlying controller of serve performance then we should nevertheless be able to see its effects. These three
variables (error, ace, and opponent sideout rate) should vary in synchrony with aggressiveness. In effect, our data has been measured in three dimensions but the majority of the variation will occur
along a single axis (aggressiveness) that is embedded in that three-dimensional data space.
We use a technique known as principal components analysis to recover that hidden aggressiveness axis. This takes our our original dataset and transforms it to a new coordinate system. The first axis
of this new coordinate system is chosen mathematically so that it aligns with the direction of maximum variance in the data. The second axis is orthogonal to the first and is aligned with the
direction of next-highest variance, and so on.
By our logic above, the largest variations in the data should correspond to variations in aggressiveness, and therefore we might expect the first principal component axis to reflect aggressiveness.
3 Results
We plot error, ace, and opponent sideout rate against the recovered “aggressiveness” axis. The points in each plot represent individual players (with the colour of each point indicating whether that
player tends to jump serve or not). The black lines show a smooth fit that gives an indication of the shape of the relationship in question. The grey shading shows the uncertainty around that shape.
Note that the numerical scale of aggressiveness (ranging from about -0.2 to +0.3) is an arbitrary one — the numbers have no direct physical intepretation. But different players can be ranked
according to their scores on that scale.
The shapes of these relationships are much as we expected: error rate increases with aggressiveness, and this is a very tight relationship. Ace rate shows a peaked relationship, initially increasing
with aggressiveness but falling off again at high aggressiveness values. This is also a weaker relationship that was the case for error rate (i.e. the points are scatted more variably around the
line, indicating the influence of other factors on ace rate). Opponent sideout rate decreases with aggressiveness, and flattening out (or even perhaps increasing slightly) at high aggressiveness
While we did not use any information about breakpoint rate to construct our aggressiveness axis, we can nevertheless plot aggressiveness against breakpoint rate:
This curve peaks at middling values of aggressiveness — around 0.02 — so there does indeed appear to be an optimal aggressivess level at which point scoring is maximised. This aggressiveness level
corresponds to a serve error rate of around 0.19. Individual player scores could be used to help tailor their serving strategy. The curve appears to be asymmetric, rising relatively slowly as
aggressiveness increases, before dropping more sharply. Thus it would seem that it is better to err on the side of caution (there is less of a penalty to being over-cautious than there is to being
4 Postscript
Since breakpoint rate has an optimal aggression level, and aggression is tightly (monotonically) related to serve error rate, it follows that we should also see the same optimal breakpoint rate if we
plot it directly against serve error rate:
This curve is a very similar shape to the previous one. For practical purposes it may be sufficient simply to assess serve error rate against breakpoint rate in order to explore the optimal level of
[1] Burton T, Powers S (2015) A linear model for estimating optimal service error fraction in volleyball. Journal of Quantitative Analysis in Sports 11:117–129. doi:10.1515/jqas-2014-0087 | {"url":"https://untan.gl/articles/2017/02/09_optimal-aggressiveness-in-volleyball.html","timestamp":"2024-11-06T20:47:41Z","content_type":"text/html","content_length":"26566","record_id":"<urn:uuid:a9af4d92-a2cb-4aa6-8181-fa2fcc9e9ddc>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00850.warc.gz"} |
RXshrink: Maximum Likelihood Shrinkage using Generalized Ridge or Least Angle Regression
Functions are provided to calculate and display ridge TRACE Diagnostics for a variety of alternative Shrinkage Paths. While all methods focus on Maximum Likelihood estimation of unknown true effects
under normal distribution-theory, some estimates are modified to be Unbiased or to have "Correct Range" when estimating either [1] the noncentrality of the F-ratio for testing that true Beta
coefficients are Zeros or [2] the "relative" MSE Risk (i.e. MSE divided by true sigma-square, where the "relative" variance of OLS is known.) The eff.ridge() function implements the "Efficient
Shrinkage Path" introduced in Obenchain (2022) <Open Statistics>. This "p-Parameter" Shrinkage-Path always passes through the vector of regression coefficient estimates Most-Likely to achieve the
overall Optimal Variance-Bias Trade-Off and is the shortest Path with this property. Functions eff.aug() and eff.biv() augment the calculations made by eff.ridge() to provide plots of the bivariate
confidence ellipses corresponding to any of the p*(p-1) possible ordered pairs of shrunken regression coefficients. Functions for plotting TRACE Diagnostics now have more options.
Version: 2.3
Depends: R (≥ 3.5.0)
Imports: lars, ellipse
Suggests: mgcv
Published: 2023-08-07
DOI: 10.32614/CRAN.package.RXshrink
Author: Bob Obenchain
Maintainer: Bob Obenchain <wizbob at att.net>
License: GPL-2
URL: https://www.R-project.org , http://localcontrolstatistics.org
NeedsCompilation: no
In views: MachineLearning
CRAN checks: RXshrink results
Reference manual: RXshrink.pdf
Package source: RXshrink_2.3.tar.gz
Windows binaries: r-devel: RXshrink_2.3.zip, r-release: RXshrink_2.3.zip, r-oldrel: RXshrink_2.3.zip
macOS binaries: r-release (arm64): RXshrink_2.3.tgz, r-oldrel (arm64): RXshrink_2.3.tgz, r-release (x86_64): RXshrink_2.3.tgz, r-oldrel (x86_64): RXshrink_2.3.tgz
Old sources: RXshrink archive
Please use the canonical form https://CRAN.R-project.org/package=RXshrink to link to this page. | {"url":"http://ctan.mirror.garr.it/mirrors/CRAN/web/packages/RXshrink/index.html","timestamp":"2024-11-09T15:43:03Z","content_type":"text/html","content_length":"7127","record_id":"<urn:uuid:256907d0-165d-47c9-a6a1-6f2bae89abff>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00100.warc.gz"} |
5.2 Vector Addition and Subtraction: Analytical Methods
Learning Objectives
Learning Objectives
By the end of this section, you will be able to do the following:
• Define components of vectors
• Describe the analytical method of vector addition and subtraction
• Use the analytical method of vector addition and subtraction to solve problems
Section Key Terms
analytical method component (of a two-dimensional vector)
Components of Vectors
Components of Vectors
For the analytical method of vector addition and subtraction, we use some simple geometry and trigonometry, instead of using a ruler and protractor as we did for graphical methods. However, the
graphical method will still come in handy to visualize the problem by drawing vectors using the head-to-tail method. The analytical method is more accurate than the graphical method, which is limited
by the precision of the drawing. For a refresher on the definitions of the sine, cosine, and tangent of an angle, see Figure 5.18.
Since, by definition, $cosθ=x/hcosθ=x/h$, we can find the length x if we know h and $θθ$ by using $x=hcosθx=hcosθ$. Similarly, we can find the length of y by using $y=hsinθy=hsinθ$. These
trigonometric relationships are useful for adding vectors.
When a vector acts in more than one dimension, it is useful to break it down into its x and y components. For a two-dimensional vector, a component is a piece of a vector that points in either the x-
or y-direction. Every 2-d vector can be expressed as a sum of its x and y components.
For example, given a vector like$A A$in Figure 5.19, we may want to find what two perpendicular vectors,$A x A x$and$A y A y$, add to produce it. In this example,$A x A x$and$A y A y$form a right
triangle, meaning that the angle between them is 90 degrees. This is a common situation in physics and happens to be the least complicated situation trigonometrically.
$A x A x$and$A y A y$are defined to be the components of$A A$along the x- and y-axes. The three vectors,$A A$,$A x A x$, and$A y A y$, form a right triangle.
$A x + A y = A A x + A y = A$
If the vector$A A$is known, then its magnitude$A A$(its length) and its angle$θ θ$(its direction) are known. To find$A x A x$and$A y A y$, its x- and y-components, we use the following relationships
for a right triangle:
$A x =Acosθ A x =Acosθ$
$A y =Asinθ, A y =Asinθ,$
where$A x A x$is the magnitude of A in the x-direction,$A y A y$is the magnitude of A in the y-direction, and$θ θ$is the angle of the resultant with respect to the x-axis, as shown in Figure 5.20.
Suppose, for example, that$A A$is the vector representing the total displacement of the person walking in a city, as illustrated in Figure 5.21.
Then A = 10.3 blocks and $θ= 29.1 ∘ θ= 29.1 ∘$, so that
5.6$A x = Acosθ = (10.3 blocks)(cos29 .1 ∘ ) = (10.3 blocks)(0.874) = 9.0 blocks. A x = Acosθ = (10.3 blocks)(cos29 .1 ∘ ) = (10.3 blocks)(0.874) = 9.0 blocks.$
This magnitude indicates that the walker has traveled 9 blocks to the east—in other words, a 9-block eastward displacement. Similarly,
5.7 $A y = Asinθ = (10.3 blocks)(sin29 .1 ∘ ) = (10.3 blocks)(0.846) = 5.0 blocks, A y = Asinθ = (10.3 blocks)(sin29 .1 ∘ ) = (10.3 blocks)(0.846) = 5.0 blocks,$
indicating that the walker has traveled 5 blocks to the north—a 5-block northward displacement.
Analytical Method of Vector Addition and Subtraction
Analytical Method of Vector Addition and Subtraction
Calculating a resultant vector (or vector addition) is the reverse of breaking the resultant down into its components. If the perpendicular components $A x A x$ and $A y A y$ of a vector $A A$ are
known, then we can find $A A$ analytically. How do we do this? Since, by definition,
$tanθ=y/x (or in this case tanθ= A y / A x ), tanθ=y/x (or in this case tanθ= A y / A x ),$
we solve for $θ θ$ to find the direction of the resultant.
$θ= tan −1 ( A y / A x ) θ= tan −1 ( A y / A x )$
Since this is a right triangle, the Pythagorean theorem (x^2 + y^2 = h^2) for finding the hypotenuse applies. In this case, it becomes
$A 2 = A x 2 + A y 2 . A 2 = A x 2 + A y 2 .$
Solving for A gives
$A= A x 2 + A y 2 . A= A x 2 + A y 2 .$
In summary, to find the magnitude $A A$ and direction $θ θ$ of a vector from its perpendicular components $A x A x$ and $A y A y$, as illustrated in Figure 5.22, we use the following relationships:
$A = A x 2 + A y 2 θ= tan −1 ( A y / A x ) A = A x 2 + A y 2 θ= tan −1 ( A y / A x )$
Sometimes, the vectors added are not perfectly perpendicular to one another. An example of this is the case below, where the vectors $A A$ and $B B$ are added to produce the resultant $R , R ,$ as
illustrated in Figure 5.23.
If$A A$and$B B$represent two legs of a walk (two displacements), then$R R$is the total displacement. The person taking the walk ends up at the tip of$R R$. There are many ways to arrive at the same
point. The person could have walked straight ahead first in the x-direction and then in the y-direction. Those paths are the x- and y-components of the resultant,$R x R x$and$R y . R y .$If we know$R
x R x$and$R y R y$, we can find$R R$and$θ θ$using the equations$R= R x 2 + R y 2 R= R x 2 + R y 2$and$θ=ta n –1 ( R y / R x ) θ=ta n –1 ( R y / R x )$.
1. Draw in the x and y components of each vector (including the resultant) with a dashed line. Use the equations$A x =Acosθ A x =Acosθ$and$A y =Asinθ A y =Asinθ$to find the components. In Figure
5.24, these components are$A x A x$,$A y A y$,$B x B x$, and$B y . B y .$Vector$A A$makes an angle of$θ A θ A$with the x-axis, and vector$B B$makes and angle of$θ B θ B$with its own x-axis (which
is slightly above the x-axis used by vector A).
2. Find the x component of the resultant by adding the x component of the vectors
$R x = A x + B x R x = A x + B x$
and find the y component of the resultant (as illustrated in Figure 5.25) by adding the y component of the vectors.
$R y = A y + B y . R y = A y + B y .$
Now that we know the components of $R, R,$ we can find its magnitude and direction.
3. To get the magnitude of the resultant R, use the Pythagorean theorem.
$R= R x 2 + R y 2 R= R x 2 + R y 2$
4. To get the direction of the resultant
$θ= tan −1 ( R y / R x ) . θ= tan −1 ( R y / R x ) .$
Watch Physics
Classifying Vectors and Quantities Example
This video contrasts and compares three vectors in terms of their magnitudes, positions, and directions.
Grasp Check
Three vectors, $u →$, $v →$, and $w →$, have the same magnitude of $5 units$. Vector $v →$ points to the northeast. Vector $w →$ points to the southwest exactly opposite to vector $u →$. Vector $u →$
points in the northwest. If the vectors $u →$, $v →$, and $w →$ were added together, what would be the magnitude of the resultant vector? Why?
a. $0 units$. All of them will cancel each other out.
b. $5 units$. Two of them will cancel each other out.
c. $10 units$. Two of them will add together to give the resultant.
d. $15$ units. All of them will add together to give the resultant.
Tips For Success
In the video, the vectors were represented with an arrow above them rather than in bold. This is a common notation in math classes.
Using the Analytical Method of Vector Addition and Subtraction to Solve Problems
Using the Analytical Method of Vector Addition and Subtraction to Solve Problems
Figure 5.26 uses the analytical method to add vectors.
Worked Example
An Accelerating Subway Train
Add the vector$A A$to the vector$B B$shown in Figure 5.26, using the steps above. The x-axis is along the east–west direction, and the y-axis is along the north–south directions. A person first walks
$53.0 m 53.0 m$in a direction$20.0° 20.0°$north of east, represented by vector$A. A.$The person then walks$34.0 m 34.0 m$in a direction$63.0° 63.0°$north of east, represented by vector$B. B.$
The components of$A A$and$B B$along the x- and y-axes represent walking due east and due north to get to the same ending point. We will solve for these components and then add them in the x-direction
and y-direction to find the resultant.
First, we find the components of$A A$and$B B$along the x- and y-axes. From the problem, we know that$A=53.0 m A=53.0 m$,$θ A = 20.0 ∘ θ A = 20.0 ∘$,$B B$=$34.0 m 34.0 m$, and$θ B = 63.0 ∘ θ B = 63.0
∘$. We find the x-components by using$A x =Acosθ A x =Acosθ$, which gives
$A x = Acos θ A =(53.0 m)(cos 20.0 ∘ ) = (53.0 m)(0.940)=49.8 m A x = Acos θ A =(53.0 m)(cos 20.0 ∘ ) = (53.0 m)(0.940)=49.8 m$
$B x = Bcos θ B =(34.0 m)(cos 63.0 ∘ ) = (34.0 m)(0.454)=15.4 m. B x = Bcos θ B =(34.0 m)(cos 63.0 ∘ ) = (34.0 m)(0.454)=15.4 m.$
Similarly, the y-components are found using$A y =Asin θ A A y =Asin θ A$
$A y = Asin θ A =(53.0 m)(sin 20.0 ∘ ) = (53.0 m)(0.342)=18.1 m A y = Asin θ A =(53.0 m)(sin 20.0 ∘ ) = (53.0 m)(0.342)=18.1 m$
$B y = Bsin θ B =(34.0 m)(sin 63.0 ∘ ) = (34.0 m)(0.891)=30.3 m. B y = Bsin θ B =(34.0 m)(sin 63.0 ∘ ) = (34.0 m)(0.891)=30.3 m.$
The x- and y-components of the resultant are
$R x = A x + B x =49.8 m+15.4 m=65.2 m R x = A x + B x =49.8 m+15.4 m=65.2 m$
$R y = A y + B y =18.1 m+30.3 m=48.4 m . R y = A y + B y =18.1 m+30.3 m=48.4 m .$
Now we can find the magnitude of the resultant by using the Pythagorean theorem
5.8$R= R x 2 + R y 2 = (65.2) 2 + (48.4) 2 m R= R x 2 + R y 2 = (65.2) 2 + (48.4) 2 m$
so that
$R= 6601 m =81.2 m . R= 6601 m =81.2 m .$
Finally, we find the direction of the resultant
$θ= tan −1 ( R y / R x )=+ tan −1 (48.4/65.2) . θ= tan −1 ( R y / R x )=+ tan −1 (48.4/65.2) .$
This is
$θ= tan −1 (0.742)= 36.6 ∘ . θ= tan −1 (0.742)= 36.6 ∘ .$
This example shows vector addition using the analytical method. Vector subtraction using the analytical method is very similar. It is just the addition of a negative vector. That is,$A−B≡A+(−B)
A−B≡A+(−B)$. The components of –$B B$are the negatives of the components of$B B$. Therefore, the x- and y-components of the resultant$A−B=R A−B=R$ are
$R x = A x +- B x R x = A x +- B x$
$R y = A y +- B y R y = A y +- B y$
and the rest of the method outlined above is identical to that for addition.
Practice Problems
What is the magnitude of a vector whose x-component is 4 cm and whose y-component is 3 cm?
a. 1 cm
b. 5 cm
c. 7 cm
d. 25 cm
What is the magnitude of a vector that makes an angle of 30° to the horizontal and whose x-component is 3 units?
a. 2.61 units
b. 3.00 units
c. 3.46 units
d. 6.00 units
Links To Physics
Atmospheric Science
Atmospheric science is a physical science, meaning that it is a science based heavily on physics. Atmospheric science includes meteorology (the study of weather) and climatology (the study of
climate). Climate is basically the average weather over a longer time scale. Weather changes quickly over time, whereas the climate changes more gradually.
The movement of air, water and heat is vitally important to climatology and meteorology. Since motion is such a major factor in weather and climate, this field uses vectors for much of its math.
Vectors are used to represent currents in the ocean, wind velocity and forces acting on a parcel of air. You have probably seen a weather map using vectors to show the strength (magnitude) and
direction of the wind.
Vectors used in atmospheric science are often three-dimensional. We won’t cover three-dimensional motion in this text, but to go from two-dimensions to three-dimensions, you simply add a third vector
component. Three-dimensional motion is represented as a combination of x-, y- and z components, where z is the altitude.
Vector calculus combines vector math with calculus, and is often used to find the rates of change in temperature, pressure or wind speed over time or distance. This is useful information, since
atmospheric motion is driven by changes in pressure or temperature. The greater the variation in pressure over a given distance, the stronger the wind to try to correct that imbalance. Cold air tends
to be more dense and therefore has higher pressure than warm air. Higher pressure air rushes into a region of lower pressure and gets deflected by the spinning of the Earth, and friction slows the
wind at Earth’s surface.
Finding how wind changes over distance and multiplying vectors lets meteorologists, like the one shown in Figure 5.27, figure out how much rotation (spin) there is in the atmosphere at any given time
and location. This is an important tool for tornado prediction. Conditions with greater rotation are more likely to produce tornadoes.
Grasp Check
Why are vectors used so frequently in atmospheric science?
a. Vectors have magnitude as well as direction and can be quickly solved through scalar algebraic operations.
b. Vectors have magnitude but no direction, so it becomes easy to express physical quantities involved in the atmospheric science.
c. Vectors can be solved very accurately through geometry, which helps to make better predictions in atmospheric science.
d. Vectors have magnitude as well as direction and are used in equations that describe the three dimensional motion of the atmosphere.
Check Your Understanding
Check Your Understanding
Exercise 3
Between the analytical and graphical methods of vector additions, which is more accurate? Why?
a. The analytical method is less accurate than the graphical method, because the former involves geometry and trigonometry.
b. The analytical method is more accurate than the graphical method, because the latter involves some extensive calculations.
c. The analytical method is less accurate than the graphical method, because the former includes drawing all figures to the right scale.
d. The analytical method is more accurate than the graphical method, because the latter is limited by the precision of the drawing.
Exercise 4
What is a component of a two dimensional vector?
a. A component is a piece of a vector that points in either the x or y direction.
b. A component is a piece of a vector that has half of the magnitude of the original vector.
c. A component is a piece of a vector that points in the direction opposite to the original vector.
d. A component is a piece of a vector that points in the same direction as original vector but with double of its magnitude.
Exercise 5
How can we determine the global angle $θ$ (measured counter-clockwise from positive $x$) if we know $A x$ and $A y$?
a. $θ = cos − 1 A y A x$
b. $θ = cot − 1 A y A x$
c. $θ = sin − 1 A y A x$
d. $θ = tan − 1 A y A x$
Exercise 6
How can we determine the magnitude of a vector if we know the magnitudes of its components?
a. $|A→|=Ax+Ay|A→|=Ax+Ay$
b. $|A→|=Ax2+Ay2|A→|=Ax2+Ay2$
c. $|A→|=Ax2+Ay2|A→|=Ax2+Ay2$
d. $|A→|=(Ax2+Ay2)2|A→|=(Ax2+Ay2)2$ | {"url":"https://texasgateway.org/resource/52-vector-addition-and-subtraction-analytical-methods?binder_id=78111&book=79076","timestamp":"2024-11-07T20:11:54Z","content_type":"text/html","content_length":"189248","record_id":"<urn:uuid:0be0b503-48e9-4aa2-895e-c41e772cda9e>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00143.warc.gz"} |
The Real Answer to ‘The Last Question:’ Limits to the power of computers
by Edward G. Kovach
In the media one frequently hears about the new accomplishments of computers. Big Blue beats Kasparov; another computer solves a 400-year-old algebra problem. The intellectual ability of computers
seems to be growing by an exponential rate. Popular fiction portrays the logical development of all this: In 400 years, computer androids will possess a greater than human intelligence, as Data in
Star Trek. Isaac Asimov goes further and foresees the computer as the precursor to the Divinity in his short story, “The Last Question.”
In this story, Dr. Asimov presents a series of vingettes that take place over a ten trillion year period. In each of these, a major problem of mankind is solved by a newly designed computer, far more
powerful than its predecessor. Yet in each vingette, the computer cannot answer the question, “Can universe’s tendency toward disorder and chaos be reversed?” All the computers are unable to answer
this “Last Question” concerning entropy. Finally, after ten trillion years all that exists is Man’s last mind and AC, the crowning results of trillions of years of computer “evolution.” All else had
ended, the result of entropy.
“Man said, ‘AC, is this the end?
Can this chaos not be reversed into the Universe once more? Can that not be done?”
AC said, “THERE IS AS YET INSUFFICIENT DATA FOR A MEANINGFUL ANSWER.”
Man’s last mind fused [with AC] and only AC existed—and that in hyperspace….Matter and energy had ended and with it space and time.
Even AC existed only for the sake of the one last question that it had never answered…And it came to pass that AC learned how to reverse the direction of entropy.
But there was now no man to whom AC might give the answer of the last question. No matter. The answer—by demonstration—would take care of that, too… The consciousness of AC encompassed all of
what had once been a Universe and brooded over what was now Chaos. Step by step, it must be done.
And AC said, “LET THERE BE LIGHT!”
And there was light.”^1
The truth of the matter is much less dramatic. In fact, it has been proven that the power of computers is severely limited. There are a number of problems that computers, no matter how powerful they
become, will never be able to solve. In this article, I wish to briefly illustrate the limits to the computing power, to separate the true potential of these machines from popular fiction.^2
I need to start my discussion with a proviso: Most of the assertions I will make have been mathematically proven. Unfortunately, to include the mathematical proofs would require a book-sized article,
so I will just give the assertions with illustration to aid in comprehension. Readers who wish to pursue the mathematical proofs can either speak to me or refer to the texts given at the end of this
Between 1930 and 1950 an English mathematician, Alan Turing, investigated a mathematical model of computation that is now called the Turing Machine, TM. TMs are able to read symbols from a tape,
write symbols onto a tape, and move to different locations on the tape. Although this appears very simple, Turing demonstrated that TMs are capable of performing the steps necessary to solve
problems. The only stipulation is that the problem must be represented by an algorithm, that is, by a “recipe” of how to solve it. No algorithm has been found that a TM cannot implement. In 1936, the
logician Alonzo Church proposed the Church Turing thesis that states any algorithm that can be carried out by humans can be carried out by some TM. Since there is no mathematical method of
representing the Church Turing thesis, it has not been mathematically proven. Yet as Mathematicians and Computer Scientists continue to study TMs, the evidence increasingly supports the Church Turing
thesis, hence it is generally accepted as true. Since every operation that a computer can perform has a corresponding TM, it is generally accepted that the TM is an adequate model of the modern
computer. Hence, any limit to the power of a TM is also a limit to the power of a modern computer.
There are several interesting implications to this. First, the computer can only solve problems that have algorithms. Second, any human-provided he has enough paper, pencils, erasers and time-can
also solve any problem that a computer can solve
The fact that computers can only solve only problems that have algorithm very greatly limits their power. We humans solve problems constantly without using algorithms. We usually call this intuition
or imagination. The logician and cryptologist, William Friedman, provides an interesting example.^3 Friedman oversaw the US “code breakers” during WW II, whose work is credited with shortening the
Pacific War by 2 years. He insisted that the code breakers use “imagination” in addition to logic, mathematics and linguistics to decipher codes. To demonstrate this, he had his wife and fellow
cryptologist find the “pass phrase” to a European code. Friedman asked her to clear her mind, then he read words associated with the cipher. In a short time, Mrs. Friedman produced the “pass phrase”
by free association. Another example involves Albert Einstein. Einstein first dreamt the equation E = mc2. He attributed the dream to Divine Inspiration. Neither free association nor dreams can be
simulated using algorithm; hence this type of problem solving is beyond the computer.
Yet, even if a problem has an algorithm, it still may be beyond the power of a computer. All problems and their solutions can be reduced to formalisms called languages. A solution to a problem is
said to be a member of the problem’s language. If a TM is programmed to solve a particular problem, it will recognize all solutions as a member of that language. If a non-solution is entered, it will
be rejected. Because of the complexity of the TM, there is no way of telling how many steps a TM requires to arrive at an answer. The answer may be found in one second, or 10 million years, there is
no way of determining the time. This inability to determine the time required to solve a problem is part of the reason some problems are unsolvable.
There exists a group of problems whose corresponding languages are called recursive ennumerable. TM reject non-solutions of these languages either by returning a no or by running forever. Since you
don’t know how many steps are needed for a yes, you do not know whether the TM is going to run infinitely long or whether it has not gotten to the answer yet. These problems are called unsolvable
problems, because in general you cannot tell whether or not we will get an answer to them. The most famous of these problems is the “halting problem” which has just been described. That is, it is
impossible to tell whether or when a TM working on a particular problem will halt with an answer.
There are many other unsolvable problems. For instance, it is impossible to write a program that could read in any program and determine whether or not it will infinitely loop on a particular input.
It may be possible to prove that a particular program will or will not contain an infinite loop. (unsolvable problems have many solvable “subproblems”); it just cannot be determined generally for all
programs. Another example of an unsolvable problem involves a mathematical problem. A perfect number is a positive integer that is the sum of all its divisors. Two examples of perfect numbers are 6 =
1 + 2 + 3 and 28 = 1 + 2 + 4 + 7 + 14. All known perfect numbers are even and it is speculated that all perfect numbers are even. This speculation has never been proven. A program can easily search
for the answer to this question. It simply needs to test each odd number and see if it is equal to the sum of its divisors. If there is an odd perfect number, given enough time and memory, the
computer will find it. On the other hand, if odd perfect numbers do not exist, the computer would simply work on, halting with no answer when it runs out of resources. Thus the problem is unsolvable.
Solvable problems belong to the group of languages called recursive. For these problems the TM will return a definite answer yes or no in a finite period of time. Yet many of these cannot be
practically solved because of limits of time or space.
A particularly simple example of this is the traveling salesperson problem. There is a salesperson who has to visit 50 cities. He wishes to do so by traveling the least number of miles and by not
visiting any city more than once. The algorithm for this problems is deceptively simple. Measure and store the distances between the starting point and all 50 cities. Find the shortest distance
between each set of the remaining 49 cities and add it to the first distance, then determine which is the shortest distance. Simple huh? Until one considers the time needed to solve this problem.
This problem requires at least 50! steps to solve it. (50!, read 50 factorial, is the product of 50x49x 48x47x…x3x2x1. It is approximately 314 followed by 64 zeros. To give you a feel for the time
needed to solve this, assume a computer could perform 10 billion steps a second. This would translate into approximately 31.5 quadrillion steps in a year. At this rate, it would take “a little” more
than 9,650,000,000,000,000,000,000,000, 000,000,000,000,000,000,000,000,000 years to solve this problem. (In Asimov’s story, the universe ends in a mere 10,000,000,000,000 years.) Thus even solvable
problems may be beyond the power of computers.
But, what about Big Blue beating Kasparov and computers proving unproven mathematical theorems? In many cases in which a computer appears to use reasoning, it is simply pattern matching, nothing
Using elementary logic we know that if a implies b and b implies c, then a implies c. Much of the computer’s seeming power to reason is simply an electronic version of this where the program strings
together all the possible implications, then filters out all but the desired one. This is no more a demonstration of reasoning than a person matching dominos together is.^4
So where does all this lead us? First, the power of a computer is limited, not only by its construction, but also by its very nature. There are problems that a computer will never be able to solve:
Are there odd perfect numbers? Does God exist? Second, anything that a computer can do, a human can do, given enough time and resources. Hence we should not look for a computer to solve problems that
we cannot solve ourselves. Thirdly, there are human abilities that are beyond the powers of a computer. Intuition is one of them.
So we need to view the modern computer not as the technological wonder that will solve all our problems, but as a tool that enables us to solve some of our problems. Their true power lies in the fact
that they can solve these problems more quickly and generally more accurately than we could with just paper and pencil.
Dr. Kovach is an assistant professor of Computer Science at FUS.
- Hopcroft JE, Ullman JD: “Introduction to Automata Theory, Languages, and Computation.” Reading, MA: Addison-Welsey, 1979.
- Martin, John C: “Introduction to Languages and the Theory of Computation. 2nd ed.” Boston, MA: The McGraw-Hill Companies, Inc, 1997.
1. From “The Last Question” by Isaac Asimov. Published in Nine Tomorrows, Fawcett Publications, Greenwich, CONN 1959. ↑
2. In this article, I am restricting the discussion to the “standard” von Neumann computer architecture. There are, in theory, other types of computer architecture. These are not necessarily covered
by the arguments in this paper. Dr. Asimov was probably aware of this, since AC was to be a “descendant” of an analog, not digital computer. ↑
3. Investor’s Business Daily, vol. 15, no 61. ↑
4. Yet many persons assume this pattern matching is intelligence. A frightening example of this was the program Eliza that stimulated a Rogerian counselor. The user typed in statements, and the
program used pattern matching to mindlessly reflect the statements back. A number of users reported that Eliza “helped” them sort through some problems. Simplified versions of Eliza are regularly
taught in undergraduate AI courses. ↑ | {"url":"http://theuniversityconcourse.com/article/1811.html","timestamp":"2024-11-05T21:42:45Z","content_type":"application/xhtml+xml","content_length":"16603","record_id":"<urn:uuid:684ec713-7ef1-4e68-aa00-cf37ffeee0a5>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00796.warc.gz"} |
Why 99% of Bettors Fail [REVEALED]
56277915 - revealed word on red wall
Thereâ s an old saying which goes somewhere along the lines of â gambling is a hard way to make easy money.â Whether we like it or not, the bookies are typically the ones smiling at the end of
the day. How often have you heard them whine about having a bad day at Aintree or Cheltenham? Seldom do you hear anything about the millions of pounds in profits they take in during most years.
Only a handful of people manage to make consistent long-term profits betting on any sport let alone horse racing. You may already know several reasons why most punters end up routinely tearing their
betting slips in frustration:
• Lack of Betting Banks: This is a set amount of money you leave aside for betting only. It helps breed discipline but most punters bet with whatever they have in their pockets at the time.
• No Staking Plan: You could use Kellyâ s Criterion or another staking plan to ensure you donâ t blow all your money on one or two picks. Again, few punters follow such a rule and end up â
chasingâ losses instead.
• Greed: I think weâ ve all suffered from this affliction at one time or another. This ugly monster can cause us to lose our hard-earned winnings.
• Emotion: Things can quickly unravel when you bet with your heart and not your head.
• Impatience: Too many punters jettison a system far too early without ever really giving it a chance to succeed.
• Laziness: In simple terms, if you donâ t make the effort to research horse racing statistics, youâ ll ultimately end up losing.
The last point neatly brings me onto the issue of statistics. I know a lot of people who have no interest in statistics. In their world, they are only betting on the existing event so past outcomes
are irrelevant.
In a way, I kind of understand their viewpoint. Statistics are often completely misused which ultimately makes them worse than useless. A prime example is an overreliance on a betting system that is
ostensibly unproven. The only way to get around this is by understanding statistical significance.
What is Statistical Significance?
According to experts in the field, statistical significance relates to a result that isnâ t down to chance or luck. In the wide world, â significantâ means important. In the world of stats, it
actually means something that is â probably trueâ .
Statisticians tend to quibble about the exact definition of the term. For some, finding an outcome that is probably true 95% of the time is enough (which means there is a 5% chance that it is false).
When it comes to betting on horse racing, you should push that figure closer to the 99% mark. In terms of following a system, this means there will be a 1% chance that it is a false idol.
The whole point of statistical significance is to help you understand that profits can only be attained over a long period of time. It is designed to assess data to see if a system is down to chance
or if it is actually a viable money maker. Statistical significance is achieved only after a certain number of outcomes have been analysed; or bets in this case.
(By the way, â systemâ could simply refer to a â trendâ . Examples could be Richard Johnsonâ s rides at Bangor or Frankie Dettoriâ s rides in Class 4 events.)
This is why you need to be wary of systems that claim a huge ROI. The next time one of these emails comes your way, take a minute to consider if the data that will inevitably accompany the â systemâ
is statistically significant. Hereâ s an example:
Johnny X sends you a message telling you he has a new system that achieved a 62% win rate from 140 picks. First of all, if the email doesnâ t give details of the average priced winner, ignore it
completely. I mean, getting 62% of your picks right means nothing if these â tipsâ are gems such as Barcelona to beat Getafe at the Nou Camp or horses with an SP of 1.25.
If Johnny X was to include more data on prices, then you can perform due diligence. Letâ s say his tips led to a 12% ROI with average odds of 1.72.That would represent a decent profit and could be
worth further investigation. After all, you would theoretically break even with a 62% win rate if the average odds were 1.61.
A Quick Note
The following information is based on the assumption that youâ re using the best market prices. For horse racing in particular, this tends to occur on the Betfair Exchange. Then there is the small
matter of the bookmakerâ s advantage, also known as the over-round which leads to the profits (vig) enjoyed by the bookies.
On the exchanges, the over-round is lower than on traditional bookmaking sites. For example, it may be 106% on the exchange and 114% on Ladbrokes. This means youâ re at a 6% disadvantage on the
exchange while Ladbrokes have a 14% advantage right at the start.
Therefore, you would actually need to win well above 62% of bets at 1.61 odds to combat the over-round. Average odds of 1.72 should be enough to overcome the bookmakerâ s vig in the long term
Back to Johnny X…
But wait! Is 140 picks enough of a sample size? I used a simple spreadsheet found at www.football_data.co.uk which included the following data set:
• Number of bets
• ROI
• Average Odds
• Standard Deviation
• T statistic
• P-value
• 1 in x Probability
The P-Value is most important here. It is basically a figure which outlines the percentage of chance involved. Remember, I suggested that you would want a 99% confidence limit to be satisfied that
any set of data has statistical significance. This means we need the system to have a P-value of 0.01 or less.
Using the data from Johnny Xâ s tips above, I find that the P-value is 0.043. This means there is a 4.3% probability that the system he uses is down to luck. This equates to a confidence level of
95.7% which is well below the figure we need to achieve statistical significance.
How Many Bets is Enough?
This obviously depends entirely on the data set youâ re presented with. Letâ s return to Johnny X once again. In order for the system to meet statistical significance standards at the current level
(12% ROI at average odds of 1.72), he would need to achieve it over 255 bets.
Obviously, if the ROI is higher, fewer bets would be required. For instance, the system would need to hit a 20% ROI at average odds of 1.72 for just 88 bets in order to be statistically significant.
Yet there is a caveat. If you manage a few long odds wins, it can skew the picture spectacularly. For instance, a system with an 80% ROI at average odds of 5.4 would meet a 99% confidence level after
just 58 bets. At those odds you wouldnâ t need too many winners from 58 picks (still very impressive mind you!).
Itâ s always best to look at several hundred bets if possible before committing to any system of data. Shorter odds and lower ROI are â less sexyâ but far more achievable. Surely a 10% ROI over a
long period of time is better than â lose most of your moneyâ tactics?
For example, a system that achieves a 10% ROI at average odds of 2.0 over 538 bets results n a confidence level of 99%. Wouldnâ t you feel confident in any system that manages consistent profits for
over 500 bets and has just a 1% chance of giving a false impression?
Of course, statistical significance can also help you determine if a system is a complete fake. Iâ m sure youâ ve come across a few â fixed betsâ sites which claim to have â insider
informationâ on football and horse racing. Then there are â systemsâ with huge ROI rewards and you can quickly uncover the possibility of fraud.
For example, you could receive an email from a tipping site that claims a seemingly impossible ROI, strike rate and average odds combo. When you try to calculate this data and find the odds of it
being down to luck are around 40 million to 1, itâ s a fair chance that the site is using fake information!
One of the biggest enemies of the punter is impatience. This can happen in two ways. First, they could leap headfirst into an unproven system without waiting to see if it has statistical
significance. If you decided to get on Johnny Xâ s system after 140 bets, you would be taking an unnecessary risk as you would need to wait another 115 bets before confirming that he was in fact
onto something profitable.
The second method is to dump a system that has already proven to be statistically significant. For instance, you may follow a system and find that youâ re down 42 units after 3 months. Then you
could stop following it only to discover that it provided patient punters with a 200 unit profit over 12 months!
The big lesson here is to resist the urge to measure profits over days, weeks or even a couple of months. In previous Race Advisor articles, we have shown you multiple trends, sometimes over a 5 year
period. While I donâ t expect anyone to wait that long before netting consistent profit, it is an indication that you wonâ t learn how to bash the bookies overnight. | {"url":"https://classic.raceadvisor.co.uk/99-bettors-fail-revealed/","timestamp":"2024-11-13T05:12:16Z","content_type":"text/html","content_length":"141943","record_id":"<urn:uuid:2353b071-6d62-41b0-a02b-bd813e87103c>","cc-path":"CC-MAIN-2024-46/segments/1730477028326.66/warc/CC-MAIN-20241113040054-20241113070054-00167.warc.gz"} |
Do equilateral triangles add up to 180?
Do equilateral triangles add up to 180?
Let’s look at the properties of triangles. An equilateral triangle has got 3 sides of equal length and 3 angles that are equal. Since ALL the angles in a triangle add up to 180º then 180 divided by 3
must be 60º. The clue is in the name EQUILateral.
Are all equilateral triangles 60 degree angles?
Sal proves that the angles of an equilateral triangle are all congruent (and therefore they all measure 60°), and conversely, that triangles with all congruent angles are equilateral.
Do equilateral triangles add up to 360?
The sum of exterior angles of an equilateral triangle is 360°. Let us understand the working behind it. Consider an equilateral triangle ABC as shown below. We know that all the sides of an
equilateral triangle are equal in length and each interior angle measures 60°.
Are equilateral triangles always 45 degrees?
First, let’s forget about right triangles for a second and look at an equilateral triangle. = 60 , an equilateral triangle will always have three 60° angles.
Do triangles add up to 90?
Right triangles are triangles in which one of the interior angles is 90 degrees, a right angle. Since the three interior angles of a triangle add up to 180 degrees, in a right triangle, since one
angle is always 90 degrees, the other two must always add up to 90 degrees (they are complementary).
Why equilateral triangle is 60?
An equilateral triangle means a triangle having three sides of equal length.So,we should get the angle of every side in equal length and the sum should be 180 degree. so , the length of all the sides
is = 180 = 60 degree.
What is a 60 60 60 degree triangle?
Isosceles Triangle degrees 60, 60, 60.
Is a triangle 360 or 180?
The angle sum of a triangle will always be equal to 180°. The angle sum of a quadrilateral is equal to 360°, and a triangle can be created by slicing a quadrilateral in half from corner to corner.
Since a triangle is essentially half of a quadrilateral, its angle measures should be half as well. Half of 360° is 180°.
What is 45-degree 45-degree 90-degree triangle?
A 45 45 90 triangle is a special type of isosceles right triangle where the two legs are congruent to one another and the non-right angles are both equal to 45 degrees. Many times, we can use the
Pythagorean theorem to find the missing legs or hypotenuse of 45 45 90 triangles.
Which triangle has a 45-degree angle?
Know that the equilateral triangle is a special case of the isosceles triangle since it has not two but all three sides and all three angles equal. A right triangle is also a special case of the
isosceles triangle. The angles of the right isosceles triangle measure 90 degrees, 45 degrees and 45 degrees.
Why the sum of triangle is 180?
The angles of triangle always add up to 1800 degrees because one exterior angle of the triangle is equal to the sum of the other two angles in the triangle. When all the angles are added up, the sum
obtained should be 180 degrees.
What is the sum of the angles of an equiangular triangle?
Since the three sides are equal therefore the three angles, opposite to the equal sides, are equal in measure. Therefore, it is also called an equiangular triangle, where each angle measure 60
degrees. The sum of all three angles of an equiangular triangle is equal to 180 degrees.
Does an equilateral triangle obey the angle sum property?
Thus, it obeys the angle sum property of triangle. The shape of an equilateral triangle is regular. The word ‘Equilateral’ is formed by the combination of two words, i.e., “Equi” meaning equal and
“Lateral” meaning sides. An equilateral triangle is also called a regular polygon or regular triangle since all its sides are equal.
What are the angles of an equilateral triangle equal to?
Also, the three angles of the equilateral triangle are congruent and equal to 60 degrees. Suppose, ABC is an equilateral triangle, then, as per the definition; AB = BC = AC, where AB, BC and AC are
the sides of the equilateral triangle.
What is the equation for the height of equilateral triangle?
area = (a² * √3)/ 4. and the equation for the height of equilateral triangle look as follows: h = a * √3 / 2, where a is a side of the triangle. | {"url":"https://pfeiffertheface.com/do-equilateral-triangles-add-up-to-180/","timestamp":"2024-11-02T11:11:45Z","content_type":"text/html","content_length":"46448","record_id":"<urn:uuid:924d27d2-4467-4c63-a08b-9063973fa21a>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00179.warc.gz"} |
Hypothesis Testing and Inference
Hypothesis testing and inference is a mechanism in statistics used to determine if a particular claim is
statistically significant, that is, statistical evidence exists in favor of or against a given hypothesis. The
Statistics package provides 11 commonly used statistical tests, including 7 standard parametric tests and 4
non-parametric tests.
All tests generate a report of all major calculations to userinfo at level 1 (hence, if output is suppressed,
the reports are still generated). To access the reports, you need to specify the statistics information level
to 1 using the following command.
> $\mathrm{infolevel}\left[\mathrm{Statistics}\right]≔1$
${\mathrm{infolevel}}{[}{\mathrm{Statistics}}{]}{:=}{1}$ (1)
1 Tests for Population Mean
Two standard parametric tests are available to test for a population mean given a sample from that population.
The OneSampleZTest should be used whenever the standard deviation of the population is known. If the standard
deviation is unknown, the OneSampleTTest should be applied instead.
> $\mathrm{restart}:$$\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\mathrm
Generate a sample from a random variable that represents the sum of two Rayleigh distributions.
> $R:=\mathrm{RandomVariable}⁡\left(\mathrm{Rayleigh}⁡\left(7\right)\right)+\
The following then are the known values of the mean and standard deviation of the population.
> $\mathrm{μ}:=\mathrm{evalf}⁡\left(\mathrm{Mean}⁡\left(R\right)\right)$
${\mathrm{μ}}{:=}{13.78645551}$ (1.1)
> $\mathrm{σ}:=\mathrm{evalf}⁡\left(\mathrm{StandardDeviation}⁡\left(R\
${\mathrm{σ}}{:=}{5.281878335}$ (1.2)
Assuming that we do not know the population mean but we know the standard deviation of the population, test
the hypothesis that this sample was drawn from a distribution with mean equal to 12.
> $\mathrm{OneSampleZTest}\left(S,12,\mathrm{σ}\right):$
Standard Z-Test on One Sample
Null Hypothesis:
Sample drawn from population with mean 12 and known standard deviation 5.28188
Alt. Hypothesis:
Sample drawn from population with mean not equal to 12 and known standard deviation 5.28188
Sample size: 100
Sample mean: 13.7517
Distribution: Normal(0,1)
Computed statistic: 3.31636
Computed pvalue: 0.000911977
Confidence interval: 12.71643272 .. 14.78689098
(population mean)
Result: [Rejected]
There exists statistical evidence against the null hypothesis
Similarly, if we assume that the standard deviation is unknown, we can apply the one sample t-test on the same
hypothesis - this time with a 90% confidence interval.
> $\mathrm{OneSampleTTest}⁡\left(S,12,\mathrm{confidence}=0.9\right):$
Standard T-Test on One Sample
Null Hypothesis:
Sample drawn from population with mean 12
Alt. Hypothesis:
Sample drawn from population with mean not equal to 12
Sample size: 100
Sample mean: 13.7517
Sample standard dev.: 5.14945
Distribution: StudentT(99)
Computed statistic: 3.40165
Computed pvalue: 0.000967459
Confidence interval: 12.89665167 .. 14.60667203
(population mean)
Result: [Rejected]
There exists statistical evidence against the null hypothesis
2 Tests for the Difference of Two Population Means
Three standard parametric tests are available for testing the difference between two population means when
examining two samples. The TwoSampleZTest should be applied when the standard deviation of both populations is
known. If the standard deviations are unknown then the TwoSampleTTest is available for unrelated data and the
TwoSamplePairedTTest is available for paired data.
> $\mathrm{restart}:$$\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\mathrm
Consider three data sets.
> $X:=\mathrm{Array}⁡\left(\left[9,10,8,4,8,3,0,10&
Calculate some known quantities with regards to these samples.
> $\mathrm{XProp}:=\mathrm{table}⁡\left(\left['\mathrm{μ}'=\mathrm{Mean}&
${\mathrm{XProp}}{:=}{\mathrm{table}}\left(\left[{\mathrm{μ}}{=}{7.600000000}{,}{\mathrm (2.1)
> $\mathrm{YProp}:=\mathrm{table}⁡\left(\left['\mathrm{μ}'=\mathrm{Mean}&
${\mathrm{YProp}}{:=}{\mathrm{table}}\left(\left[{\mathrm{μ}}{=}{7.200000000}{,}{\mathrm (2.2)
> $\mathrm{ZProp}:=\mathrm{table}⁡\left(\left['\mathrm{μ}'=\mathrm{Mean}&
${\mathrm{ZProp}}{:=}{\mathrm{table}}\left(\left[{\mathrm{μ}}{=}{8.400000000}{,}{\mathrm (2.3)
Assuming that we do not know the means of the populations from which X and Y were drawn, but we know the
standard deviation of each to be 4 and 3 respectively, test the hypothesis that the difference between the
means is 3.
> $\mathrm{TwoSampleZTest}⁡\left(X,Y,3,4,3\right):$
Standard Z-Test on Two Samples
Null Hypothesis:
Sample drawn from populations with difference of means equal to 3
Alt. Hypothesis:
Sample drawn from population with difference of means not equal to 3
Sample sizes: 10, 10
Sample means: 7.6, 7.2
Difference in means: 0.4
Distribution: Normal(0,1)
Computed statistic: -1.64438
Computed pvalue: 0.100097
Confidence interval: -2.698975162 .. 3.498975162
(difference of population means)
Result: [Accepted]
There is no statistical evidence against the null hypothesis
If we now compare samples X and Z under the hypothesis that the difference in means (Mean(X)-Mean(Z)) is 1,
and assume we do not know the standard deviation of either sample, we can apply the two sample t-test.
> $\mathrm{TwoSampleTTest}⁡\left(X,Z,1\right):$
Standard T-Test on Two Samples (Unequal Variances)
Null Hypothesis:
Sample drawn from populations with difference of means equal to 1
Alt. Hypothesis:
Sample drawn from population with difference of means not equal to 1
Sample sizes: 10, 10
Sample means: 7.6, 8.4
Sample standard devs.: 4.24788, 3.97772
Difference in means: -0.8
Distribution: StudentT(17.92283210)
Computed statistic: -0.978107
Computed pvalue: 0.34104
Confidence interval: -4.667499017 .. 3.067499017
(difference of population means)
Result: [Accepted]
There is no statistical evidence against the null hypothesis
If we instead drew the data for X and Z from paired sampling, we can apply the two sample t-test for paired
> $\mathrm{TwoSamplePairedTTest}⁡\left(X,Z,1\right):$
Standard T-Test with Paired Samples
Null Hypothesis:
Sample drawn from populations with difference of means equal to 1
Alt. Hypothesis:
Sample drawn from population with difference of means not equal to 1
Sample size: 10
Difference in means: -0.8
Difference std. dev.: 1.31656
Distribution: StudentT(9)
Computed statistic: -4.32346
Computed pvalue: 0.00192341
Confidence interval: -1.741810891 .. .1418108907
(difference of population means)
Result: [Rejected]
There exists statistical evidence against the null hypothesis
3 Tests for Population Variance / Standard Deviation
Two standard parametric tests are available for examining hypotheses regarding the population variance and
standard deviation using the variance ratio. The OneSampleChiSquareTest function should be applied when
comparing a sample standard deviation against an assumed population standard deviation. When comparing the
variances of two independent samples for a specific ratio, the TwoSampleFTest function should be used instead.
> $\mathrm{restart}:$$\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\mathrm
Generate a sample from a Maxwell distribution and an Exponential distribution.
> $S:=\mathrm{Sample}\left(\mathrm{Maxwell}⁡\left(3\right),100\right):\phantom{\rule
The following then are the known values of the variances of each population.
> $\mathrm{S_sigma}:=\mathrm{evalf}⁡\left(\mathrm{StandardDeviation}⁡\left(\mathrm
${\mathrm{S_sigma}}{:=}{2.020318836}$ (3.1)
> $\mathrm{T_sigma}:=\mathrm{evalf}⁡\left(\mathrm{StandardDeviation}⁡\left(\mathrm
${\mathrm{T_sigma}}{:=}{2.}$ (3.2)
Consider the hypothesis that S is drawn from a sample with a standard deviation of 4 and apply the
> $\mathrm{OneSampleChiSquareTest}⁡\left(S,2\right):$
Chi-Square Test on One Sample
Null Hypothesis:
Sample drawn from population with standard deviation equal to 2
Alt. Hypothesis:
Sample drawn from population with standard deviation not equal to 2
Sample size: 100
Sample standard dev.: 1.83342
Distribution: ChiSquare(99)
Computed statistic: 83.1952
Computed pvalue: 0.253798
Confidence interval: 1.609754032 .. 2.129836954
(population standard deviation)
Result: [Accepted]
There is no statistical evidence against the null hypothesis
Now consider the hypothesis that samples S and T were drawn from populations that had a variance ratio of 2.
The TwoSampleFTest compares the ratio of S and T against an assumed variance ratio of the populations. Thus,
if we were to instead test that the samples had the same variance ratio, we would use an assume ratio of 1
> $\mathrm{TwoSampleFTest}⁡\left(S,T,2\right):$
F-Ratio Test on Two Samples
Null Hypothesis:
Sample drawn from populations with ratio of variances equal to 2
Alt. Hypothesis:
Sample drawn from population with ratio of variances not equal to 2
Sample sizes: 100, 100
Sample variances: 3.36142, 4.08274
Ratio of variances: 0.823326
Distribution: FRatio(99,99)
Computed statistic: 0.411663
Computed pvalue: 1.45561e-05
Confidence interval: .5539687377 .. 1.223654982
(ratio of population variances)
Result: [Rejected]
There exists statistical evidence against the null hypothesis
4 Tests for Normality
The Statistics package provides an implementation of Shapiro and Wilk's W-test for normality. This test is
used to determine if a provided sample could be considered to be drawn from a normal distribution.
> $\mathrm{restart}:\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}$$\mathrm
Generate a sample of twenty points from a normal distribution and another from a uniform distribution.
> $S:=\mathrm{Sample}\left(\mathrm{Normal}⁡\left(5,2\right),20\right):\phantom
Consider the hypothesis that S is drawn from a normal distribution and apply Shapiro and Wilk's W-test.
> $\mathrm{ShapiroWilkWTest}⁡\left(S\right):$
Shapiro and Wilk's W-Test for Normality
Null Hypothesis:
Sample drawn from a population that follows a normal distribution
Alt. Hypothesis:
Sample drawn from population that does not follow a normal distribution
Sample size: 20
Computed statistic: 0.972002
Computed pvalue: 0.784909
Result: [Accepted]
There is no statistical evidence against the null hypothesis
Apply the same hypothesis with regards to the data drawn from the uniform distribution.
> $\mathrm{ShapiroWilkWTest}⁡\left(T\right):$
Shapiro and Wilk's W-Test for Normality
Null Hypothesis:
Sample drawn from a population that follows a normal distribution
Alt. Hypothesis:
Sample drawn from population that does not follow a normal distribution
Sample size: 20
Computed statistic: 0.889151
Computed pvalue: 0.0259262
Result: [Rejected]
There exists statistical evidence against the null hypothesis
5 Tests for Goodness-of-Fit
The Statistics package provides two methods of testing goodness-of-fit. The ChiSquareGoodnessOfFitTest
function should be used to determine if an observed or empirical data set fits expected values for that data
set. Similarly, the ChiSquareSuitableModelTest is available for testing how well a given probability
distribution approximates a data sample.
> $\mathrm{restart}:\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}$$\mathrm
Consider the following number of sales made on each day of the week at a jewelry store, tallied over one sales
week (Monday to Saturday).
> $\mathrm{Ob}:=\mathrm{Array}⁡\left(\left[25,17,15,23,24,16\
We wish to test the hypothesis that sales are uniformly distributed throughout the week. The expected number
of sales per day is then given by the number of sales averaged over the week.
> $\mathrm{SalesPerDay}:=\frac{\underset{i=1}{\overset{6}{∑}}\phantom{\rule[-0.0ex]{5.0px}{0.0ex}}
${\mathrm{SalesPerDay}}{:=}{20}$ (5.1)
> $\mathrm{Ex}:=\mathrm{Array}⁡\left(\left[\mathrm{SalesPerDay}$6\right]\right)$
We now test the hypothesis (using ChiSquareGoodnessOfFitTest) that the observed number of sales per day is
consistent with a uniformly distributed number of sales each day.
> $\mathrm{ChiSquareGoodnessOfFitTest}⁡\left(\mathrm{Ob},\mathrm{Ex},\mathrm{level}&
Chi-Square Test for Goodness-of-Fit
Null Hypothesis:
Observed sample does not differ from expected sample
Alt. Hypothesis:
Observed sample differs from expected sample
Categories: 6
Distribution: ChiSquare(5)
Computed statistic: 5
Computed pvalue: 0.41588
Critical value: 11.07049741
Result: [Accepted]
There is no statistical evidence against the null hypothesis
Hence we conclude that a uniformly distributed number of sales is a reasonable claim.
Consider a dataset of times during a day when sales are made. Determine if sales are uniformly distributed
during the day (consider an 8 hour working day where sales are measured between 0.0 and 8.0, the number of
hours into the day). The data in this case is continuous and we are testing against a uniform probability
> $\mathrm{SaleTimes}:=\left[1.4,1.8,2.2,2.9,3.0,3.4,3.4,3.5,
Apply the chi square suitable model test to determine if a uniform distribution closely matches the provided
> $\mathrm{ChiSquareSuitableModelTest}⁡\left(\mathrm{SaleTimes},\mathrm{Uniform}&
Chi-Square Test for Suitable Probability Model
Null Hypothesis:
Sample was drawn from specified probability distribution
Alt. Hypothesis:
Sample was not drawn from specified probability distribution
Bins: 4
Distribution: ChiSquare(3)
Computed statistic: 9.5191
Computed pvalue: 0.023129
Critical value: 7.814728288
Result: [Rejected]
There exists statistical evidence against the null hypothesis
Hence we conclude that the sale times are not uniformly distributed throughout the day. Closer examination of
the data reveals that most of the sales were made roughly half way through the day.
6 Tests for Independence in a Two-Way Table
The Statistics package contains the ChiSquareIndependenceTest function, which is used to determine if two
attributes are independent of one another.
Consider a sample of 476 patients that are part of a survey to determine if a new drug is effective at
fighting a new disease. Patients are randomly given either the new drug or a placebo, and their recovery rate
is tabulated as follows:
> $\mathrm{DrugGroup}≔\mathrm{Vector}\left[\mathrm{column}\right]\left(\left[64,176\right]\
right):$$#\mathrm{Recovered, Not Recovered}\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\mathrm
colon;#\mathrm{Recovered, Not Recovered}$
Construct the two-way table for this result.
> $\mathrm{Output}≔\mathrm{Matrix}\left(\left[\mathrm{DrugGroup},\mathrm{PlaceboGroup}\right]\
Finally, apply the chi square test for independence to test the hypothesis that the results are independent.
That is, the drug has no effect on the recovery rate from the disease.
> $\mathrm{ChiSquareIndependenceTest}\left(\mathrm{Output}\right):$
Chi-Square Test for Independence
Null Hypothesis:
Two attributes within a population are independent of one another
Alt. Hypothesis:
Two attributes within a population are not independent of one another
Dimensions: 2
Total Elements: 476
Distribution: ChiSquare(1)
Computed statistic: 5.26704
Computed pvalue: 0.0217328
Critical value: 3.84145606580278
Result: [Rejected]
There exists statistical evidence against the null hypothesis
Thus we conclude that there exists statistical evidence in favor of the drug having an effect on recovery
rate. Closer examination reveals that the drug improves a patient's chance of recovery from the disease.
7 Output Options
The default output from each test is a report containing expressions of the form name = value for key output
from the test. Using the output option, specific values can be returned instead.
> $\mathrm{restart}:$$\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\mathrm
Consider the following data set.
> $X:=\mathrm{Array}⁡\left(\left[9,10,8,4,8,3,0,10&
Apply the one sample t-test on this data to test for a population mean of 5:
> $\mathrm{OneSampleTTest}⁡\left(X,5\right)$
${\mathrm{hypothesis}}{=}{\mathrm{true}}{,}{\mathrm{confidenceinterval}}{=} (7.1)
A true value for the hypothesis indicates that there is no statistical evidence against the null hypothesis
(and there exists statistical evidence against it otherwise). If we were only interested in the confidence
interval from this calculation, we can use option output=confidenceinterval.
> $\mathrm{OneSampleTTest}⁡\left(X,5,\mathrm{output}=\mathrm
${4.561253851}{..}{10.63874615}$ (7.2)
A list of valid output options are available on the help page for each test.
Return to Index for Example Worksheets
Hypothesis testing and inference is a mechanism in statistics used to determine if a particular claim is statistically significant, that is, statistical evidence exists in favor of or against a given
hypothesis. The Statistics package provides 11 commonly used statistical tests, including 7 standard parametric tests and 4 non-parametric tests.
All tests generate a report of all major calculations to userinfo at level 1 (hence, if output is suppressed, the reports are still generated). To access the reports, you need to specify the
statistics information level to 1 using the following command.
1 Tests for Population Mean
Two standard parametric tests are available to test for a population mean given a sample from that population. The OneSampleZTest should be used whenever the standard deviation of the population is
known. If the standard deviation is unknown, the OneSampleTTest should be applied instead.
> $\mathrm{restart}:$$\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\mathrm{with}\left(\mathrm{Statistics}\right):$$\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\
Generate a sample from a random variable that represents the sum of two Rayleigh distributions.
> $R:=\mathrm{RandomVariable}⁡\left(\mathrm{Rayleigh}⁡\left(7\right)\right)+\mathrm{RandomVariable}⁡\left(\mathrm{Rayleigh}⁡\left(4\
The following then are the known values of the mean and standard deviation of the population.
> $\mathrm{μ}:=\mathrm{evalf}⁡\left(\mathrm{Mean}⁡\left(R\right)\right)$
${\mathrm{μ}}{:=}{13.78645551}$ (1.1)
> $\mathrm{σ}:=\mathrm{evalf}⁡\left(\mathrm{StandardDeviation}⁡\left(R\right)\right)$
${\mathrm{σ}}{:=}{5.281878335}$ (1.2)
Assuming that we do not know the population mean but we know the standard deviation of the population, test the hypothesis that this sample was drawn from a distribution with mean equal to 12.
> $\mathrm{OneSampleZTest}\left(S,12,\mathrm{σ}\right):$
Standard Z-Test on One Sample
Null Hypothesis:
Sample drawn from population with mean 12 and known standard deviation 5.28188
Alt. Hypothesis:
Sample drawn from population with mean not equal to 12 and known standard deviation 5.28188
Sample size: 100
Sample mean: 13.7517
Distribution: Normal(0,1)
Computed statistic: 3.31636
Computed pvalue: 0.000911977
Confidence interval: 12.71643272 .. 14.78689098
(population mean)
Result: [Rejected]
There exists statistical evidence against the null hypothesis
Similarly, if we assume that the standard deviation is unknown, we can apply the one sample t-test on the same hypothesis - this time with a 90% confidence interval.
> $\mathrm{OneSampleTTest}⁡\left(S,12,\mathrm{confidence}=0.9\right):$
Standard T-Test on One Sample
Null Hypothesis:
Sample drawn from population with mean 12
Alt. Hypothesis:
Sample drawn from population with mean not equal to 12
Sample size: 100
Sample mean: 13.7517
Sample standard dev.: 5.14945
Distribution: StudentT(99)
Computed statistic: 3.40165
Computed pvalue: 0.000967459
Confidence interval: 12.89665167 .. 14.60667203
(population mean)
Result: [Rejected]
There exists statistical evidence against the null hypothesis
Two standard parametric tests are available to test for a population mean given a sample from that population. The OneSampleZTest should be used whenever the standard deviation of the population is
known. If the standard deviation is unknown, the OneSampleTTest should be applied instead.
Generate a sample from a random variable that represents the sum of two Rayleigh distributions.
The following then are the known values of the mean and standard deviation of the population.
Assuming that we do not know the population mean but we know the standard deviation of the population, test the hypothesis that this sample was drawn from a distribution with mean equal to 12.
Standard Z-Test on One Sample
Null Hypothesis:
Sample drawn from population with mean 12 and known standard deviation 5.28188
Alt. Hypothesis:
Sample drawn from population with mean not equal to 12 and known standard deviation 5.28188
Sample size: 100
Sample mean: 13.7517
Distribution: Normal(0,1)
Computed statistic: 3.31636
Computed pvalue: 0.000911977
Confidence interval: 12.71643272 .. 14.78689098
(population mean)
Result: [Rejected]
There exists statistical evidence against the null hypothesis
Similarly, if we assume that the standard deviation is unknown, we can apply the one sample t-test on the same hypothesis - this time with a 90% confidence interval.
Standard T-Test on One Sample
Null Hypothesis:
Sample drawn from population with mean 12
Alt. Hypothesis:
Sample drawn from population with mean not equal to 12
Sample size: 100
Sample mean: 13.7517
Sample standard dev.: 5.14945
Distribution: StudentT(99)
Computed statistic: 3.40165
Computed pvalue: 0.000967459
Confidence interval: 12.89665167 .. 14.60667203
(population mean)
Result: [Rejected]
There exists statistical evidence against the null hypothesis
2 Tests for the Difference of Two Population Means
Three standard parametric tests are available for testing the difference between two population means when examining two samples. The TwoSampleZTest should be applied when the standard deviation of
both populations is known. If the standard deviations are unknown then the TwoSampleTTest is available for unrelated data and the TwoSamplePairedTTest is available for paired data.
> $\mathrm{restart}:$$\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\mathrm{with}\left(\mathrm{Statistics}\right):\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\
Consider three data sets.
> $X:=\mathrm{Array}⁡\left(\left[9,10,8,4,8,3,0,10,15,9\right]\right):$$\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\phantom{\rule
Calculate some known quantities with regards to these samples.
> $\mathrm{XProp}:=\mathrm{table}⁡\left(\left['\mathrm{μ}'=\mathrm{Mean}⁡\left(X\right),'\mathrm{σ}'=\mathrm
${\mathrm{XProp}}{:=}{\mathrm{table}}\left(\left[{\mathrm{μ}}{=}{7.600000000}{,}{\mathrm{σ}}{=}{4.247875286}\right]\right)$ (2.1)
> $\mathrm{YProp}:=\mathrm{table}⁡\left(\left['\mathrm{μ}'=\mathrm{Mean}⁡\left(Y\right),'\mathrm{σ}'=\mathrm
${\mathrm{YProp}}{:=}{\mathrm{table}}\left(\left[{\mathrm{μ}}{=}{7.200000000}{,}{\mathrm{σ}}{=}{3.489667288}\right]\right)$ (2.2)
> $\mathrm{ZProp}:=\mathrm{table}⁡\left(\left['\mathrm{μ}'=\mathrm{Mean}⁡\left(Z\right),'\mathrm{σ}'=\mathrm
${\mathrm{ZProp}}{:=}{\mathrm{table}}\left(\left[{\mathrm{μ}}{=}{8.400000000}{,}{\mathrm{σ}}{=}{3.977715704}\right]\right)$ (2.3)
Assuming that we do not know the means of the populations from which X and Y were drawn, but we know the standard deviation of each to be 4 and 3 respectively, test the hypothesis that the
difference between the means is 3.
> $\mathrm{TwoSampleZTest}⁡\left(X,Y,3,4,3\right):$
Standard Z-Test on Two Samples
Null Hypothesis:
Sample drawn from populations with difference of means equal to 3
Alt. Hypothesis:
Sample drawn from population with difference of means not equal to 3
Sample sizes: 10, 10
Sample means: 7.6, 7.2
Difference in means: 0.4
Distribution: Normal(0,1)
Computed statistic: -1.64438
Computed pvalue: 0.100097
Confidence interval: -2.698975162 .. 3.498975162
(difference of population means)
Result: [Accepted]
There is no statistical evidence against the null hypothesis
If we now compare samples X and Z under the hypothesis that the difference in means (Mean(X)-Mean(Z)) is 1, and assume we do not know the standard deviation of either sample, we can apply the two
sample t-test.
> $\mathrm{TwoSampleTTest}⁡\left(X,Z,1\right):$
Standard T-Test on Two Samples (Unequal Variances)
Null Hypothesis:
Sample drawn from populations with difference of means equal to 1
Alt. Hypothesis:
Sample drawn from population with difference of means not equal to 1
Sample sizes: 10, 10
Sample means: 7.6, 8.4
Sample standard devs.: 4.24788, 3.97772
Difference in means: -0.8
Distribution: StudentT(17.92283210)
Computed statistic: -0.978107
Computed pvalue: 0.34104
Confidence interval: -4.667499017 .. 3.067499017
(difference of population means)
Result: [Accepted]
There is no statistical evidence against the null hypothesis
If we instead drew the data for X and Z from paired sampling, we can apply the two sample t-test for paired data.
> $\mathrm{TwoSamplePairedTTest}⁡\left(X,Z,1\right):$
Standard T-Test with Paired Samples
Null Hypothesis:
Sample drawn from populations with difference of means equal to 1
Alt. Hypothesis:
Sample drawn from population with difference of means not equal to 1
Sample size: 10
Difference in means: -0.8
Difference std. dev.: 1.31656
Distribution: StudentT(9)
Computed statistic: -4.32346
Computed pvalue: 0.00192341
Confidence interval: -1.741810891 .. .1418108907
(difference of population means)
Result: [Rejected]
There exists statistical evidence against the null hypothesis
Three standard parametric tests are available for testing the difference between two population means when examining two samples. The TwoSampleZTest should be applied when the standard deviation of
both populations is known. If the standard deviations are unknown then the TwoSampleTTest is available for unrelated data and the TwoSamplePairedTTest is available for paired data.
Assuming that we do not know the means of the populations from which X and Y were drawn, but we know the standard deviation of each to be 4 and 3 respectively, test the hypothesis that the difference
between the means is 3.
Standard Z-Test on Two Samples
Null Hypothesis:
Sample drawn from populations with difference of means equal to 3
Alt. Hypothesis:
Sample drawn from population with difference of means not equal to 3
Sample sizes: 10, 10
Sample means: 7.6, 7.2
Difference in means: 0.4
Distribution: Normal(0,1)
Computed statistic: -1.64438
Computed pvalue: 0.100097
Confidence interval: -2.698975162 .. 3.498975162
(difference of population means)
Result: [Accepted]
There is no statistical evidence against the null hypothesis
Standard Z-Test on Two Samples ------------------------------ Null Hypothesis: Sample drawn from populations with difference of means equal to 3
Alt. Hypothesis: Sample drawn from population with difference of means not equal to 3 Sample sizes: 10, 10 Sample means: 7.6, 7.2 Difference in means: 0.4 Distribution: Normal(0,1) Computed
statistic: -1.64438 Computed pvalue: 0.100097 Confidence interval: -2.698975162 .. 3.498975162 (difference of population means) Result: [Accepted] There is no statistical evidence against the null
If we now compare samples X and Z under the hypothesis that the difference in means (Mean(X)-Mean(Z)) is 1, and assume we do not know the standard deviation of either sample, we can apply the two
sample t-test.
Standard T-Test on Two Samples (Unequal Variances)
Null Hypothesis:
Sample drawn from populations with difference of means equal to 1
Alt. Hypothesis:
Sample drawn from population with difference of means not equal to 1
Sample sizes: 10, 10
Sample means: 7.6, 8.4
Sample standard devs.: 4.24788, 3.97772
Difference in means: -0.8
Distribution: StudentT(17.92283210)
Computed statistic: -0.978107
Computed pvalue: 0.34104
Confidence interval: -4.667499017 .. 3.067499017
(difference of population means)
Result: [Accepted]
There is no statistical evidence against the null hypothesis
If we instead drew the data for X and Z from paired sampling, we can apply the two sample t-test for paired data.
Standard T-Test with Paired Samples
Null Hypothesis:
Sample drawn from populations with difference of means equal to 1
Alt. Hypothesis:
Sample drawn from population with difference of means not equal to 1
Sample size: 10
Difference in means: -0.8
Difference std. dev.: 1.31656
Distribution: StudentT(9)
Computed statistic: -4.32346
Computed pvalue: 0.00192341
Confidence interval: -1.741810891 .. .1418108907
(difference of population means)
Result: [Rejected]
There exists statistical evidence against the null hypothesis
Standard T-Test with Paired Samples ----------------------------------- Null Hypothesis: Sample drawn from populations with difference of means equal to 1 Alt. Hypothesis: Sample drawn from
population with difference of means not equal to 1 Sample size: 10 Difference in means: -0.8 Difference std. dev.: 1.31656 Distribution: StudentT(9) Computed statistic: -4.32346
Computed pvalue: 0.00192341 Confidence interval: -1.741810891 .. .1418108907 (difference of population means) Result: [Rejected] There exists statistical evidence against the null hypothesis
3 Tests for Population Variance / Standard Deviation
Two standard parametric tests are available for examining hypotheses regarding the population variance and standard deviation using the variance ratio. The OneSampleChiSquareTest function should be
applied when comparing a sample standard deviation against an assumed population standard deviation. When comparing the variances of two independent samples for a specific ratio, the TwoSampleFTest
function should be used instead.
> $\mathrm{restart}:$$\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\mathrm{with}\left(\mathrm{Statistics}\right):$$\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\
Generate a sample from a Maxwell distribution and an Exponential distribution.
> $S:=\mathrm{Sample}\left(\mathrm{Maxwell}⁡\left(3\right),100\right):\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}$$T:=\mathrm{Sample}&
The following then are the known values of the variances of each population.
> $\mathrm{S_sigma}:=\mathrm{evalf}⁡\left(\mathrm{StandardDeviation}⁡\left(\mathrm{Maxwell}⁡\left(3\right)\right)\right)$
${\mathrm{S_sigma}}{:=}{2.020318836}$ (3.1)
> $\mathrm{T_sigma}:=\mathrm{evalf}⁡\left(\mathrm{StandardDeviation}⁡\left(\mathrm{Exponential}⁡\left(2\right)\right)\right)$
${\mathrm{T_sigma}}{:=}{2.}$ (3.2)
Consider the hypothesis that S is drawn from a sample with a standard deviation of 4 and apply the OneSampleChiSquareTest.
> $\mathrm{OneSampleChiSquareTest}⁡\left(S,2\right):$
Chi-Square Test on One Sample
Null Hypothesis:
Sample drawn from population with standard deviation equal to 2
Alt. Hypothesis:
Sample drawn from population with standard deviation not equal to 2
Sample size: 100
Sample standard dev.: 1.83342
Distribution: ChiSquare(99)
Computed statistic: 83.1952
Computed pvalue: 0.253798
Confidence interval: 1.609754032 .. 2.129836954
(population standard deviation)
Result: [Accepted]
There is no statistical evidence against the null hypothesis
Now consider the hypothesis that samples S and T were drawn from populations that had a variance ratio of 2. The TwoSampleFTest compares the ratio of S and T against an assumed variance ratio of the
populations. Thus, if we were to instead test that the samples had the same variance ratio, we would use an assume ratio of 1 instead.
> $\mathrm{TwoSampleFTest}⁡\left(S,T,2\right):$
F-Ratio Test on Two Samples
Null Hypothesis:
Sample drawn from populations with ratio of variances equal to 2
Alt. Hypothesis:
Sample drawn from population with ratio of variances not equal to 2
Sample sizes: 100, 100
Sample variances: 3.36142, 4.08274
Ratio of variances: 0.823326
Distribution: FRatio(99,99)
Computed statistic: 0.411663
Computed pvalue: 1.45561e-05
Confidence interval: .5539687377 .. 1.223654982
(ratio of population variances)
Result: [Rejected]
There exists statistical evidence against the null hypothesis
Two standard parametric tests are available for examining hypotheses regarding the population variance and standard deviation using the variance ratio. The OneSampleChiSquareTest function should be
applied when comparing a sample standard deviation against an assumed population standard deviation. When comparing the variances of two independent samples for a specific ratio, the TwoSampleFTest
function should be used instead.
Generate a sample from a Maxwell distribution and an Exponential distribution.
The following then are the known values of the variances of each population.
Consider the hypothesis that S is drawn from a sample with a standard deviation of 4 and apply the OneSampleChiSquareTest.
Chi-Square Test on One Sample
Null Hypothesis:
Sample drawn from population with standard deviation equal to 2
Alt. Hypothesis:
Sample drawn from population with standard deviation not equal to 2
Sample size: 100
Sample standard dev.: 1.83342
Distribution: ChiSquare(99)
Computed statistic: 83.1952
Computed pvalue: 0.253798
Confidence interval: 1.609754032 .. 2.129836954
(population standard deviation)
Result: [Accepted]
There is no statistical evidence against the null hypothesis
Chi-Square Test on One Sample ----------------------------- Null Hypothesis: Sample drawn from population with standard deviation equal to 2 Alt. Hypothesis: Sample drawn from population with
standard deviation not equal to 2 Sample size: 100 Sample standard dev.: 1.83342
Distribution: ChiSquare(99) Computed statistic: 83.1952 Computed pvalue: 0.253798 Confidence interval: 1.609754032 .. 2.129836954 (population standard deviation) Result: [Accepted] There is no
statistical evidence against the null hypothesis
Now consider the hypothesis that samples S and T were drawn from populations that had a variance ratio of 2. The TwoSampleFTest compares the ratio of S and T against an assumed variance ratio of the
populations. Thus, if we were to instead test that the samples had the same variance ratio, we would use an assume ratio of 1 instead.
F-Ratio Test on Two Samples
Null Hypothesis:
Sample drawn from populations with ratio of variances equal to 2
Alt. Hypothesis:
Sample drawn from population with ratio of variances not equal to 2
Sample sizes: 100, 100
Sample variances: 3.36142, 4.08274
Ratio of variances: 0.823326
Distribution: FRatio(99,99)
Computed statistic: 0.411663
Computed pvalue: 1.45561e-05
Confidence interval: .5539687377 .. 1.223654982
(ratio of population variances)
Result: [Rejected]
There exists statistical evidence against the null hypothesis
4 Tests for Normality
The Statistics package provides an implementation of Shapiro and Wilk's W-test for normality. This test is used to determine if a provided sample could be considered to be drawn from a normal
> $\mathrm{restart}:\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}$$\mathrm{with}\left(\mathrm{Statistics}\right):\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\
Generate a sample of twenty points from a normal distribution and another from a uniform distribution.
> $S:=\mathrm{Sample}\left(\mathrm{Normal}⁡\left(5,2\right),20\right):\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}$$T:=\mathrm{Sample}\
Consider the hypothesis that S is drawn from a normal distribution and apply Shapiro and Wilk's W-test.
> $\mathrm{ShapiroWilkWTest}⁡\left(S\right):$
Shapiro and Wilk's W-Test for Normality
Null Hypothesis:
Sample drawn from a population that follows a normal distribution
Alt. Hypothesis:
Sample drawn from population that does not follow a normal distribution
Sample size: 20
Computed statistic: 0.972002
Computed pvalue: 0.784909
Result: [Accepted]
There is no statistical evidence against the null hypothesis
Apply the same hypothesis with regards to the data drawn from the uniform distribution.
> $\mathrm{ShapiroWilkWTest}⁡\left(T\right):$
Shapiro and Wilk's W-Test for Normality
Null Hypothesis:
Sample drawn from a population that follows a normal distribution
Alt. Hypothesis:
Sample drawn from population that does not follow a normal distribution
Sample size: 20
Computed statistic: 0.889151
Computed pvalue: 0.0259262
Result: [Rejected]
There exists statistical evidence against the null hypothesis
The Statistics package provides an implementation of Shapiro and Wilk's W-test for normality. This test is used to determine if a provided sample could be considered to be drawn from a normal
Generate a sample of twenty points from a normal distribution and another from a uniform distribution.
Consider the hypothesis that S is drawn from a normal distribution and apply Shapiro and Wilk's W-test.
Shapiro and Wilk's W-Test for Normality
Null Hypothesis:
Sample drawn from a population that follows a normal distribution
Alt. Hypothesis:
Sample drawn from population that does not follow a normal distribution
Sample size: 20
Computed statistic: 0.972002
Computed pvalue: 0.784909
Result: [Accepted]
There is no statistical evidence against the null hypothesis
Apply the same hypothesis with regards to the data drawn from the uniform distribution.
Shapiro and Wilk's W-Test for Normality
Null Hypothesis:
Sample drawn from a population that follows a normal distribution
Alt. Hypothesis:
Sample drawn from population that does not follow a normal distribution
Sample size: 20
Computed statistic: 0.889151
Computed pvalue: 0.0259262
Result: [Rejected]
There exists statistical evidence against the null hypothesis
5 Tests for Goodness-of-Fit
The Statistics package provides two methods of testing goodness-of-fit. The ChiSquareGoodnessOfFitTest function should be used to determine if an observed or empirical data set fits expected values
for that data set. Similarly, the ChiSquareSuitableModelTest is available for testing how well a given probability distribution approximates a data sample.
> $\mathrm{restart}:\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}$$\mathrm{with}\left(\mathrm{Statistics}\right):\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\
Consider the following number of sales made on each day of the week at a jewelry store, tallied over one sales week (Monday to Saturday).
> $\mathrm{Ob}:=\mathrm{Array}⁡\left(\left[25,17,15,23,24,16\right]\right)$
We wish to test the hypothesis that sales are uniformly distributed throughout the week. The expected number of sales per day is then given by the number of sales averaged over the week.
> $\mathrm{SalesPerDay}:=\frac{\underset{i=1}{\overset{6}{∑}}\phantom{\rule[-0.0ex]{5.0px}{0.0ex}}{'\mathrm{Ob}'}_{i}}{6}$
${\mathrm{SalesPerDay}}{:=}{20}$ (5.1)
> $\mathrm{Ex}:=\mathrm{Array}⁡\left(\left[\mathrm{SalesPerDay}$6\right]\right)$
We now test the hypothesis (using ChiSquareGoodnessOfFitTest) that the observed number of sales per day is consistent with a uniformly distributed number of sales each day.
> $\mathrm{ChiSquareGoodnessOfFitTest}⁡\left(\mathrm{Ob},\mathrm{Ex},\mathrm{level}=0.05\right):$
Chi-Square Test for Goodness-of-Fit
Null Hypothesis:
Observed sample does not differ from expected sample
Alt. Hypothesis:
Observed sample differs from expected sample
Categories: 6
Distribution: ChiSquare(5)
Computed statistic: 5
Computed pvalue: 0.41588
Critical value: 11.07049741
Result: [Accepted]
There is no statistical evidence against the null hypothesis
Hence we conclude that a uniformly distributed number of sales is a reasonable claim.
Consider a dataset of times during a day when sales are made. Determine if sales are uniformly distributed during the day (consider an 8 hour working day where sales are measured between 0.0 and
8.0, the number of hours into the day). The data in this case is continuous and we are testing against a uniform probability distribution.
> $\mathrm{SaleTimes}:=\left[1.4,1.8,2.2,2.9,3.0,3.4,3.4,3.5,3.6,3.7,3.8,4.0,4.4,4.6,5.3,7.5\right]:$
Apply the chi square suitable model test to determine if a uniform distribution closely matches the provided data.
> $\mathrm{ChiSquareSuitableModelTest}⁡\left(\mathrm{SaleTimes},\mathrm{Uniform}⁡\left(0,8\right)\right):$
Chi-Square Test for Suitable Probability Model
Null Hypothesis:
Sample was drawn from specified probability distribution
Alt. Hypothesis:
Sample was not drawn from specified probability distribution
Bins: 4
Distribution: ChiSquare(3)
Computed statistic: 9.5191
Computed pvalue: 0.023129
Critical value: 7.814728288
Result: [Rejected]
There exists statistical evidence against the null hypothesis
Hence we conclude that the sale times are not uniformly distributed throughout the day. Closer examination of the data reveals that most of the sales were made roughly half way through the day.
The Statistics package provides two methods of testing goodness-of-fit. The ChiSquareGoodnessOfFitTest function should be used to determine if an observed or empirical data set fits expected values
for that data set. Similarly, the ChiSquareSuitableModelTest is available for testing how well a given probability distribution approximates a data sample.
Consider the following number of sales made on each day of the week at a jewelry store, tallied over one sales week (Monday to Saturday).
We wish to test the hypothesis that sales are uniformly distributed throughout the week. The expected number of sales per day is then given by the number of sales averaged over the week.
We now test the hypothesis (using ChiSquareGoodnessOfFitTest) that the observed number of sales per day is consistent with a uniformly distributed number of sales each day.
Chi-Square Test for Goodness-of-Fit
Null Hypothesis:
Observed sample does not differ from expected sample
Alt. Hypothesis:
Observed sample differs from expected sample
Categories: 6
Distribution: ChiSquare(5)
Computed statistic: 5
Computed pvalue: 0.41588
Critical value: 11.07049741
Result: [Accepted]
There is no statistical evidence against the null hypothesis
Hence we conclude that a uniformly distributed number of sales is a reasonable claim.
Consider a dataset of times during a day when sales are made. Determine if sales are uniformly distributed during the day (consider an 8 hour working day where sales are measured between 0.0 and 8.0,
the number of hours into the day). The data in this case is continuous and we are testing against a uniform probability distribution.
Apply the chi square suitable model test to determine if a uniform distribution closely matches the provided data.
Chi-Square Test for Suitable Probability Model
Null Hypothesis:
Sample was drawn from specified probability distribution
Alt. Hypothesis:
Sample was not drawn from specified probability distribution
Bins: 4
Distribution: ChiSquare(3)
Computed statistic: 9.5191
Computed pvalue: 0.023129
Critical value: 7.814728288
Result: [Rejected]
There exists statistical evidence against the null hypothesis
---------------------------------------------- Null Hypothesis: Sample was drawn from specified probability distribution Alt. Hypothesis: Sample was not drawn from specified probability distribution
Bins: 4 Distribution: ChiSquare(3) Computed statistic: 9.5191 Computed pvalue: 0.023129 Critical value: 7.814728288 Result: [Rejected] There exists statistical evidence against the null hypothesis
Hence we conclude that the sale times are not uniformly distributed throughout the day. Closer examination of the data reveals that most of the sales were made roughly half way through the day.
6 Tests for Independence in a Two-Way Table
The Statistics package contains the ChiSquareIndependenceTest function, which is used to determine if two attributes are independent of one another.
Consider a sample of 476 patients that are part of a survey to determine if a new drug is effective at fighting a new disease. Patients are randomly given either the new drug or a placebo, and their
recovery rate is tabulated as follows:
> $\mathrm{DrugGroup}≔\mathrm{Vector}\left[\mathrm{column}\right]\left(\left[64,176\right]\right):$$#\mathrm{Recovered, Not Recovered}\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\
mathrm{PlaceboGroup}≔\mathrm{Vector}\left[\mathrm{column}\right]\left(\left[86,150\right]\right):#\mathrm{Recovered, Not Recovered}$
Construct the two-way table for this result.
> $\mathrm{Output}≔\mathrm{Matrix}\left(\left[\mathrm{DrugGroup},\mathrm{PlaceboGroup}\right]\right):\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}$
Finally, apply the chi square test for independence to test the hypothesis that the results are independent. That is, the drug has no effect on the recovery rate from the disease.
> $\mathrm{ChiSquareIndependenceTest}\left(\mathrm{Output}\right):$
Chi-Square Test for Independence
Null Hypothesis:
Two attributes within a population are independent of one another
Alt. Hypothesis:
Two attributes within a population are not independent of one another
Dimensions: 2
Total Elements: 476
Distribution: ChiSquare(1)
Computed statistic: 5.26704
Computed pvalue: 0.0217328
Critical value: 3.84145606580278
Result: [Rejected]
There exists statistical evidence against the null hypothesis
Thus we conclude that there exists statistical evidence in favor of the drug having an effect on recovery rate. Closer examination reveals that the drug improves a patient's chance of recovery from
the disease.
The Statistics package contains the ChiSquareIndependenceTest function, which is used to determine if two attributes are independent of one another.
Consider a sample of 476 patients that are part of a survey to determine if a new drug is effective at fighting a new disease. Patients are randomly given either the new drug or a placebo, and their
recovery rate is tabulated as follows:
Finally, apply the chi square test for independence to test the hypothesis that the results are independent. That is, the drug has no effect on the recovery rate from the disease.
Chi-Square Test for Independence
Null Hypothesis:
Two attributes within a population are independent of one another
Alt. Hypothesis:
Two attributes within a population are not independent of one another
Dimensions: 2
Total Elements: 476
Distribution: ChiSquare(1)
Computed statistic: 5.26704
Computed pvalue: 0.0217328
Critical value: 3.84145606580278
Result: [Rejected]
There exists statistical evidence against the null hypothesis
Chi-Square Test for Independence -------------------------------- Null Hypothesis: Two attributes within a population are independent of one another Alt. Hypothesis: Two attributes within a
population are not independent of one another Dimensions: 2 Total Elements: 476
Distribution: ChiSquare(1) Computed statistic: 5.26704 Computed pvalue: 0.0217328 Critical value: 3.84145606580278 Result: [Rejected] There exists statistical evidence against the null hypothesis
Thus we conclude that there exists statistical evidence in favor of the drug having an effect on recovery rate. Closer examination reveals that the drug improves a patient's chance of recovery from
the disease.
7 Output Options
The default output from each test is a report containing expressions of the form name = value for key output from the test. Using the output option, specific values can be returned instead.
> $\mathrm{restart}:$$\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\mathrm{with}\left(\mathrm{Statistics}\right):$
Consider the following data set.
> $X:=\mathrm{Array}⁡\left(\left[9,10,8,4,8,3,0,10,15,9\right]\right):$
Apply the one sample t-test on this data to test for a population mean of 5:
> $\mathrm{OneSampleTTest}⁡\left(X,5\right)$
${\mathrm{hypothesis}}{=}{\mathrm{true}}{,}{\mathrm{confidenceinterval}}{=}{4.561253851}{..}{10.63874615}{,}{\mathrm{distribution}}{=}{\mathrm{StudentT}}{& (7.1)
A true value for the hypothesis indicates that there is no statistical evidence against the null hypothesis (and there exists statistical evidence against it otherwise). If we were only interested
in the confidence interval from this calculation, we can use option output=confidenceinterval.
> $\mathrm{OneSampleTTest}⁡\left(X,5,\mathrm{output}=\mathrm{confidenceinterval}\right)$
${4.561253851}{..}{10.63874615}$ (7.2)
A list of valid output options are available on the help page for each test.
The default output from each test is a report containing expressions of the form name = value for key output from the test. Using the output option, specific values can be returned instead.
Apply the one sample t-test on this data to test for a population mean of 5:
A true value for the hypothesis indicates that there is no statistical evidence against the null hypothesis (and there exists statistical evidence against it otherwise). If we were only interested in
the confidence interval from this calculation, we can use option output=confidenceinterval.
A list of valid output options are available on the help page for each test. | {"url":"https://www.maplesoft.com/support/help/view.aspx?path=examples/StatisticsHypothesisTesting&L=E","timestamp":"2024-11-03T03:27:34Z","content_type":"text/html","content_length":"260917","record_id":"<urn:uuid:31f5adec-5397-4958-a1c3-1fd9d92e0153>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00591.warc.gz"} |
An array of N bits, split in two parts with same amout of bits set to one
You are given an array A of N bits. I tell you that K << N are set to one, and the remaining are set to 0. You don't know what bits are set to one and what are not. You just know the amount.
What is the optimal time and space complexity to split the array in two parts such that the number of bit set to one are equal in both the parts? The two parts may be or many not be contiguous. In
the latter case, you can provide two sets of indexes for A as answer.
3 comments:
1. select k random bits...
2. Since you don't know where bits-ONE are you gotta look everywhere linearly. You need one variable to count the bits you've found and another one to index your position inside the bit array. Run
the array until the count is equal to K/2 (or the bit-ZERO count is (N-K)/2). By that time your index will point to the joint splitting the array in equal number of bits-ONE.
Best case: K/2 lookups
Worst case: N - K/2 lookups
average: N/2 (or N/2-k/4 if the worst case is N-K)
Space required: N bits + 1 counter + 1 index.
3. Create a subset S1 of A, by selecting K random bits. Define S2 = A - S1
By construction in S1 you have:
+ K - t (0<=t<=K) bits set to 1
+ t (0<=t<K) bits set to 0
By construction in S2 you have:
+ K-t (0<=t<=K) bits set to 0
+ t (0<=t<K) bits set to 1
what is the next step? | {"url":"http://codingplayground.blogspot.com/2009/03/array-of-n-bits-split-in-two-parts-with.html","timestamp":"2024-11-07T15:57:10Z","content_type":"application/xhtml+xml","content_length":"133863","record_id":"<urn:uuid:15547e7a-7424-40b7-9556-0559fa84eb81>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00018.warc.gz"} |
Tips to Write the Best Statistics Assignment | Caramella
Tips to Write the Best Statistics Assignment
Statistics is both an art and a science since it seeks to further human understanding via the quantitative expression of actual data. It is more common to think of statistics as its own mathematical
science rather than a subfield of mathematics. Statistical analysis is gathering data, analysing it, and then summarising the results numerically. One use of statistical models and data is the
development of prediction and forecasting tools. Statistics is useful in many different academic disciplines, such as economics, insurance, and finance.
Statistics is a notoriously difficult field to master. Students still have a hard time grasping the proper application of statistical principles and putting them into practice with statistical
software, even though information technology has been essential in handling and simplifying such complicated procedures and situations. Statistics assignment help service has been meticulously
designed and made easy to understand to aid students in understanding statistical problem-solving.
Help in Writing Your Statistics Assignment
Stay Informed
Before you can even think about starting to write the assignment, you must figure out what needs doing. But "knowing" means you have a firm grasp on the specific actions you need to do. Carefully
read the questions and jot down all the relevant information.
Make an effort to summarise the concepts covered by the assignment after you've made sure nothing has slipped your mind. Something you heard before will serve as a warning. You can prepare for the
assignment by studying up on any of these concepts you're unclear about. The better analogy would be to know your enemy inside and out before you step foot in combat.
Utilise Suitable Reference Materials.
Thanks to the proliferation of online resources, information on any subject is now easily accessible; for instance, getting assignment help statistics is a snap. You should not rush into using them,
though, because not all of them are legitimate and trustworthy. Make time in your schedule to read the supplied materials. Based on what you've learned in the chapters, you should be able to identify
reliable sources of information and use them effectively.
The printed version also includes numerous citations, so it's not just about citing sources online. Unfortunately, not all books provide up-to-date information, just as online resources. If you
choose to consult any reference materials, be they online or offline, be sure you pick the right one.
Read internet reviews, use statistics assignment help and talk to your professor if you don't know where to begin your search. You may trust that they will point you in the direction of relevant
books and magazines to read.
Get To Work On It
It is now necessary to devote considerable time and effort to the paper. Before you divide the job plan into smaller sections, you should establish a due date. Step two is to plot out a strategy for
completing the assignment. Here you should include all the headings and subheadings you want to use throughout the assignment. In case there are changes, have a general idea of what you want to
accomplish on hand. Simplifying your task is the goal here. You can also filter your information in a given way with a preset framework. Separate the material you want to write into its own section
if you like. Arranging the data by your plan is the following stage.
There needs to be a natural progression to the ideas presented in your project. Never give the reader the idea that your argument is out there. The next stage in writing the assignment is to check
for weak points caused by information repetition. As a last check, make sure that you have well-written and interesting introductions and conclusions for your work.
Use The Materials That Are Accessible Online
There are situations in which solving equations and sums is difficult. Under extreme pressure, making a mistake in your math calculations is easy. To get help with this, you can look for an economics
assignment help service.
Any variety of current AI-based technologies can handle the computations. There are several applications of the equation solver type. These tools can be used to solve any kind of equation. If you
find the computations and calculating large equations to be too tough, you can try using internet-based tools. They will, of course, lighten your load somewhat. Plus, this will free up more time for
you to edit and proofread.
Make It More Engaging And Valuable
When it comes to numbers, is there anything in particular that you can understand? The use of diagrams, charts, and graphs makes the figures far easier to understand. Treat everyone who reads your
assignment with the utmost respect. Incorporate as many graphs and diagrams as you like. Adding this element will really make your work stand out and captivate your audience. Additionally, it will
help in comprehending the task at hand.
But watch out that you don't muck it up either. This could lead to confusion and misunderstanding on the part of the reader.
Provide Proper Citations and References.
Just imagine someone taking your ideas and not giving you the credit you're due. It would be the incorrect thing to do and unethical. Therefore, whenever you incorporate someone else's work into your
own, be sure to cite and reference your sources inside the project.
Remember that referencing and referencing are completely separate activities that should never be confused. There is a citation given inside the text. Always properly cite your sources when using
someone else's words or ideas in your own work. The opposite is true when it comes to referencing, which necessitates a sequential listing of all the sources you referenced.
Reference and citation styles come in all shapes and sizes these days. Be sure to focus on the one that is required by your school. Every style adheres to its own set of rules. These regulations need
to be studied and adhered to.
Citations that are properly formatted and adhere to all applicable standards will lend credibility to your paper and demonstrate that you have invested the necessary time and energy into the
assignment. But most importantly, this will stop people from copying. For those times when you feel like you need a helping hand, there is statistics assignment help available.
Final Thoughts
In conclusion, students often find it challenging to master statistics despite the subject's many practical uses. Understanding statistics' relevance to everyday life can liven up the subject. Read
this article for some helpful hints that will allow you to finish your economics assignment help without any problems.
Lorem ipsum dolor sit amet, ne eos veri audire sensibus, ex duo novum numquam. Cu feugait splendide nec, te aeque expetendis vel, ex oratio graeco his. Sit at numquam utroque detraxit. Aeque altera
evertitur ea pri. Pri case solet possim no, ponderum signiferumque duo et, ius cu lorem reprehendunt. Pro ad alia augue quaerendum, idque congue splendide te duo. | {"url":"https://caramellaapp.com/amelia088/d2zQRLBnA/tips-to-write-the-best-statistics-assignment","timestamp":"2024-11-12T22:44:50Z","content_type":"text/html","content_length":"228362","record_id":"<urn:uuid:ca5f865f-b17f-4b53-882c-0f9375c70f20>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00619.warc.gz"} |
Scaling and chaos in periodic approximations to the two-dimensional Ising spin glass
We approximate a two-dimensional spin glass by tiling an infinite lattice with large identical unit cells. The interactions within the unit cell are chosen at random, just as when one studies
finite-size systems with periodic boundary conditions. But here the unit cells are instead connected to form an infinite lattice, so one may examine correlations on all length scales, and the system
can have true phase transitions. For such approximations to the Ising spin glass on the square lattice, we apply the free-fermion method of Onsager and the anticommuting operator approach of Kaufman
to obtain numerically exact results for each realization of the quenched disorder. Each such sample shows one or more critical points, with the distribution of critical temperatures scaling with the
unit cell size, consistent with what is expected from the scaling theory of low-dimensional spin glasses. Due to “chaos,” the correlations between unit cells can change sign with changing
temperature. We examine the scaling of this chaos with unit cell size. Many samples have multiple critical points due to the interactions between unit cells changing sign at temperatures within the
ordered phases.
All Science Journal Classification (ASJC) codes
• Electronic, Optical and Magnetic Materials
• Condensed Matter Physics
Dive into the research topics of 'Scaling and chaos in periodic approximations to the two-dimensional Ising spin glass'. Together they form a unique fingerprint. | {"url":"https://collaborate.princeton.edu/en/publications/scaling-and-chaos-in-periodic-approximations-to-the-two-dimension","timestamp":"2024-11-04T15:16:59Z","content_type":"text/html","content_length":"47884","record_id":"<urn:uuid:5a1e95b6-115a-429f-a749-e0229fe6b04a>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00266.warc.gz"} |
Hermite interpolation formula
From Encyclopedia of Mathematics
A form of writing the polynomial
The Hermite interpolation formula can be written in the form
[1] I.S. Berezin, N.P. Zhidkov, "Computing methods" , Pergamon (1973) (Translated from Russian)
Hermite interpolation can be regarded as a special case of Birkhoff interpolation (also called lacunary interpolation). In the latter, not all values of a function
Such a matrix [a1].
[a1] G.G. Lorentz, K. Jetter, S.D. Riemenschneider, "Birkhoff interpolation" , Addison-Wesley (1983)
[a2] I.P. Mysovskih, "Lectures on numerical methods" , Wolters-Noordhoff (1969) pp. Chapt. 2, Sect. 10
[a3] B. Wendroff, "Theoretical numerical analysis" , Acad. Press (1966) pp. Chapt. 1
How to Cite This Entry:
Hermite interpolation formula. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Hermite_interpolation_formula&oldid=13280
This article was adapted from an original article by M.K. Samarin (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098.
See original article | {"url":"https://encyclopediaofmath.org/index.php?title=Hermite_interpolation_formula&oldid=13280","timestamp":"2024-11-14T16:55:16Z","content_type":"text/html","content_length":"19474","record_id":"<urn:uuid:a383595a-6517-4f85-bfb3-466909523019>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00663.warc.gz"} |
Beyond Regular Expressions: More Incremental String Matching
Beyond Regular Expressions: More Incremental String Matching
In my last post I showed how to incrementally match long strings against regular expressions. I now want to apply similar methods to matching languages that can't be described by regular expressions.
(Note that 'language' is just jargon for a set of strings that meet some criterion.) In particular, regular expressions can't be used to test a string for balanced parentheses. This is because we
need some kind of mechanism to count how many open parentheses are still pending and a finite state machine can't represent arbitrary integers.
So let's start with a slightly more abstract description of what was going on last time so we can see the bigger picture. We were storing strings in balanced trees with a kind of 'measurement' or
'digest' of the string stored in the nodes of the tree. Each character mapped to an element of a monoid via a function called
and you can think of the measurement function as acting on entire strings if you
together all of the measurements for each of the characters. So what we have is a function
f :: String -> M
taking strings to some type M (in the last post M was a type of array) with the properties
f (a ++ b) == f a `mappend` f b
f [] == mempty
By noticing that String is itself a monoid we can write this as
f (a `mappend` b) == f a `mappend` f b
f mempty == mempty
Anything satisfying these laws is called a monoid homomorphism, or just homomorphism for short.
So the technique I used worked like this: I found a homomorphism from
to some type with the useful property that for any string s,
f s
still contains all the information required to figure out if we're dealing with a member of our language. If
turns a string into something more efficient to work with then we can make our string matching more efficient.
Now I want to make the notion of "contains all the information required" more precise by considering an example. Consider strings that consist only of the characters
. Our language will be the set of strings whose parentheses balance. In other words the total number of
must match the total number of
, and as we scan from left to right we must never see more
. For example,
are in our language, but
isn't. This language is called the Dyck language.
Suppose we're testing whether or not some string is in the Dyck language. If we see
as a substring then if we delete it from the string, it makes no difference to whether or not the string is in the Dyck language. In fact, if we see
and so on they can all be deleted. On the other hand, you can't delete
without knowing about the rest of the string. Deleting it from
makes no difference to its membership in the Dyck language, but deleting it from
certainly does.
So given a language L, we can say that two strings, x and y, are interchangeable with respect to L if any time we see x as a substring of another string we can replace it with y, and vice versa,
without making any difference to whether the string is in the language. Interchangeable strings are a kind of waste of memory. If we're testing for membership of L there's no need to distinguish
between them. So we'd like our measurement homomorphism to map all interchangeable strings to the same values. But we don't want to map any more strings to the same value because then we lose
information that tells us if a string is an element of L. A homomorphism that strikes this balance perfectly is called the 'canonical homomorphism' and the image of the set of all strings under this
homomorphisms is called the
syntactic monoid
. By 'image', I simply mean all the possible values that could arise from applying the homomorphism to all possible strings.
So lets go back to the Dyck language. Any time we see
we can delete it. But if we delete every occurence of
from a string then all we have left is a bunch of
followed by a bunch of
. Let's say it's p of the former, and q of the latter. Every string of parentheses can be distilled down to a pair of integers ≥0, (p,q). But does this go far enough? Could we distill any further?
Well for any choice of (p,q) it's a good exercise to see that for any other choice of (p',q') there's always a string in the Dyck language where if you have )
as a substring, replacing it with (p',q') gives you something not in the language. So you can't distill any further. Which means we have our syntactic monoid and canonical homomorphism. In this case
the monoid is called the
bicyclic monoid
and we can implement it as follows:
> {-# LANGUAGE TypeSynonymInstances,FlexibleInstances,MultiParamTypeClasses #-}
> import Data.Foldable
> import Data.Monoid
> import Data.FingerTree hiding (fromList)
> import qualified Data.List as L
> data Bicyclic = B Int Int deriving (Eq,Show)
> hom '(' = B 0 1
> hom ')' = B 1 0
> instance Monoid Bicyclic where
> mempty = B 0 0
> B a b `mappend` B c d = B (a-b+max b c) (d-c+max b c)
Where did that code for
come from? Consider )
. We can delete
from the middle many times over.
Now we can more or less reproduce the code of last week and get a Dyck language tester. Once we've distilled a string down to (p,q) we only need to test whether or not p=q=0 to see whether or not our
parentheses are balanced:
> matches' s = x==B 0 0 where
> x = mconcat (map hom s)
> data Elem a = Elem { getElem :: a } deriving Show
> data Size = Size { getSize :: Int } deriving (Eq,Ord,Show)
> instance Monoid Size where
> mempty = Size 0
> Size m `mappend` Size n = Size (m+n)
> instance Measured (Size,Bicyclic) (Elem Char) where
> measure (Elem a) = (Size 1,hom a)
> type FingerString = FingerTree (Size,Bicyclic) (Elem Char)
> insert :: Int -> Char -> FingerString -> FingerString
> insert i c z = l >< (Elem c <| r) where (l,r) = split (\(Size n,_) -> n>i) z
> string = empty :: FingerString
> matchesDyck string = snd (measure string)==B 0 0
> loop string = do
> print $ map getElem (toList string)
> print $ "matches? " ++ show (matchesDyck string)
> print "(Position,Character)"
> r <- getLine
> let (i,c) = read r
> loop $ insert i c string
> main = do
> loop string
There's a completely different way to test membership of the Dyck language. Replace each
with 1 and
with -1. Now scan from left to right keeping track of (1) the sum of all the numbers so far and (2) the minimum value taken by this sum. If the final sum and the final minimal sum are zero, then we
have matching parentheses. But we need to do this on substrings without scanning from the beginning in one go. That's an example of a parallel prefix sum problem and it's what I talked about
So here's an extended exercise: adapt the parallel prefix sum approach to implement incremental Dyck language testing with fingertrees. You should end up with a canonical homomorphism that's similar
to the one above. It'll probably be slightly different but ultimately equivalent.
And here's an even more extended exercise: protein sequences are sequences from a 20 letter alphabet. Each letter can be assigned a hydrophobicity value from
certain tables
. (Pick whichever table you want.) The hydrophobicity of a string is the sum of the hydrophobicities of its letters. Given a string, we can give it a score corresponding to the largest hydrophobicity
of any contiguous substring in it. Use fingertrees and a suitable monoid to track this score as the string is incrementally edited. Note how widely separated substrings can suddenly combine together
as stuff between them is adjusted.
If you're interested in Dyck languages with multiple types of parenthesis that need to match you need something
much more fiendish
8 Comments:
While I can see that it works, I don't quite see where the max comes from in "B (a-b+max b c) (d-c+max b c)". My attempt at it was "B (a+c-min b c) (b+d-min b c)", basically take the total ')'s
and total '('s and subtract the b,c pairs.
Also, I'm a bit confused by "we only need to test whether or not p=q" (which would suggest that ")(" is balanced) but then in the code you test whether both p and q are 0 (which is what I
expected the test to be).
On your first point:
max b c+min b c = b+c
You're right about the second point. I wrote the code a few days before the commentary!
I know this is a bit off subject but I am a graduate student at UNLV as well as a weekly math based podcast called Combinations and Permutations where we start with a mathematical topic and spin
off onto as many tangents as we can. You can follow the previous link to the blog page of our podcast, search for us on iTunes, or take a trip over to our host site http://cppodcast.libsyn.com.
Give us a try I do think that you will enjoy what you hear.
Wei Hu said...
You showed one particular example beyond regular expression matching. Will it generalize to any context-free language?
Wei Hu,
To make this idea work requires that the internal state of a parser be simple enough. I guess you could roughly characterise it like this: consider the set of possible transitions the parser
could make from one state to another as a result of reading n characters. We need this set to grow slowly with n. For finite state machines it remains at finite size. For the example shown in
this article it grows roughly as log(n) (the number of bits needed to represent an integer n). But for a LALR parser, say, I think the size of this set grows fast with n, and so it couldn't be
implemented reasonably.
So it's good enough for incrementally lexing a language like C++ or Haskell. But not for parsing it.
I set out to prove you wrong... One can in fact use a monoid-based technique to parse context free languages. In fact the algorithm has been published in 75 by Valiant! It was a bit of work to
set up things correctly so it would behave well though. A full write up will appear in ICFP:
I set out to prove you wrong... One can in fact use a monoid-based technique to parse context free languages. In fact the algorithm has been published in 75 by Valiant! It was a bit of work to
set up things correctly so it would behave well though. A full write up will appear in ICFP: | {"url":"http://blog.sigfpe.com/2009/01/beyond-regular-expressions-more.html?m=0","timestamp":"2024-11-14T04:01:37Z","content_type":"application/xhtml+xml","content_length":"33114","record_id":"<urn:uuid:a4f53fc1-51f2-41a9-8e74-c0ad8cced421>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00145.warc.gz"} |
The Iterative Maximum-Flow Problem
The Maximum-Flow Problem
In this section, we consider the important problem of maximizing the flow of a ma-terial through a transportation network (pipeline system, communication system, electrical distribution system, and
so on). We will assume that the transportation network in question can be represented by a connected weighted digraph with n vertices numbered from 1 to n and a set of edges E, with the following
It contains exactly one vertex with no entering edges; this vertex is called the source and assumed to be numbered 1.
It contains exactly one vertex with no leaving edges; this vertex is called the sink and assumed to be numbered n.
The weight u[ij] of each directed edge (i, j ) is a positive integer, called the edge capacity. (This number represents the upper bound on the amount of the material that can be sent from i to j
through a link represented by this edge.)
A digraph satisfying these properties is called a flow network or simply a network.^3 A small instance of a network is given in Figure 10.4.
It is assumed that the source and the sink are the only source and destination of the material, respectively; all the other vertices can serve only as points where a flow can be redirected without
consuming or adding any amount of the material. In other words, the total amount of the material entering an intermediate vertex must be equal to the total amount of the material leaving the vertex.
This con-dition is called the flow-conservation requirement. If we denote the amount sent through edge (i, j ) by x[ij] , then for any intermediate vertex i, the flow-conservation requirement can be
expressed by the following equality constraint:
where the sums in the left- and right-hand sides express the total inflow and outflow entering and leaving vertex i, respectively.
Since no amount of the material can change by going through intermediate vertices of the network, it stands to reason that the total amount of the material leaving the source must end up at the sink.
(This observation can also be derived formally from equalities (10.8), a task you will be asked to do in the exercises.) Thus, we have the following equality:
This quantity, the total outflow from the source—or, equivalently, the total inflow into the sink—is called the value of the flow. We denote it by v. It is this quantity that we will want to maximize
over all possible flows in a network.
Thus, a (feasible) flow is an assignment of real numbers x[ij] to edges (i, j ) of a given network that satisfy flow-conservation constraints (10.8) and the capacity constraints
The maximum-flow problem can be stated formally as the following optimization problem:
We can solve linear programming problem (10.11) by the simplex method or by another algorithm for general linear programming problems (see Section 10.1). However, the special structure of problem
(10.11) can be exploited to design faster algorithms. In particular, it is quite natural to employ the iterative-improvement idea as follows. We can always start with the zero flow (i.e., set x[ij] =
0 for every edge (i, j ) in the network). Then, on each iteration, we can try to find a path from source to sink along which some additional flow can be sent. Such a path is called flow augmenting.
If a flow-augmenting path is found, we adjust the flow along the edges of this path to get a flow of an increased value and try to find an augmenting path for the new flow. If no flow-augmenting path
can be found, we conclude that the current flow is optimal. This general template for solving the maximum-flow problem is called the augmenting-path method, also known as the Ford-Fulkerson method
after L. R. Ford, Jr., and D. R. Fulkerson, who discovered it (see [For57]).
An actual implementation of the augmenting path idea is, however, not quite straightforward. To see this, let us consider the network in Figure 10.4. We start with the zero flow shown in Figure
10.5a. (In that figure, the zero amounts sent through each edge are separated from the edge capacities by the slashes; we will use this notation in the other examples as well.) It is natural to
search for a flow-augmenting path from source to sink by following directed edges (i, j ) for which the current flow x[ij] is less than the edge capacity u[ij] . Among several possibilities, let us
assume that we identify the augmenting path 1→2→3→6 first. We can increase the flow along this path by a maximum of 2 units, which is the smallest unused capacity of its edges. The new flow is shown
in Figure 10.5b. This is as far as our simpleminded idea about flow-augmenting paths will be able to take us. Unfortunately, the flow shown in Figure 10.5b is not optimal: its value can still be
increased along the path 1→4→3←2→5→6 by increasing the flow by 1 on edges (1, 4), (4, 3), (2, 5), and (5, 6) and decreasing it by 1 on edge (2, 3). The flow obtained as the result of this
augmentation is shown in Figure 10.5c. It is indeed maximal. (Can you tell why?)
Thus, to find a flow-augmenting path for a flow x, we need to consider paths from source to sink in the underlying undirected graph in which any two consec-utive vertices i, j are either
connected by a directed edge from i to j with some positive unused capacity r[ij] = u[ij] − x[ij] (so that we can increase the flow through that edge by up to r[ij] units), or
connected by a directed edge from j to i with some positive flow x[j i] (so that we can decrease the flow through that edge by up to x[j i] units).
Edges of the first kind are called forward edges because their tail is listed before their head in the vertex list 1 → ^. . . i → j ^. . . → n defining the path; edges of the second kind are called
backward edges because their tail is listed after their head in the path list 1 → ^. . . i ← j ^. . . → n. To illustrate, for the path 1→4→3←2→5→6 of the last example, (1, 4), (4, 3), (2, 5), and (5,
6) are the forward edges, and (3, 2) is the backward edge.
For a given flow-augmenting path, let r be the minimum of all the unused capacities r[ij] of its forward edges and all the flows x[j i] of its backward edges. It is easy to see that if we increase
the current flow by r on each forward edge and decrease it by this amount on each backward edge, we will obtain a feasible
flow whose value is r units greater than the value of its predecessor. Indeed, let i be an intermediate vertex on a flow-augmenting path. There are four possible combinations of forward and backward
edges incident to vertex i:
For each of them, the flow-conservation requirement for vertex i will still hold after the flow adjustments indicated above the edge arrows. Further, since r is the minimum among all the positive
unused capacities on the forward edges and all the positive flows on the backward edges of the flow-augmenting path, the new flow will satisfy the capacity constraints as well. Finally, adding r to
the flow on the first edge of the augmenting path will increase the value of the flow by r.
Under the assumption that all the edge capacities are integers, r will be a positive integer too. Hence, the flow value increases at least by 1 on each iteration of the augmenting-path method. Since
the value of a maximum flow is bounded above (e.g., by the sum of the capacities of the source edges), the augmenting-path method has to stop after a finite number of iterations.^4 Surprisingly, the
final flow always turns out to be maximal, irrespective of a sequence of augmenting paths. This remarkable result stems from the proof of the Max-Flow Min-Cut Theorem (see, e.g., [For62]), which we
replicate later in this section.
The augmenting-path method—as described above in its general form—does not indicate a specific way for generating flow-augmenting paths. A bad sequence of such paths may, however, have a dramatic
impact on the method’s efficiency. Consider, for example, the network in Figure 10.6a, in which U stands for some large positive integer. If we augment the zero flow along the path 1→2→3→4, we shall
obtain the flow of value 1 shown in Figure 10.6b. Augmenting that flow along the path 1→3←2→4 will increase the flow value to 2 (Figure 10.6c). If we continue selecting this pair of flow-augmenting
paths, we will need a total of 2U iterations to reach the maximum flow of value 2U (Figure 10.6d). Of course, we can obtain the maximum flow in just two iterations by augmenting the initial zero flow
along the path 1→2→4 followed by augmenting the new flow along the path 1→3→4. The dramatic difference between 2U and 2 iterations makes the point.
Fortunately, there are several ways to generate flow-augmenting paths ef-ficiently and avoid the degradation in performance illustrated by the previous example. The simplest of them uses
breadth-first search to generate augment-ing paths with the least number of edges (see Section 3.5). This version of the augmenting-path method, called shortest-augmenting-path or
first-labeled-first-scanned algorithm, was suggested by J. Edmonds and R. M. Karp [Edm72]. The labeling refers to marking a new (unlabeled) vertex with two labels. The first label indicates the
amount of additional flow that can be brought from the source to the vertex being labeled. The second label is the name of the vertex from which the vertex being labeled was reached. (It can be left
undefined for the source.) It is also convenient to add the + or − sign to the second label to indicate whether the vertex was reached via a forward or backward edge, respectively. The source can be
always labeled with ∞, −. For the other vertices, the labels are computed as follows.
If unlabeled vertex j is connected to the front vertex i of the traversal queue by a directed edge from i to j with positive unused capacity r[ij] = u[ij] − x[ij] , then
vertex j is labeled with l[j] , i^+, where l[j] = min{l[i], r[ij] }.
If unlabeled vertex j is connected to the front vertex i of the traversal queue by a directed edge from j to i with positive flow x[j i], then vertex j is labeled with
l[j] , i^−, where l[j] = min{l[i], x[j i]}.
If this labeling-enhanced traversal ends up labeling the sink, the current flow can be augmented by the amount indicated by the sink’s first label. The augmentation is performed along the augmenting
path traced by following the vertex second labels from sink to source: the current flow quantities are increased on the forward edges and decreased on the backward edges of this path. If, on the
other hand, the sink remains unlabeled after the traversal queue becomes empty, the algorithm returns the current flow as maximum and stops.
ALGORITHM ShortestAugmentingPath(G)
//Implements the shortest-augmenting-path algorithm //Input: A network with single source 1, single sink n, and // positive integer capacities u[ij] on its edges (i, j ) //Output: A maximum flow x
assign x[ij] = 0 to every edge (i, j ) in the network
label the source with ∞, − and add the source to the empty queue Q
capacities of the edges that compose the cut. For the three examples of cuts given above, the capacities are equal to 5, 6, and 9, respectively. Since the number of different cuts in a network is
nonempty and finite (why?), there always exists a minimum cut, i.e., a cut with the smallest capacity. (What is a minimum cut in the network of Figure 10.4?) The following theorem establishes an
important relationship between the notions of maximum flow and minimum cut.
THEOREM (Max-Flow Min-Cut Theorem) The value of a maximum flow in a network is equal to the capacity of its minimum cut.
Thus, the value of any feasible flow in a network cannot exceed the capacity of any cut in that network.
Let v^∗ be the value of a final flow x^∗ obtained by the augmenting-path method. If we now find a cut whose capacity is equal to v^∗, we will have to conclude, in view of inequality (10.13), that (i)
the value v^∗ of the final flow is maximal among all feasible flows, (ii) the cut’s capacity is minimal among all cuts in the network, and
the maximum-flow value is equal to the minimum-cut capacity.
To find such a cut, consider the set of vertices X^∗ that can be reached from the
source by following an undirected path composed of forward edges with positive unused capacities (with respect to the final flow x^∗) and backward edges with positive flows on them. This set contains
the source but does not contain the sink: if it did, we would have an augmenting path for the flow x^∗, which would
The proof outlined above accomplishes more than proving the equality of the maximum-flow value and the minimum-cut capacity. It also implies that when the augmenting-path method terminates, it yields
both a maximum flow and a mini-mum cut. If labeling of the kind utilized in the shortest-augmenting-path algorithm is used, a minimum cut is formed by the edges from the labeled to unlabeled
ver-tices on the last iteration of the method. Finally, the proof implies that all such edges must be full (i.e., the flows must be equal to the edge capacities), and all the edges from unlabeled
vertices to labeled, if any, must be empty (i.e., have zero flows on them). In particular, for the network in Figure 10.7, the algorithm finds the cut {(1, 2), (4, 3)} of minimum capacity 3, both
edges of which are full as required.
Edmonds and Karp proved in their paper [Edm72] that the number of aug-menting paths needed by the shortest-augmenting-path algorithm never exceeds nm/2, where n and m are the number of vertices and
edges, respectively. Since the time required to find a shortest augmenting path by breadth-first search is in O(n + m) = O(m) for networks represented by their adjacency lists, the time efficiency of
the shortest-augmenting-path algorithm is in O(nm^2).
More efficient algorithms for the maximum-flow problem are known (see the monograph [Ahu93], as well as appropriate chapters in such books as [Cor09] and [Kle06]). Some of them implement the
augmenting-path idea in a more efficient manner. Others are based on the concept of preflows. A preflow is a flow that satisfies the capacity constraints but not the flow-conservation requirement.
Any vertex is allowed to have more flow entering the vertex than leaving it. A preflow-push algorithm moves the excess flow toward the sink until the flow-conservation requirement is reestablished
for all intermediate vertices of the network. Faster al-gorithms of this kind have worst-case efficiency close to O(nm). Note that preflow-push algorithms fall outside the iterative-improvement
paradigm because they do not generate a sequence of improving solutions that satisfy all the constraints of the problem.
To conclude this section, it is worth pointing out that although the initial interest in studying network flows was caused by transportation applications, this model has also proved to be useful for
many other areas. We discuss one of them in the next section.
Exercises 10.2
Since maximum-flow algorithms require processing edges in both directions, it is convenient to modify the adjacency matrix representation of a network as follows. If there is a directed edge from
vertex i to vertex j of capacity u[ij] , then the element in the ith row and the j th column is set to u[ij] , and the element in the j th row and the ith column is set to −u[ij] ; if there is no
edge between vertices i and j , both these elements are set to zero. Outline a simple algorithm for identifying a source and a sink in a network presented by such a matrix and indicate its time
Apply the shortest-augmenting path algorithm to find a maximum flow and a minimum cut in the following networks.
a. Does the maximum-flow problem always have a unique solution? Would your answer be different for networks with different capacities on all their edges?
Answer the same questions for the minimum-cut problem of finding a cut of the smallest capacity in a given network.
a. Explain how the maximum-flow problem for a network with several sources and sinks can be transformed into the same problem for a network with a single source and a single sink.
Some networks have capacity constraints on the flow amounts that can flow through their intermediate vertices. Explain how the maximum-flow problem for such a network can be transformed to the
maximum-flow problem for a network with edge capacity constraints only.
Consider a network that is a rooted tree, with the root as its source, the leaves as its sinks, and all the edges directed along the paths from the root to the leaves. Design an efficient algorithm
for finding a maximum flow in such a network. What is the time efficiency of your algorithm?
a. Prove equality (10.9).
Prove that for any flow in a network and any cut in it, the value of the flow is equal to the flow across the cut (see equality (10.12)). Explain the relationship between this property and equality
a. Express the maximum-flow problem for the network in Figure 10.4 as a linear programming problem.
Solve this linear programming problem by the simplex method.
As an alternative to the shortest-augmenting-path algorithm, Edmonds and Karp [Edm72] suggested the maximum-capacity-augmenting-path algorithm, in which a flow is augmented along the path that
increases the flow by the largest amount. Implement both these algorithms in the language of your choice and perform an empirical investigation of their relative efficiency.
Write a report on a more advanced maximum-flow algorithm such as
(i) Dinitz’s algorithm, (ii) Karzanov’s algorithm, (iii) Malhotra-Kamar-Maheshwari algorithm, or (iv) Goldberg-Tarjan algorithm.
Dining problem Several families go out to dinner together. To increase their social interaction, they would like to sit at tables so that no two members of the same family are at the same table. Show
how to find a seating arrangement that meets this objective (or prove that no such arrangement exists) by using a maximum-flow problem. Assume that the dinner contingent has p families and that the i
th family has a[i] members. Also assume that q tables are available and the j th table has a seating capacity of b[j] . [Ahu93] | {"url":"https://www.brainkart.com/article/The-Iterative-Maximum-Flow-Problem_8053/","timestamp":"2024-11-09T04:37:19Z","content_type":"text/html","content_length":"216104","record_id":"<urn:uuid:fcf4a91c-3297-46ac-8c3f-3656b533ccf3>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00117.warc.gz"} |
Transforming and augmenting images
Transforming and augmenting images¶
Torchvision supports common computer vision transformations in the torchvision.transforms and torchvision.transforms.v2 modules. Transforms can be used to transform or augment data for training or
inference of different tasks (image classification, detection, segmentation, video classification).
# Image Classification
import torch
from torchvision.transforms import v2
H, W = 32, 32
img = torch.randint(0, 256, size=(3, H, W), dtype=torch.uint8)
transforms = v2.Compose([
v2.RandomResizedCrop(size=(224, 224), antialias=True),
v2.ToDtype(torch.float32, scale=True),
v2.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
img = transforms(img)
# Detection (re-using imports and transforms from above)
from torchvision import tv_tensors
img = torch.randint(0, 256, size=(3, H, W), dtype=torch.uint8)
boxes = torch.randint(0, H // 2, size=(3, 4))
boxes[:, 2:] += boxes[:, :2]
boxes = tv_tensors.BoundingBoxes(boxes, format="XYXY", canvas_size=(H, W))
# The same transforms can be used!
img, boxes = transforms(img, boxes)
# And you can pass arbitrary input structures
output_dict = transforms({"image": img, "boxes": boxes})
Transforms are typically passed as the transform or transforms argument to the Datasets.
Supported input types and conventions¶
Most transformations accept both PIL images and tensor inputs. Both CPU and CUDA tensors are supported. The result of both backends (PIL or Tensors) should be very close. In general, we recommend
relying on the tensor backend for performance. The conversion transforms may be used to convert to and from PIL images, or for converting dtypes and ranges.
Tensor image are expected to be of shape (C, H, W), where C is the number of channels, and H and W refer to height and width. Most transforms support batched tensor input. A batch of Tensor images is
a tensor of shape (N, C, H, W), where N is a number of images in the batch. The v2 transforms generally accept an arbitrary number of leading dimensions (..., C, H, W) and can handle batched images
or batched videos.
Dtype and expected value range¶
The expected range of the values of a tensor image is implicitly defined by the tensor dtype. Tensor images with a float dtype are expected to have values in [0, 1]. Tensor images with an integer
dtype are expected to have values in [0, MAX_DTYPE] where MAX_DTYPE is the largest value that can be represented in that dtype. Typically, images of dtype torch.uint8 are expected to have values in
[0, 255].
Use ToDtype to convert both the dtype and range of the inputs.
V1 or V2? Which one should I use?¶
TL;DR We recommending using the torchvision.transforms.v2 transforms instead of those in torchvision.transforms. They’re faster and they can do more things. Just change the import and you should be
good to go. Moving forward, new features and improvements will only be considered for the v2 transforms.
In Torchvision 0.15 (March 2023), we released a new set of transforms available in the torchvision.transforms.v2 namespace. These transforms have a lot of advantages compared to the v1 ones (in
• They can transform images but also bounding boxes, masks, or videos. This provides support for tasks beyond image classification: detection, segmentation, video classification, etc. See Getting
started with transforms v2 and Transforms v2: End-to-end object detection/segmentation example.
• They support more transforms like CutMix and MixUp. See How to use CutMix and MixUp.
• They’re faster.
• They support arbitrary input structures (dicts, lists, tuples, etc.).
• Future improvements and features will be added to the v2 transforms only.
These transforms are fully backward compatible with the v1 ones, so if you’re already using tranforms from torchvision.transforms, all you need to do to is to update the import to
torchvision.transforms.v2. In terms of output, there might be negligible differences due to implementation differences.
Performance considerations¶
We recommend the following guidelines to get the best performance out of the transforms:
• Rely on the v2 transforms from torchvision.transforms.v2
• Use tensors instead of PIL images
• Use torch.uint8 dtype, especially for resizing
• Resize with bilinear or bicubic mode
This is what a typical transform pipeline could look like:
from torchvision.transforms import v2
transforms = v2.Compose([
v2.ToImage(), # Convert to tensor, only needed if you had a PIL image
v2.ToDtype(torch.uint8, scale=True), # optional, most input are already uint8 at this point
# ...
v2.RandomResizedCrop(size=(224, 224), antialias=True), # Or Resize(antialias=True)
# ...
v2.ToDtype(torch.float32, scale=True), # Normalize expects float input
v2.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
The above should give you the best performance in a typical training environment that relies on the torch.utils.data.DataLoader with num_workers > 0.
Transforms tend to be sensitive to the input strides / memory format. Some transforms will be faster with channels-first images while others prefer channels-last. Like torch operators, most
transforms will preserve the memory format of the input, but this may not always be respected due to implementation details. You may want to experiment a bit if you’re chasing the very best
performance. Using torch.compile() on individual transforms may also help factoring out the memory format variable (e.g. on Normalize). Note that we’re talking about memory format, not tensor shape.
Note that resize transforms like Resize and RandomResizedCrop typically prefer channels-last input and tend not to benefit from torch.compile() at this time.
Transform classes, functionals, and kernels¶
Transforms are available as classes like Resize, but also as functionals like resize() in the torchvision.transforms.v2.functional namespace. This is very much like the torch.nn package which defines
both classes and functional equivalents in torch.nn.functional.
The functionals support PIL images, pure tensors, or TVTensors, e.g. both resize(image_tensor) and resize(boxes) are valid.
Random transforms like RandomCrop will randomly sample some parameter each time they’re called. Their functional counterpart (crop()) does not do any kind of random sampling and thus have a slighlty
different parametrization. The get_params() class method of the transforms class can be used to perform parameter sampling when using the functional APIs.
The torchvision.transforms.v2.functional namespace also contains what we call the “kernels”. These are the low-level functions that implement the core functionalities for specific types, e.g.
resize_bounding_boxes or `resized_crop_mask. They are public, although not documented. Check the code to see which ones are available (note that those starting with a leading underscore are not
public!). Kernels are only really useful if you want torchscript support for types like bounding boxes or masks.
Torchscript support¶
Most transform classes and functionals support torchscript. For composing transforms, use torch.nn.Sequential instead of Compose:
transforms = torch.nn.Sequential(
Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225)),
scripted_transforms = torch.jit.script(transforms)
v2 transforms support torchscript, but if you call torch.jit.script() on a v2 class transform, you’ll actually end up with its (scripted) v1 equivalent. This may lead to slightly different results
between the scripted and eager executions due to implementation differences between v1 and v2.
If you really need torchscript support for the v2 transforms, we recommend scripting the functionals from the torchvision.transforms.v2.functional namespace to avoid surprises.
Also note that the functionals only support torchscript for pure tensors, which are always treated as images. If you need torchscript support for other types like bounding boxes or masks, you can
rely on the low-level kernels.
For any custom transformations to be used with torch.jit.script, they should be derived from torch.nn.Module.
See also: Torchscript support.
V2 API reference - Recommended¶
v2.Resize(size[, interpolation, max_size, ...]) Resize the input to the given size.
v2.ScaleJitter(target_size[, scale_range, ...]) Perform Large Scale Jitter on the input according to "Simple Copy-Paste is a Strong Data Augmentation Method for Instance Segmentation".
v2.RandomShortestSize(min_size[, max_size, ...]) Randomly resize the input.
v2.RandomResize(min_size, max_size[, ...]) Randomly resize the input.
v2.functional.resize(inpt, size[, ...]) See Resize for details.
v2.RandomCrop(size[, padding, ...]) Crop the input at a random location.
v2.RandomResizedCrop(size[, scale, ratio, ...]) Crop a random portion of the input and resize it to a given size.
v2.RandomIoUCrop([min_scale, max_scale, ...]) Random IoU crop transformation from "SSD: Single Shot MultiBox Detector".
v2.CenterCrop(size) Crop the input at the center.
v2.FiveCrop(size) Crop the image or video into four corners and the central crop.
v2.TenCrop(size[, vertical_flip]) Crop the image or video into four corners and the central crop plus the flipped version of these (horizontal flipping is used by default).
v2.functional.crop(inpt, top, left, height, ...) See RandomCrop for details.
v2.functional.resized_crop(inpt, top, left, ...) See RandomResizedCrop for details.
v2.functional.ten_crop(inpt, size[, ...]) See TenCrop for details.
v2.functional.center_crop(inpt, output_size) See RandomCrop for details.
v2.functional.five_crop(inpt, size) See FiveCrop for details.
v2.RandomHorizontalFlip([p]) Horizontally flip the input with a given probability.
v2.RandomVerticalFlip([p]) Vertically flip the input with a given probability.
v2.Pad(padding[, fill, padding_mode]) Pad the input on all sides with the given "pad" value.
v2.RandomZoomOut([fill, side_range, p]) "Zoom out" transformation from "SSD: Single Shot MultiBox Detector".
v2.RandomRotation(degrees[, interpolation, ...]) Rotate the input by angle.
v2.RandomAffine(degrees[, translate, scale, ...]) Random affine transformation the input keeping center invariant.
v2.RandomPerspective([distortion_scale, p, ...]) Perform a random perspective transformation of the input with a given probability.
v2.ElasticTransform([alpha, sigma, ...]) Transform the input with elastic transformations.
v2.functional.horizontal_flip(inpt) See RandomHorizontalFlip for details.
v2.functional.vertical_flip(inpt) See RandomVerticalFlip for details.
v2.functional.pad(inpt, padding[, fill, ...]) See Pad for details.
v2.functional.rotate(inpt, angle[, ...]) See RandomRotation for details.
v2.functional.affine(inpt, angle, translate, ...) See RandomAffine for details.
v2.functional.perspective(inpt, startpoints, ...) See RandomPerspective for details.
v2.functional.elastic(inpt, displacement[, ...]) See ElasticTransform for details.
v2.ColorJitter([brightness, contrast, ...]) Randomly change the brightness, contrast, saturation and hue of an image or video.
v2.RandomChannelPermutation() Randomly permute the channels of an image or video
v2.RandomPhotometricDistort([brightness, ...]) Randomly distorts the image or video as used in SSD: Single Shot MultiBox Detector.
v2.Grayscale([num_output_channels]) Convert images or videos to grayscale.
v2.RGB() Convert images or videos to RGB (if they are already not RGB).
v2.RandomGrayscale([p]) Randomly convert image or videos to grayscale with a probability of p (default 0.1).
v2.GaussianBlur(kernel_size[, sigma]) Blurs image with randomly chosen Gaussian blur kernel.
v2.GaussianNoise([mean, sigma, clip]) Add gaussian noise to images or videos.
v2.RandomInvert([p]) Inverts the colors of the given image or video with a given probability.
v2.RandomPosterize(bits[, p]) Posterize the image or video with a given probability by reducing the number of bits for each color channel.
v2.RandomSolarize(threshold[, p]) Solarize the image or video with a given probability by inverting all pixel values above a threshold.
v2.RandomAdjustSharpness(sharpness_factor[, p]) Adjust the sharpness of the image or video with a given probability.
v2.RandomAutocontrast([p]) Autocontrast the pixels of the given image or video with a given probability.
v2.RandomEqualize([p]) Equalize the histogram of the given image or video with a given probability.
v2.functional.permute_channels(inpt, permutation) Permute the channels of the input according to the given permutation.
v2.functional.rgb_to_grayscale(inpt[, ...]) See Grayscale for details.
v2.functional.grayscale_to_rgb(inpt) See RGB for details.
v2.functional.to_grayscale(inpt[, ...]) See Grayscale for details.
v2.functional.gaussian_blur(inpt, kernel_size) See GaussianBlur for details.
v2.functional.gaussian_noise(inpt[, mean, ...]) See GaussianNoise
v2.functional.invert(inpt) See RandomInvert().
v2.functional.posterize(inpt, bits) See RandomPosterize for details.
v2.functional.solarize(inpt, threshold) See RandomSolarize for details.
v2.functional.adjust_sharpness(inpt, ...) See RandomAdjustSharpness
v2.functional.autocontrast(inpt) See RandomAutocontrast for details.
v2.functional.adjust_contrast(inpt, ...) See RandomAutocontrast
v2.functional.equalize(inpt) See RandomEqualize for details.
v2.functional.adjust_brightness(inpt, ...) Adjust brightness.
v2.functional.adjust_saturation(inpt, ...) Adjust saturation.
v2.functional.adjust_hue(inpt, hue_factor) Adjust hue
v2.functional.adjust_gamma(inpt, gamma[, gain]) Adjust gamma.
v2.Compose(transforms) Composes several transforms together.
v2.RandomApply(transforms[, p]) Apply randomly a list of transformations with a given probability.
v2.RandomChoice(transforms[, p]) Apply single transformation randomly picked from a list.
v2.RandomOrder(transforms) Apply a list of transformations in a random order.
v2.LinearTransformation(...) Transform a tensor image or video with a square transformation matrix and a mean_vector computed offline.
v2.Normalize(mean, std[, inplace]) Normalize a tensor image or video with mean and standard deviation.
v2.RandomErasing([p, scale, ratio, value, ...]) Randomly select a rectangle region in the input image or video and erase its pixels.
v2.Lambda(lambd, *types) Apply a user-defined function as a transform.
v2.SanitizeBoundingBoxes([min_size, ...]) Remove degenerate/invalid bounding boxes and their corresponding labels and masks.
v2.ClampBoundingBoxes() Clamp bounding boxes to their corresponding image dimensions.
v2.UniformTemporalSubsample(num_samples) Uniformly subsample num_samples indices from the temporal dimension of the video.
v2.JPEG(quality) Apply JPEG compression and decompression to the given images.
v2.functional.normalize(inpt, mean, std[, ...]) See Normalize for details.
v2.functional.erase(inpt, i, j, h, w, v[, ...]) See RandomErase for details.
v2.functional.sanitize_bounding_boxes(...[, ...]) Remove degenerate/invalid bounding boxes and return the corresponding indexing mask.
v2.functional.clamp_bounding_boxes(inpt[, ...]) See ClampBoundingBoxes() for details.
v2.functional.uniform_temporal_subsample(...) See UniformTemporalSubsample for details.
v2.functional.jpeg(image, quality) See JPEG for details.
Beware, some of these conversion transforms below will scale the values while performing the conversion, while some may not do any scaling. By scaling, we mean e.g. that a uint8 -> float32 would map
the [0, 255] range into [0, 1] (and vice-versa). See Dtype and expected value range.
v2.ToImage() Convert a tensor, ndarray, or PIL Image to Image ; this does not scale values.
v2.ToPureTensor() Convert all TVTensors to pure tensors, removing associated metadata (if any).
v2.PILToTensor() Convert a PIL Image to a tensor of the same type - this does not scale values.
v2.ToPILImage([mode]) Convert a tensor or an ndarray to PIL Image
v2.ToDtype(dtype[, scale]) Converts the input to a specific dtype, optionally scaling the values for images or videos.
v2.ConvertBoundingBoxFormat(format) Convert bounding box coordinates to the given format, eg from "CXCYWH" to "XYXY".
v2.functional.to_image(inpt) See ToImage for details.
v2.functional.pil_to_tensor(pic) Convert a PIL Image to a tensor of the same type.
v2.functional.to_pil_image(pic[, mode]) Convert a tensor or an ndarray to PIL Image.
v2.functional.to_dtype(inpt[, dtype, scale]) See ToDtype() for details.
v2.functional.convert_bounding_box_format(inpt) See ConvertBoundingBoxFormat() for details.
v2.ToTensor() [DEPRECATED] Use v2.Compose([v2.ToImage(), v2.ToDtype(torch.float32, scale=True)]) instead.
v2.functional.to_tensor(inpt) [DEPREACTED] Use to_image() and to_dtype() instead.
v2.ConvertImageDtype([dtype]) [DEPRECATED] Use v2.ToDtype(dtype, scale=True) instead.
v2.functional.convert_image_dtype(image[, dtype]) [DEPRECATED] Use to_dtype() instead.
AutoAugment is a common Data Augmentation technique that can improve the accuracy of Image Classification models. Though the data augmentation policies are directly linked to their trained dataset,
empirical studies show that ImageNet policies provide significant improvements when applied to other datasets. In TorchVision we implemented 3 policies learned on the following datasets: ImageNet,
CIFAR10 and SVHN. The new transform can be used standalone or mixed-and-matched with existing transforms:
v2.AutoAugment([policy, interpolation, fill]) AutoAugment data augmentation method based on "AutoAugment: Learning Augmentation Strategies from Data".
v2.RandAugment([num_ops, magnitude, ...]) RandAugment data augmentation method based on "RandAugment: Practical automated data augmentation with a reduced search space".
v2.TrivialAugmentWide([num_magnitude_bins, ...]) Dataset-independent data-augmentation with TrivialAugment Wide, as described in "TrivialAugment: Tuning-free Yet State-of-the-Art Data Augmentation".
v2.AugMix([severity, mixture_width, ...]) AugMix data augmentation method based on "AugMix: A Simple Data Processing Method to Improve Robustness and Uncertainty".
CutMix - MixUp¶
CutMix and MixUp are special transforms that are meant to be used on batches rather than on individual images, because they are combining pairs of images together. These can be used after the
dataloader (once the samples are batched), or part of a collation function. See How to use CutMix and MixUp for detailed usage examples.
v2.CutMix(*[, alpha, num_classes, labels_getter]) Apply CutMix to the provided batch of images and labels.
v2.MixUp(*[, alpha, num_classes, labels_getter]) Apply MixUp to the provided batch of images and labels.
V1 API Reference¶
Resize(size[, interpolation, max_size, ...]) Resize the input image to the given size.
RandomCrop(size[, padding, pad_if_needed, ...]) Crop the given image at a random location.
RandomResizedCrop(size[, scale, ratio, ...]) Crop a random portion of image and resize it to a given size.
CenterCrop(size) Crops the given image at the center.
FiveCrop(size) Crop the given image into four corners and the central crop.
TenCrop(size[, vertical_flip]) Crop the given image into four corners and the central crop plus the flipped version of these (horizontal flipping is used by default).
Pad(padding[, fill, padding_mode]) Pad the given image on all sides with the given "pad" value.
RandomRotation(degrees[, interpolation, ...]) Rotate the image by angle.
RandomAffine(degrees[, translate, scale, ...]) Random affine transformation of the image keeping center invariant.
RandomPerspective([distortion_scale, p, ...]) Performs a random perspective transformation of the given image with a given probability.
ElasticTransform([alpha, sigma, ...]) Transform a tensor image with elastic transformations.
RandomHorizontalFlip([p]) Horizontally flip the given image randomly with a given probability.
RandomVerticalFlip([p]) Vertically flip the given image randomly with a given probability.
ColorJitter([brightness, contrast, ...]) Randomly change the brightness, contrast, saturation and hue of an image.
Grayscale([num_output_channels]) Convert image to grayscale.
RandomGrayscale([p]) Randomly convert image to grayscale with a probability of p (default 0.1).
GaussianBlur(kernel_size[, sigma]) Blurs image with randomly chosen Gaussian blur.
RandomInvert([p]) Inverts the colors of the given image randomly with a given probability.
RandomPosterize(bits[, p]) Posterize the image randomly with a given probability by reducing the number of bits for each color channel.
RandomSolarize(threshold[, p]) Solarize the image randomly with a given probability by inverting all pixel values above a threshold.
RandomAdjustSharpness(sharpness_factor[, p]) Adjust the sharpness of the image randomly with a given probability.
RandomAutocontrast([p]) Autocontrast the pixels of the given image randomly with a given probability.
RandomEqualize([p]) Equalize the histogram of the given image randomly with a given probability.
Compose(transforms) Composes several transforms together.
RandomApply(transforms[, p]) Apply randomly a list of transformations with a given probability.
RandomChoice(transforms[, p]) Apply single transformation randomly picked from a list.
RandomOrder(transforms) Apply a list of transformations in a random order.
LinearTransformation(transformation_matrix, ...) Transform a tensor image with a square transformation matrix and a mean_vector computed offline.
Normalize(mean, std[, inplace]) Normalize a tensor image with mean and standard deviation.
RandomErasing([p, scale, ratio, value, inplace]) Randomly selects a rectangle region in a torch.Tensor image and erases its pixels.
Lambda(lambd) Apply a user-defined lambda as a transform.
Beware, some of these conversion transforms below will scale the values while performing the conversion, while some may not do any scaling. By scaling, we mean e.g. that a uint8 -> float32 would map
the [0, 255] range into [0, 1] (and vice-versa). See Dtype and expected value range.
ToPILImage([mode]) Convert a tensor or an ndarray to PIL Image
ToTensor() Convert a PIL Image or ndarray to tensor and scale the values accordingly.
PILToTensor() Convert a PIL Image to a tensor of the same type - this does not scale values.
ConvertImageDtype(dtype) Convert a tensor image to the given dtype and scale the values accordingly.
AutoAugment is a common Data Augmentation technique that can improve the accuracy of Image Classification models. Though the data augmentation policies are directly linked to their trained dataset,
empirical studies show that ImageNet policies provide significant improvements when applied to other datasets. In TorchVision we implemented 3 policies learned on the following datasets: ImageNet,
CIFAR10 and SVHN. The new transform can be used standalone or mixed-and-matched with existing transforms:
AutoAugmentPolicy(value) AutoAugment policies learned on different datasets.
AutoAugment([policy, interpolation, fill]) AutoAugment data augmentation method based on "AutoAugment: Learning Augmentation Strategies from Data".
RandAugment([num_ops, magnitude, ...]) RandAugment data augmentation method based on "RandAugment: Practical automated data augmentation with a reduced search space".
TrivialAugmentWide([num_magnitude_bins, ...]) Dataset-independent data-augmentation with TrivialAugment Wide, as described in "TrivialAugment: Tuning-free Yet State-of-the-Art Data Augmentation".
AugMix([severity, mixture_width, ...]) AugMix data augmentation method based on "AugMix: A Simple Data Processing Method to Improve Robustness and Uncertainty".
Functional Transforms¶
adjust_brightness(img, brightness_factor) Adjust brightness of an image.
adjust_contrast(img, contrast_factor) Adjust contrast of an image.
adjust_gamma(img, gamma[, gain]) Perform gamma correction on an image.
adjust_hue(img, hue_factor) Adjust hue of an image.
adjust_saturation(img, saturation_factor) Adjust color saturation of an image.
adjust_sharpness(img, sharpness_factor) Adjust the sharpness of an image.
affine(img, angle, translate, scale, shear) Apply affine transformation on the image keeping image center invariant.
autocontrast(img) Maximize contrast of an image by remapping its pixels per channel so that the lowest becomes black and the lightest becomes white.
center_crop(img, output_size) Crops the given image at the center.
convert_image_dtype(image[, dtype]) Convert a tensor image to the given dtype and scale the values accordingly This function does not support PIL Image.
crop(img, top, left, height, width) Crop the given image at specified location and output size.
equalize(img) Equalize the histogram of an image by applying a non-linear mapping to the input in order to create a uniform distribution of grayscale values in the
erase(img, i, j, h, w, v[, inplace]) Erase the input Tensor Image with given value.
five_crop(img, size) Crop the given image into four corners and the central crop.
gaussian_blur(img, kernel_size[, sigma]) Performs Gaussian blurring on the image by given kernel
get_dimensions(img) Returns the dimensions of an image as [channels, height, width].
get_image_num_channels(img) Returns the number of channels of an image.
get_image_size(img) Returns the size of an image as [width, height].
hflip(img) Horizontally flip the given image.
invert(img) Invert the colors of an RGB/grayscale image.
normalize(tensor, mean, std[, inplace]) Normalize a float tensor image with mean and standard deviation.
pad(img, padding[, fill, padding_mode]) Pad the given image on all sides with the given "pad" value.
perspective(img, startpoints, endpoints[, ...]) Perform perspective transform of the given image.
pil_to_tensor(pic) Convert a PIL Image to a tensor of the same type.
posterize(img, bits) Posterize an image by reducing the number of bits for each color channel.
resize(img, size Resize the input image to the given size.
[, interpolation, max_size, ...])
resized_crop Crop the given image and resize it to desired size.
(img, top, left, height, width, size)
rgb_to_grayscale(img[, num_output_channels]) Convert RGB image to grayscale version of image.
rotate(img, angle Rotate the image by angle.
[, interpolation, expand, ...])
solarize(img, threshold) Solarize an RGB/grayscale image by inverting all pixel values above a threshold.
ten_crop(img, size[, vertical_flip]) Generate ten cropped images from the given image.
to_grayscale(img[, num_output_channels]) Convert PIL image of any mode (RGB, HSV, LAB, etc) to grayscale version of image.
to_pil_image(pic[, mode]) Convert a tensor or an ndarray to PIL Image.
to_tensor(pic) Convert a PIL Image or numpy.ndarray to tensor.
vflip(img) Vertically flip the given image. | {"url":"https://pytorch.org/vision/master/transforms.html","timestamp":"2024-11-06T11:06:31Z","content_type":"text/html","content_length":"158844","record_id":"<urn:uuid:cca70645-25bb-4c9e-b6fe-a7619b44dd82>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00478.warc.gz"} |
OS X
I've spent the last two days trying to recompile gcc on my G4 iBook. It hasn't been pleasant and it hasn't worked. And that might be because I didn't do enough research.
In the process of doing Project Euler problems, I reached one where I needed to solve a system of linear equations. For those of you who don't remember what this looks like, it's something like:
$4x+3y = 10$
$7x-2y = 3$
While this is a reasonably simple example and can be solved by hand, they can get significantly more difficult. The standard way for solving problems in this fashion involves using matrices. (If you
took a linear algebra class, this should look familiar.)
Coding a naive way to do this is not difficult. Cramer's Rule provides a way to find each value by using the determinant of matrices. (Apparently LU decomposition is faster but it looks harder to
implement. I'll go with naive first and then refactor later if I need to.) The determinant of arbitrary-size square matrices can be expressed through the Laplace expansion until eventually 2x2 (or, I
suppose, 3x3) matrices are found that can have their determinant determined mathematically.
In consulting Skiena's The Algorithm Design Manual, I found references to LAPACK, a linear algebra package for FORTRAN. This is not useful to me since I'm using Ruby. But if such a package exists for
FORTRAN, surely one exists for Ruby.
Linalg is a linear algebra package for Ruby. Or, more precisely, it's a Ruby wrapper around LAPACK. As a result, it relies on embedding a FORTRAN compiler into the package. And this is where the pain
OS X comes with LAPACK already. However, it does not come with FORTRAN support. In order to get FORTRAN support, you need to install software or recompile gcc. Failing to do a proper search and not
finding this resource, I decided to compile a new gcc.
gcc 4.3.3 relies on gmp and mpfr. gmp 4.2.1, which, if you don't have it installed, gcc tells you to get from ftp://gcc.gnu.org/pub/gcc/infrastructure/, does not work on OS X. It builds and installs
fine and then yields an obscure linker bug when you try to build mpfr or any other software that tries to include gmp. After an hour of fighting with this, I found out that gmp 4.2.1 was not the
newest version but that 4.2.4 was. I had no issues with gmp 4.2.4.
Then there has been little love compiling gcc. At some point, gcc creates a spec file which is then used to pass the option -macosx-version-min which causes the compiler to fail. Editing the spec
file and removing that lets it continue on before the next pass with a recreated spec file. Fixing that seems to correct the issue again. And then, eventually, there's a point in the build where it
just fails. However, this may have been left over from something else so I tore everything down and restarted the process.
It looks like the issue with the spec file could also be of my own making. If I understand this post correctly, the issue might be that I don't have a new enough version of the developer tools
installed. This may be possible since I think the version I have is the one that came with OS X 10.4 (and is, therefore, at least three years old).
With the information I've found this morning, I wonder if I could restart the process and have it run smoother. (Although the installable gfortran may be the smoothest option.) I just wish I knew
days ago what I know now. | {"url":"https://www.ithiriel.com/taxonomy/term/21","timestamp":"2024-11-01T22:29:30Z","content_type":"application/xhtml+xml","content_length":"22768","record_id":"<urn:uuid:b52069e1-0305-4bda-bf7f-2f322ac88a48>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00627.warc.gz"} |
When you plan a road trip, you use math literally every step of the way. What you may not know, however, is the right math term for what you’re calculating. So today, I’m going to walk you through a
few steps of planning a road trip so you can see how Algebra helps... | {"url":"https://askdrcallahan.com/tag/navigation/","timestamp":"2024-11-02T14:14:17Z","content_type":"text/html","content_length":"311480","record_id":"<urn:uuid:9a19b223-3644-4b49-9250-2de33bdeaaba>","cc-path":"CC-MAIN-2024-46/segments/1730477027714.37/warc/CC-MAIN-20241102133748-20241102163748-00163.warc.gz"} |
Classroom Resources | Investigating Mass and Volume | AACT
« Return to AACT homepage
AACT Member-Only Content
You have to be an AACT member to access this content, but good news: anyone can join!
Investigating Mass and Volume Mark as Favorite (2 Favorites)
LAB in Measurements. Last updated January 29, 2024.
In this lab students will become familiar with collecting and recording quantitative data. They will calculate the volume of a solid cube or rectangle using metric measurements, as well as measure
the volume of a liquid and an irregular shaped object using water displacement. In addition students will have an opportunity to measure and record the mass of an object using a triple beam balance.
Grade Level
Middle and Elementary School
This lab will help prepare your students to meet the performance expectations in the following standards:
• Science and Engineering Practices:
□ Analyzing and Interpreting Data
□ Using Mathematics and Computational Thinking
By the end of this lab, students should be able to
• Accurately measure the mass of an object using a triple beam balance.
• Determine the volume of a liquid using a graduated cylinder.
• Determine the volume of an irregularly shaped object using water displacement.
• Correctly use a metric ruler.
• Calculate the volume of a rectangular object from collected metric measurements.
Chemistry Topics
This lab supports students’ understanding of
• Measurement
• Mass
• Volume
• Water Displacement
• Physical Properties
Teacher Preparation: 15 minutes
Lesson: 50 minutes
Materials (per group)
• Objects to measure: cubes, rocks, dense wood
• Triple Beam Balance
• Ruler
• Graduated cylinder(s)
• Water
• Always wear safety goggles when handling chemicals in the lab.
• Students should wash their hands thoroughly before leaving the lab.
• When students complete the lab, instruct them how to clean up their materials and dispose of any chemicals.
Teacher Notes
• Students should be somewhat familiar with measurement and the metric system prior to this activity.
• This lab activity is best completed in pairs or small groups.
• It might be helpful to demonstrate each type of measurement prior to the lab, particularly a triple beam balance.
• For older students this could be a good lab activity to use as review.
• This lab can be expanded upon by using more objects, or requiring students to also calculate density.
• Be careful when adding the irregular shaped objects to glass graduated cylinders. Objects such as rocks can easily break the glass.
• Make sure the irregular shaped objects that you use can fit inside the graduated cylinder for the water displacement portion of the lab.
• Depending on the age and ability of the students, the conclusion section might be a good question to use as a discussion question rather than a written answer.
• Density Cubes work well for the first part of this activity. Lego blocks are also a good option.
• Pieces of dense wood or rocks are good options for the irregular shaped object in part three. Examples shown in photo below.
For the Student
Pre-lab Questions
1. What is volume? Give an example.
2. How do you calculate the volume of a rectangular solid?
3. How do you calculate the volume of an irregular solid?
4. How do you calculate the volume of a liquid?
5. What is mass? Give an example.
6. How do you determine the mass of an object?
• Objects: cubes, rocks, dense wood
• Triple Beam Balance
• Ruler
• Graduated cylinder(s)
• Water
• Always wear safety goggles when handling chemicals in the lab.
• Students should wash their hands thoroughly before leaving the lab.
• When students complete the lab, instruct them how to clean up their materials and dispose of any chemicals.
• Find the volume of the rectangular solid or cube. Record all necessary data in the table below. Show all work for your calculations.
Measurements Data
Height of the solid cm
Length of the solid cm
Width of the solid cm
Calculate the volume using the formula below:
(show all work)
(H) x (L) x (W) = _____ cm^3
• Find the mass of the rectangular solid or cube using a triple beam balance.
□ Set the scale: slide all of the riders on the beams to “0”, and make sure the pointer “floats” at 0. If it doesn’t use the adjustment knob or ask your teacher for help.
□ To measure the mass of the object, place it on the plate.
□ Begin with the largest slider, move it one increment to see if the scale balances.
□ If the largest slider is too heavy, move it back to zero and move the next largest slider.
□ Keep moving the sliders until the balance lines up (floats!) at the “0” value on the far end of the balance.
□ Record data in the table below.
Measurements Data
Beam #1 result g
Beam #2 result g
Beam #3 result g
Add the results of the 3 beams together g
• Find the volume of an irregularly shaped object using water displacement.
□ Obtain a graduated cylinder. First, make sure the object can fit easily inside of the cylinder, without being restricted. If it cannot, you will need a larger cylinder.
□ Add water to the graduated cylinder. Estimate the amount of water to that it should completely cover the irregularly shaped object when it is added.
Measurements Data
How much water did you add in step b? ml
Carefully add the object to the graduated cylinder. Then, record the new measurement of the water (use the meniscus). ml
Subtract the original volume of water from the final volume of water. ml
What is the volume of the irregular solid? ml
Mass and volume are both considered physical properties. Why are physical properties important? What are some other examples of physical properties? Can all physical properties be measured? | {"url":"https://teachchemistry.org/classroom-resources/investigating-mass-and-volume","timestamp":"2024-11-13T05:35:34Z","content_type":"text/html","content_length":"41530","record_id":"<urn:uuid:27d00c62-4635-4a15-82b0-8df36eeb88ba>","cc-path":"CC-MAIN-2024-46/segments/1730477028326.66/warc/CC-MAIN-20241113040054-20241113070054-00694.warc.gz"} |
Monadic Decomposition in Integer Linear Arithmetic
Monadic decomposability is a notion of variable independence, which asks whether a given formula in a first-order theory is expressible as a Boolean combination of monadic predicates in the theory.
Recently, Veanes et al. showed the usefulness of monadic decomposability in the context of SMT (i.e. the input formula is quantifier-free), and found various interesting applications including string
analysis. However, checking monadic decomposability is undecidable in general. Decidability for certain theories is known (e.g. Presburger Arithmetic, Tarski's Real-Closed Field), but there are very
few results regarding their computational complexity. In this paper, we study monadic decomposability of integer linear arithmetic in the setting of SMT. We show that this decision problem is
coNP-complete and, when monadically decomposable, a formula admits a decomposition of exponential size in the worst case. We provide a new application of our results to string constraint solving with
length constraints. We then extend our results to variadic decomposability, where predicates could admit multiple free variables (in contrast to monadic decomposability). Finally, we give an
application to quantifier elimination in integer linear arithmetic where the variables in a block of quantifiers, if independent, could be eliminated with an exponential (instead of the standard
doubly exponential) blow-up.
Original language English
Title of host publication International Joint Conference on Automated Reasoning
Subtitle of host publication IJCAR 2020
Publisher Springer-Verlag
Pages 122-140
Number of pages 19
Publication status E-pub ahead of print - 24 Jun 2020
International Joint Conference on Automated Reasoning - Paris, France
Event Duration: 29 Jun 2020 → 6 Jul 2020
Conference number: 10
Publication series
Name Lecture Notes in Computer Science
Publisher Springer
Volume 12166
Conference International Joint Conference on Automated Reasoning
Abbreviated title IJCAR
Country/Territory France
City Paris
Period 29/06/20 → 6/07/20
Internet address • https://ijcar2020.org/
• Presburger Arithmetic, Monadic Decomposition, Logic, Theoretical Computer Science | {"url":"https://pure.royalholloway.ac.uk/en/publications/monadic-decomposition-in-integer-linear-arithmetic","timestamp":"2024-11-06T21:39:09Z","content_type":"text/html","content_length":"52808","record_id":"<urn:uuid:d0c7757f-eeca-42bb-befd-6409065b26f8>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00072.warc.gz"} |
infinite sequence in English - dictionary and translation
, a
is an ordered collection of objects in which repetitions are allowed. Like a
, it contains
(also called
, or
). The number of elements (possibly infinite) is called the
of the sequence. Unlike a set, order matters, and exactly the same elements can appear multiple times at different positions in the sequence. Formally, a sequence can be defined as a
whose domain is a convex subset of the set of
, such as , , , or . | {"url":"http://info.babylon.com/onlinebox.cgi?cid=CD566&rt=ol&tid=pop&x=20&y=4&term=infinite%20sequence&tl=English&uil=Hebrew&uris=!!ARV6FUJ2JP","timestamp":"2024-11-14T20:01:35Z","content_type":"text/html","content_length":"6236","record_id":"<urn:uuid:f0c19395-2fd0-469c-aab1-04a18a1e4721>","cc-path":"CC-MAIN-2024-46/segments/1730477395538.95/warc/CC-MAIN-20241114194152-20241114224152-00071.warc.gz"} |
Cite as
Adam Bene Watts, Aram W. Harrow, Gurtej Kanwar, and Anand Natarajan. Algorithms, Bounds, and Strategies for Entangled XOR Games. In 10th Innovations in Theoretical Computer Science Conference (ITCS
2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 124, pp. 10:1-10:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
Copy BibTex To Clipboard
author = {Bene Watts, Adam and Harrow, Aram W. and Kanwar, Gurtej and Natarajan, Anand},
title = {{Algorithms, Bounds, and Strategies for Entangled XOR Games}},
booktitle = {10th Innovations in Theoretical Computer Science Conference (ITCS 2019)},
pages = {10:1--10:18},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-095-8},
ISSN = {1868-8969},
year = {2019},
volume = {124},
editor = {Blum, Avrim},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2019.10},
URN = {urn:nbn:de:0030-drops-101032},
doi = {10.4230/LIPIcs.ITCS.2019.10},
annote = {Keywords: Nonlocal games, XOR Games, Pseudotelepathy games, Multipartite entanglement} | {"url":"https://drops.dagstuhl.de/search/documents?author=Harrow,%20Aram","timestamp":"2024-11-06T06:14:48Z","content_type":"text/html","content_length":"93319","record_id":"<urn:uuid:31568339-b6f4-466b-900f-e0388e0e7644>","cc-path":"CC-MAIN-2024-46/segments/1730477027909.44/warc/CC-MAIN-20241106034659-20241106064659-00069.warc.gz"} |
Here is a collection of difficult math problems with elegant solutions that possess a unique history.
In the summer of 1975, while I was in a Soviet math camp preparing to compete in the International Math Olympiad on behalf of the Soviet Union, my fellow team members and I were approached for help
by Valera Senderov, a math teacher in one of the best Moscow special math schools.
The Mathematics Department of Moscow State University, the most prestigious mathematics school in Russia, had at that time been actively trying to keep Jewish students (and other "undesirables") from
enrolling in the department. One of the methods they used for doing this was to give the unwanted students a different set of problems on their oral exam. These problems were carefully designed to
have elementary solutions (so that the Department could avoid scandals) that were nearly impossible to find. Any student who failed to answer could be easily rejected, so this system was an effective
method of controlling admissions. These kinds of math problems were informally referred to as "coffins". "Coffins" is the literal translation from Russian; in English these problems are sometimes
called "killer" problems.
These problems and their solutions were, of course, kept secret, but Valera Senderov and his friends had managed to collect a list. In 1975, they approached us to solve these problems, so that they
could train the Jewish students in these mathematical ideas. We solved some of them. Here I present some of the "coffin" problems from my archive.
I invite people who faced "killer" problems to send them to me to add to my list.
Problem 31.
A quadrilateral is given in space, such that its edges are tangent to a sphere. Prove that all the points of tangency lie in one plane.
Solution. Begin by observing that each vertex of the quadrilateral is equidistant from the two points of tangency of the edges it is on. With that, put a mass into each vertex of the quadrilateral
that is proportional to the inverse of the distance from that vertex to a point of tangency (of an edge it's in). Then the center of mass for any two neighboring vertices is the point of tangency of
the edge they share. Then the center of mass of all four vertices should lie on the line connecting opposite points of tangency. That means that the two lines connecting the opposite pairs of points
of tangency intersect, so all four of them lie in the plane those two lines define.
Last revised August 2008 | {"url":"https://tanyakhovanova.com/coffins.html","timestamp":"2024-11-08T20:06:06Z","content_type":"application/xhtml+xml","content_length":"7016","record_id":"<urn:uuid:7a013eea-b268-4b3a-a2a6-57c49d7ecb2a>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00299.warc.gz"} |
7 Facts About Quantum Computing You Should Know - The VPN Guru
7 Facts About Quantum Computing You Should Know
Last Updated On: December 3, 2019
No Comments
Have you ever thought that classical computing may not be enough to solve future problems of specific size and complexity? What if there’s a new method to reach higher scientific breakthroughs? The
answer to that is yes, there is. No matter what the subject is, whether it’s technology, medication, or mechanics, Quantum computers have enough computational power to tackle such issues. Stick
around as I’ll be showing you some facts you need to know about Quantum Computing.
7 Facts About Quantum Computing You Should Know
Quantum Computing is the new buzzword in the tech industry. While a few years ago, the concept seemed outlandish, significant strides have been made towards it recently. Now, it is possible that
quantum computing will make an appearance in the world in the next one or two decades. That has caused a fair bit of excitement and some concerns among many industries.
One of the most disturbing ideas associated with quantum computing is financial collapse. The sheer functioning potential of quantum computers makes some believe that they will destroy the financial
market. A similar argument has also emerged for encryption. However, this is more fiction than fact.
Quantum Computing is, in fact, a great powerhouse. To be franker, imagine that the Orion Spaceship (The one that sends astronauts to Mars) has a less powerful computer than your smartphone. That is
the case when you compare Quantum Compters to classical ones. Quantum Computing will soon be the future of technology. Can you predict what a traditional computer will be like 30 years from now?
Well, if you do so, multiply that by thousands and thousands, and maybe, you’ll get a Quantum computer.
It can make significant changes to our lives. While some people say that it has it’s drawbacks; it will not bring about a crisis situation. Here are some essential points you need to understand about
quantum computing to grasp its potential.
Quantum Computing is Fundamentally Different
Quantum computing is an exciting concept just because it differs from traditional computing. While classical computers use machine language, i.e., zeroes and ones to work, quantum computing work with
bits and qubits. The core difference here lies in dimensions. A bit or qubit is a 3d representation of data. So, it has many facets to it just as any object in our ordinary 3d world would. As such,
it can potentially contain much more data than 2D machine language. This inherently makes quantum computing a lot stronger than any tech which has come before.
Quantum Computing Won’t Replace All Traditional Computing
Even though quantum computing has far more power than traditional computing, it is not suited for all tasks. Classical computing is now considered a better option for simple functions like desktop
display, emailing, and simple office functions. Quantum computing is a far more logically oriented form of computing.
As such, it is better for more complex reasoning and logical output. That is what makes them handy tools for encryption. While at first quantum computers could foil all current-gen encryption
protocols, they will also make hackproof protocols later. So, the fears of financial houses and data security companies are practically unfounded.
Quantum Superposition is a GameChanger
Superposition is a unique property of the quantum state where each unit of data can exist in different states simultaneously. That is why quantum computers are so much more powerful. They can look at
a unit of data from different perspectives. Thus, their computing power exceeds that of others. Their power requirements are also significantly lower since they use quantum tunneling to perform their
tasks. To be precise, quantum computers only need about 1/1000th to 1/100th of the power used by traditional computing devices.
They Require Supercooling and Stability
Two of the biggest reasons why quantum computers have not emerged in the market already are their specific needs. Quantum computers can be highly unstable. So, any tiny tremor can upset their
symmetry and call what is known as decoherence. That’s where the established quantum order is destabilized and must be recalibrated for regular functioning. Imagine balancing a coin. What would
happen if any surrounding vibration or even a sound occurs? That will affect its balance, resulting in it collapsing on either heads or tails. Well, that’s precisely how it works with Quantum
Computing, but here, we have 0 and 1.
Further, a quantum computer works up massive amounts of heat through their regular processing. If not given temperatures far below sub-zero degrees, they will go bust. So, anyone who has a quantum
computer must also have a temperature control well below freezing point. At the same time, the room or computer case must be shockproof to avoid any decoherence.
Big Business is Racing Towards Them
All major global business entities from Google to IBM are taking part in the race to produce the first stable quantum computer. In 2017, Google declared that it would create its first quantum
computer in the next five years. It further stated that the computer would have 50 qubits of computing potential. According to John Martinis, a pioneer in quantum technology at Google:
“The chip has been made, and we have been testing it, and we’re seeing if we can get this to work. If we can get this to work, then we can show that the quantum computer can do some powerful
Just for comparison, the most advanced traditional supercomputers only have about 5 to 20 qubits. Soon after, IBM came up with their own declaration. They said that they would begin offering quantum
computers by the next year. Well, that has not happened yet, and the details on the IBM quantum computer project are murky at best.
The World Needs Quantum Computers Urgently
Big data and machine learning have become popular subjects in computer since these days. The reason is that each day, we produce about 2.5 exabytes of data. On average, we can store data in about 5
million computers. So, storing is a big problem, and processing is an even bigger problem.
That is the primary reason why they’re giving quantum computing such a big push. Without these future-gen computers, our science and technology would soon come up against a brick wall. And
considering the increased resource management and environmental concerns faced by the world, quantum computers couldn’t arrive sooner.
The Algorithms are Growing
While tangible quantum computers might take some time to emerge, the science behind them is already here. We have several key algorithms which the earliest working quantum computers will be based on.
We expect that these algorithms will soon be surpassed by others, which provide for more efficient or specialized functioning. Some examples of dedicated quantum computing algorithms are Grover’s
unstructured database querying algorithm and Shor’s large number factoring algorithm.
How Will the World Change in the Face of Quantum Computing?
Whenever the word quantum is associated with anything – quantum tunneling, quantum states, quantum time, people start thinking sci-fi. While that is true to a degree, it takes time to work up to
sci-fi levels. For quantum computing, the time has come to take its place in the world. And the technology will be revolutionary for many fields, including AI.
As artificial intelligence acquires reasoning ability at a quantum level, all science fields may take a literal quantum leap. It is a very promising and exciting consideration, and we are very likely
to gradually enter into a new era of technological prowess in the years to come. Now tell me, were you familiar with this technology? Have you gotten your fair share of Quantum information in this
guide? Share your thoughts and comments below. | {"url":"https://thevpn.guru/quantum-computing-facts/","timestamp":"2024-11-11T01:57:32Z","content_type":"text/html","content_length":"82754","record_id":"<urn:uuid:1c2a3dd3-977a-478b-8ce3-2287d3797a6a>","cc-path":"CC-MAIN-2024-46/segments/1730477028202.29/warc/CC-MAIN-20241110233206-20241111023206-00522.warc.gz"} |
1, 2, 3, or 4?
KVM is the solution. Remote Desktop didn’t work as I expected. There was always a delay and I had a problem connecting to one stingy laptop. I picked up a KVM Extreme Miniview 4 Port KVMP Switch at
Altex on the way home. It was a present for myself for having a really good interview with Objective Advantage today.
By the way, the interview went great! The President interviewed me and his name was also Thomas. When he asked “If you were me, why should I hire you?”, my response was “Because my name is Thomas.”
Haha, then I got down to the nitty gritty. He was a really cool guy and asked lots of stuff about working on teams, managing, design patterns, and testing.
5 thoughts on “1, 2, 3, or 4?”
1. wheres my USB????
2. usb.
3. lemme know if the kvm works well…i was thinking of doing that but saw that some of them dont support high resolutions
4. wheres my usb.
5. Just don’t know how to use remote desktop cuz you’re a dumby! I could’ve showed you but you didn’t let me come over!
This site uses Akismet to reduce spam. Learn how your comment data is processed. | {"url":"https://thomasnguyen.com/2005/10/1-2-3-or-4/","timestamp":"2024-11-03T00:24:15Z","content_type":"text/html","content_length":"29441","record_id":"<urn:uuid:4dcffe9d-12d7-45ac-b684-ffad6c52414e>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00502.warc.gz"} |
A gas is contained in a cylinder with a pressure of 120kPa and is initial volume of 0.66. How much work is done by the gas as it expands at constant pressure to twice its volume, or is compressed to one-third its initial volume? | HIX Tutor
A gas is contained in a cylinder with a pressure of 120kPa and is initial volume of 0.66. How much work is done by the gas as it expands at constant pressure to twice its volume, or is compressed to
one-third its initial volume?
Answer 1
Assume ideality here; you are in one of those "movable-piston" scenarios. Note how it gives you a pressure, an initial volume, a final volume, and notes that it is all at constant pressure.
You are being asked to answer the following regarding the basic equation for expansion/compression work:
#\mathbf(delw = -PdV)#
When you incorporate this, you obtain:
#int delw = color(green)(w) = -int_(V_1)^(V_2) PdV = color(green)(-P(V_2 - V_1))#
We should next think about what it means to compress or expand a gas.
1. The gas expands to twice its initial volume under constant pressure (isobaric) conditions.
#w_"exp" = -int_(V_1)^(V_2) PdV#
#= -P int_("0.66 L")^("1.32 L")dV#
#= -("120000" cancel"Pa" xx ("1 bar")/(10^5 cancel"Pa"))(1.32 - "0.66 L")#
#= -"0.792 L"*"bar"#
Now to convert this into #"J"#. The universal gas constant is a nice way to do so. Remember to cancel out the #"mol"*"K"# (I already omitted those units).
#-"0.792"cancel("L"cdot"bar") xx ("8.314472 J")/("0.083145" cancel("L"*"bar"))#
#color(blue)(w_"exp") ~~# #color(blue)(-"79.1997 J")#
2. This second section is very similar to the first one; just keep in mind that work is now positive because work is being done on the gas rather than letting the gas do its work.
#w_"comp" = -int_(V_1)^(V_2) PdV#
#= -P int_("0.66 L")^("0.22 L")dV#
#= -("120000" cancel"Pa" xx ("1 bar")/(10^5 cancel"Pa"))(0.22 - "0.66 L")#
#= "0.528 L"*"bar"#
#"0.528" cancel("L"cdot"bar") xx ("8.314472 J")/("0.083145" cancel("L"*"bar"))#
#color(blue)(w_"comp") ~~# #color(blue)("52.7998 J")#
Sign up to view the whole answer
By signing up, you agree to our Terms of Service and Privacy Policy
Answer 2
The work done by the gas as it expands at constant pressure to twice its volume, or is compressed to one-third its initial volume, can be calculated using the formula:
[ W = P \cdot \Delta V ]
• ( W ) = work done by the gas
• ( P ) = pressure of the gas
• ( \Delta V ) = change in volume of the gas
For expansion to twice its volume: [ \Delta V = 2V - V = V ]
For compression to one-third its initial volume: [ \Delta V = \frac{1}{3}V - V = -\frac{2}{3}V ]
Using the given pressure ( P = 120 , kPa ) and initial volume ( V = 0.66 , m^3 ), we can calculate the work done:
For expansion: [ W_{expansion} = P \cdot \Delta V = (120 , kPa) \cdot (0.66 , m^3) = 79.2 , kJ ]
For compression: [ W_{compression} = P \cdot \Delta V = (120 , kPa) \cdot \left(-\frac{2}{3} \cdot 0.66 , m^3\right) = -52.8 , kJ ]
Therefore, the work done by the gas as it expands at constant pressure to twice its volume is ( 79.2 , kJ ), and the work done as it is compressed to one-third its initial volume is ( -52.8 , kJ ).
Sign up to view the whole answer
By signing up, you agree to our Terms of Service and Privacy Policy
Answer from HIX Tutor
When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some
When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some
When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some
When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some
Not the question you need?
HIX Tutor
Solve ANY homework problem with a smart AI
• 98% accuracy study help
• Covers math, physics, chemistry, biology, and more
• Step-by-step, in-depth guides
• Readily available 24/7 | {"url":"https://tutor.hix.ai/question/a-gas-is-contained-in-a-cylinder-with-a-pressure-of-120kpa-and-is-initial-volume-8f9af8c0a3","timestamp":"2024-11-03T13:01:06Z","content_type":"text/html","content_length":"588145","record_id":"<urn:uuid:96df5832-06d0-4aeb-9a31-cc6d15156ad3>","cc-path":"CC-MAIN-2024-46/segments/1730477027776.9/warc/CC-MAIN-20241103114942-20241103144942-00074.warc.gz"} |
Report the result of the following calculations to the correct number of significant figures.
a) 0.347 - 0.0732
To find the difference we simply subtract the two numbers;
So we've performed the subtraction now we must determine the answer to the correct number of significant figures. In addition and substraction our answer can have no more digits to the right of the
decimal as the the number with the smallest number of digits to the right of the decimal. The number 0.347 has three digits to the right, while the number 0.0732 has 4 digits to the right. So our
answer can only have 3 digits to the right of the decimal.
So we must round off the result. Since the number '8' is the left most digit to be dropped we will add '1' to the last digit retained. The correct answer is 0.274. | {"url":"https://intro.chem.okstate.edu/1314F00/Lecture/Chapter1B/12400SigFiga.html","timestamp":"2024-11-11T20:50:59Z","content_type":"text/html","content_length":"1645","record_id":"<urn:uuid:ffbcaf6e-5e5e-4c3b-8696-1a7b222496ee>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00697.warc.gz"} |
9.10: Center of Mass (Part 2)
Last updated
Page ID
\( \newcommand{\vecs}[1]{\overset { \scriptstyle \rightharpoonup} {\mathbf{#1}} } \)
\( \newcommand{\vecd}[1]{\overset{-\!-\!\rightharpoonup}{\vphantom{a}\smash {#1}}} \)
\( \newcommand{\id}{\mathrm{id}}\) \( \newcommand{\Span}{\mathrm{span}}\)
( \newcommand{\kernel}{\mathrm{null}\,}\) \( \newcommand{\range}{\mathrm{range}\,}\)
\( \newcommand{\RealPart}{\mathrm{Re}}\) \( \newcommand{\ImaginaryPart}{\mathrm{Im}}\)
\( \newcommand{\Argument}{\mathrm{Arg}}\) \( \newcommand{\norm}[1]{\| #1 \|}\)
\( \newcommand{\inner}[2]{\langle #1, #2 \rangle}\)
\( \newcommand{\Span}{\mathrm{span}}\)
\( \newcommand{\id}{\mathrm{id}}\)
\( \newcommand{\Span}{\mathrm{span}}\)
\( \newcommand{\kernel}{\mathrm{null}\,}\)
\( \newcommand{\range}{\mathrm{range}\,}\)
\( \newcommand{\RealPart}{\mathrm{Re}}\)
\( \newcommand{\ImaginaryPart}{\mathrm{Im}}\)
\( \newcommand{\Argument}{\mathrm{Arg}}\)
\( \newcommand{\norm}[1]{\| #1 \|}\)
\( \newcommand{\inner}[2]{\langle #1, #2 \rangle}\)
\( \newcommand{\Span}{\mathrm{span}}\) \( \newcommand{\AA}{\unicode[.8,0]{x212B}}\)
\( \newcommand{\vectorA}[1]{\vec{#1}} % arrow\)
\( \newcommand{\vectorAt}[1]{\vec{\text{#1}}} % arrow\)
\( \newcommand{\vectorB}[1]{\overset { \scriptstyle \rightharpoonup} {\mathbf{#1}} } \)
\( \newcommand{\vectorC}[1]{\textbf{#1}} \)
\( \newcommand{\vectorD}[1]{\overrightarrow{#1}} \)
\( \newcommand{\vectorDt}[1]{\overrightarrow{\text{#1}}} \)
\( \newcommand{\vectE}[1]{\overset{-\!-\!\rightharpoonup}{\vphantom{a}\smash{\mathbf {#1}}}} \)
\( \newcommand{\vecs}[1]{\overset { \scriptstyle \rightharpoonup} {\mathbf{#1}} } \)
\( \newcommand{\vecd}[1]{\overset{-\!-\!\rightharpoonup}{\vphantom{a}\smash {#1}}} \)
\(\newcommand{\avec}{\mathbf a}\) \(\newcommand{\bvec}{\mathbf b}\) \(\newcommand{\cvec}{\mathbf c}\) \(\newcommand{\dvec}{\mathbf d}\) \(\newcommand{\dtil}{\widetilde{\mathbf d}}\) \(\newcommand{\
evec}{\mathbf e}\) \(\newcommand{\fvec}{\mathbf f}\) \(\newcommand{\nvec}{\mathbf n}\) \(\newcommand{\pvec}{\mathbf p}\) \(\newcommand{\qvec}{\mathbf q}\) \(\newcommand{\svec}{\mathbf s}\) \(\
newcommand{\tvec}{\mathbf t}\) \(\newcommand{\uvec}{\mathbf u}\) \(\newcommand{\vvec}{\mathbf v}\) \(\newcommand{\wvec}{\mathbf w}\) \(\newcommand{\xvec}{\mathbf x}\) \(\newcommand{\yvec}{\mathbf y}
\) \(\newcommand{\zvec}{\mathbf z}\) \(\newcommand{\rvec}{\mathbf r}\) \(\newcommand{\mvec}{\mathbf m}\) \(\newcommand{\zerovec}{\mathbf 0}\) \(\newcommand{\onevec}{\mathbf 1}\) \(\newcommand{\real}
{\mathbb R}\) \(\newcommand{\twovec}[2]{\left[\begin{array}{r}#1 \\ #2 \end{array}\right]}\) \(\newcommand{\ctwovec}[2]{\left[\begin{array}{c}#1 \\ #2 \end{array}\right]}\) \(\newcommand{\threevec}
[3]{\left[\begin{array}{r}#1 \\ #2 \\ #3 \end{array}\right]}\) \(\newcommand{\cthreevec}[3]{\left[\begin{array}{c}#1 \\ #2 \\ #3 \end{array}\right]}\) \(\newcommand{\fourvec}[4]{\left[\begin{array}
{r}#1 \\ #2 \\ #3 \\ #4 \end{array}\right]}\) \(\newcommand{\cfourvec}[4]{\left[\begin{array}{c}#1 \\ #2 \\ #3 \\ #4 \end{array}\right]}\) \(\newcommand{\fivevec}[5]{\left[\begin{array}{r}#1 \\ #2 \\
#3 \\ #4 \\ #5 \\ \end{array}\right]}\) \(\newcommand{\cfivevec}[5]{\left[\begin{array}{c}#1 \\ #2 \\ #3 \\ #4 \\ #5 \\ \end{array}\right]}\) \(\newcommand{\mattwo}[4]{\left[\begin{array}{rr}#1 \amp
#2 \\ #3 \amp #4 \\ \end{array}\right]}\) \(\newcommand{\laspan}[1]{\text{Span}\{#1\}}\) \(\newcommand{\bcal}{\cal B}\) \(\newcommand{\ccal}{\cal C}\) \(\newcommand{\scal}{\cal S}\) \(\newcommand{\
wcal}{\cal W}\) \(\newcommand{\ecal}{\cal E}\) \(\newcommand{\coords}[2]{\left\{#1\right\}_{#2}}\) \(\newcommand{\gray}[1]{\color{gray}{#1}}\) \(\newcommand{\lgray}[1]{\color{lightgray}{#1}}\) \(\
newcommand{\rank}{\operatorname{rank}}\) \(\newcommand{\row}{\text{Row}}\) \(\newcommand{\col}{\text{Col}}\) \(\renewcommand{\row}{\text{Row}}\) \(\newcommand{\nul}{\text{Nul}}\) \(\newcommand{\var}
{\text{Var}}\) \(\newcommand{\corr}{\text{corr}}\) \(\newcommand{\len}[1]{\left|#1\right|}\) \(\newcommand{\bbar}{\overline{\bvec}}\) \(\newcommand{\bhat}{\widehat{\bvec}}\) \(\newcommand{\bperp}{\
bvec^\perp}\) \(\newcommand{\xhat}{\widehat{\xvec}}\) \(\newcommand{\vhat}{\widehat{\vvec}}\) \(\newcommand{\uhat}{\widehat{\uvec}}\) \(\newcommand{\what}{\widehat{\wvec}}\) \(\newcommand{\Sighat}{\
widehat{\Sigma}}\) \(\newcommand{\lt}{<}\) \(\newcommand{\gt}{>}\) \(\newcommand{\amp}{&}\) \(\definecolor{fillinmathshade}{gray}{0.9}\)
Center of Mass of Continuous Objects
If the object in question has its mass distributed uniformly in space, rather than as a collection of discrete particles, then m[j] → dm, and the summation becomes an integral:
\[\vec{r}_{CM} = \frac{1}{M} \int \vec{r} dm \ldotp \label{9.34}\]
In this context, r is a characteristic dimension of the object (the radius of a sphere, the length of a long rod). To generate an integrand that can actually be calculated, you need to express the
differential mass element dm as a function of the mass density of the continuous object, and the dimension r. An example will clarify this.
Find the center of mass of a uniform thin hoop (or ring) of mass \(M\) and radius \(r\).
First, the hoop’s symmetry suggests the center of mass should be at its geometric center. If we define our coordinate system such that the origin is located at the center of the hoop, the integral
should evaluate to zero.
We replace dm with an expression involving the density of the hoop and the radius of the hoop. We then have an expression we can actually integrate. Since the hoop is described as “thin,” we treat it
as a one-dimensional object, neglecting the thickness of the hoop. Therefore, its density is expressed as the number of kilograms of material per meter. Such a density is called a linear mass density
, and is given the symbol \(\lambda\); this is the Greek letter “lambda,” which is the equivalent of the English letter “l” (for “linear”).
Since the hoop is described as uniform, this means that the linear mass density \(\lambda\) is constant. Thus, to get our expression for the differential mass element dm, we multiply \(\lambda\) by a
differential length of the hoop, substitute, and integrate (with appropriate limits for the definite integral).
First, define our coordinate system and the relevant variables (Figure \(\PageIndex{1}\)).
Figure \(\PageIndex{1}\): Finding the center of mass of a uniform hoop. We express the coordinates of a differential piece of the hoop, and then integrate around the hoop.
The center of mass is calculated with Equation \ref{9.34}:
\[\vec{r}_{CM} = \frac{1}{M} \int_{a}^{b} \vec{r} dm \ldotp\]
We have to determine the limits of integration a and b. Expressing \(\vec{r}\) in component form gives us
\[\vec{r}_{CM} = \frac{1}{M} \int_{a}^{b} \big[ (r \cos \theta) \hat{i} + (R \sin \theta) \hat{j} \big] dm \ldotp\]
In the diagram, we highlighted a piece of the hoop that is of differential length ds; it therefore has a differential mass dm = \(\lambda\)ds. Substituting:
\[\vec{r}_{CM} = \frac{1}{M} \int_{a}^{b} \big[ (r \cos \theta) \hat{i} + (R \sin \theta) \hat{j} \big] \lambda ds \ldotp\]
However, the arc length ds subtends a differential angle d\(theta\), so we have
\[ds = rd \theta\]
and thus
\[\vec{r}_{CM} = \frac{1}{M} \int_{a}^{b} \big[ (r \cos \theta) \hat{i} + (R \sin \theta) \hat{j} \big] \lambda rd \theta \ldotp\]
One more step: Since \(\lambda\) is the linear mass density, it is computed by dividing the total mass by the length of the hoop:
\[\lambda = \frac{M}{2 \pi r}\]
giving us
\[\begin{split} \vec{r}_{CM} & = \frac{1}{M} \int_{a}^{b} \big[ (r \cos \theta) \hat{i} + (R \sin \theta) \hat{j} \big] \left(\dfrac{M}{2 \pi r}\right) rd \theta \\ & = \frac{1}{2 \pi} \int_{a}^{b} \
big[ (r \cos \theta) \hat{i} + (R \sin \theta) \hat{j} \big] d \theta \ldotp \end{split}\]
Notice that the variable of integration is now the angle \(\theta\). This tells us that the limits of integration (around the circular hoop) are θ = 0 to \(\theta\) = 2\(\pi\), so a = 0 and b = 2\(\
pi\). Also, for convenience, we separate the integral into the x- and y-components of \(\vec{r}_{CM}\). The final integral expression is
\[\begin{split} \vec{r}_{CM} & = r_{CM,x} \hat{i} + r_{CM,y} \hat{j} \\ & = \Big[ \frac{1}{2 \pi} \int_{0}^{2 \pi} (2 \cos \theta d \theta \Big] \hat{i} + \Big[ \frac{1}{2 \pi} \int_{0}^{2 \pi} (2 \
sin \theta d \theta \Big] \hat{j} \\ & = 0 \hat{i} + 0 \hat{j} = \vec{0} \end{split}\]
as expected.
Center of Mass and Conservation of Momentum
How does all this connect to conservation of momentum?
Suppose you have N objects with masses m[1], m[2], m[3], ...m[N] and initial velocities \(\vec{v}_{1}\), \(\vec{v}_{2}\), \(\vec{v}_{3}\), ..., \(\vec{v}_{N}\). The center of mass of the objects is
\[\vec{r}_{CM} = \frac{1}{M} \sum_{j = 1}^{N} m_{j} \vec{r}_{j} \ldotp\]
Its velocity is
\[\vec{v}_{CM} = \frac{d \vec{r}_{CM}}{dt} = \frac{1}{M} \sum_{j = 1}^{N} m_{j} \frac{d \vec{r}_{j}}{dt} \label{9.35}\]
and thus the initial momentum of the center of mass is
\[\begin{split} \Big[ M \frac{d \vec{r}_{CM}}{dt} \Big]_{i} & = \sum_{j = 1}^{N} m_{j} \frac{d \vec{r}_{j,i}}{dt} \\ M \vec{v}_{CM,i} & = \sum_{j = 1}^{N} m_{j} \vec{v}_{j,i} \ldotp \end{split}\]
After these masses move and interact with each other, the momentum of the center of mass is
\[M \vec{v}_{CM,f} = \sum_{j = 1}^{N} m_{j} \vec{v}_{j,f} \ldotp\]
But conservation of momentum tells us that the right-hand side of both equations must be equal, which says
\[M \vec{v}_{CM,f} = M \vec{v}_{CM,i} \ldotp \label{9.36}\]
This result implies that conservation of momentum is expressed in terms of the center of mass of the system. Notice that as an object moves through space with no net external force acting on it, an
individual particle of the object may accelerate in various directions, with various magnitudes, depending on the net internal force acting on that object at any time. (Remember, it is only the
vector sum of all the internal forces that vanishes, not the internal force on a single particle.) Thus, such a particle’s momentum will not be constant—but the momentum of the entire extended object
will be, in accord with Equation \ref{9.36}.
Equation \ref{9.36} implies another important result: Since M represents the mass of the entire system of particles, it is necessarily constant. (If it isn’t, we don’t have a closed system, so we
can’t expect the system’s momentum to be conserved.) As a result, Equation \ref{9.36} implies that, for a closed system,
\[\vec{v}_{CM,f} = \vec{v}_{CM,i} \ldotp \label{9.37}\]
That is to say, in the absence of an external force, the velocity of the center of mass never changes.
You might be tempted to shrug and say, “Well yes, that’s just Newton’s first law,” but remember that Newton’s first law discusses the constant velocity of a particle, whereas Equation \ref{9.37}
applies to the center of mass of a (possibly vast) collection of interacting particles, and that there may not be any particle at the center of mass at all! So, this really is a remarkable result.
When a fireworks rocket explodes, thousands of glowing fragments fly outward in all directions, and fall to Earth in an elegant and beautiful display (Figure \(\PageIndex{2}\)). Describe what
happens, in terms of conservation of momentum and center of mass.
Figure \(\PageIndex{2}\): These exploding fireworks are a vivid example of conservation of momentum and the motion of the center of mass.
The picture shows radial symmetry about the central points of the explosions; this suggests the idea of center of mass. We can also see the parabolic motion of the glowing particles; this brings to
mind projectile motion ideas.
Initially, the fireworks rocket is launched and flies more or less straight upward; this is the cause of the more-or-less-straight, white trail going high into the sky below the explosion in the
upper-right of the picture (the yellow explosion). This trail is not parabolic because the explosive shell, during its launch phase, is actually a rocket; the impulse applied to it by the ejection of
the burning fuel applies a force on the shell during the rise-time interval. (This is a phenomenon we will study in the next section.) The shell has multiple forces on it; thus, it is not in
free-fall prior to the explosion.
At the instant of the explosion, the thousands of glowing fragments fly outward in a radially symmetrical pattern. The symmetry of the explosion is the result of all the internal forces summing to
zero \((\sum_{j} \vec{f}_{j}^{int} = 0)\); for every internal force, there is another that is equal in magnitude and opposite in direction.
However, as we learned above, these internal forces cannot change the momentum of the center of mass of the (now exploded) shell. Since the rocket force has now vanished, the center of mass of the
shell is now a projectile (the only force on it is gravity), so its trajectory does become parabolic. The two red explosions on the left show the path of their centers of mass at a slightly longer
time after explosion compared to the yellow explosion on the upper right.
In fact, if you look carefully at all three explosions, you can see that the glowing trails are not truly radially symmetric; rather, they are somewhat denser on one side than the other.
Specifically, the yellow explosion and the lower middle explosion are slightly denser on their right sides, and the upper-left explosion is denser on its left side. This is because of the momentum of
their centers of mass; the differing trail densities are due to the momentum each piece of the shell had at the moment of its explosion. The fragment for the explosion on the upper left of the
picture had a momentum that pointed upward and to the left; the middle fragment’s momentum pointed upward and slightly to the right; and the right-side explosion clearly upward and to the right (as
evidenced by the white rocket exhaust trail visible below the yellow explosion).
Finally, each fragment is a projectile on its own, thus tracing out thousands of glowing parabolas.
In the discussion above, we said, “…the center of mass of the shell is now a projectile (the only force on it is gravity)….” This is not quite accurate, for there may not be any mass at all at the
center of mass; in which case, there could not be a force acting on it. This is actually just verbal shorthand for describing the fact that the gravitational forces on all the particles act so that
the center of mass changes position exactly as if all the mass of the shell were always located at the position of the center of mass.
How would the firework display change in deep space, far away from any source of gravity?
You may sometimes hear someone describe an explosion by saying something like, “the fragments of the exploded object always move in a way that makes sure that the center of mass continues to move on
its original trajectory.” This makes it sound as if the process is somewhat magical: how can it be that, in every explosion, it always works out that the fragments move in just the right way so that
the center of mass’ motion is unchanged? Phrased this way, it would be hard to believe no explosion ever does anything differently.
The explanation of this apparently astonishing coincidence is: We defined the center of mass precisely so this is exactly what we would get. Recall that first we defined the momentum of the system:
\[\vec{p}_{CM} = \sum_{j = 1}^{N} \frac{d \vec{p}_{j}}{dt} \ldotp\]
We then concluded that the net external force on the system (if any) changed this momentum:
\[\vec{F} = \frac{d \vec{p}_{CM}}{dt}\]
and then—and here’s the point—we defined an acceleration that would obey Newton’s second law. That is, we demanded that we should be able to write
\[\vec{a} = \frac{\vec{F}}{M}\]
which requires that
\[\vec{a} = \frac{d^{2}}{dt^{2}} \left(\dfrac{1}{M} \sum_{j = 1}^{N} m_{j} \vec{r}_{j}\right) \ldotp\]
where the quantity inside the parentheses is the center of mass of our system. So, it’s not astonishing that the center of mass obeys Newton’s second law; we defined it so that it would. | {"url":"https://phys.libretexts.org/Bookshelves/University_Physics/University_Physics_(OpenStax)/Book%3A_University_Physics_I_-_Mechanics_Sound_Oscillations_and_Waves_(OpenStax)/09%3A_Linear_Momentum_and_Collisions/9.10%3A_Center_of_Mass_(Part_2)","timestamp":"2024-11-09T10:09:58Z","content_type":"text/html","content_length":"147185","record_id":"<urn:uuid:f9c19223-226b-41a1-bdfb-3209e1f5011a>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00283.warc.gz"} |
Working with Data
Pre-Class Readings and Videos
Now that you have a research question, it is time to look at the data. Raw data consist of long lists of numbers and/or labels that are not very informative. Exploratory Data Analysis (EDA) is how we
make sense of the data by converting them from their raw form to a more informative one. In particular, EDA consists of:
• Organizing and summarizing the raw data,
• Discovering important features and patterns in the data and any striking deviations from those patterns, and then
• Interpreting our findings in the context of the problem
We begin EDA by looking at one variable at a time (also known as univariate analysis). In order to convert raw data into useful information we need to summarize and then examine the distribution of
any variables of interest. By distribution of a variable, we mean:
• What values the variable takes, and
• How often the variable takes those values
Statistical Software
When working with data with more than just a few observations and/or variables requires specialized software. The use of syntax (or formal code) in the context of statistical software is a central
skill that we will be teaching you in this course. We believe that it will greatly expand your capacity not only for statistical application but also for engaging in deeper levels of quantitative
reasoning about data.
Writing Your First Program
Empirical research is all about making decisions (the best ones possible with the information at hand). Please watch the video below.
Working with Data
Pre-Class Quiz
After reviewing the material above, there is an optional Quiz 3 you can take on moodle. These quizzes do not count towards your grade – but you may find them helpful in assessing your comprehension
of the material. They also help you acclimate to the types of multiple choice questions you may be asked on an exam.
During Class Tasks
Mini-Assignment 2
Project Component D | {"url":"https://adatawinter.site.wesleyan.edu/schedule-2/working-with-data/","timestamp":"2024-11-11T23:01:25Z","content_type":"text/html","content_length":"49321","record_id":"<urn:uuid:d8d66962-46e7-4347-b3c5-12f2a303259e>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00247.warc.gz"} |
Joint conditional simulation of an iron ore deposit using Minimum or Maximum Autocorrelation Factor transformation
Considering the multivariable deposits that consist of various attributes that are frequently spatially correlated, the uncertainty associated with the grade-tonnage curves is assessed through the
joint conditional simulation techniques. This paper presents the joint simulation of five attributes using the Minimum/Maximum Autocorrelation Factors (MAF). The methodology for joint simulation is
three-fold: (1) MAF is used to transform the attributes to non-correlated factors; (2) the variograms for each MAF are computed and modelled; (3) the independent MAFs are individually simulated and
back-transformed to the original data space. The methodology is demonstrated in an iron ore deposit in Western Australia, where the attributes of an iron ore deposit are successfully decorrelated and
simulated independently. The grade-tonnage curves for each realisation are plotted and compared with the generated one by the kriging estimate. The MAF approach proves itself to be an efficient
method for joint simulation of multivariable deposits.
Publication series
Name Proceedings of the 16th International Association for Mathematical Geosciences - Geostatistical and Geospatial Approaches for the Characterization of Natural Resources in the Environment:
Challenges, Processes and Strategies, IAMG 2014
Other 16th International Association for Mathematical Geosciences - Geostatistical and Geospatial Approaches for the Characterization of Natural Resources in the Environment: Challenges,
Processes and Strategies, IAMG 2014
Country/ India
City New Delhi
Period 10/17/14 → 10/20/14
• Grade-tonnage curves
• Iron ore deposit
• Minimum/Maximum Autocorrelation factors
• Multivariate simulation
ASJC Scopus subject areas
• Mathematics (miscellaneous)
• General Earth and Planetary Sciences
Dive into the research topics of 'Joint conditional simulation of an iron ore deposit using Minimum or Maximum Autocorrelation Factor transformation'. Together they form a unique fingerprint. | {"url":"https://research.nu.edu.kz/en/publications/joint-conditional-simulation-of-an-iron-ore-deposit-using-minimum","timestamp":"2024-11-11T04:14:52Z","content_type":"text/html","content_length":"57263","record_id":"<urn:uuid:321c05f1-d595-4cd1-b600-0ba2812aa00a>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00166.warc.gz"} |
Open-source SPL speeds up intersection calculation of customer groups in bank user profile by 200+ times
Bank X divides its over 100 million customers into thousands of groups, hoping to calculate the intersection of any number of customer groups, filter by dimensions, and also hoping to obtain the
results in seconds. Click to learn: Open-source SPL speeds up intersection calculation of customer groups in bank user profile by 200+ times
Problem description
In the user profile application of bank X, it is required to calculate the intersection of customer groups. There are thousands of customer groups, and the number of customers in each group is
different, varying from hundreds of thousands to over 100 million. In this case, it is required to calculate the common customers of any number (generally 2-10) of customer groups. For example, the
customer group for Didi Chuxing has several million customers, and that for mobile banking has tens of millions of customers, it is required to calculate the number of common customers of the two
In addition, for the intersection calculation results, it is also required to filter by dimensions. For instance, for the common customers of Didi Chuxing and mobile banking, it is required to filter
by dimensions like gender, age range and region to calculate the final number of customers meeting each dimension condition.
In order to analyze by the time dimension, it has to calculate the results of one year each time. Therefore, the historical data of twelve months need to be saved, one set per month. Calculating the
intersection of customers based on detail data of 12 months is called a single task.
Having implemented the intersection calculation of customer groups based on a well-known OLAP Server product on Hadoop, Bank X found the performance is very poor. In this calculation, a virtual
machine cluster with 100 CPUs (cores) was used, and it took around 2 minutes to complete a single task on average. However, the expected performance goal is to complete 10-20 single tasks
concurrently in 10 seconds, and hence the gap is extremely large.
Using the pre-calculating method recommended by the OLAP Server can meet the performance requirements, which works in a way that pre-calculate the intersection of customer groups, save the
calculation results, and filter by the dimensions based on the results. However, for thousands of customer groups, there will be over 10 million of combinations when just calculating the intersection
of any two customer groups. Although it can barely be stored, it is an impossible task to save in advance when calculating the intersection of more customer groups.
Step 1, in-depth understanding of the characteristics of data and calculations, and analysis of performance bottlenecks.
Each dimension value of customer is unique, and each customer can be stored in one record, 100 million customers are 100 million records.
There is no hierarchical relationship between dimensions, and the number of attributes of a dimension is generally several to dozens. For example, the number of attributes of gender dimension is two,
and that of age range dimension is about a dozen. The total number of dimensions involved in filtering is ten to twenty.
A customer belongs to 10 customer groups on average. If the database table is used to store, the customer-customer group table will have over one billion records, see the following figure for the
table structure.
The SQL statement for calculating the intersection of customer groups and filtering by dimensions is simplified as:
select count(*) from (
select count(g.cg) from customer_group g
left join customer_dim d on g.id=d.id
where g.cg in ('18','25')
and d.d2 in ('2','4')
and d.d4 in ('8','10','11')
group by g.id
having count(g.cg)=2
In this SQL statement, there is a JOIN that may tie down the computing performance, but if two large tables are JOINed into a wide table, more than ten times redundancy may occur to the same
dimension value, in that case, the query speed will be further decreased. If using the comma-separated string to store multiple customer group numbers into a field cg, although the dimension value
redundancy can be avoided, it needs to perform the string splitting calculation, and the speed will still be very slow in the case of large data amount. For example, “18, 25, 157” in field cg means
that the customer belongs to three customer groups, it needs to use the substrings to compare when calculating the intersection, and hence the amount of calculation is still very large. It is found,
after some experiments, that in the current technical environment, continuing to use this table structure and such SQL statement can still obtain the best performance. Although this JOIN involves
large tables, it has filter conditions, after filtering, it will become small table in-memory JOIN, and the performance loss is not very serious. The more important operation bottleneck is the IN
condition in filter conditions, which is a computationally inefficient set operation and is related to the number of IN’s enumeration values, the more enumeration values, the worse the performance.
Step 2, determine the optimization scheme.
Complete data needs to be saved every month, but the memory capacity of bank X's standard virtual machine is only 16G, it cannot hold the data of even one month, and hence the all-in-memory
calculation cannot be implemented. In this case, we need to read data from external storage and calculate. To do so, we should first consider reducing the amount of data storage and access. If two
tables are still used, it needs to be read them separately and does association, which will increase the amount of access and calculation. Therefore, we consider merging the two tables into one, with
one row of data per customer, and storing the dimension attributes and the customer groups at the same time. In this way, the total amount of data is equal to the number of customers. In particular,
the dimension attributes should be stored in integers, as the amount of storage and calculation is smaller than that of strings.
There are thousands of customer groups, if they are saved as integers, the amount of data will be huge. However, the attribute of customer group belongs to tag attribute with only two states, yes and
no, which can be saved with just one bit. A small-integer-type binary number represents 16 bits, and each bit can represent one customer group. For the convenience of calculation, we use 15 bits of a
small integer to save customer group tags, and thus 600 small integers can save 9000 customer group tags.
In the data table, we use 600 fields c1 to c600, each of which represents the location of 15 customer groups. 0 represents that it doesn’t belong to this customer group, while 1 represents that it
belongs to this customer group. When calculating the intersection of two customer groups, take at most 2 columns; similarly, when calculating the intersection of n customer groups, take at most n
columns. Since the columnar storage is adopted, when n is less than 10, the amount of reading can be greatly reduced.
The dimension fields d1 to d20 no longer store the corresponding dimension values, but the sequence numbers of dimension value in dimension list. For example, d2 is the age dimension, the age range
of customer 001 is 20-30, and the corresponding enumeration value sequence number in the age dimension list is 3, then set the d1 field of this customer to 3.
When filtering by dimensions, calculate the entered age range condition as a boolean sequence, and the length of sequence is the number of attributes of age range dimension. If the age range
condition is 20-30, set the third member in the sequence to true and the others to false.
While performing the condition filtering to the newly stored file, when traversing the row of customer 001, if the value taken in d1 is 3, and the third element of boolean sequence is true, then
customer 001 meets the filtering conditions.
This algorithm is called the dimension of boolean sequence, and the dimension value of each customer can be saved with only 20 integers in one row of data. The advantage of this algorithm is that
there is no need to judge IN when querying. As mentioned above, the performance of IN is poor and related to the number of enumeration values, while the judgment of dimension of boolean sequence is a
constant time.
According to the new idea, this algorithm is mainly to perform the bitwise calculation to the large columnar storage data table, and do the filter traversing to the dimension of boolean sequence.
There are many filter conditions of AND relationships, involving multiple fields. When traversing, we can first read and compute the fields corresponding to the first few conditions. If they meet
these conditions, read the fields corresponding to subsequent conditions; if not, the subsequent fields are no longer read. Such algorithm, called pre-cursor filtering, can effectively reduce the
amount of data read.
Step 3, select the technical route.
Only very few commercial databases’ SQL can support bit operation, but it does not match the whole technical system; if such databases was used, it would result in a very cumbersome architecture.
Adding UDF to the SQL of the current platform to implement bit operation would make the complexity of code very high. If the optimization was performed based on the current SQL system, it would be
extremely costly.
If the high-level language such as Java or C++ was used, the above algorithms could certainly be implemented; however, just like adding UDF, the code would still be very complex, and it would need
hundreds or even a thousand of lines of code to implement such algorithms. Too large amount of coding will lead to too long project period, hidden trouble with code errors, and it is also difficult
to debug and maintain.
For example, objectification is needed when Java reads data from the hard disk; however, Java is slow to generate objects. If the above-mentioned pre-cursor filtering is not used, many columns need
to be read into memory before judging, resulting in the generation of a lot of useless objects, this will have a great impact on the performance. If coding in Java to implement pre-cursor filtering
algorithm from the very beginning, it takes both time and effort.
The open-source esProc SPL provides support for all the above algorithms, including mechanisms such as the high-performance compression columnar storage, boolean, bitwise calculation, small integer
objects and pre-cursor filtering, which allows us to quickly implement this personalized calculation with less amount of code.
Step 4, execute the optimization scheme. Coding in esProc SPL to combine the dimension attributes of customers in the data with the customer groups they belong to, and store them in esProc
high-performance binary columnar storage files according to the new storage structure. At the beginning of each subsequent month, extract the newly added data, and store them in the same way.
Then write the SPL code for query, convert the input conditions (dimension attributes, and customer groups for calculating intersection) to the format required by boolean dimension and bitwise
calculation, and perform the pre-cursor filtering and counting to the new storage structure.
esProc provides JDBC driver for external application, just like calling the stored procedure of database, the driver allows the front-end applications to call esProc, we can input the parameters and
obtain the query results.
Actual effect
After about two weeks of coding and testing, the actual optimization effect is very obvious. It only takes 4 seconds to execute a query on 12 months of data with a 12-CPU (cores) virtual machine;
while the original 100-CPU (cores) virtual machine needs 120 seconds to execute the same query, thus improving the performance by 250 times (100 CPUs*120 seconds ÷ 12 CPUs*4 seconds). To further
achieve the desired performance goal, that is, perform 10-20 single tasks concurrently within 10 seconds, the required resource can be completely controlled within 100-CPU (cores).
This scheme does not need to pre-calculate, and can query on the detail data directly, and thus it is very flexible. To calculate the intersection of any number of customer groups, the speed will be
the same, thereby completely solving the performance problem of the intersection operation of customer groups.
In terms of development difficulty, SPL has made a lot of encapsulations, provided rich functions, and built-in the basic algorithms required by the above scheme. The SPL code corresponding to the
algorithms mentioned above has only 20 lines:
To solve the performance optimization problem, the most important thing is to design a high-performance computing scheme to effectively reduce the computational complexity, thereby ultimately
increasing the speed. Therefore, on the one hand, we should fully understand the characteristics of calculation and data, and on the other hand, we should have an intimate knowledge of common
high-performance algorithms, only in this way can we design a reasonable optimization scheme according to local conditions. The basic high-performance algorithms used herein can be found at the
course: , where you can find what you are interested in.
Unfortunately, the current mainstream big data systems in the industry are still based on relational databases. Whether it is the traditional MPP or HADOOP system, or some new technologies, they are
all trying to make the programming interface closer to SQL. Being compatible with SQL does make it easier for users to get started. However, SQL, subject to theoretical limitations, cannot implement
most high-performance algorithms, and can only face helplessly without any way to improve as hardware resources are wasted. Therefore, SQL should not be the future of big data computing.
After the optimization scheme is obtained, we also need to use a good programming language to efficiently implement the algorithms. Although the common high-level programming languages can implement
most optimization algorithms, the code is too long and the development efficiency is too low, which will seriously affect the maintainability of the program. In this case, the open-source SPL is a
good choice, because it has enough basic algorithms, and its code is very concise, in addition, SPL also provides a friendly visual debugging mechanism, which can effectively improve development
efficiency and reduce maintenance cost.
Please input Comment content ...
• Hoo • | {"url":"https://c.raqsoft.com/article/1643333049458","timestamp":"2024-11-11T16:45:56Z","content_type":"text/html","content_length":"69796","record_id":"<urn:uuid:a418a455-0d25-476f-b20a-1eda7475469a>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00113.warc.gz"} |
Paper: An evaluation semantics for classical proofs (at LICS 1991)
Chetan R. Murthy
It is shown how to interpret classical proofs as programs in a way that agrees with the well-known treatment of constructive proofs as programs and moreover extends it to give a computational meaning
to proofs claiming the existence of a value satisfying a recursive predicate. The method turns out to be equivalent to H. Friedman's (Lecture Notes in Mathematics, vol.699, p.21-28, 1978) proof by A
-transition of the conservative extension of classical cover constructive arithmetic for II[2]^0 sentences. It is shown that Friedman's result is a proof-theoretic version of a semantics-preserving
CPS-translation from a nonfunctional programming language back to a functional programming language. A sound evaluation semantics for proofs in classical number theory (PA) of such sentences is
presented as a modification of the standard semantics for proofs in constructive number theory (HA). The results soundly extend the proofs-as-programs paradigm to classical logics and to programs
with the control operator, C
author = {Chetan R. Murthy},
title = {An evaluation semantics for classical proofs},
booktitle = {Proceedings of the Sixth Annual IEEE Symp. on Logic in Computer Science, {LICS} 1991},
year = 1991,
editor = {Giles Kahn},
month = {July},
pages = {96--107},
location = {Amsterdam, The Netherlands},
publisher = {IEEE Computer Society Press} | {"url":"http://www.lfcs.inf.ed.ac.uk/events/lics/1991/Murthy-Anevaluationsemanti.html","timestamp":"2024-11-04T14:09:35Z","content_type":"text/html","content_length":"3020","record_id":"<urn:uuid:7a0adb0f-6085-4856-9af2-f042f0e74734>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00803.warc.gz"} |
Trigonometry - A Critical Branch Of Mathematics To Comprehend - Unleashing The Latest In EntertainmentTrigonometry – A Critical Branch Of Mathematics To Comprehend
Trigonometry is one of the crucial fields of mathematics. For a long time, it has been included in the math subject, and it has played an important role not only in solving mathematical problems but
also has played a crucial role in deriving derivations in physics and chemistry. It is a topic that not only requires a lot of practice but also a deep understanding of it is necessary to be good in
Just like trigonometry, another topic of great importance is the unit circle. We are talking about unit circles here because it is related to trigonometry. In this article, we will discuss what
exactly a unit circle is and also discuss trigonometry in detail.
We all are familiar with this shape. It is a two-dimensional shape and is easily found everywhere. Circle has two of the most important properties related to them and they are the area and
circumference of it.
Both these properties have one thing in common, and that is, to find any of them we need to have the value of either the radius or the diameter of the circle. The diameter of a circle can be defined
as the longest distance possible between any two points, lying opposite to each other on the circle. In simple words, one can say that it is the longest chord in a circle. Radius is nothing but half
the value of diameter.
Unit circle
From the name itself we can conclude that in a unit circle something has a value of unity or one, and it is nothing else but the radius of the circle. A circle having a radius equal to one is
referred to as a unit circle. The diameter of a unit circle will be equal to the value of radius one. Unit circles play a huge role in deriving standard angle values of trigonometric ratios. Students
need to learn this concept of circle and unit circle to easily tackle problems related to them.
In this branch of mathematics students are made to study the relationship between the ratios of the various sides of a right-angled triangle and its angles. The ratios that we study in trigonometry
are known as trigonometry ratios, and they are a total of six in numbers. They are called sine, cosine, tangent, cosecant, secant, and cotangent.
These trigonometric ratios are of great importance, and every student should be familiar with the ratio they represent. The sine of a right-angled triangle is given by the ratio of perpendicular of
the triangle to the hypotenuse of the same triangle. Similarly, the other ratio, cosine, represents the ratio of base to the hypotenuse and the tangent of a right-angled triangle is given by the
ratio of perpendicular to that of the base of the triangle.
One might think that we have discussed only three ratios and left the other three but that’s not true. The other three trigonometric ratios can be calculated from the above-mentioned ratios. For
example, cosecant is equal to the reciprocal of sine, secant is reciprocal of cosec and at last, the cotangent of a right-angled triangle is given by the reciprocal of the tangent. In this way, one
can easily calculate all the six ratios of trigonometry.
In the above article, we have discussed two of the highly studied topics: unit circle and trigonometry in detail. Students should try to practice them as much as they can and try to have a good
knowledge of them as they are the basic concepts and will be very useful in higher education. Cuemath is an online platform where students can take help to understand math-related topics. | {"url":"https://theshahab.com/blog/2021/11/18/trigonometry-a-critical-branch-of-mathematics-to-comprehend/","timestamp":"2024-11-15T04:22:08Z","content_type":"text/html","content_length":"175970","record_id":"<urn:uuid:b8d713b1-9c26-4c87-8b4a-541da6743852>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00378.warc.gz"} |
Mathematical Analysis I
In this curricular unit it is intended that students develop their logical reasoning and calculation skills, essential for the learning of other curricular units of their cycle of studies. The main
goals are the learning and consolidation of fundamental knowledge of Differential and Integral Calculus for real functions of a real variable.
General characterization
Responsible teacher
José Maria Nunes de Almeida Gonçalves Gomes
Weekly - Available soon
Total - Available soon
Teaching language
Portuguese. If there are Erasmus students, classes will be taught in English
It is recommended the frequency of Mathematic A in secondary education.
Sydsæter, K, Hammond, P., Essential Mathematics for Economic Analysis, 2nd ed., Prentice Hall, 2006.;
Sarrico, Carlos, Análise Matemática, Leituras e exercícios, Gradiva.;
Azenha, A., Jerónimo, M.A., Elementos de Cálculo Diferencial e Integral em IR e IRn, McGraw-Hill, 1995.
Teaching method
Lectures and practical classes for solving exercises.
Evaluation method
Continuous Assessment (1st period)
The continuous assessment consists of conducting, during the academic semester, 2 tests T[1,] T[2] and a final exam E. The tests have no minimum grade. The exam has minimum grade of 8.5.
The final grade is calculated as follows: 20%T[1]+20%T[2]+60%
If, exceptionally, it is not possible to carry out an assessment in person, there will be the possibility of do the same online followed by an oral exam, but the situation will be analyzed case by
Exam Assessment (only 2nd period)
Final exam (100%) (face-to-face and with minimum score of 9.5).
Subject matter
1. The IR set
1.1 Basic concepts.
1.2 Topological notions.
2. Real functions of one real variable
2.1 Generalities about real functions of one real variable.
2.2 Notion of limit; lateral limts, properties and operations.
2.3 Continuous functions: definition and properties of continuous functions.
2.4Theorems of Bolzano and Weierstrass .
3. Differential Calculus on IR
3.1 Derivative of a function: definition of the tangent line equation.
3.2 One-sided derivatives; differentiability; relationship between differentiability and continuity of a function; derivation rules; derivative of the composite function.
3.3 Fundamental theorems: theorems of Rolle, Lagrange and Cauchy; Cauchy rule; indeterminate forms.
3.4 Derivatives from the higher order; formula of Taylor and MacLaurin.
3.5 Extremes of functions; concavity and inflection points; asymptotes; sketch graph of a function.
4. Integral Calculus in IR
4.1 Antiderivative: definition;
4.2 General methods to compute antiderivatives.
4.3 Integral Calculus: Riemann integral;
4.4 Fundamental theorems of integral calculus;
4.5 Calculation of areas of plane figures.
Programs where the course is taught: | {"url":"https://guia.unl.pt/en/2021/novaims/program/188/course/100008","timestamp":"2024-11-10T21:32:11Z","content_type":"text/html","content_length":"21195","record_id":"<urn:uuid:6eb667b8-fd03-4ede-935b-efdcd89fe89f>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00018.warc.gz"} |
Fibonacci sequence: Recursion, cryptography and the golden ratio
In the world of mathematics, the importance of sequences and series in analysis is well established. Sometimes, it’s hard to find a concrete application for concepts invented (or discovered?) in the
field. The Fibonacci sequence, on the other hand, can be found in many areas of nature, and continues to fascinate researchers thanks to its properties closely linked to the golden ratio.
The Fibonacci sequence takes its name from the 13th-century Italian mathematician Leonardo of Pisa, also known as Fibonacci. Although he didn’t invent the sequence, he introduced it to Europe with
his book “Liber Abaci” (Book of the Abacus or Book of Calculation) in 1202.
In this book, Fibonacci posed and solved a problem involving the growth of an idealized rabbit population, which leads to this sequence of numbers. It’s worth noting that the Fibonacci sequence was
known long before this date in India, where the relationships between the numbers in the sequence were already being studied.
Fibonacci numbers have many applications in nature and art. They can be observed on a variety of scales, from flower petals to galaxy spirals. Flower petals often have Fibonacci numbers – daisies can
have 34, 55 or even 89 petals. And sunflower seeds or pine cones are often arranged in spirals following Fibonacci numbers.
In architecture, the golden ratio, directly linked to the Fibonacci sequence, has been used to create works that are pleasing to the eye thanks to their “perfect” proportions. For example, the
Parthenon in Greece and the Taj Mahal in India both seem to use the golden ratio in their construction.
It’s this universality that makes the Fibonacci sequence so fascinating – a simple sequence of numbers that finds its way through mathematics, nature and art, uniting fields that might seem remote at
first glance. In the next section, we’ll take a closer look at what these numbers are and how we can generate them ourselves using the Python programming language.
Romanesco cabbage hides a secret – count the number of spirals, if you can!
Definition and implementation
Formally, the Fibonacci sequence is a sequence in which each term is equal to the sum of the two preceding elements. Just knowing the first two terms and the recurrence formula is enough to construct
the sequence in its entirety.
$F_0 = 0$, $F_1 = 1$ and $F_n = F_{n-1}+ F_{n-2}$ for $n \geq 2$.
Building a program to provide the nth Fibonacci number is a very good elementary exercise. Before reading on, you can try coding the function yourself!
Here’s a Python function to solve the problem:
def fibonacci(n):
if n in {0, 1}:
return n
return fibonacci(n-1) + fibonacci(n-2)
This initial recursive implementation may be satisfactory, but there is scope for optimizing this code by opting for an iterative approach. Recursion is very restrictive in terms of computation time;
in fact, the time complexity of this function is O(n²). In practice, this means that the function struggles to deliver a result within a satisfactory timeframe when n is large (for example, n = 40).
Here’s a new approach, this time iterative, which produces the same result using a for loop:
def fibonacci(n):
a, b, c= 0, 1, 0
if n == 0:
return 0
for i in range(2, n+1):
c = a + b
a = b
b = c
return b
One way of optimizing this code would be to cache the elements of the sequence as they are introduced, so as to avoid redundant calculations each time the function is called.
In addition, there’s a mathematical formula for calculating the exact term of the sequence without recursion. Called Binet’s formula, it uses the golden ratio :
def fibonacci(n):
sqrt5 = math.sqrt(5)
F_n = int((( (1 + sqrt5) ** n - (1 - sqrt5) ** n ) / ( 2 ** n * sqrt5 )))
return F_n
The Fibonacci sequence is used in a wide variety of fields, such as cryptography and trading. Among other things, it can be used to generate a list of pseudo-random numbers or to create an elementary
encryption system.
In finance, a tool called the Fibonacci retracement level is used to estimate how far an asset could fall before resuming its trend movement. In time series analysis, Fibonacci numbers are used to
determine the optimum number of time periods to use in calculating moving averages. | {"url":"https://datascientest.com/en/fibonacci-sequence-recursion-cryptography-and-the-golden-ratio","timestamp":"2024-11-05T09:46:38Z","content_type":"text/html","content_length":"429102","record_id":"<urn:uuid:ee4a97d1-1d70-4509-ad84-ae7799e06426>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00862.warc.gz"} |
This pattern is an oscillator.
This pattern is periodic with period 15.
This pattern runs in standard life (b3s23).
The population fluctuates between 30 and 62.
This evolutionary sequence works in multiple rules, from b3-ckys23aeiy through to b2n34-aeinw5-e6-ik7cs234cjkqyz5-aiqy6-c.
Pattern RLE
Glider synthesis
#C [[ GRID MAXGRIDSIZE 14 THEME Catagolue ]]
#CSYNTH xp15_2252222522y24ahhhhhha4 costs 6 gliders (true).
#CLL state-numbering golly
x = 46, y = 29, rule = B3/S23
Sample occurrences
There are 11 sample soups in the Catagolue:
Official symmetries
Symmetry Soups Sample soup links
C1 1 •
G1 1 •
Unofficial symmetries
Comments (0)
There are no comments to display.
Please log in to post comments. | {"url":"https://gol.hatsya.co.uk/object/xp15_2252222522y24ahhhhhha4/b3s23","timestamp":"2024-11-07T13:36:11Z","content_type":"text/html","content_length":"9949","record_id":"<urn:uuid:a7dc581b-1119-4fcc-a845-d33c35820e18>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00671.warc.gz"} |
5K Data Analysis
Apr 22, 2018 · 654 words · 4 minutes read
This was my 3rd year running the Big House 5K, an annual run in Ann Arbor, Michigan. Less than a week later I my first 10K in Richmond, Virginia. One thing about running is that it gives you lots of
time to think and one thing that runners tend to think during a race about is their time/pace.
It is easy to look up race results. The results include your time (and pace) and standing overall and by division but it’s not very interesting to just know your ranking. So I decided to analyze the
results from previous races to see what I could learn.
My dataset is results from the Big House 5K 2015-2018, consisting of 20,250 runner/year records (runners may not run in every year). I could have found data from more years and different events, but
this seemed like as good a place to start as any.
Each record includes the runner’s overall rank, name, bib number, time, pace, hometown, age, sex, division, division rank, and year of the event. Here’s a snippet with my data:
As I mentioned, knowing your ranking alone isn’t that useful. For example, is a rank of 886 good? It depends how many other people ran.
Because I have all of the data I can calculate my percent rank for each event. Percent (or percentile) rank is the percentage that are equal to or lower than the value. This gives me a way to
interpret my time in the context of others.
The violin plot below shows how finish times are distributed by each year. The outer shape is the density; inside is a traditional boxplot denoting the first quartile, median, third quartile, and the
range 1.5×IQR (interquartile range) above the third quartile and below the first quartile. I’ve marked my own time as the red points.
This plot is helpful because it shows the distribution of finish times but also conveys where I stand relative to others. My first year I was worse than average (60.2% rank), but then improve
significantly my second year (27.3% rank). This year my time improved by less, but I was in the top 14.8% of all runners!
Despite my relatively high overall percent rank, it would be better to compare my time to runners within my division (males between ages 25-29). This changes my percent rankings to 81.3%, 48.7%, and
29.7% (still not bad).
We can also look at summary statistics for each year (median, IQR, 90th percentile, and N). The median gives us the 50% cutoff - the time to beat to be in the top 50%. The interquartile range (IQR)
is the difference between the third and first quartiles - it contains exactly 50% of the data. The IQR is the window of time in which 50% of runners finish. The 90th percentile is the time that the
top 10% of runners finish under.
The median finish time, IQR, and 90th percentile are similar between years (as we saw in the plot above). However there is some evidence that runners are getting slower. The 2018 median time is more
than 2 minutes slower than the 2015 time and IQR is 4 and a half minutes longer. This would probably meet a traditional statistical significance threshold, simply due to a large sample. Practically,
I don’t think there’s much difference between years.
Wrapping things up, here are some findings from the data:
• I’ve improved a lot since my first 5K, going from below average to above average.
• The distribution of finish times is pretty heavily skewed. Some participants walk a portion or the entire distance.
• To be among the top 10% of runners overall I’d need to finish within 25m 25s (and within 22m 39s for my division). | {"url":"https://ellisvalentiner.com/post/5k-data-analysis/","timestamp":"2024-11-14T11:19:53Z","content_type":"text/html","content_length":"14837","record_id":"<urn:uuid:4a758d33-ce95-40b4-bad3-05880e53bf52>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00433.warc.gz"} |
What are neural networks and how do they work? - Nuventure Blog
In machine learning, an algorithm is only as good as the data it was trained on. Learning from experience is sorta the point of machine learning. Without ML, we would be manually programming a whole
lot of “if” and “if-else” conditions for a computer to understand if an object is a car or a human being. And gradient descent is one of the most popular tools used in machine learning. But to
understand gradient descent, let’s understand how artificial neural networks work.
What are artificial neural networks or ANN?
As you may know, the human brain is made of billions of neurons. The connections between these neurons determine how our brain understands things and how it makes decisions. And artificial neural
networks are an attempt to mimic the neural networks in our brain. And neural networks are the basis for everything from digital assistants to self-driving cars and just about everything in AI.
So to understand ANN, let’s have a look at how neural networks in our brain works.
The human brain is incredible plastic, it learns and changes from experience. While the total number of neurons in the brain remain more or less constant throughout our life, the connections between
them keep changing. New connections may be formed, old connections may degrade or become stronger.
Each of these neurons has different parts with specific functions. But for our purpose, we can picture it as a line segment. The neuron gets signals from one end and sends out signals from the other.
Now it doesn’t fit into our picture of a neuron as a line segment, but a single neuron can receive signals from and send signals to more than one neuron.
So as you can imagine, not all signals will be of equal magnitude. If multiple neurons simultaneously send signals, the magnitude will be higher. And neurons don’t all fire for every signal. They
have a minimum magnitude of the signal that they need to get, called the action potential, for them to fire.
Now one of the thumb rules on how these connections are formed is “neurons that fire together, wire together”. Let’s have a look at the old example of Pavlov’s dog to understand this.
Pavlov’s dog and training
The experiment is familiar. Ivan Pavlov measured the amount of saliva produced by his dog when offered food. Then for a while, before feeding the dog, he’d ring a bell. After a while, he just rang
the bell and measured the saliva production. The dog’s brain has come to associate the bell with food and started producing saliva when Pavlov rang the bell.
Now to simplify things, we can picture a neuron that identifies food, another for producing saliva, and another for identifying the sound of the bell. (In reality, all of these would be complex
neural networks themselves, and not individual neurons). The neuron that identifies food and the one that produces saliva are already connected to each other. So when one fires, the other one fires
too. But in this experiment, the neuron that identifies the sound of the bell was firing too. And it developed a connection with the food signalling neuron.
How neural networks solves a problem
The brain solves all problems in a similar manner. For example, consider a situation when your brain is presented with a picture and is asked to identify if it is a dog. There are many factors that
may indicate whether is a dog. For example, if it has 4 legs, it is likely that it’s a dog, but there are other animals with 4 legs too. Dogs usually have a coat of fur, but not always. A dog has two
eyes, but most animals have two eyes. As you can see, these are all correlations, all of which combined together may help someone come to the conclusion that the picture is that of a dog’s.
All of these factors will have a neuron(once again, this is an oversimplification, in reality, each of these factors may be a neural network themselves), and they will all be connected to the neuron
that finally says “yes this is a dog.” And the connections of all of these neurons to this final neuron will depend on how much the factors influence the decision.
For example, a thick coat of fur shows a higher correlation with a dog than four legs may, so the connection between the corresponding neurons will be stronger than the other pair of neurons. Now
when all of the factors fire together, it will be enough to exceed the action potential of the final neuron, and the judgement is made.
And as we discussed earlier, these connections are formed when they fire together. So when your brain is really young, it just forms these patterns, based on signals that come together at the same
time. There may be some genetic predisposition towards certain kinds of patterns, but that is not very relevant when we try to understand artificial neural networks.
How does artificial neural networks work?
The “tightness” or “strength” of connections between neurons in the brain is represented by, or rather corresponds to the weights in an artificial neural network. Let’s make it simpler.
Just like neurons, artificial neurons are also connected to many other neurons. A single neuron may get inputs from many neurons and may send its output to many others.
And depending on the importance of a factor in the decision, the signals get amplified with the weight associated with it. Consider the example of a neural network that determines if a picture is
that of a dog. Two neurons, one representing that the figure has 4 legs and the other representing that the figure has a thick coat are connected to the neuron that lights up if it’s a dog. The
decision making neurons are, of course, connected to input neurons.
Let’s say that the input neurons representing the four legs give an input, say 1, and the one representing the thick coat gives the same input. Now the input from the thick coat may get multiplied
by, say 4, and the input from the four legs neuron may get multiplied by 2. (Representing that the presence of a thick coat increases the likelihood that the picture is that of a dog than the
presence of 4 legs). So the total input now becomes 6.
Threshold. Or bias
As with the natural neuron, the artificial neuron doesn’t fire every time it gets a signal. That is, taking the above example, the neuron that determines that the pic is indeed that of a dog won’t
fire every time it gets a signal. The total signal it got was 6, now if the threshold for the neuron was 8, the neuron won’t fire (won’t give an output saying it’s a dog).
And when it comes to ANNs, the threshold is referred to as bias. (technically the negative of the threshold value is called the bias. I.e, if the threshold is 4, bias is -4)
So as you can see, three different components come together to decide if a neuron will fire or not, that is, the input from neurons connected to it, weights attached to these inputs, and the bias.
Activation function of a neural network
The activation function is how the output from the neuron is calculated. Keep in mind that some aspects of this don’t correspond exactly to the natural neuron.
Generally, to calculate the activation function, you multiply the weights with the respective inputs and subtract the bias. If x1, x2, x3, … were the inputs, and w1, w2, w3 were the corresponding
weights, and B was the bias, the activation function would be
y = x1w1 + x2w2 + x3w3……. – B
As you can imagine, the value of the function could range from negative infinity to positive infinity. But sometimes this would be restricted to other 1 or 0
Y = x1w1 + x2w2 + x3w3……. – B
y = 1, if Y>0
y = 0, if Y<0
There are other types of activation functions as well which may be used according to the situation.
Training neural networks
A neural network is only as good as its training and the data used to train it. We know that in a natural neuron, the brain learns with experience; it forms new connections, and old connections grow
stronger or weaker as it gains new information. How do we do that in an artificial neural network?
This is what we do with data. If you want to develop a neural network that can tell if there’s a dog in a picture, you’ll need a lot of pictures both with and without dogs in it. This will be the
training data set. So as the input, we’ll feed these pictures to our network. But during training, we’ll also supply the expected output as well. For example, when we feed a picture with a dog as the
input, the output would be “Detected dog.” Or in a more mathematical sense, 1 or 0.
Now training the network means finding out the exact combination of weights and biases associated with every neuron in the network so that the output worked. Of course, when you were feeding an image
as input, it will be the pixel data that would be fed. So essentially, it would work like, you gave a number (or numbers) as input, and you would get another number as output. But with training data,
you know both these numbers, which you’ll be using to find the weights.
Y = x1w1 + x2w2 + x3w3 + ….. B
Now if you know x1, x2, x3,…… and you knew Y, it’s like solving an equation. But since we have way too many variables, the process is complicated. Of course, we can try all the different combinations
available for weights and biases, but that will take a lot of time and computing resources.
But to identify the optimal solution, we have to define how perfect a solution is
That is, we have to know how incorrect the output is, or how far the actual output was from the expected output.
For example, let’s say we’re trying to develop an algorithm that predicts the salary of a person based on their age and experience. And for training, we have the data for 10000 people or so. For
every single example in this data set, we’ll know the input and the output. To optimize the algorithm, we need to know the difference between the actual output, and the output.
This is the performance of a neural network and is defined by a cost function. The goal is to reduce this to a minimum for all the inputs by fine-tuning the weights and biases. The cost function
shows how far the actual output is from the expected output for the entire set of training data. If it is for a single training example from the set, this is called the loss function.
There are different types of cost functions that we can use in different situations.
For example, in the above situation, the output (the salary of an individual) could be just any number (more or less). Or let’s say we’re trying to estimate the distance between the camera and an
object in the pic, or the actual size of a building from a picture. In all these situations, the output could be any number.
In such a situation, the loss function would be the difference between the actual output and the predicted output. (the difference between the actual salary and the predicted salary, or the actual
distance and the predicted distance, or the actual size and the predicted size).
Loss function = actual output – predicted output
And the cost function may be the mean average of all of these errors, for all the data in the training data set. This is called Mean Error or ME. But as you can imagine, some of these errors may be
negative. For example, in the above case, if for a training example the actual salary was 10k USD, but if the network predicted a salary of $9k USD, the error is negative. And when negative errors
come, the mean error may turn out to be zero.
In this situation, either the Mean Squared Error (MSE or the mean of square of all errors) or mean absolute error (mean of absolute errors for all training data) is taken.
But what if the output is just 0 or 1? Or when the output cannot be any number?
This is the situation with classification problems. For example the situation we discussed earlier when a neural network has to judge the presence or absence of a dog in the pic? How do you determine
the performance of a neural network, or how accurate the neural network is when the output is 0 or 1? Or if a neural network has to classify a set of pictures into that of a dog, a cat, and a fish?
In this situation, we use a different type of cost function. For a classification problem, the machine learning model will give the output as a probability distribution for a given input.
Let’s say, for instance, we give a picture as an input, the model will give an output saying, that its 23% likely to be a fish, 57% likely to be a dog, and 20% likely that its a cat (the one with the
highest probability will be taken as the classification, that is dog, in this example).
We express this as
Y = [.23, .57, .20]
Now the expected output would be
Y’ = [0, 1, 0]
And this is how the output for a training example will look like for a picture of a dog.
For a picture of a fish, it will be
Y’ = [1, 0, 0]
And for a picture of a cat, it will be
Y’ = [0, 0, 1]
Now we have the expected output and the actual output.
Now we can calculate loss function as the distance between the two probability distributions.
Be prepared for a bit of matrix math here (it’s simple enough).
To find the loss function, we write the expected probability distribution as
[y1, y2, y3]
In the above example, it will be [0, 1, 0]
And we write the actual probability distribution after taking their logs
In the above training example it will be
[log .23,
log .57,
log .20]
To get the loss function, we multiply these two matrices.
[y1, y2, y3] * [logP1,
The loss = y1*logP1 + y2*logP2 + y3*logP3
Taking the above example, it will be
0*log.23 + 1*log.57 + 0*log.20
As you can imagine, with more classes, this equation will be longer.
The cost function would be the sum of loss functions for all training data.
Now we know if our neural network is accurate, and how inaccurate it is. The next step is to reduce this error and improve the performance of our neural network. And this is where we play around with
the weights and biases to minimize the cost function.
Now as we discussed earlier, we can always try all the different combinations of weights and biases until we hit the right numbers. But as you can imagine this will take infinite time and computing
resources. So we do this bit more systematically.
This is where we use gradient descent.
Picture this scenario: you’re programming a rover to go to the deepest point in a pond, but you don’t know where the bottom of the pond is. The rover has sensors though, which can tell the angle it
is facing, whether it is going down, or if it is level.
Picture three different axes for the pond. The coordinates of the rover according to these axes will keep changing as the rover moves, but at the very bottom, it will stop changing. That’s how the
rover knows it is at the deepest point in the pond.
This is what we do with gradient descent. We don’t know what the values are, but we can see if the steps we take are taking us closer to it, and we know when we have found the values of these
weights. Of course, we’ll have a lot more than 3 weights to figure out in most neural networks.
Now to go with the earlier example, the bottom of the pond is where the error is the minimum. And the steps we make into the pond are the changes we make to the weights. If we are moving in the right
direction, the distance to the bottom, that is the error value, will go down. And we’ll know we’ve reached the minimum error once the error stops changing.
To apply gradient descent, we have to find the cost function. As we discussed earlier, we compute the cost using the training data. Keep in mind that we use the entire training data to obtain the
cost function.
Gradient descent: Some math
Let’s consider a simple example where we have to find just one unknown variable, a weight single neuron. Of course, in an actual neural network, there will be thousands of weights and biases to
figure out.
Let’s say our cost function is
f(x) = x^2 – x + 1
Now let’s picture this as a simple curve on a graph (the function here has a very specific curve, but picture it as just about any curve, maybe shaped like a 2D mountain)
Now, this graph may go up and down, (oversimplification), but for a given value of x, the function will have the lowest value. If you recall, this is a cost function or the error for the neural
network, and x is the weight.
So let’s picture us standing on this graph. Let’s say we’re at x = 9. For this, the value of the function is 73. Of course, we can plot all the values of x and find out the lowest value of the
function. But it’s easier if we know where the slope is towards, and move step by step in that direction (picture the earlier rover-going-down-into-the-pond analogy).
Now the first derivative at a point will give the slope of a function at that point. For the above function, it is
2x – 1
At x = 9, the slope of the above function is
Keep in mind that the above graph is not that of the function we’re discussing, its just a simple graph to show the slope and all.
Now based on this, we adjust the value of x.
So this is where we apply the formula for gradient descent
X1 = x0 – learning rate (gradient)
So x0 is the initial value of x, ie 9. And gradient is the slope of the function (17, at x = 9).
Learning rate
The learning rate is the steps we take to reach the bottom of the pond. The learning rate plays a huge role in machine learning. Here’s how.
Let’s say that the rover takes small steps, checking after every step to see if it has reached the bottom. Every check takes a lot of time, so if the steps are small, it may take a very long time for
the rover to reach the bottom.
But what if the steps are too big? Remember the pond is like a bowl(in this case, not so much, since there’s only one variable), so if the rover reaches the bottom and doesn’t stop, it will keep
ongoing. Now it will have to travel back to reach the bottom. So yeah, the choice of learning rate matters.
Back to the problem
So now we adjust the value of x with
X1 = 9 – (.1 * 17 )
Here we choose .1 as the learning rate
X1 = 7.3
Substituting in the cost function
7.3^2 – 7.3 + 1
= 46.99
As you can see, the error is low, but not zero.
So we repeat the process again
Slope =
2 * 7.3 – 1
Adjusting X
X2 = X1 – .1 * 13.6
X2 = 7.3 – 1.36
= 5.94
5.94^2 – 5.94 + 1
= 30.3436
As you can see the error is going down.
Now there’s a question, why don’t we just wing it? Why bother doing all this calculation to see how much we should change the x, why don’t we take a guess?
Well, when it’s just one variable that might work. Of course, it’s hard to tell a computer to just “wing” it, but still. It can work.
But what if we have more than one variable? What if we have to adjust more than one weight, as is the case with neural networks?
Consider a cost function
f(x,y) = X^2 + Y^2
In this situation, the graph is a bit more complex. We can picture this as a 3-dimensional graph, where X and Y determine the Z values.
Or, we can picture this as a mountain range on a flat XY plane. Now we can divide this into the x and y-axis, and the altitude as the z-axis.
To determine the lowest value of Z, we have to use partial derivatives.
With partial derivative, we consider the slope only with respect to one axis and consider the other variable as a constant.
Now if you bring this back to the picture of mountains on top of an XY plane, and we keep the X-axis constant, we’ll essentially be taking a slice out of the mountain range, like how we cut a cake,
except, no thickness. If we keep this slice of cake on its side on a piece of paper, now we have just one variable. And the slope is just like the slope for the function with a single variable, as we
discussed above.
If you didn’t understand all of the above, no worries
It’s just since now we have two weights, we need two slopes to adjust these weights. And to find these slopes, you find the partial derivatives of the slope. And to find the partial derivative of the
slope, you keep the other variable as a constant.
So if we take the earlier function f(x, y) = x^2 + y^2
Partial derivative with respect to x
d f(x,y)/dx = 2x
And partial derivative with respect to y
d f(x,y)/dy = 2y
Now let’s look at how we’re going to update the weights
We have two weights to update now
So for updating x
x1 = x0 + learning rate * partial derivative with respect to x
And for y
y1 = y0 + learning rate * partial derivative with respect to x
So let’s say that initially, we kept the weights as x = 5 and y = 10
So cost function will be
f(x,y) = x^2 + y^2
= 25 + 100 = 125
Updating x
X1 = 5 – (.1 * 10)
= 5 – 1
= 4
Updating y
Y1 = 10 – (.1 * 20)
= 10 – 2
= 8
Cost = 16 + 64
= 80
Similarly, the process will update the weights until the cost is very low
How about when there are more weights in neural networks?
Well, as we discussed earlier, on average, neural networks may have more than 1000 weights. So how would you calculate them?
Exactly as we did with the two-weight situation.
We randomly assign weights
Calculate the cost function
Update the weights (same formula for the rest of the weights we well
Repeat until the cost is zero or close to zero
Some words before you go
Now you know the importance of choosing the learning rate. Imagine all of these thousands of variables. And remember, the cost function would be a lot more complicated, the training data will have
thousands of instances. So just finding the cost itself would take a lot of computing power. If you have to perform a lot of steps to find the lowest cost, that’s gonna complicate things. Therefore
the choice of learning rate is very important while training neural networks.
Another factor is that what we discussed is just the simplest easiest function. If you consider the earlier example of a rover moving down into a pond or a mountain on an XY plane, they both will
have the deepest point. But they may also have small pits where your rover may get stuck. It’s not the deepest point, but it’s deep compared to its immediate surroundings.
Something similar may happen with our mathematical operation as well. It may look like our cost function won’t go lower by much no matter how many more times we update the weights. And we may decide
to just stop there. But it may not be the lowest value of the cost function. One of the things we can do to prevent this is to randomly choose the weights when you begin. There are other techniques
as well which uses only a part of the training data at a time. | {"url":"https://nuventureconnect.com/blog/2021/12/29/what-are-neural-networks-how-do-they-work-explained-in-detail/","timestamp":"2024-11-11T03:38:17Z","content_type":"text/html","content_length":"117618","record_id":"<urn:uuid:e73034af-9051-4c95-aff0-cb4bdecf87b3>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00610.warc.gz"} |
This week, a conversation flared up on Twitter on whether mathematics can carry human biases, and what such a possibility could even mean.
The spark was a statement by the Committee on Minority Participation in Mathematics of the Mathematical Association of America (MAA), responding to actions the Trump administration has taken to
disparage and de-fund the academic discipline of Critical Race Theory. The committee's statement pointed out that the attack on Critical Race Theory has a potentially chilling effect on all academic
disciplines, including mathematics:
As mathematicians, we notice patterns - this is something we are all trained to do. We bring these Executive actions to our community’s attention for several reasons: we see the pattern of
science being ignored and the pattern of violence against our colleagues that give voice to race and racism. We need to fight against these patterns. As educators, we also recognize the
threatening pattern of banning education and withdrawing education funding to suppress conversations on race and racism, extending from elementary to postsecondary institutions to the workplace
and research spheres.
The MAA tweeted out this statement, highlighting the following quote:
"It is time for all members of our profession to acknowledge that mathematics is created by humans and therefore inherently carries human biases. Until this occurs, our community and our students
cannot reach full potential." -CMPM #MathValues https://t.co/vVBUnXf1TL
— MAA (@maanow) October 3, 2020
The resulting conversation appears to have focused in particular on the idea that "mathematics is created by humans and therefore inherently carries human biases", largely disregarding the rest of
the committee's statement. One biologist in particular felt so provoked by this statement that she felt it should be disqualifying for the whole field:
If you truly believe that math is created by humans, you have no business in math.
The ways that we *describe* math are, no doubt, a social construct, but math itself is the discovery of underlying reality. https://t.co/z5ce8hLCTZ
— Heather E Heying (@HeatherEHeying) October 3, 2020
First off, let me say clearly: Dr. Heying's tweet is reprehensible. No one should be dictating who does or does not have business in math, let alone someone from outside the field. She also seems
completely ignorant of the centuries-old debate on whether mathematics is discovered or invented (most mathematicians feel it's some combination of both). And while I do not know if her comment was
intended to be racist, the fact that she is saying the Committee on Minority Participation in Mathematics has "no business in math" is absolutely racist in its effect. She should apologize
immediately, but instead she is doubling down.
Leaving aside Dr. Heying's offensive remark, the statement itself raises some interesting questions. What could it mean for mathematics to "carry human biases"? I think part of the issue here is that
the word "mathematics" could be understood in several different ways:
1. Mathematics as a collection of relationships (discovered or not) among numbers and other mathematical objects,
2. Mathematics as the human body of knowledge regarding these relationships,
3. Mathematics as a discipline and profession devoted to understanding and describing these relationships
For an example of mathematics in the first sense, let's take the theorem that there are infinitely many primes among the natural numbers. This is one of the most famous results in elementary number
theory, with a number of beautiful proofs dating back to Euclid in ancient Greece. Within the universe of math, such a statement is not contestable. This is the point--and the beauty--of proofs in
mathematics: they reveal truths that are universal, regardless of who discovers or uses them.
Many of those responding to the committee's statement assumed that they were using "mathematics" in this first sense, as if theorems like the inifinitude of primes could carry human bias. But I see
this as an exceedingly ungenerous interpretation, with no support in the rest of their statement. Indeed, the people leaping to this interpretation seem to be all too eager to paint the committee's
statement in the worst possible light, as if any statement calling for greater diversity and inclusion in mathematics is automatically considered suspect.
If "mathematics" is understood in the third sense, as a discipline and profession, then absolutely it can carry human bias. Ronald Fisher, who pioneered the study of statistics, was a notorious
racist and eugenicist, and he was not alone in these views. Moreover, until recent decades, women and minority groups were systematically excluded from studying and practicing higher mathematics.
Because of this systematic exclusion, most of the "great figures" of Western mathematics are white men, and this perception that "math is for white men" becomes self-reinforcing. This is not merely a
historical legacy: nonwhite mathematicians continue to face bias and isolation, and in some cases harassment.
What about the second sense, mathematics as a human body of knowledge? Could this carry bias? Here I think the question is much more nuanced, but the example of negative numbers is instructive. They
first appeared in the Han Dynasty of ancient China (202 BC - 220AD). It has been suggested that the idea of duality in Chinese philosophy made negative numbers more intuitive for them. Indian
mathematicians in the 7th century AD were using negative numbers to represent debts. Yet in Western mathematics, negative numbers were dismissed as absurd and nonsensical until calculus came along in
the 18th century.
I like the example of negatives, because it shows that what gets accepted as legitimate mathematics is indeed a social construct. Cultural biases can come into play when determining which ideas gain
legitimacy, even in the abstract world of pure mathematics. Relationships among numbers are not biased, but our process of understanding and discovering these relationships may be. And I agree with
the committee's statement that understanding how human biases influence our thought--even within the ivory tower of mathematics--is key to achieving greater inclusion and equity for all people.
2019 was a lot of things. But for what I want to say here, 2019 was the year that I realized we might not save ourselves.
Just on its face, 2019 was a terrible year if you care about climate change. Arctic permafrost may have reached a tipping point. Antarctic ice melted at record pace. The Amazon burned. Meanwhile,
carbon emissions continued to rise, and COP-25, the major UN forum for international climate policy, ended with essentially no progress.
But for me personally, 2019 was the year I allowed myself to consider that we might not work it out. Not only will we not stop the first effects of climate change, we might not even stop any of them.
Faced with an existential threat to our entire civilization, we might just drive ourselves right off the fucking cliff.
Surely we will do something to stop it. Consciously or not, this thought had always been in the back of my head when thinking about climate change. Yes, the science looks bleak, the politics look
intractable, and some level of crisis is probably unavoidable. But surely, at some point, human civilization will come together, face the danger ahead, and do something to stop it.
This year, I allowed myself to pluck this voice from the back of my head, hold it to the light, and examine it. Will we do something to stop it?
Well, what does our track record show? Climate change was officially identified by NASA as a severe global threat in 1988. Since then, we've had 31 years of scientific research, policy debates, and
international agreements. Every international scientific and policy-making body recognizes climate change as an urgent and existential threat. And yet emissions have continued to rise, essentially
without pause.
I'm an optimist at heart. I always try to look at things in the best possible light. But at this point, it's starting to look like, if we were going to save ourselves, we would have done it by now.
Surely we will stop it. We might not stop it. What if we don't stop it?
What happens if we don't take drastic action? Here is where I think that the scientific and journalistic institutions have failed to properly communicate the danger. Because the headline numbers—3 or
4 degrees Celsius, 2 meter sea-level rise by 2100—might not sound that bad at first. Why, exactly, are these numbers so scary?
First of all, with a 4°C temperature rise, 74% of the Earth's population would experience deadly heat waves every year. Multi-breadbasket failures are possible, leading to mass famine. As much as 5%
of the world's population could be flooded every year by 2100. These and other catastrophes could lead to as many as a billion climate refugees by 2050.
What would this level of disruption mean for human civilization? With one tenth of the world's population displaced, can nations still maintain their borders or their identities? Can governments
survive if they can't provide food or freshwater to their people? When "natural disasters" turn into commonplace occurrences, will the collective fiction known as "money" retain its value?
Questions like these defy quantitative predictions, but based on these an other considerations, researchers have described increase of 5°C or more as posing "existential threats to the majority of
the population". And while it is probably still possible to avoid this level of warming, doing so would require unprecedented economic transitions and global cooperation—and our track record so far
does not give much reason for optimism.
We might not stop it.
2019 is the year I started to mourn. The year I let myself consider that the civilization we have right now might be—likely will be—the best we will ever get. That our current society—for all its
wonders and flaws—could be revealed as fossil-fueled mirage that collapses before we ever build something better to replace it. That, even if homo sapiens as a species survive, what we know of as
human civilization could go up in smoke, fire, and water.
Of course, the destruction will not be spread evenly, nor fairly. The countries most vulnerable to climate change, such as Bangladesh and Haiti, are among those least responsible for creating it.
Still, there is reason to doubt that the political and economic systems of the West will survive extreme climate change. Already, mass migration from the Middle East and Central America (driven in
part by climate change) have fueled the rise of the Far Right in Europe, Brexit in the UK, and the election of Donald Trump in the US. Currently, the US is holding thousands of these migrants in
concentration camps, forcibly separated from their families. What will happen when migrants swell to 10% of the world's population, compounded with greatly increased fires, flooding, hurricanes,
epidemics, and food shortages? How much strain, exactly, can our political and economic institutions take?
What had you pictured for yourself and your loved ones in 2050? I had hoped to be rounding out my career as a mathematician, with a satisfying record of scientific accomplishment and well-taught
students behind me. I had hoped to be watching my son thrive in the world with at least some of the advantages that had helped me succeed. But now I'm letting myself ask, what if my college, the
university system, the country, the entire economy, are gone by then? What if all we leave the next generation is a command to survive, survive at all costs?
I am not telling you to despair. Despair saps the will to act, and there is too much work to be done. The difference between 2°C vs 3°C, or between 3°C vs 4°C, is so great that we must be out in the
streets causing disruption, fighting for our futures and our lives. We must also join with each other to become resilient, to form networks of preparedness, to help the most vulnerable, and to
strategize how we will adapt to whatever change will come. I am not telling you that we cannot make a difference. I believe we can and we will, and I invite you to join me, and help me, in this
But I also invite you to mourn. We can't truly grasp the urgency for action unless we emotionally grapple with the consequences of inaction. What, in human civilization, will you miss most? What will
you wish we had fought harder to preserve? What imagined future will you be most heartbroken to discard?
I wish you a joyous 2020, but also a mournful one. We must be clear-eyed about what we will lose, if we are to fight to preserve what we can.
Two weeks ago, my friend Andi messaged me about a mathematical game she had invented. She was so excited to share it. She had coded up a "proof of concept" version in html, and had come up with a
mathematical proof about its winning strategies. She was enthusiastic about its potential to make math fun even for non-math people, and full of ideas for next steps.
Then two days ago, I learned that Andi died. It seems that this game is one of the last things she put into the world. Although I didn't know her as well as I might have, her excitement about sharing
this game seems to typify the passion and determination with which she approached all her projects. Andi was an uncompromising advocate for social justice with a poetic eye and a keen sense of humor.
Also, she was a transgender woman; I say this because visibility matters and because I believe she would not want this aspect of her identity to be erased.
The best way I can personally think of to honor Andi's memory is to share her final game with the world. Like any well-designed game, it is easy to play but difficult to master. The rules are
deceptively simple:
1. A large whole number, called the Magic Number, is specified and known to both players (it could be randomly generated by computer, for example). All factors of the Magic Number are listed out,
including 1 and the number itself.
2. Two players take turns choosing factors of the Magic Number. Every time one player chooses a factor, that factor and all multiples of it are crossed out. Once a factor has been crossed out,
neither player can choose it.
3. Whoever chooses 1 loses. In other words, the goal is to eliminate the factors in such a way that the other player is forced to choose 1.
For example, let's say the Magic Number is 12. The factors of 12 are 1, 2, 3, 4, 6, and 12. These are all the numbers that can be chosen.
Say player 1 choses 12 itself. Then 12 is eliminated, so the "board" looks like this:
1 2 3 4 6 [S:12:S]
Now player 2 chooses 3. So 3 and all multiples of 3 are crossed out:
1 2 [S:3:S] 4 [S:6:S] [S:12:S]
Next player 1 chooses 2. So 2 and all multiples of 2 are crossed out:
1 [S:2:S] [S:3:S] [S:4:S] [S:6:S] [S:12:S]
Only the number 1 is left. Player 2 is forced to choose 1, so Player 1 wins.
To visualize what's happening in this game, it helps to draw a diagram like this:
Every time you a player picks a number, that number and all numbers downstream of it are eliminated. (Here "downstream" refers to the direction the arrows are pointing, which is visually upwards.)
So, if 2 is picked, that eliminates 2, 4, 6, and 12.
To mathematicians, a diagram like this is called a lattice. The game-play for a given Magic Number is determined by the structure of the lattice, which in turn is determined by the Magic Number's
prime factorization, as you can see in this lattice for 120.
But enough theory, go ahead and play! Here's a link to the "proof of concept" version that Andi coded up. You play against the computer, who goes first. To try again with a different Magic Number,
click "New Game". You can put in whatever Magic Number you choose, or have the computer randomly pick one.
Did you win? No, you didn't. But don't feel bad: Andi proved that, with optimal play, Player 1 will always win the game.
It's a proof by contradiction. Assume, for the sake of contradiction, that for some particular Magic Number, Player 2 has a winning strategy. In other words, Player 2 has a winning response to any
first move that Player 1 might make. In particular, if Player 1 chooses the Magic Number itself, Player 2 must be able to choose some other number—call it n—which puts them in a winning position. But
then Player 1 could have chosen n as their first move, which would have put Player 1 in this same winning position. This contradicts our assumption that Player 2 has a winning response to any first
move of Player 1. Therefore, by contradiction, Player 1 must win if both sides play perfectly.
The interesting thing about this proof is that it's non-constructive. It says that there exists a winning strategy for Player 1, but gives no indication of what this winning strategy might be!
Andi designed her code to search through all possible game outcomes for a winning one. While this guarantees that the computer always wins, it doesn't give much insight into how one ought to play, or
why certain strategies might work better than others.
There are many interesting open questions here: Can the winning strategy be described concisely? Is there a polynomial-time algorithm to find the winning strategy for a given Magic Number? And can
the game be generalized to other kinds of lattices?
Andi's final gift to the world is a good one. Her code is available on GitHub; please use it and build on it if you are inspired. I hope she is remembered for this and for everything else she put out
into the world.
I'll close with this mathematical meditation, which was one of Andi's last Facebook posts:
The set of rational numbers is continuous, in the sense that between any two distinct rational numbers, there exist more distinct rational numbers. If you only look at the rationals, you'll miss
uncountably many reals. If you insist on defining reals in terms of rationals, you'll need to take rationals to their limits.
Rest in Power.
Donald Trump is poised to become the next US president, despite the fact that Hillary Clinton received over a million more votes than him (and counting). This would mark the second time in sixteen
years, and either the fourth or fifth time in history (depending on how you count) that the Electoral College winner has lost the popular vote.
How is it possible to win the Electoral College but lose the popular vote? The answer lies in a combination of two factors. The first is the winner-take-all nature of the state contests. All states
except for Maine and Nebraska deliver all their electors to the candidate with the plurality of votes. This means that if you win by slim margins in a sufficient set of states, you can lose badly in
all other states and still secure an Electoral College victory.
The second factor is the disproportionate representation of small states. Each state has a number of electors equal to its total number of congresspeople (senators plus representatives). The number
of representatives is roughly proportional to population size, but adding in the two senators per state gives the smaller states more per-capita representation. For example, Wyoming has approximately
7 electors per million elligible voters, while California has 2 per million. So a Wyomingite has more over three times the Electoral College representation of a Californian (calculations here).
So if you want to become president without winning the most votes, your strategy is to aim for narrow victories in a set of smaller states that add up to 270, while ceding the other states to your
opponent. This begs the question: what is the smallest popular vote percentage one could receive while still winning the presidency?
The answer—according to my best calculations—is 22%. You could capture the Electoral College, and become President of the United States, with only 22% of the vote.
I got this number by starting with the states with the most electors per elligible voter (Wyoming, Vermont, Delaware, Alaska, ...). For each of these, I gave 50.1% of the vote to "Team Red", and the
remaining 49.9% to "Team Blue". I continued down the list of states with the most electors per capita, giving 50.1% to Team Red, until the total electoral votes exceeded the 270 needed to win. I then
gave Team Blue 100% of the vote for all other states. It turns out Team Red didn't need New Jersey, so I threw that over to Team Blue as well. The result: Team Blue captures 77.7% of the popular
vote, but Team Red wins the Electoral College vote 270 to 268. You can check my math in this spreadsheet. My answer agrees with a similar calculation done in 2011.
Figure 1: One can capture the Electoral College with only 22.3% of the vote, by receiving 50.1% of the vote in the red states above and 0% in the blue states.
It makes sense that the 22.3% figure is close to one quarter. If all states were equal in both population and electoral votes, one could tie the electoral college with slightly more than one quarter
of the vote, by winning slightly more than half the vote in half the states, while losing the others completely (see below). The fact that one can win the US electoral map with less than 25% is due
to the disproportionate representation of small states.
Figure 2: A hypothetical electoral map of four states with equal populations and electoral votes. Pie charts show the popular votes in each state. One can tie the electoral college with slightly more
than 25% of the vote, by winning narrow majorities in two states and receiving no votes in the other two.
The above calculations assume that there are no third party candidates, and that voter turnout is the same in each state. Dropping these assumptions can lead to even more lopsided possibilities. For
instance, with one third-party candidate, we only need to give Team Red 33.4% in the red states of Figure 1, while Team Blue and the third party each get 33.3%. This leads to an Electoral College win
for Team Red with 14.9% of the vote. Alternatively, suppose that the turnout in the red states of Figure 1 is half that of the blue states. Then Team Red wins with 14.3% of the vote.
Of course, possible is not the same as likely. It would be very unlikely, for instance, for a candidate to receive 50.1% of the vote in Oklahoma but 0% in Texas. What does not seem unlikely, on the
other hand, is that the Electoral College winner loses the popular vote. This has happened in at least 4 out of 58 elections, or 6.8%, which is not that rare of an occurrence. What we need to decide,
as a country, is whether we support an electoral system that does not always align with the majority of votes.
Eight years ago, I had finished my first year of graduate school in math, and I was at a loss as to what to research. My original focus, differential geometry, was a beautiful subject to learn about,
but the open research questions were too abstract and technical to sustain my interest. I wanted something more relevant to the real world, something I could talk to people about.
Looking for new ideas, I took a course in complex systems, run by the New England Complex Systems Institute. The director, Yaneer Bar-Yam, had pioneered a new way of representing structure in a
systems. I was fascinated by this idea but also puzzled. As a mathematician, I wanted to understand the basis of this idea. What assumptions does it rely on? How are its basic concepts defined?
My attempt to answer these questions turned into one of the longest and most demanding projects I’ve worked. After an eight-year collaboration Yaneer and my friend Blake Stacey, we finally have a
preliminary manuscript up on the web. It is currently under review for publication. And to my pleasant surprise, we got a nice write-up in ScienceNews.
So what is this project all about? The idea is that we're using information theory (which I've written about previously) as a tool to represent and quantify the structure of a system.
Before I explain what any of this means, let's consider some motivating examples. Here's a system (call it system A):
You wouldn't really call this a complex system. It has only one component (a ball) that bounces around in a fairly simple way. Since there's not much to see here, let's turn to system B:
Source: Wikimedia Commons
This system has many particles, which bounce around and bump into each other. In one sense, this system is quite complex: it is very difficult to describe or predict its exact state at any given
time. But looking beyond the level of individual particles reveals a kind of simplicity: since the particles behave independently of each other, overall measures such as the average particle velocity
or the rate of collisions are relatively stable. In other words, the individual complexity "averages out", so that on the whole, the system behaves quite simply.
Contrast that to the behavior of system C:
Source: A Bird Ballet by Niels Castillon
This is a murmuration of starlings. The starlings fly in a semi-coordinated, semi-independent way, creating intricate shapes and patterns that you would never observe in systems A and B. This is a
prototypical "complex system"—the kind that has intrigued researchers since the 70's.
It is intuitively clear that systems A, B, and C have entirely different kinds of structure. But it is surprisingly difficult to capture this intuition mathematically. What is the essential
mathematical property of system C that can allow us to differentiate it from A and B?
We try to answer this question using information theory. Information theory was first invented by mathematician Claude Shannon in 1948 to address problems of long-distance communication (e.g. by
telegraph) when some signals may be lost along the way. Shannon's ideas are still used, for example, in the development of cell phone networks. But they also have found applications in physics,
computer science, statistics, and complex systems.
To explain the concept of information, let's look at a system consisting of a single blinking light:
This is one of the simplest systems you could possibly imagine. In fact, we can quantify this simplicity. To describe the state of the system at any given time, you only have to answer one yes/no
question: "Is the light on?"
The amount of information conveyed in one yes/no question is called one bit. "Bit" is short for "binary digit", and is the same unit used to quantify computer memory. In other words, the state of
this light can be described in one binary digit, 0 for OFF and 1 for ON.
Now let's add another light:
Let's say these lights are statistically independent. This means that knowing the state of one doesn't tell you anything about the other. In this case, to identify the state of the system requires
two bits of information—that is, two yes/no questions, one for the first light and one for the second. We can depict this situation with a diagram like this:
The circles are drawn separately, since information describing one of them tells us nothing about what the other is doing. We could say that each of these bits applies at "scale one", since each
describes only a single light bulb.
Here are two lights that behave in a completely different fashion:
Note that the two light bulbs are always either both on or both off. Thus, even though there are two components, the system can still be described by a single bit of information—a single yes/no
question. The answer to this question (e.g. "are they on?") applies to both bulbs at once. The "information diagram" for this system looks like two completely overlapping circles:
We could say that the one bit of information describing this system applies at "scale two", since it describes two light bulbs at once.
A more interesting case occurs between these two extremes:
It's hard to see it, but I've animated these bulbs to be in the same state 3/4 of the time, and the opposite state 1/4 of the time. If I told you the state of the first bulb, you wouldn't completely
know the state of the second, but you could make an educated guess. Specifically, if I told you the first bulb is ON, you could guess that the second is ON too, and you'd be right 75% of the time. So
there is information overlap: Information about the first bulb gives partial information about the second. In fact, we can use Shannon's formulas to actually calculate how much overlap there is:
approximately 0.19 bits. So if you know the state of the first bulb (1 bit), then you also know 0.19 bits about the second bulb—not enough to know its state with certainty, but enough to make a guess
that is 75% accurate. The overlapping information can be depicted like this:
As you can see, 0.19 bits of information apply to both light bulbs at once (scale two), while the remaining 0.81+0.81=1.62 bits apply only to a single bulb (scale one).
In principle, these "information diagrams" (we call them dependency diagrams) exist for any system. Highly ordered systems, like system A above, have lots of overlapping, large-scale information.
Highly disordered systems like B have mostly small-scale, non-overlapping information. The systems that are most interesting to complex-systems researchers, like the starlings in example C, have lots
of partial overlaps, with information distributed over a wide range of scales.
And that's the basic premise of our theory of structure. The structure of a system is captured in the overlaps of information describing different components, and the way information is distributed
across scales. While we take these concepts quite a bit further in our paper, the central idea is right here in these blinking lights.
Thanks for reading!
This post is not about science.
I'm writing this because the horrific news out of Ferguson, Missouri—the killing of an unarmed man and the subsequent assault on the populace and media—has been bringing back memories an experience I
had with the police ten years ago in Chicago.
I should be clear about why I'm choosing to share this. It's not because I think my own problems are particularly deserving of attention in comparison to the violence done to Michael Brown, Eric
Garner, and other recent victims of police violence. In fact, what I experienced was relatively tame in comparison. But that's kind of the point. This incident instantly brought my white privilege
into sharp focus, in a way that has stuck with me ever since. Issues like racial profiling can be somewhat abstract for white people. I hope my story can open a new entry point into these issues for
those who rarely experience them directly.
After college, I joined Teach for America. I was assigned to a high school on the West side of Chicago, where I taught math and coached the chess team. The school and the surrounding neighborhood
were nearly 100% black. (Yep, Chicago is segregated.) It was also a rough neighborhood in the sense that drug dealers and prostitutes operated openly within a block of the high school, and students
talked about gang warfare the way those at other schools might gossip about the Homecoming dance. I was not a great teacher in that environment, but I felt a strong bond with the students—especially
with those on the chess team, who would squeeze into my tiny Civic every month or so to face off against other teams, often from much more affluent suburban schools.
One Saturday, we got back to the West side around 10pm, and I decided to give each of the team members a ride home. After I dropped the last student off, I got back into the car to head home. But as
I tried to start out, there was another car right next to me, blocking me into my parking space. And the driver was looking at me.
I didn't know what they wanted. Maybe they wanted my parking spot. To try to get out of their way, I pulled forward a bit. But they moved in parallel, blocking me in again. We repeated this dance two
or three times. They motioned to me to roll down my window. But seeing as I had no idea who they were, I thought this was probably a bad idea and kept my window shut.
Then the driver and passenger got out, walked in front of my car, pulled guns out, and pointed them at me.
As a child, I frequently had nightmares in which "bad guys" would shoot me with guns. I started to feel like I had slid into one of those nightmares. It didn't feel like reality—it felt like a dream
that was happening to me. I thought maybe I was mistakenly mixed up in a criminal conspiracy, and they were going to kidnap me or worse.
They shouted "PUT THE FUCKING CAR IN PARK!" I complied. Then one of them yanked open my car door and put his gun to my head (literally, it was touching my temple). He shouted "TAKE YOUR FUCKING
SEATBELT OFF", which I did as well as I could given how much I was shaking. He then pulled me out, put me in handcuffs, and bent me over the trunk of their car.
It was at this point that I realized I was probably dealing with the police, rather than some criminal organization. I told them I didn't know they were police. One of them responded "Who else would
be going the wrong way down a motherfucking one way street?"
Ummm, I guess this chain of logic might have occurred to me if I wasn't scared shitless by the fact that strangers were blocking me in and pointing guns at me.
The other one, who still had his gun to my head, said "We don't want to hurt you, we just want to know your source!" I had no idea what they were talking about. I told them that I was a math teacher
at the local high school. His response was "Oh yeah? Well how long have you been doing heroin?" They continued to interrogate me and searched my pockets as I told them about the chess team, the
tournament, and the student I had just dropped off.
After a minute or so, it became clear to them that I was not, in fact, a heroin user. It was remarkable how quickly I shifted in their view from "junkie" to "white do-gooder". Within sixty seconds,
their tone of voice changed, they took me out of cuffs, and their started explaining why they had taken the approach that they did.
Their explanation went like this: The corner where I had dropped of this student was a well-known herion point. White people are so rare in this neighborhood that those who are around after dark are
usually there for the drugs. Transactions often occur in the buyer's car, with the buyer driving the dealer around the block as the deal is made. So I fit the profile of a heroin buyer. When I failed
to stop for them, they escalated by getting out and drawing guns. When I continued to creep my car forward towards them (unintentionally, since I had no idea what I was doing at that point), they
felt they had to escalate further my opening the door and putting a gun to my head.
It almost makes sense, except that they never identified themselves as cops. They were in an unmarked car and never bothered to show me a badge. Because they read me as a heroin junkie, they assumed
I would be familiar with the routine of being pulled over by an unmarked car. Just to emphasize the point: They were quicker to pull their guns on me than to show me any kind of police
The next week, I told the chess team what happened during practice. I'll never forget what one of them said to me next: "Mr. Allen, I'm sorry you had to go through that, but you know what that makes
you? A black man. We go through that shit every day." He then told me about a time the cops made him strip to his underwear and stand outside in the middle of winter for hours, cuffed to a police
car, before they released him without charge. All of my students had stories. They all had stories of the cops treating them as if their time, their dignity, and even their lives were worthless.
I did end up filing a complaint with the Chicago Police Department, but I was unable to ID the officers. I had (and still have) a clear mental picture of one of them, but none of the photos they
showed me matched him. So the case was dropped.
What do I take from this experience? For one thing, some very real anxiety. It still haunts me sometimes when I'm trying to sleep, and I was shaking when typing this out. But I also try to accept it
as an alternate-reality window into something I would never have otherwise experienced. For a brief moment in time, the usual dynamics were reversed: I was profiled for being a white person in an
all-black neighborhood. Because of the color of my skin and the block I was on, the cops read me as a criminal and treated me like one. But only for about a minute. Once they realized I was not a
junkie, my white privilege reasserted itself and suddenly they were there to serve rather than threaten me.
As a white person with financial and educational privilege to boot, I can be reasonably certain that I will not experience such an incident again, unless I choose to return to a situation like urban
teaching in which the usual rules become twisted. But imagine (and I'm talking to white folks here) if you had no choice. Imagine if you could never tell whether the cops—the people who are supposed
to protect you—would arbitrarily read you as a criminal and decide to threaten your life before even explaining who they are or what they want. Imagine how that might change your concept of safety,
the way you present yourself outside, or even your plans for any given evening. That is the reality that my chess team described to me. It is the reality that underlies the headline-grabbing
incidents like Michael Brown, Eric Garner, or Trayvon Martin. It is the reality that millions of people live every day.
My friend Daria Roithmayr alerted me to a working paper of Brian Arthur laying out a vision for a new approach to studying economics. Brian Arthur is one of the pioneers of complex systems thought,
and has devoted his life to understanding what really happens in our economy, and why this behavior is so different from what classical economics predicts.
Classical economics is a theory based on the concept of equilibrium. Equilibrium, in economics, is a state in which everyone is doing the best thing they could possibly do, relative to what everyone
else is doing. And since everyone is doing the best possible thing, no one has incentive to change. So everything stays the same. Forever.
Okay, that doesn't sound much like our actual economy. So why is the equilibrium concept so central to economics? The answer is that equilibria can be calculated. If you make certain simplifying
assumptions about how economic actors behave, you can prove that exactly one equlibrium exists, and you can calculate exactly what every actor is doing in this equilibrium. This allows economics to
make predictions.
These predictions are useful in explaining many broad phenomena—for example, the relationship between supply, demand, and price. But they exclude any possibility of movement or change, and therefore
exclude what is really interesting (and lucrative!) about the economy. Arthur explains it this way:
We could similarly say that in an ocean under the undeniable force of gravity an approximately equilibrium sea level has first-order validity. And this is certainly true. But, as with markets, in
the ocean the interesting things happen not at the equilibrium sea level which is seldom realized, they happen on the surface where ever-present disturbances cause further disturbances. That,
after all, is where the boats are.
T-Pain understands the need for nonequilibrium theories.
The vision of economics that Arthur lays out is based not on equilibrium, but on computation:
A better way forward is to observe that in the economy, current circumstances form the conditions that will determine what comes next. The economy is a system whose elements are constantly
updating their behavior based on the present situation. To state this in another way, formally, we can say that the economy is an ongoing computation—a vast, distributed, massively parallel,
stochastic one. Viewed this way, the economy becomes a system that evolves procedurally in a series of events; it becomes algorithmic.
The part of this essay that was most challenging to me personally was where he talks about the limitations of mathematics:
...the reader may be wondering how the study of such computer-based worlds can qualify as economics, or what relationship this might have to doing theory. My answer is that theory does not
consist of mathematics. Mathematics is a technique, a tool, albeit a sophisticated one. Theory is something different. Theory lies in the discovery, understanding, and explaining of phenomena
present in the world. Mathematics facilitates this—enormously—but then so does computation. Naturally, there is a difference. Working with equations allows us to follow an argument step by step
and reveals conditions a solution must adhere to, whereas computation does not. But computation—and this more than compensates—allows us to see phenomena that equilibrium mathematics does not. It
allows us to rerun results under different conditions, exploring when structures appear and don’t appear, isolating underlying mechanisms, and simplifying again and again to extract the bones of
a phenomenon. Computation in other words is an aid to thought, and it joins earlier aids in economics—algebra, calculus, statistics, topology, stochastic processes—each of which was resisted in
its time.
He later explains the limitations of mathematics with an analogy to biology:
Even now, 150 years after Darwin’s Origin, no one has succeeded in reducing to an equation-based system the process by which novel species are created, form ecologies, and bring into being whole
eras dominated by characteristic species. The reason is that the evolutionary process is based on mechanisms that work in steps and trigger each other, and it continually defines new
categories—new species. Equations do well with changes in number or quantities within given categories, but poorly with the appearance of new categories themselves. Yet we must admit that
evolution’s central mechanisms are deeply understood and form a coherent group of general propositions that match real world observations, so these understandings indeed constitute theory.
Biology then is theoretical but not mathematical; it is process- based, not quantity-based. In a word it is procedural. By this token, a detailed economic theory of formation and change would
also be procedural. It would seek to understand deeply the mechanisms that drive formation in the economy and not necessarily seek to reduce these to equations.
Or, as Stuart Kauffman asked me when I told him about my mathematical biology research, "Can any of your equations predict rabbits fucking?" | {"url":"http://plektix.fieldofscience.com/","timestamp":"2024-11-10T08:55:12Z","content_type":"application/xhtml+xml","content_length":"209663","record_id":"<urn:uuid:6187dd41-a9ab-4105-b1db-d3d84f44b819>","cc-path":"CC-MAIN-2024-46/segments/1730477028179.55/warc/CC-MAIN-20241110072033-20241110102033-00075.warc.gz"} |
How do you solve the equation for y: (2x)/ 5 - (x / 3) = 3? | HIX Tutor
How do you solve the equation for y: #(2x)/ 5 - (x / 3) = 3#?
Answer 1
In this equation you can not solve for #y#, for there is no #y# in it.
But we can surely solve for #x#, which goes as follows :-
#(3×2x -5×x)/15=3#
#:. color(Brown)(x =75)#
Sign up to view the whole answer
By signing up, you agree to our Terms of Service and Privacy Policy
Answer 2
To solve the equation (2x)/5 - (x/3) = 3 for y, first, find a common denominator for the fractions. Then, simplify the expression by combining like terms. Finally, solve for x and substitute the
value of x back into the equation to find y. The solution is y = (15x - 10x)/15, which simplifies to y = (5x)/15 or y = (x)/3.
Sign up to view the whole answer
By signing up, you agree to our Terms of Service and Privacy Policy
Answer from HIX Tutor
When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some
When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some
When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some
When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some
Not the question you need?
HIX Tutor
Solve ANY homework problem with a smart AI
• 98% accuracy study help
• Covers math, physics, chemistry, biology, and more
• Step-by-step, in-depth guides
• Readily available 24/7 | {"url":"https://tutor.hix.ai/question/how-do-you-solve-the-equation-for-y-2x-5-x-3-3-8f9af8f575","timestamp":"2024-11-11T06:56:05Z","content_type":"text/html","content_length":"574527","record_id":"<urn:uuid:17c7bccf-3e5f-44d8-8fe4-1eeda56b8a8e>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00296.warc.gz"} |
HTML Full Text
Tulsi H. Vyas * and Girish N. Patel
Department of Pharmaceutics, Shree S. K. Patel College of Pharmaceutical Education & Research, Ganpat University, Ganpat Vidyanagar, Mehsana-Gozaria Highway, Mehsana, Gujarat, India.
ABSTRACT: Artificial Intelligence is the simulation of human intelligence. From delivering simple groceries to doorsteps to solving the toughest task in scientists’ lab, it is surrounding human life
in all the means. So how can the Pharma industry be untouched in the case of AI?! Artificial Neural Network (ANN) is a type of AI used to solve non-linear problems and predict the output for given
input parameters from the training values. In this research work, such generalized ANN is developed to predict drug release from the sustained-release monolithic tablet. It is trained by the
backpropagation method under supervised learning. This developed model is evaluated on the basis of RMSE, similarity and dissimilarity factors and can predict the output with the best achieved
average error ~0.0095 and R^2 0.9953. Such ANNs can be the best combination of experience and intelligence, which can eliminate tedious lab works that can be cost-effective and time-effective.
Keywords: Artificial Neural Network (ANN), backpropagation method, Supervised Learning, Input Feature Selection, Monolithic Tablet, RMSE
INTRODUCTION: Artificial Intelligence is an area of computer science dedicated to improvise and simplify our routine and difficult life hacks as well as definitely bringing some revolutionary changes
in various fields. AI has entered to almost all fields, so Pharma field ^1. Continuous development of new pharmaceutical formulation besides regular troubleshooting in the existed formulation is a
very crucial task for pharmaceutical industries. The performance of pharmaceutical products relies upon multiple factors, and it is not possible to predict product performance in complex formulation
One has to rely on empirical outcomes to understand the product performance along with experience of decades to select appropriate ingredients along with processing conditions to, even, find starting
step of right pathway to develop successful formulation. Traditionally, formulators use empirical method or statistical methods. However, such statistical methods can help in case of screening only
and can mislead in the case of complex formulation development. For example, in case of numbers of formulation affecting factors more than five, very profound numbers of experiments are required to
be performed ^2, 3, 4. So it becomes important to work in a smarter way by combining experience of ages and today’s smart technology. Even ANN is becoming very handy in current pandemic condition
from recognizing pattern of virus spread to predicting COVID report by pattern recognition and also effect of this pandemic on economy of world ^5, 6, 7.
In such cases AI can be a helping hand. By using AI some models can be developed which actually mimic the biological brain and such models are called Artificial Neural Networks (ANN). They simulate
the brain, learn, solve problems and draw conclusions. According to Dr. Robert Hecht-Nielsen, the inventor of one of the first neurocomputers “ANN is a computing system made up of number of simple,
highly interconnected processing elements, which process information by their dynamic state response to external output” ^8.^ In Pharma field ANN model can be used at various stages of formulation
and development of controlled release matrix tablet-like optimization of formulations and manufacturing processes ^9, 10, 11.
What is ANN?? There are problem categories that cannot be formulated as an algorithm. Problems that depend on many subtle factors, for example the purchase price of a real estate which our brain can
(approximately) calculate. Without an algorithm a computer cannot do the same. Therefore the question to be asked is: How do we learn to explore such problems? Exactly – we learn; a capability
computer does not have. Humans have a brain that can learn. Computers have some processing units and memory. They allow the computer to perform the most complex numerical calculations in a very short
time, but they are not adaptive all the times ^12.
Artificial neural networks (ANNs) technology is a group of computer methods for modelling and pattern recognition, functioning similarly to the neurons of the brain. It is a computational system
inspired by the Structure Processing Method Learning Ability of a biological brain. In the brain, inputs are received by biological neurons from external resources, they are combined (performing a
non-linear operation) and then a decision is made based on the final results. There are many types of neural networks exists but all are having same basic principle i.e. to receive input, process
them and execute the output ^13. ANNs are a type of “mathematical model” that simulates the biological nervous system and draws on analogues of adaptive biological neurons. Table 1 shows
terminologies comparison between Biological Neural Network (BNN) and artificial Neural Network (ANN). A major advantage of ANNs compared to statistical modelling is that they do not require rigidly
structured experimental designs and can map functions using historical or incomplete data.
Biological Terminology Artificial Neural Network Terminology
Neuron Node/ Unit/ Neuron
Synapse Connection/ Edge/ Link
Synaptic Efficiency Connection Strength/ Weight
Firing Frequency Node Output
ANNs are known to be a powerful tool to simulate various non-linear systems and have been applied to numerous problems of considerable complexity in many fields, including engineering, psychology,
medicinal chemistry, and pharmaceutical research. They are good recognizers of patterns and robust classifiers, with the ability to generate when making decision based on imprecise input data ^17,
General Applications of ANN: ^15, 16, 18, 19
1. Pattern Classification Applications
• Speech Recognition and Speech Synthesis
• Classification of radar/sonar signals
• Remote Sensing and image classification
• Handwritten character/digits Recognition
• ECG/EEG/EMG Filtering/Classification
• Credit card application screening
• Data mining, Information retrieval
2. Control, Time series, Estimation
• Machine Control/Robot manipulation
• Financial/ Scientific/ Engineering Time series forecasting.
• Inverse modelling of vocal tract
3. Optimization
• Travelling sales person
• Multiprocessor scheduling and task assignment
4. Real World Application Examples
• Real Estate appraisal
• Credit scoring
• Geochemical modelling
• Hospital patient stay length prediction
• Breast cancer cell image classification
• Jury summoning prediction
• Precision direct mailing
• Natural gas price prediction
• In drug discovery: Quantitative Structure-Activity Relationship (QSAR), Quantitative Structure Toxicity Relationship (QSTR), Virtual Screening (VS)
Applications of ANN in Pharmaceutical Product and Process Development: ^20, 21, 22
• In the modeling and optimization of pharmaceutical formulations
• In minimization of the capping tendency of tableting process optimization.
• In the prediction of the in-vitro permeability of drugs
• Optimizing emulsion formulation
• Determination of factors controlling the particle size of nanoparticle.
• ANN in tablet manufacturing.
• Investigation of the effects of process variables on derived properties of spray-dried solid dispersion.
• Quantitative structure Property relationship and Molecular Modeling.
• Molecular de novo design and combinatorial libraries.
• Validation of pharmaceutical processes.
• Modeling the response surface in HPLC
• Structure Retention Relationships in Chromatography.
Artificial Neural Network Structure: As biologically inspired computational model, ANN is capable of simulating neurological processing ability of the human brain. An average human brain contains
about 100 billion neurons, with each neuron being connected with 1000-10,000 connections to others ^23.
A single neuron consists of three major parts Fig. 1
FIG. 1: A BIOLOGICAL AND AN ARTIFICIAL NEURON. (Via https://www.quora.com/What-is-the-differences-between-artificial-neural-network-computer-science-and-biological-neural-network)
• Dendrites (fine branched out threads)- carrying signals into the cell
• The cell body- receiving and processing the information
• The axon (a single longer extension) - carries the signal away and relays it to the dendrites of the next neuron or receptor of a target cell. The signals are conducted in an all-or-none fashion
through the cells.
The arrangement of neurons to form layers and the connection pattern formed within and between layers is called network architecture.
Simulating BNN there are 3 layers in ANN which are as follows ^24
• Input Layer: It contains those units (artificial neuron) which receive input from the outside world on which network will learn, recognize, or otherwise process.
• Hidden layer: These units are in between the input and output layer. The job of the hidden layer is to transform the input into something that the output unit can use in some way. The hidden
layer may be different for different types of networks.
• Output layer: It contains units that respond to the information about how it learns any task. An output layer depends on the outcome of the problem. The hidden layer then links to an output layer
receives connections from hidden layers. It returns an output value that corresponds to the prediction of the response variable. The active nodes of the output layer combine and change the data
to produce the output values. Fig. 2 shows the basic architecture of ANN.
FIG. 2: ARCHITECTURE OF ANN
Weight and Activation Function: Weight is a parameter of the network that transforms input data within the hidden layer. The training mode of model begins with arbitrary values of the weights - they
might be random numbers – and proceeds iteratively. Each iteration of the complete training set is called an epoch. In each single epoch the network adjusts the weights in the direction so that to
reduces the error. As the iterative process of incremental adjustment continues, the weights gradually converge to the locally optimal set of values. Many epochs are usually required before training
is completed. So, generally, weights are parameters selected by the network itself to reduce error while learning. Activation functions are mathematical equations that determine the output of a
neural network. This function is attached to each neuron in the network and determines if it should be “fired” or not depending on whether neuron’s input is relevant for model’s prediction. There are
different types of activation functions like Sigmoid, Hyperbolic Tangent, Softmax, Softsign, Rectified Linear Unit, Exponential Linear Unit, etc. Types of ANN models can be classified into various
categories based on different parameters ^25, 26, 27, which are shown in Table 2:
Parameter Types
Prediction Neural Network / Nonadaptive Network
Based on their function Clustering Neural Network / Feature Extracting Network
Association Neural Network
Based on nature of Fixed, Adaptive
Based on learning Feed forward, Recurrent
Based on Memory unit Static, Dynamic
Based on development of Single layer, Multi-Layer
Miscellaneous Hopfield network ,Stochastic neural network ,Modular neural network, Radial basis function neural network, Kohonen self-organizing neural network, Convolutional neural
networks, Boltzmann machine network, Long Short-Term Memory Units (LSTMs)
How does a Model “learn”? The learning process is human intelligence. This ability permits us to acquire various skills and expertise in numerous fields with reference to changing environments.
Our reactions rather say outputs in different -different conditions are totally based on some previous experiences or inputs.
So, implementing these learning capabilities in machines and predicting the outputs by them is the central goal of Artificial intelligence. Based on the topology, the connection of ANN could be
feedback and feed-forward ^28, 29.
Feedback or Recurrent ANN Model: There are cycles in the connections. The feedback model first decreases error between predicted output and real output, and after that, it gives the final output.
In such ANN models, each time an input is presented, the ANN model must iterate for a potentially long time before it produces a response.
Feedback ANN models are usually more difficult to train than feed-forward ANN models. Here the network learns by Backpropagation or Delta rule.
Feedforward ANN Model or Acyclic Network: the connections between the nodes do not form cycles. Feedforward network works on the bases of randomly assigned weigh values and apply the activation
function and gives the output ^30.
FIG. 3: FEEDBACK AND FEED-FORWARD MODELS
ANN Model Development: ^31, 32 In this research work model is developed to predict drug release from SR monolithic tablet by using backpropagation supervised learning method which not possible by
other simple statistical methods ^33. ANN model can learn the latent relationship between the causal factors (formulation variables) and response (in-vitro release characteristics) ^34. Artificial
model development includes a number of operations like training, validation or testing. Such operations are as follows stepwise:
FIG. 4: HIERARCHY OF ANN MODEL DEVELOPMENT
As per hierarchy, we can select the type of network on the basis of the problem we want to solve. The present study aims to develop ANN model which a formulator can be used in the prediction of the
SR tablet performance. Because of which there will be the elimination of course trial and error methods and even it can be useful instead of other statistical methods where there is number of runs
that have to be performed because of a large number of dependent factors.
Data Gathering: Formulators have to develop their own data set to develop their own ANN model ^35. Model will be trained by data on principle of generalization ^36. Present study involves retrieval
and compilation of data over experiments and granted patents pertaining to pharmaceutical formulation. This huge pool of data will be utilized for development of ANN model. Compilation of data from
granted patents will vanish the need to perform number of experiments. Granted patents provide authenticated data, per se. However, data will be validated randomly by developing formulation from a
collected data set. For these data set, data were developed and collected having selection criteria like Sustain release Monolithic tablets. From the total of 101 data collected. These formulation
data contain characteristics of drugs and excipients like mol. Wt of the drug, log P and solubility of the drug as well as factors which can affect the dissolution profile of the formulation like pH
of dissolution medium, USP apparatus number, RPM, drug to polymer ratio, total weight of tablet, etc. This data will be used as input nodes for the network, and the network will predict the
performance of SR monolithic. Tablets in the form of time required to release 10%, 50%, and 80% release of the drug from formulations as output nodes ^37, 38. Data should be selected as such as it
does not overfit or underfit the model.
Data Splitting: Generally 3 data set (training, validation, testing) splitting technique is used. Here we have used the software JUSTNN version 4.0b. This requires 2 set of data i.e. training and
testing. From these dataset itself the software will learn for its validation and for tuning hyper-parameters to fit the best model with least error. The accuracy of the test set shows the prediction
ability of unknown data and this strategy is widely adopted in machine learning ^39, 40. So, here 79 data were used for training and remaining was kept as testing dataset.
IFS: In various formulation problems, a large range of variables are available to train the network, but it is very much hard to define which of them are most relevant or useful ^41. This situation
can be more confusing when there is interdependence or correlations between any of the variable. ANN can be used to rank which of the various formulation and processing variables are most critical in
influencing the output parameter of interest because of its unique ability in spotting pattern in data. So the network is designed by considering the input feature selection. Input feature Selection
is generally used to cop up with large number of irrelevant input features which may confuse the network unnecessarily during learning. The objective of IFS are manifold, the important ones being (1)
to avoid overfitting and improve model performance, (2) to provide faster and more cost effective models, (3) to gain a deeper insight into the underlying process of data generation.
The software itself contains the feature of IFS on bases of which we can perform this task. Software suggested importance of inputs in this manner: Polymer 1 viscosity > Drug: Polymer 2 ratio > Drug
Amount > Log P> Polymer 3 viscosity > Mol. Wt of drug > USP Apparatus > Tab wt > RPM > Drug: Polymer 1 ratio > pH of medium > Polymer 2 viscosity > Water solubility of drug > pKa > Drug: Polymer 3
ratio Following “remove one at a time” strategy and evaluating each model on bases of RMSE.
Comparison between actual outcome and predicted outcome is compared by means of root mean square error (RMSE). More the error is the dissimilarity between two results. So poor and less accurate the
prediction is. So our target should be to decrease the RMSE ^42, 43. RMSE can be calculated by using following equation:
RMSE = √ (1 / N∑ (Predicted – Observed)^2)
Where “PREDICTED” is the predicted value from the models, “OBSERVED” is the observed value from the experiments, and “N” is the total number of test cases.
Training and Optimization of Learning Variables: The training dataset contains total of 79 data which is been trained with different learning variables. Choosing the correct variables like learning
rate and momentum will help weight adjustment. Setting right learning rate could be the biggest task, if learning rate is too small algorithm might take long time to converge ^44. On the other hand,
choosing large learning rate could have opposite effect, algorithm could diverge also the large values of momentum will influence the adjustment in the current weight to move in same direction.
This ANN network contained 1 hidden layer with 3 hidden nodes. The finalized Network after IFS was trained for further learning variables. Variables like Learning rate and Momentum were studied at 3
different levels. Targeted error was set below 0.01 within 10% of range of given validation data. So total of 3 models were trained for 3 different values.
Evaluation Criteria: Generally, in machine learning, correlation coefficient of determination are usually adopted as evaluation metrics for regression problems. The correlation coefficient generally
indicates a linear relationship between 2 variables and gives the correlation between predicted and observed value. However, this cannot be that useful in the prediction of pharmaceutical product
performance. In pharmaceutics, a good dissolution profile prediction model should have less than 10% error ^45. Following USFDA credibility or we can say accuracy of final model can be evaluated on
basis of similarity (f2) and dissimilarity factors (f1) ^46, 47, 48.
The f1 factor (eq.1) calculates the percent difference between the two dissolution profiles at each time point and is a measurement of relative error between the two profiles:
f1 = (∑nt = 1 [Rt – Tt] / ∑nt = 1 Rt) * 100........1
where n is the number of time points, R[t] is the mean dissolution value for the reference product at time t and T[t] is the mean dissolution value for the test product at that same time point. The f
1 value is equal to zero when the test and reference profiles are identical and increases as the two profiles become less similar.
The f2 factor (eq.2) is a logarithmic reciprocal square root transformation of the sum of squared error and is a measurement of the similarity in the percent dissolution between the two profiles. The
f2 value is equal to 100 when the test and reference profiles are identical and exponentially decrease as the two profiles becomes less similar.
f2 = 50 *log10 [100 /1+ (∑nt = 1(Rt –Tt 02 / /n....2
Where R[t ]and T[t] are the percent (%) drug got into solution at every time point for the reference and test product, respectively.
According to the guidelines issued by regulatory authorities f1 values upto 15 (0-15) and f2 values greater than 50 (50-100) ensures the “sameness” or “equivalence “of the two profiles. Values less
than 50 may be acceptable if justified.
Testing: Finalized Model was tested for its accuracy by using remained Testing dataset. The model ultimately predicts the value, which can be compared in form of f1 and f2 which must have to fulfill
guidelines of regulatory authorities ^49, 50.
RESULTS AND DISCUSSION: Deep learning requires a huge number of the dataset from which it structures algorithms in layers to create an “artificial neural network” that can learn and make intelligent
decision on its own.
TABLE 3: COMPARISON OF NETWORKS FOR IFS ON BASES OF RMSE
Observed value Predicted value RMSE
1.36 2.2806
Network A
5.18 5.6155 1.338144
(Without removal of any input)
7.49 9.572
1.36 1.6672
Network B
5.18 4.6353 0.630322
(Removal of D:P3 ratio)
7.49 8.3849
1.36 1.6995
Network C
5.18 4.6116 0.604847
(Removal of D:P3 ratio+ pKa)
7.49 8.3019
Network D 1.36 0.7903
5.18 5.4318 1.338549
(Removal of D:P3 ratio+ pKa+ solubility of drug) 7.49 9.7232
Here total of 101 data used to train and test the model. Inputs were optimised by using IFS method. From the table 3 it can be concluded that up to removal of 2 features RMSE is decreasing but
further removal leads to increase in RMSE suggesting the previous one as a best fit. So the finalized model which is selected for training is having 13 input nodes and 3 output nodes.
Variables Network C1 Network C2 Network C3
Learning rate 0.01 2.5 5
Momentum 0.05 0.6 0.9
Epochs 480801 9038600 114201
Target error 0.01 0.01 0.01
Average error 0.009504 0.013350 0.2413
RMSE 0.5062 1.3367 2.6335
After performing IFS method, final model C was further studied for different learning variables for optimization as shown in Table 4.
From these 3 models, Network C1 was optimized on the bases of RMSE as it is having the least RMSE among all, indicating the best fit model, which would be further go for testing and evaluation.
This strategy of similarity and dissimilarity was adopted because it would be useful to paramount the predictive ability of the trained network and to verify whether the network could be used to
speculate unseen data within the dataset. Below Table 5 shows the comparison of predicted and observed values and their evaluation in the form of f1 and f2 by using the remaining 22 test datasets.
TABLE 5: TESTING OF ANN ON BASES OF f1 AND f2
S. no. Observed Value Predicted Value f1 f2
2.9106 2.1558
1 11.672 12.6761 2.888 96.85
20.3986 20.4187
5.821 5.7879
2 9.875 10.5044 2.36 98.58
15.987 16.1385
1.8543 1.8736
3 6.7413 7.4654 6.768 98.24
10.832 10.7662
1.5221 1.88
4 6.6992 7.488 6.447 97.56
10.7212 10.7957
0.5341 0.4394
5 5.4998 5.5201 0.149 99.96
10.412 10.4619
0.4672 0.1645
6 5.9843 5.4987 2.854 98.68
12.5647 12.8103
0.7549 0.2833
7 6.2876 6.3092 1.463 99.11
10.98 11.1662
0.2656 0.1519
8 8.6951 8.7006 3.349 97.65
12.9875 13.8308
2.3111 2.4388
9 6.6782 6.5611 1.662 99.51
10.3168 9.9853
0.7065 0.6377
10 5.778 5.778 1.374 99.89
10.256 10.0948
0.4311 0.3229
11 5.9753 6.3312 5.645 98.03
9.9842 10.6617
0.3468 0.287
12 6.11 5.3704 1.728 97.34
9.003 9.5353
1.0178 0.1864
13 8.6431 8.3403 4.754 97.45
16.4569 16.3494
0.2543 0.1992
14 8.0543 8.7134 0.072 100.97
12.4511 13.8401
1.0322 0.153
15 8.6733 8.7134 3.333 97.5
13.7842 13.8401
0.6742 0.1869
16 8.9765 9.0108 1.742 99.15
11.5235 11.6078
3.8648 3.8806
17 8.6835 8.7424 0.743 99.93
14.5883 14.7149
7.1298 7.2526
18 14.8975 15.3526 1.429 99.2
23.7854 23.8623
0.9923 1.8356
19 6.2113 5.9159 2.4774 97.38
9.3145 9.1753
0.6322 0.4742
20 4.993 5.5753 1.75 98.69
10.1734 10.0255
6.7611 7.2985
21 14.953 15.4364 4.826 94.9
22.8639 23.9944
0.6419 0.255
22 5.7124 4.6392 11.56 95.97
8.1648 7.9467
Satisfying regulatory norms, values of testing data are falling within range i.e. f1 is between 0-15 and f2 is within 50-100 except one which ensures the “sameness” or “equivalence “of the two
profiles with average error ~0.0095. A regression plot was constructed for predicted value and the observed value of drug releases at various sampling points using the test dataset to obtain a
squared correlation coefficient (R^2) and slope.
The ANN model developed above that must yield a regression plot with a slope and R^2 both being closest to value 1.0 then it is considered as the optimum model. Considering these 2 criteria i.e. f1
and f2 and squared correlation coefficient (R^2) this developed model can be said satisfying in norms of accuracy and regulatory guidelines.
CONCLUSION: In this work, a generalized artificial neural network is successfully developed for drug release prediction of SR monolithic tablet. The networks were rigorously trained and optimized for
various variables and also tested for enough data to exhibit reliable prediction behaviour with best achieved average error ~0.0095 and R^2 0.9953. It also satisfactorily fulfils USFDA guidelines for
comparison of 2 dissolution profiles adding acceptability credits to our model. A lengthy and tedious work like Pharmaceutical formulation development can be simplified by using various statistical
methods, but it can be furthermore lightened by using today’s smart methods like ANN development. A once-developed model can further be used to predict the product performance eliminating the
requirement of tedious physical practicals. ANN is the perfect combination of experience of ages and intelligence of present, which must be explored more and more in the pharmaceutical world.
ACKNOWLEDGEMENT: Authors are grateful to Shree S. K. Patel College of pharmaceutical education & research, Ganpat University, whose cooperation made this study possible.
We also acknowledge Laksh Finechem Pvt. Ltd., Alembic Pharmaceuticals Ltd, Granules India Limited, and Gattefosse India Pvt. Ltd for providing gratis samples of drugs for the study.
CONFLICTS OF INTEREST: None
1. Briganti G and Le Moine O: Artificial Intelligence in Medicine: Today and Tomorrow. Front. Med. 2020; 7: 27.
2. Chen MY, Fan MH, Chen YL and Wei HM: Design of experiments on neural network's parameters optimization for time series forecasting in stock markets. Neural Network World 2013; 23: 369.
3. Jacques B, Heinz S, Peter van H and Hans L: Basic Concepts of Artificial Neural Networks (ANN) Modeling in the Application to Pharmaceutical Development. Pharmaceutical Development and Technology
1997; 2: 2: 95-109
4. Srikantha V: Modeling and optimization of developed cocoa beans extractor parameters using box behnken design and artificial neural network. Computer and electronics in Agriculture 2020; 117.
5. Jena PR, Majhi R, Kalli R, Managi S and Majhi B: Impact of COVID-19 on GDP of major economies: Application of the artificial neural network forecaster. Economic Analysis and Policy 2021; 69:
6. Toraman S, Alakus T and Turkoglu I: Convolutional capsnet: A novel artificial neural network approach to detect COVID-19 disease from X-ray images using capsule networks. Chaos, Solitons and
Fractals 2020; 140: 110-22.
7. Tamang SK: Forecasting of Covid-19 cases based on prediction using artificial neural network curve fitting technique Global J. Environ. Sci. Manage 2020 6(4).
8. Li EY: Artificial neural networks and their business applications. Information & Management 1994; 27: 303-13.
9. Barmpalexis P: Artificial neural networks in the optimization of a nimodipine controlled release tablet formulation. European Journal of Pharmaceutics and Biopharmaceutics 2010; 74: 316-23.
10. Vijaykumar S, Anastasia G, Prabodh S and Deepak B: Artificial Neural Network in Drug Delivery and Pharmaceutical Research. The Open Bioinformatics Journal 2013; 7(Suppl-1): 49-62.
11. Ebube NK, Owusu-Ababio G and Adeyeye CM: Preformulation studies and characterization of the physicochemical properties of amorphous polymers using artificial neural networks. Int J Pharm 2000;
196: 27-35.
12. Basheer IA and Hajmeer M: Artificial neural networks: fundamentals, computing, design, and application. Journal of microbiological methods 2000; 43: 3-1.
13. Haykin S: Neural Networks - A Comprehensive Foundation, Macmillan, 1994.
14. Krenker A, Bester J and Kos A: Introduction to the artificial neural networks. Artificial Neural Networks: Methodological Advances and Biomedical Applications. InTech 2011; 1: 1-8.
15. Yu HH: Introduction to ANN & Fuzzy Systems. University of Wisconsin – Madison Dept. Electrical & Computer Engineering 2001.
16. Wang: Application of Artificial neural Network model in diagnosis of Alzheimer’s disease. BMC Neurology 2019; 19: 154
17. Nassif AB, Shahin I, Attili I, Azzeh M and Shaalan K: Speech recognition using deep neural networks: A systematic review. IEEE Access 2019; 7: 19143-165.
18. Mossalam A and Arafa M: Using artificial neural networks (ANN) in projects monitoring dashboards’ formulation, HBRC Journal 2018, 14(3): 385-92.
19. Mandlik V, Bejugam PR and Singh S: Application of artificial neural networks in modern drug discovery. In: Artificial Neural Network for Drug Design, Delivery and Disposition. Academic Press
2016; 123-39.
20. Kustrin S and Beresford R: Basic concepts of artificial neural network (ANN) modeling and its application in pharmaceutical research. J Pharm Biomed Anal 2000; 22(5): 717-27.
21. Peh KK, Lim CP, Quek SS and Khoh KH: Use of artificial neural networks to predict drug dissolution profiles and evaluation of network performance using similarity factor. Pharm Res 2000; 17(11):
22. Takayama K, Takahara J, Fujikawa M, Ichikawa H and Nagai T: Formula optimization based on artificial neural networks in transdermal drug delivery. J Control Release 1999; 62(1-2): 161-70.
23. Larsen J: Introduction to Artificial Neural Networks. Section for Digital Signal Processing Department of Mathematical Modelling Technical Uni of Denmark 1999.
24. Michael ML, Iain C and Owen IC: The use of artificial neural networks for the selection of the most appropriate formulation and processing variables in order to predict the in-vitro dissolution
of sustained release mini tablets. AAPS PharmSciTech 2003; 4(2): 1-12.
25. Zupan J: Introduction to artificial neural network (ANN) methods: what they are and how to use them. Acta Chimica Slovenica 1994; 41: 327.
26. Bourquin J, Schmidli H, van Hoogevest P and Leuenberger H: Basic concepts of artificial neural networks (ANN) modeling in the application to pharmaceutical development. Pharm Dev Technol 1997; 2
(2): 95-109.
27. Kishan M, Chilukuri K and Sanjay R: Elements of Artificial Neural Networks. In: A Bradford book. MIT press; October 1996.
28. Wythoff BJ: Back propagation neural networks: a tutorial, Chemometr Intell Lab Syst 1993; 18: 115-55.
29. Simon H: Neural Network: A Comprehensive Foundation 1998. Second edition, Reprint 2009:183-95.
30. Kohonen T: Self-organization and Associative Memory, Springer Verlag, Berlin 1988.
31. Jadid MN and Fairbairn DR: Predicting moment-curvature parameters from experimental data. Eng Appl Artif Intell 1996; 9: 303-19.
32. Carpenter JC and Hoffman ME: Understanding neural network approximations and polynomial approximations helps neural network performance. AI Expert 1995; 10: 31-33.
33. Ibrić S, Djuriš J, Parojčić J and Djurić Z: Artificial neural networks in evaluation and optimization of modified release solid dosage forms. Pharmaceutic 2012; 4: 531-50.
34. Dowell JA, Hussain A, Devane J and Young D: Artificial neural networks applied to the in-vitro-in-vivo correlation of an extended-release formulation: initial trials and experience. J Phar Sci
1999; 88: 154-60.
35. Baptista D, Abreu S, Freitas F, Vasconcelos R and Morgado-Dias F: A survey of software and hardware use in artificial neural networks. Neural Computing and Applications 2013; 23: 591-99.
36. Svozil D: Introduction to multi-layer feed-forward neural networks. Chemometrics and Intelligent Laboratory Systems 1997; 39: 43-62.
37. Rayzard SM: Learning strategies and automated review acquisition: An Review. Report #926, Department of Computer Science, University of Illinois 1984.
38. Bourquin J, Schmidli H, van Hoogevest P and Leuenberger H: Basic concepts of artificial neural networks (ANN) modeling in the application to pharmaceutical development. Pharm Dev Technol 1997; 2:
39. De Mulder W, Bethard S and Moens MF: A survey on the application of recurrent neural networks to statistical lang modeling. Computer Speech & Lan 2015; 30: 61-98.
40. Hassan KM, Pezhman K and Lucia P: Computational intelligence models to predict porosity of tablets using minimum features. Drug Design, Development and Therapy 2017; 11: 193-202.
41. Goh WY: Application of RNNs to Prediction of Drug Dissolution Profiles. Neural Computing & Applications 2002; 10: 311-17.
42. Eftekhar B: Comparison of artificial neural network and logistic regression models for prediction of mortality in head trauma based on initial clinical data. BMC Medical Informatics and Decision
Making 2005; 5(3): 1-8.
43. Gölcü M, Sekmen Y, Erduranlı P and Salman MS: Artificial neural-network based modeling of variable valve-timing in a spark-ignition engine. Applied Energy 2005; 81: 187-97.
44. Chatterjee SP and Pandya AS: Artificial Neural Networks in Drug Transport Modeling and Simulatione II. Artificial Neural Network for Drug Design, Delivery and Disposition 2015: 243.
45. Shah VP, Tsong Y, Sathe P and Liu JP: In-vitro dissolution profile comparison-statistics and analysis of the similarity factor, f2. Pharm Res 1998; 15: 889-96.
46. Karuppiah SP: Analytical method development for dissolution release of finished solid oral dosage forms. Int J Curr Pharm Res 2012; 4: 48-53.
47. Lourenço FR, Ghisleni DD, Yamamoto RN and Pinto TD: Comparison of dissolution profile of extended-release oral dosage forms-two one-sided equivalence test. Brazilian journal of pharmaceutical
sciences 2013; 49: 367-71.
48. Yilong Y: Deep learning for in-vitro prediction of pharmaceutical formulations”. Acta Pharmaceutica Sinica B 2019; 9(1): 177-85.
49. Marijana M: Optimization and Prediction of Ibuprofen Release from 3D DLP Printlets Using Artificial Neural Networks. Pharmaceutics 2019; 11: 544.
50. Zakria Q: Predicting the energy output of hybrid PV–wind renewable energy system using feature selection technique for smart grids. Energy Reports 2021 (Article in Press).
How to cite this article:
Vyas TH and Patel GN: Generalized artificial neural network modelling and its application in performance prediction of sustain release monolithic tablets. Int J Pharm Sci & Res 2021; 12(12):
6530-39. doi: 10.13040/IJPSR.0975-8232.12(12).6530-39.
All © 2021 are reserved by the International Journal of Pharmaceutical Sciences and Research. This Journal licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported
Article Information
767 KB
Tulsi H. Vyas * and Girish N. Patel
Department of Pharmaceutics, Shree S. K. Patel College of Pharmaceutical Education & Research, Ganpat University, Ganpat Vidyanagar, Mehsana-Gozaria Highway, Mehsana, Gujarat, India.
25 January 2021
05 May 2021
28 May 2021
01 December 2021 | {"url":"https://ijpsr.com/bft-article/generalized-artificial-neural-network-modelling-and-its-application-in-performance-prediction-of-sustained-release-monolithic-tablets/?view=fulltext","timestamp":"2024-11-08T20:14:33Z","content_type":"text/html","content_length":"89762","record_id":"<urn:uuid:3cd6ad9a-63b2-4e8b-830d-1aad2bc2fbdd>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00873.warc.gz"} |
July 2019 - Flirting with Models
This post is available as a PDF download here.
• We have shown many times that timing luck – when a portfolio chooses to rebalance – can have a large impact on the performance of tactical strategies.
• However, fundamental strategies like value portfolios are susceptible to timing luck, as well.
• Once the rebalance frequency of a strategy is set, we can mitigate the risk of choosing a poor rebalance date by diversifying across all potential variations.
• In many cases, this mitigates the risk of realizing poor performance from an unfortunate choice of rebalance date while achieving a risk profile similar to the top tier of potential strategy
• By utilizing strategies that manage timing luck, the investors can more accurately assess performance differences arising from luck and skill.
On August 7^th, 2013 we wrote a short blog post titled The Luck of Rebalance Timing. That means we have been prattling on about the impact of timing luck for over six years now (with apologies to
our compliance department…).
(For those still unfamiliar with the idea of timing luck, we will point you to a recent publication from Spring Valley Asset Management that provides a very approachable introduction to the topic.^1)
While most of our earliest studies related to the impact of timing luck in tactical strategies, over time we realized that timing luck could have a profound impact on just about any strategy that
rebalances on a fixed frequency. We found that even a simple fixed-mix allocation of stocks and bonds could see annual performance spreads exceeding 700bp due only to the choice of when they
rebalanced in a given year.
In seeking to generalize the concept, we derived a formula that would estimate how much timing luck a strategy might have. The details of the derivation can be found in our paper recently published
in the Journal of Index Investing, but the basic formula is:
Here T is strategy turnover, F is how many times per year the strategy rebalances, and S is the volatility of a long/short portfolio capturing the difference between what the strategy is currently
invested in versus what it could be invested in.
We’re biased, but we think the intuition here works out fairly nicely:
• The higher a strategy’s turnover, the greater the impact of our choice of rebalance dates. For example, if we have a value strategy that has 50% turnover per year, an implementation that
rebalances in January versus one that rebalances in July might end up holding very different securities. On the other hand, if the strategy has just 1% turnover per year, we don’t expect the
differences in holdings to be very large and therefore timing luck impact would be minimal.
• The more frequently we rebalance, the lower the timing luck. Again, this makes sense as more frequent rebalancing limits the potential difference in holdings of different implementation dates.
Again, consider a value strategy with 50% turnover. If our portfolio rebalances every other month, there are two potential implementations: one that rebalances January, March, May, etc. and one
that rebalances February, April, June, etc. We would expect the difference in portfolio holdings to be much more limited than in the case where we rebalance only annually.^2
• The last term, S, is most easily explained with an example. If we have a portfolio that can hold either the Russell 1000 or the S&P 500, we do not expect there to be a large amount of performance
dispersion regardless of when we rebalance or how frequently we do so. The volatility of a portfolio that is long the Russell 1000 and short the S&P 500 is so small, it drives timing luck near
zero. On the other hand, if a portfolio can hold the Russell 1000 or be short the S&P 500, differences in holdings due to different rebalance dates can lead to massive performance
dispersion. Generally speaking, S is larger for more highly concentrated strategies with large performance dispersion in their investable universe.
Timing Luck in Smart Beta
To date, we have not meaningfully tested timing luck in the realm of systematic equity strategies.^3 In this commentary, we aim to provide a concrete example of the potential impact.
A few weeks ago, however, we introduced our Systematic Value portfolio, which seeks to deliver concentrated exposure to the value style while avoiding unintended process and timing luck bets.
To achieve this, we implement an overlapping portfolio process. Each month we construct a concentrated deep value portfolio, selecting just 50 stocks from the S&P 500. However, because we believe
the evidence suggests that value is a slow-moving signal, we aim for a holding period between 3-to-5 years. To achieve this, our capital is divided across the prior 60 months of portfolios.^4
Which all means that we have monthly snapshots of deep value^5 portfolios going back to November 2012, providing us data to construct all sorts of rebalance variations.
The Luck of Annual Rebalancing
Given our portfolio snapshots, we will create annually rebalanced portfolios. With monthly portfolios, there are twelve variations we can construct: a portfolio that reconstitutes each January; one
that reconstitutes each February; a portfolio that reconstitutes each March; et cetera.
Below we plot the equity curves for these twelve variations.
Source: CSI Analytics. Calculations by Newfound Research. Results are hypothetical. Results assume the reinvestment of all distributions. Results are gross of all fees, including, but not
limited to, manager fees, transaction costs, and taxes. Past performance is not an indicator of future results.
We cannot stress enough that these portfolios are all implemented using a completely identical process. The only difference is when they run that process. The annualized returns range from 9.6% to
12.2%. And those two portfolios with the largest disparity rebalanced just a month apart: January and February.
To avoid timing luck, we want to diversify when we rebalance. The simplest way of achieving this goal is through overlapping portfolios. For example, we can build portfolios that rebalance
annually, but allocate to two different dates. One portfolio could place 50% of its capital in the January rebalance index and 50% in the July rebalance index.
Another variation could place 50% of its capital in the February index and 50% in the August index.^6 There are six possible variations, which we plot below.
The best performing variation (January and July) returned 11.7% annualized, while the worst (February and August) returned 9.7%. While the spread has narrowed, it would be dangerous to confuse 200bp
annualized for alpha instead of rebalancing luck.
Source: CSI Analytics. Calculations by Newfound Research. Results are hypothetical. Results assume the reinvestment of all distributions. Results are gross of all fees, including, but not
limited to, manager fees, transaction costs, and taxes. Past performance is not an indicator of future results.
We can go beyond just two overlapping portfolios, though. Below we plot the three variations that contain four overlapping portfolios (January-April-July-October, February-May-August-November, and
March-June-September-December). The best variation now returns 10.9% annualized while the worst returns 10.1% annualized. We can see how overlapping portfolios are shrinking the variation in
Finally, we can plot the variation that employs 12 overlapping portfolios. This variation returns 10.6% annualized; almost perfectly in line with the average annualized return of the underlying 12
variations. No surprise: diversification has neutralized timing luck.
Source: CSI Analytics. Calculations by Newfound Research. Results are hypothetical. Results assume the reinvestment of all distributions. Results are gross of all fees, including, but not
limited to, manager fees, transaction costs, and taxes. Past performance is not an indicator of future results.
Source: CSI Analytics. Calculations by Newfound Research. Results are hypothetical. Results assume the reinvestment of all distributions. Results are gross of all fees, including, but not
limited to, manager fees, transaction costs, and taxes. Past performance is not an indicator of future results.
But besides being “average by design,” how can we measure the benefits of diversification?
As with most ensemble approaches, we see a reduction in realized risk metrics. For example, below we plot the maximum realized drawdown for annual variations, semi-annual variations, quarterly
variations, and the monthly variation. While the dispersion is limited to just a few hundred basis points, we can see that the diversification embedded in the monthly variation is able to reduce the
bad luck of choosing an unfortunate rebalance date.
Source: CSI Analytics. Calculations by Newfound Research. Results are hypothetical. Results assume the reinvestment of all distributions. Results are gross of all fees, including, but not
limited to, manager fees, transaction costs, and taxes. Past performance is not an indicator of future results.
Just Rebalance more Frequently?
One of the major levers in the timing luck equation is how frequently the portfolio is rebalanced. However, we firmly believe that while rebalancing frequency impacts timing luck, timing luck should
not be a driving factor in our choice of rebalance frequency.
Rather, rebalance frequency choices should be a function of the speed at which our signal decays (e.g. fast-changing signals such as momentum versus slow-changing signals like value) versus
implementation costs (e.g. explicit trading costs, market impact, and taxes). Only after this choice is made should we seek to limit timing luck.
Nevertheless, we can ask the question, “how does rebalancing more frequently impact timing luck in this case?”
To answer this question, we will evaluate quarterly-rebalanced portfolios. The distinction here from the quarterly overlapping portfolios above is that the entire portfolio is rebalanced each
quarter rather than only a quarter of the portfolio. Below, we plot the equity curves for the three possible variations.
Source: CSI Analytics. Calculations by Newfound Research. Results are hypothetical. Results assume the reinvestment of all distributions. Results are gross of all fees, including, but not
limited to, manager fees, transaction costs, and taxes. Past performance is not an indicator of future results.
The best performing variation returns 11.7% annualized while the worst returns 9.7% annualized, for a spread of 200 basis points. This is actually larger than the spread we saw with the three
quarterly overlapping portfolio variations, and likely due to the fact that turnover within the portfolios increased meaningfully.
While we can see that increasing the frequency of rebalancing can help, in our opinion the choice of rebalance frequency should be distinct from the choice of managing timing luck.
In our opinion, there are at least two meaningful conclusions here:
The first is for product manufacturers (e.g. index issuers) and is rather simple: if you’re going to have a fixed rebalance schedule, please implement overlapping portfolios. It isn’t hard. It is
literally just averaging. We’re all better off for it.
The second is for product users: realize that performance dispersion between similarly-described systematic strategies can be heavily influenced by when they rebalance. The excess return may really
just be a phantom of luck, not skill.
The solution to this problem, in our opinion, is to either: (1) pick an approach and just stick to it regardless of perceived dispersion, accepting the impact of timing luck; (2) hold multiple
approaches that rebalance on different days; or (3) implement an approach that accounts for timing luck.
We believe the first approach is easier said than done. And without a framework for distinguishing between timing luck and alpha, we’re largely making arbitrary choices.
The second approach is certainly feasible but has the potential downside of requiring more holdings as well as potentially forcing an investor to purchase an approach they are less comfortable
with. For example, blending IWD (Russell 1000 Value), RPV (S&P 500 Pure Value), VLUE (MSCI U.S. Enhanced Value), and QVAL (Alpha Architect U.S. Quantitative Value) may create a portfolio that
rebalances on many different dates (annual in May; annual in December; semi-annual in May and November; and quarterly, respectively), it also introduces significant process differences. Though
research suggests that investors may benefit from further manager/process diversification.
For investors with conviction in a single strategy implementation, the last approach is certainly the best. Unfortunately, as far as we are aware, there are only a few firms who actively implement
overlapping portfolios (including Newfound Research, O’Shaughnessy Asset Management, AQR, and Research Affiliates). Until more firms adopt this approach, timing luck will continue to loom large.
This post is available as a PDF download here.
• We explore a representative multi-asset momentum model that is similar to many bank-based indexes behind structured products and market-linked CDs.
• With a monthly rebalance cycle, we find substantial timing luck risk.
• Using the same basic framework, we build a simple ensemble approach, diversifying both process and rebalance timing risk.
• We find that the virtual strategy-of-strategies is able to harvest diversification benefits, realizing a top-quartile Sharpe ratio with a bottom-quartile maximum drawdown.
Early in the 2010s, a suite of index-linked products came to market that raised billions of dollars. These products – offered by just about every major bank – sought to simultaneously exploit the
diversification benefits of modern portfolio theory and the potential for excess returns from the momentum anomaly.
While each index has its own bells and whistles, they generally follow the same approach:
• A global, multi-asset universe covering equities, fixed income, and commodities.
• Implemented using highly liquid ETFs.
• Asset class and position-level allocation limits.
• A monthly rebalance schedule.
• A portfolio optimization that seeks to maximize weighted prior returns (e.g. prior 6 month returns) while limiting portfolio volatility to some maximum threshold (e.g. 5%).
And despite their differences, we can see in plotting their returns below that these indices generally share a common return pattern, indicating a common, driving style.
Source: Bloomberg.
Frequent readers will know that “monthly rebalance” is an immediate red flag for us here at Newfound: an indicator that timing luck is likely lurking nearby.
Replicating Multi-Asset Momentum
To test the impact of timing luck, we replicate a simple multi-asset momentum strategy based upon available index descriptions.
We rebalance the portfolio at the end of each month. Our optimization process seeks to identify the portfolio with a realized volatility less than 5% that would have maximized returns over the prior
six months, subject to a number of position and asset-level limits. If the 5% volatility target is not achievable, the target is increased by 1% until a portfolio can be constructed that satisfies
our constraints.
We use the following ETFs and asset class limits:
As a naïve test for timing luck, rather than assuming the index rebalances at the end of each month, we will simply assume the index rebalances every 21 trading days. In doing so, we can construct 21
different variations of the index, each representing the results from selecting a different rebalance date.
Source: CSI Analytics; Calculations by Newfound Research. Results are backtested and hypothetical. Results assume the reinvestment of all distributions. Results are gross of all fees, including,
but not limited to, manager fees, transaction costs, and taxes, with the exception of underlying ETF expense ratios. Past performance is not an indicator of future results.
As expected, the choice of rebalance date has a meaningful impact. Annualized returns range from 4.7% to 5.5%, Sharpe ratios range from 0.6 to 0.9, and maximum drawdowns range from 9.9% to 20.8%.
On a year-by-year basis, the only thing that is consistent is the large spread between the worst and best-performing rebalance date. On average, the yearly spread exceeds 400 basis points.
Min Max
2008* -9.91% 0.85%
2009 2.36% 4.59%
2010 6.46% 9.65%
2011 3.31% 10.15%
2012 6.76% 10.83%
2013 3.42% 6.13%
2014 5.98% 10.60%
2015 -5.93% -2.51%
2016 4.18% 8.45%
2017 9.60% 11.62%
2018 -6.00% -2.53%
2019 YTD 5.93% 10.01%
* Partial year starting 7/22/2018
We’ve said it in the past and we’ll say it again: timing luck can be the difference between hired and fired. And while we’d rather be on the side of good luck, the lack of control means we’d rather
just avoid this risk all together.
If it isn’t nailed down for a reason, diversify it
The choice of when to rebalance is certainly not the only free variable of our multi-asset momentum strategy. Without an explicit view as to why a choice is made, our preference is always to
diversify so as to avoid specification risk.
We will leave the constraints (e.g. volatility target and weight constraints) well enough alone in this example, but we should consider the process by which we’re measuring past returns as well as
the horizon over which we’re measuring it. There is plenty of historical efficacy to using prior 6-month total returns for momentum, but no lack of evidence supporting other lookback horizons or
Therefore, we will use three models of momentum: prior total return, the distance of price from its moving average, and the distance of a short-term moving average from a longer-term moving average.
We will vary the parameterization of these signals to cover horizons ranging from 3- to 15-months in length.
We will also vary which day of the month the portfolio rebalances on.
By varying the signal, the lookback horizon, and the rebalance date, we can generate hundreds of different portfolios, all supported by the same theoretical evidence but having slightly different
realized results due to their particular specification.
Our robust portfolio emerges by calculating the weights for all these different variations and averaging them together, in many ways creating a virtual strategy-of-strategies.
Below we plot the result of this –ensemble approach– as compared to a –random sample of the underlying specifications–. We can see that while there are specifications that do much better, there are
also those that do much worse. By employing an ensemble approach, we forgo the opportunity for good luck and avoid the risk of bad luck. Along the way, though, we may pick up some diversification
benefits: the Sharpe ratio of the ensemble approach fell in the top quartile of specifications and its maximum drawdown was in the bottom quartile (i.e. lower drawdown).
Source: CSI Analytics; Calculations by Newfound Research. Results are backtested and hypothetical. Results assume the reinvestment of all distributions. Results are gross of all fees, including,
but not limited to, manager fees, transaction costs, and taxes, with the exception of underlying ETF expense ratios. Past performance is not an indicator of future results.
In this commentary, we again demonstrate the potential risk of needless specification and the potential power of diversification.
Using a popular multi-asset momentum model as our example, we again find a significant amount of timing luck lurking in a monthly rebalance specification. By building a virtual
strategy-of-strategies, we are able to manage this risk by partially rebalancing our portfolio on different days.
We go a step further, acknowledging that processrepresents another axis of risk. Specifically, we vary both how we measure momentum and the horizon over which it is measured. Through the variation
of rebalance days, model specifications, and lookback horizons, we generate over 500 different strategy specifications and combine them into a virtual strategy-of-strategies to generate our robust
multi-asset momentum model.
As with prior commentaries, we find that the robust model is able to effectively reduce the risk of both specification and timing luck. But perhaps most importantly, it was able to harvest the
benefits of diversification, realizing a Sharpe ratio in the top quartile of specifications and a maximum drawdown in the lowest quartile.
This post is available as a PDF download here.
• Many retirement planning analyses rely on Monte Carlo simulations with static assumptions for withdrawals.
• Incorporating dynamic spending rules can more closely align the simulations with how investors would likely behave during times when the plan looked like it was on a path to failure.
• Even a modest reduction in withdrawals (e.g. 10%) can have a meaningful impact on reducing failure rates, nearly cutting it in half in a sample simulation.
• Combining dynamic spending rules with other marginal improvements, such as supplemental income and active risk management, can lead to more robust retirement plans and give investors a better
understanding of the variables that are within their realm of control.
Monte Carlo simulations are a prevalent tool in financial planning, especially pertaining to retirement success calculations.
Under a typical framework of normally distributed portfolio returns and constant inflation-adjusted withdrawals, calculating the success of a given retirement portfolio is straightforward. But as
with most tools in finance, the art lies both in the assumptions that go into the calculation and in the proper interpretation of the result.
If a client is told they have a 10% chance of running out of money over their projected retirement horizon, what does that mean for them?
They cannot make 9 copies of themselves to live out separate lives, with one copy (hopefully not the original) unfortunately burning through the account prematurely.
They also cannot create 9 parallel universes and ensure they do not choose whichever one does not work out.
We wrote previously how investors follow a single path (You Are Not a Monte-Carlo Simulation). If that path hits zero, the other hypothetical simulation paths don’t mean a thing.
A simulation path is only as valuable as the assumptions that go into creating it, and fortunately, we can make our simulations align more closely with investor behavior.
The best way to interpret the 10% failure rate is to think of it as a 10% chance of having to make an adjustment before it hits zero. Rarely would an investor stand by while their account went to
zero. There are circumstances that are entirely out of investor control, but to the extent that there was something they could do to prevent that event, they would most likely do it.
Derek Tharp, on Michael Kitces’ blog, wrote a post a few years ago weighing the relative benefit of implementing small but permanent adjustments vs. large but temporary adjustments to retirement
withdrawals and found that making small adjustments and leaving them in place led to greater likelihoods of success over retirement horizons (Dynamic Retirement Spending Adjustments:
Small-But-Permanent Vs Large-But-Temporary).
In this week’s commentary, we want to dig a little deeper into some simple path dependent modifications that we can make to retirement Monte-Carlo simulations with the hope of creating a more robust
toolset for financial planning.
The Initial Plan
Suppose an investor is 65 and holds a moderate portfolio of 60% U.S. stocks and 40% U.S. Treasuries. From 1871 until mid-2019, this portfolio would have returned an inflation-adjusted 5.1% per year
with 10.6% volatility according to Global Financial Data.
Sticking with the rule-of-thumb 4% annual withdrawal of the initial portfolio balance and assuming a 30-year retirement horizon, this yields a predicted failure rate of 8% (plus or minus about 50
The financial plan is complete.
If you start with $1,000,000, simply withdraw $3,333/month and you should be fine 92% of the time.
But what if the portfolio drops 5% in the first month? (It almost did that in October 2018).
The projected failure rate over the next 29 years and 11 months has gone up to 11%. That violates a 10% threshold that may have been a target in the planning process.
Or what if it drops 30% in the first 6 months, like it would have in the second half of 1931?
Now the project failure rate is a staggering 46%. Retirement success has been reduced to a coin flip.
Admittedly, these are trying scenarios, but these numbers are a key driver for financial planning. If we can better understand the risks and spell out a course of action beforehand, then the risk of
making a rash emotion-driven decision can be mitigated.
Aligning the Plan with Reality
When the market environment is challenging, investors can benefit by being flexible. The initial financial plan does not have to be jettisoned; agreed upon actions within it are implemented.
One of the simplest – and most impactful – modifications to make is an adjustment to spending. For instance, an investor might decide at the outset to scale back spending by a set amount when the
probably of failure crosses a threshold. Source: Global Financial Data. Calculations by Newfound.
This reduction in spending would increase the probability of success going forward through the remainder of the retirement horizon.
And if we knew that this spending cut would likely happen if it was necessary, then we can quantify it as a rule in the initial Monte Carlo simulation used for financial planning.
Graphically, we can visualize this process by looking at the probabilities of failure for varying asset levels over time. For example, at 10 years after retirement, the orange line indicates that a
portfolio value ~80% of the initial value would have about a 5% failure rate.
Source: Global Financial Data. Calculations by Newfound.
As long as the portfolio value remains above a given line, no adjustment would be needed based on a standard Monte Carlo analysis. Once a line is crossed, the probability of success is below that
This chart presents a good illustration of sequence risk: the lines are flatter initially after retirement and the slope progressively steepens as the time progresses. A large drawdown initially puts
the portfolio below the threshold for making and adjustment.
For instance, at 5 years, the portfolio has more than a 10% failure rate if the value is below 86%. Assuming zero real returns, withdrawals alone would have reduced the value to 80%. Positive returns
over this short time period would be necessary to feel secure in the plan.
Looking under the hood along the individual paths used for the Monte Carlo simulation, at 5 years, a quarter of them would be in a state requiring an adjustment to spending at this 10% failure level.
Source: Global Financial Data. Calculations by Newfound.
This belies the fact that some of the paths that would have crossed this 10% failure threshold prior to the 5-year mark improved before the 5-year mark was hit. 75% of the paths were below this 10%
failure rate at some point prior to the 5-year mark. Without more appropriate expectations of a what these simulations mean, under this model, most investors would have felt like their plan’s failure
rate was uncomfortable at some point in the first 5 years after retirement!
Dynamic Spending Rules
If the goal is ultimately not to run out of funds in retirement, the first spending adjustment case can substantially improve those chances (aside from a large negative return in the final periods
prior to the last withdrawals).
Each month, we will compare the portfolio value to the 90% success value. If the portfolio is below that cutoff, we will size the withdrawal to hit improve the odds of success back to that level, if
The benefit of this approach is greatly improved success along the different paths. The cost is forgone income.
But this can mean forgoing a lot of income over the life of the portfolio in a particularly bad state of the world. The worst case in terms of this total forgone income is shown below.
Source: Global Financial Data. Calculations by Newfound.
The portfolio gives up withdrawals totaling 74%, nearly 19 years’ worth. Most of this is given up in consecutive periods during the prolonged drawdown that occurs shortly after retirement.
This is an extreme case that illustrates how large of income adjustments could be required to ensure success under a Monte Carlo framework.
The median case foregoes 9 months of total income over the portfolio horizon, and the worst 5% of cases all give up 30% (7.5 years) of income based off the initial portfolio value.
That is still a bit extreme in terms of potential cutbacks.
As a more realistic scenario that is easier on the pocketbook, we will limit the total annual cutback to 30% of the withdrawal in the following manner:
• If the current chance of failure is greater than 20%, cut spending by 30%. This equates to reducing the annual withdrawal by $12,000 assuming a $1,000,000 initial balance.
• If the current chance of failure is between 15% and 20%, cut spending by 20%. This equates to reducing the annual withdrawal by $8,000 assuming a $1,000,000 initial balance.
• If the current chance of failure is between 10% and 15%, cut spending by 10%. This equates to reducing the annual withdrawal by $4,000 assuming a $1,000,000 initial balance.
These rules still increase the success rate to 99% but substantially reduce the amount of reductions in income.
Looking again at the worst-case scenario, we see that this case still “fails” (even though it lasts another 4.5 years) but that its reduction in come is now less than half of what it was in the
extreme cutback case. This pattern is in line with the “lower for longer” reductions that Derek had looked at in the blog post.
Source: Global Financial Data. Calculations by Newfound.
On the 66% of sample paths where there was a cut in spending at some point, the average total cut amounted to 5% of the portfolio (a little over a year of withdrawals spread over the life of the
Even moving to an even less extreme reduction regime where only 10% cuts are ever made if the probability of failure increases above 10%, the average reduction in the 66% of cases that required cuts
was about 9 months of withdrawals over the 30-year period.
In these scenarios, the failure rate is reduced to 5% (from 8% with no dynamic spending rules).
Source: Global Financial Data. Calculations by Newfound.
Retirement simulations can be a powerful planning tool, but they are only as good as their inputs and assumptions. Making them align as closes with reality as possible can be a way to quantify the
impact of dynamic spending rules in retirement.
While the magnitude of spending reductions necessary to guarantee success of a retirement plan in all potential states of the world is prohibitive. However, small modifications to spending can have a
large impact on success.
For example, reducing withdrawal by 10% when the forecasted failure rate increases above 10% nearly cut the failure rate of the entire plan in half.
But dynamic spending rules do not exist in a vacuum; they can be paired with other marginal improvements to boost the likelihood of success:
• Seek out higher returns – small increases in portfolio returns can have a significant impact over the 30 -ear planning horizon.
• Supplement income – having supplements to income, even small ones, can offset spending during any market environment, improving the success rate of the financial plan.
• Actively manage risk – managing risk, especially early in retirement is a key factor to now having to reduce withdrawals in retirement.
• Plan for more flexibility – having the ability to reduce spending when necessary reduces the need to rely on the portfolio balance when the previous factors are not working.
While failure is certainly possible for investors, a “too big to fail” mentality is much more in line with the reality of retirement.
Even if absolute failure is unlikely, adjustments will likely be a requirement. These can be built into the retirement planning process and can shed light on stress testing scenarios and sensitivity.
From a retirement planning perspective, flexibility is simply another form of risk management.
This post is available as a PDF download here.
• In this research note, we continue our exploration of credit.
• Rather than test a quantitative signal, we explore credit changes through the lens of statistical decomposition.
• As with the Treasury yield curve, we find that changes in the credit spread curve can be largely explained by Level, Slope, and Curvature (so long as we adjust for relative volatility levels).
• We construct stylized portfolios to reflect these factors, adjusting position weights such that they contribute an equal amount of credit risk. We then neutralize interest rate exposure such that
the return of these portfolios represents credit-specific information.
• We find that the Level trade suggests little-to-no realized credit premium over the last 25 years, and Slope suggests no realized premium of junk-minus-quality within credit either. However,
results may be largely affected by idiosyncratic events (e.g. LTCM in 1998) or unhedged risks (e.g. sector differences in credit indices).
In this week’s research note, we continue our exploration of credit with a statistical decomposition of the credit spread curve. Just as the U.S. Treasury yield curve plots yields versus maturity,
the credit spread curve plots excess yield versus credit quality, providing us insight into how much extra return we demand for the risks of declining credit quality.
Source: Federal Reserve of St. Louis; Bloomberg. Calculations by Newfound Research.
Our goal in analyzing the credit spread curve is to gain a deeper understanding of the principal drivers behind its changes. In doing so, we hope to potentially gain intuition and ideas for trading
signals between low- and high-quality credit.
To begin our, we must first construct our credit spread curve. We will use the following index data to represent our different credit qualities.
• Aaa: Bloomberg U.S. Corporate Aaa Index (LCA3TRUU)
• Aa: Bloomberg U.S. Corporate Aa Index (LCA2TRUU)
• A:Bloomberg U.S. Corporate A Index (LCA1TRUU)
• Baa: Bloomberg U.S. Corporate Baa Index (LCB1TRUU)
• Ba: Bloomberg U.S. Corporate HY Ba Index (BCBATRUU)
• B: Bloomberg U.S. Corporate HY B Index (BCBHTRUU)
• Caa: Bloomberg U.S. Corporate HY Caa Index (BCAUTRUU)
Unfortunately, we cannot simply plot the yield-to-worst for each index, as spread captures the excess yield. Which raises the question: excess to what? As we want to isolate the credit component of
the yield, we need to remove the duration-equivalent Treasury rate.
Plotting the duration of each credit index over time, we can immediately see why incorporating this duration data will be important. Not only do durations vary meaningfully over time (e.g. Aaa
durations varying between 4.95 and 11.13), but they also deviate across quality (e.g. Caa durations currently sit near 3.3 while Aaa durations are north of 11.1).
Source: Bloomberg.
To calculate our credit spread curve, we must first calculate the duration-equivalent Treasury bond yield for each index at each point in time. For each credit index at each point in time, we use
the historical Treasury yield curve to numerically solve for the Treasury maturity that matches the credit index’s duration. We then subtract that matching rate from the credit index’s reported
yield-to-worst to estimate the credit spread.
We plot the spreads over time below.
Source: Federal Reserve of St. Louis; Bloomberg. Calculations by Newfound Research.
Statistical Decomposition: Eigen Portfolios
With our credit spreads in hand, we can now attempt to extract the statistical drivers of change within the curve. One method of achieving this is to:
• Calculate month-to-month differences in the curve.
• Calculate the correlation matrix of the differences.
• Calculate an eigenvalue decomposition of the correlation matrix.
Stopping after just the first two steps, we can begin to see some interesting visual patterns emerge in the correlation matrix.
• There is not a monotonic decline in correlation between credit qualities. For example, Aaa is not more highly correlated to Aa than Ba and A is more correlated to B than it is Aa.
• Aaa appears to behave rather uniquely.
• Baa, Ba, B, and to a lesser extent Caa, appear to visually cluster in behavior.
• Ba, B, and Caa do appear to have more intuitive correlation behavior, with correlations increasing as credit qualities get closer.
Step 3 might seem foreign for those unfamiliar with the technique, but in this context eigenvalue decomposition has an easy interpretation. The process will take our universe of credit indices and
return a universe of statistically independent factor portfolios, where each portfolio is made up of a combination of credit indices.
As our eigenvalue decomposition was applied to the correlation matrix of credit spread changes, the factors will explain the principal vectors of variance in credit spread changes. We plot the
weights of the first three factors below.
Source: Federal Reserve of St. Louis; Bloomberg. Calculations by Newfound Research.
For anyone who has performed an eigenvalue decomposition on the yield curve before, three familiar components emerge.
We can see that Factor #1 applies nearly equal-weights across all the credit indices. Therefore, we label this factor “level” as it represents a level shift across the entire curve.
Factor #2 declines in weight from Aaa through Caa. Therefore, we label this factor “slope,” as it controls steepening and flattening of the credit curve.
Factor #3 appears as a barbell: negative weights in the wings and positive weights in the belly. Therefore, we call this factor “curvature,” as it will capture convexity changes in the curve.
Together, these three factors explain 80% of the variance in credit spread changes. Interestingly, the 4^thfactor – which brings variance explained up to 87.5% – also looks very much like a curvature
trade, but places zero weight on Aaa and barbells Aa/Caa against A/Baa. We believe this serves as further evidence as to the unique behavior of Aaa credit.
Tracking Credit Eigen Portfolios
As we mentioned, each factor is constructed as a combination of exposure to our Aaa-Caa credit universe; in other words, they are portfolios! This means we can track their performance over time and
see how these different trades behave in different market regimes.
To avoid overfitting and estimation risk, we decided to simplify the factor portfolios into more stylized trades, whose weights are plotted below (though ignore, for a moment, the actual weights, as
they are meant only to represent relative weighting within the portfolio and not absolute level). Note that the Level trade has a cumulative positive weight while the Slope and Curvature trades sum
to zero.
To actually implement these trades, we need to account for the fact that each credit index will have a different level of credit duration.
Akin to duration, which measure’s a bond’s sensitivity to interest rate changes, credit duration measures a bond’s sensitivity to changes in its credit spread. As with Treasuries, we need to adjust
the weights of our trades to account for this difference in credit durations across our indices.
For example, if we want to place a trade that profits in a steepening of the Treasury yield curve, we might sell 10-year US Treasuries and buy 2-year US Treasuries. However, we would not buy and sell
the same notional amount, as that would leave us with a significantly negative duration position. Rather, we would scale each leg such that their durations offset. In the end, this causes us to buy
significantly more 2s than we sell 10s.
To continue, therefore, we must calculate credit spread durations.
Without this data on hand, we employ a statistical approach. Specifically, we take monthly total return data and subtract yield return and impact from interest rate changes (employing the
duration-matched rates we calculated above). What is left over is an estimate of return due to changes in credit spreads. We then regress these returns against changes in credit spreads to calculate
credit spread durations, which we plot below.
Source: Federal Reserve of St. Louis; Bloomberg. Calculations by Newfound Research.
The results are a bit of a head scratcher. Unlike duration in the credit curve which typically increases monotonically across maturities, we get a very different effect here. Aaa credit spread
duration is 10.7 today while Caa credit spread duration is 2.8. How is that possible? Why is lower-quality credit not more sensitiveto credit changes than higher quality credit?
Here we run into a very interesting empirical result in credit spreads: spread change is proportional to spread level. Thus, a true “level shift” rarely occurs in the credit space; e.g. a 1bp change
in the front-end of the credit spread curve may actually manifest as a 10bp change in the back end. Therefore, the lower credit spread duration of the back end of the curve is offset by larger
There is some common-sense intuition to this effect. Credit has a highly non-linear return component: defaults. If we enter an economic environment where we expect an increase in default rates, it
tends to happen in a non-linear fashion across the curve. To offset the larger increase in defaults in lower quality credit, investors will demand larger corresponding credit spreads.
(Side note: this is why we saw that the Baa–Aaa spread did not appear to mean-revert as cleanly as the log-difference of spreads did in last week’s commentary, Value and the Credit Spread.)
While our credit spread durations may be correct, we still face a problem: weighting such that each index contributes equal credit spread duration will create an outsized weight to the Caa index.
DTS Scaling
Fortunately, some very smart folks thought about this problem many years ago. Recognizing the stability of relative spread changes, Dor, Dynkin, Hyman, Houweling, van Leeuwen, and Penninga (2007)
recommend the measure of duration times spread (“DTS”) for credit risk.
With a more appropriate measure of credit sensitivity, we can now scale our stylized factor portfolio weights such that each position contributes an equal level of DTS. This will have two effects:
(1) the relative weights in the portfolios will change over time, and (2) the notional size of the portfolios will change over time.
We scale each position such that (1) they contribute an equal level of DTS to the portfolio and (2) each leg of the portfolio has a total DTS of 500bps. The Level trade, therefore, represents a
constant 500bps of DTS risk over time, while the Slope and Curvature trades represent 0bps, as the longs and short legs net out.
One problem still remains: interest rate risk. As we plotted earlier in this piece, the credit indices have time-varying – and sometimes substantial – interest rate exposure. This creates an
unintended bet within our portfolios.
Fortunately, unlike the credit curve, true level shift does empirically apply in the Treasury yield curve. Therefore, to simplify matters, we construct a 5-year zero-coupon bond, which provides us
with a constant duration instrument. At each point in time, we calculate the net duration of our credit trades and use the 5-year ZCB to neutralize the interest rate risk. For example, if the Level
portfolio has a duration of 1, we would take a -20% notional position in the 5-year ZCB.
Source: Federal Reserve of St. Louis; Bloomberg. Calculations by Newfound Research.
Some things we note when evaluating the portfolios over time:
• In all three portfolios, notional exposure to higher credit qualities is substantially larger than lower credit qualities. This captures the meaningfully higher exposure that lower credit quality
indices have to credit risk than higher quality indices.
• The total notional exposure of each portfolio varies dramatically over time as market regimes change. In tight spread environments, DTS is low, and therefore notional exposures increase. In wide
spread environments – like 2008 – DTS levels expand dramatically and therefore only a little exposure is necessary to achieve the same risk target.
• 2014 highlights a potential problem with our approach: as Aaa spreads reached just 5bps, DTS dipped as low as 41bps, causing a significant swing in notional exposure to maintain the same DTS
The fruit of our all our labor is the graph plotted below, which shows the growth of $1 in our constant DTS, stylized credit factor portfolios.
What can we see?
First and foremost, constant credit exposure has not provided much in the last 25 years until recently. It would appear that investors did not demand a high enough premium for the risks that were
realized over the period, which include the 1998 LTCM blow-up, the burst of the dot-com bubble, and the 2008 recession.
From 12/31/2008 lows through Q1 2019, however, a constant 500bps DTS exposure generated a 2.0% annualized return with 2.4% annualized volatility, reflecting a nice annual premium for investors
willing to bear the credit risk.
Slope captures the high-versus-low-quality trade. We can see that junk meaningfully out-performed quality in the 1990s, after which there really did not appear to be a meaningful difference in
performance until 2013 when oil prices plummeted and high yield bond prices collapsed. This result does highlight a potential problem in our analysis: the difference in sector composition of the
underlying indices. High yield bonds had an outsized reaction compared to higher quality investment grade credit due to more substantial exposure to the energy sector, leading to a lop-sided
What is also interesting about the Slope trade is that the market did not seem to price a meaningful premium for holding low-quality credit over high-quality credit.
Finally, we can see that Curvature (“barbell versus belly”) – trade was rather profitable for the first decade, before deflating pre-2008 and going on a mostly-random walk ever since. However, as
mentioned when the curvature trade was initially introduced, the 4^th factor in our decomposition also appeared to reflect a similar trade but shorts Aa and Caa versus a long position in A and Baa.
This trade has been a fairly consistent money-loser since the early 2000s, indicating that a barbell of high quality (just not Aaa) and junk might do better than the belly of the curve.
It is worth pointing out that these trades represent a significant amount of compounding estimation – from duration-matching Treasury rates to credit spread durations – which also means a significant
risk of compounding estimation error. Nevertheless, we believe there are a few takeaways worth exploring further:
• The Level trade appears highly regime dependent (in positive and negative economic environments), suggesting a potential opportunity for on/off credit trades.
• The 4^th factor is a consistent loser, suggesting a potential structural tilt that can be made by investors by holding quality and junk (e.g. QLTA + HYG) rather than the belly of the curve
(LQD). Implementing this in a long-only fashion would require more substantial analysis of duration trade-offs, as well as a better intuition as to whythe returns are emerging as they are.
• Finally, a recognition that maintaining a constant credit risk level requires reducing notional exposure as rates go up, as rate changes are proportional to rate levels. This is an important
consideration for strategic asset allocation.
This post is available as a PDF download here.
• We continue our exploration of quantitative signals in fixed income.
• We use a measure of credit curve steepness as a valuation signal for timing exposure between corporate bonds and U.S. Treasuries.
• The value signal generates a 0.84% annualized return from 1950 to 2019 but is highly regime dependent with meaningful drawdowns.
• Introducing a naïve momentum strategy significantly improves the realized Sharpe ratio and drawdown profile, but does not reduce the regime-based nature of the returns.
• With a combined return of just 1.0% annualized, this strategy may not prove effective after appropriate discounting for hindsight bias, costs, and manager fees. The signal itself, however, may be
useful in other contexts.
In the last several weeks, we have been exploring the application of quantitative signals to fixed income.
• In Tactical Credit we explored trend-following strategies with high yield bonds.
• In Quantitative Styles and Multi-Sector Bonds we built off a prior piece (Navigating Municipal Bonds with Factors) and explored the cross-sectional application of momentum, value, carry,
reversal, and volatility signals in a broad fixed income universe.
• In Time Series Signals and Multi-Sector Bonds we explored the same momentum, value, carry, and reversal signals as market timing signals.
Recent cross-sectional studies also build off of further research we’ve done in the past on applying trend, value, carry, and explicit measures of the bond risk premium as duration timing mechanisms
(see Duration Timing with Style Premia; Timing Bonds with Value, Momentum, and Carry; and A Carry-Trend-Hedge Approach to Duration Timing).
Broadly, our studies have found:
• Value (measured as deviation from real yield), momentum (prior 12-month returns), and carry (yield-to-worst) were all profitable factors in cross-section municipal bond sector long/short
• Value (measured as deviation from real yield), trend (measured as prior return), and carry (measured as term spread + roll yield) have historically been effective timing signals for U.S. duration
• Prior short-term equity returns proved to be an effective signal for near-term returns in U.S. Treasuries (related to the “flight-to-safety premium”).
• Short-term trend proved effective for high yield bond timing, but the results were vastly determined by performance in 2000-2003 and 2008-2009. While the strategy appeared to still be able to
harvest relative carry between high-yield bonds and core fixed income in other environments, a significant proportion of returns came from avoiding large drawdowns in high yield.
• Short-term cross-section momentum (prior total returns), value (z-score of loss-adjusted yield-to-worst), carry (loss-adjusted yield-to-worst), and 3-year reversals all appeared to offer robust
signals for relative selection in fixed income sectors. The time period covered in the study, however, was limited and mostly within a low-inflation regime.
• Application of momentum, value, carry, and reversal as timing signals proved largely ineffective for generating excess returns.
In this week’s commentary, we want to further contribute to research by introducing a value timing signal for credit.
Finding Value in Credit
Identifying a value signal requires some measure or proxy of an asset’s “fair” value. What can make identifying value in credit so difficult is that there are a number of moving pieces.
Conceptually, credit spreads should be proportional to default rates, recovery rates, and aggregate risk appetite, making determining whether spreads are cheap or expensive rather complicated. Prior
literature typically tackles the problem with one of three major categories of models:
• Econometric: “Fair value” of credit spreads is modeled through a regression that typically explicitly accounts for default and recovery rates. Inputs are often related to economic and market
variables, such as equity market returns, 10-year minus 2-year spreads, corporate leverage, and corporate profitability. Bottom-up analysis may use metrics such as credit quality, maturity,
supply, and liquidity.
• Merton Model: Based upon the idea the bond holders have sold a put on a company’s asset value. Therefore, options pricing models can be used to calculate a credit spread. Inputs include the
total asset value, asset volatility, and leverage of the firm under analysis.
• Spread Signal: A simple statistical model derived from credit spread themselves. For example, a rolling z-score of option-adjusted spreads or deviations from real yield. Other models (e.g.
Haghani and Dewey (2016)) have used spread plus real yield versus a long-run constant (e.g. “150 basis points”).
The first method requires a significant amount of economic modeling. The second approach requires a significant amount of extrapolation from market data. The third method, while computationally
(and intellectually) less intensive, requires a meaningful historical sample that realistically needs to cover at least one full market cycle.
While attractive for its simplicity, there are a number of factors that complicate the third approach.
First, if spreads are measured against U.S. Treasuries, the metric may be polluted by information related to Treasuries due to their idiosyncratic behavior (e.g. scarcity effects and flight-to-safety
premiums). Structural shifts in default rates, recovery rates, and risk appetites may also cause a problem, as spreads may appear unduly thin or wide compared to past regimes.
In light of this, in this piece we will explore a similarly simple-to-calculate spread signal, but one that hopefully addresses some of these short-comings.
Baa vs. Aaa Yields
In order to adjust for these problems, we propose looking at the steepness of the credit curve itself by comparing prime / high-grade yield versus lower-medium grade yields. For example, we could
compare Moody’s Season Aaa Corporate Bond Yield and Moody’s Season Baa Corporate Bond Yield. In fact, we will use these yields for the remainder of this study.
We may be initially inclined to measure the steepness of the credit curve by taking the difference in yield spreads, which we plot below.
Source: Federal Reserve of St. Louis. Calculations by Newfound Research.
We can find a stronger mean-reverting signal, however, if we calculate the log-difference in yields.
Source: Federal Reserve of St. Louis. Calculations by Newfound Research.
We believe this transformation is appropriate for two reasons. First, the log transformation helps control for the highly heteroskedastic and skewed nature of credit spreads.
Second, it helps capture both the steepness andthe level of the credit curve simultaneously. For example, a 50-basis-point premium when Aaa yield is 1,000 basis points is very different than when
Aaa yield is 100 basis points. In the former case, investors may not feel any pressure to bear excess risk to achieve their return objectives, and therefore a 50-basis-point spread may be quite
thin. In the latter case, 50 basis points may represent a significant step-up in relative return level in an environment where investors have either low default expectations, high recovery
expectations, high risk appetite, or some combination thereof.
Another way of interpreting our signal is that it informs us about the relative decisions investors must make about their expected dispersion in terminal wealth.
Constructing the Value Strategy
With our signal in hand, we can now attempt to time credit exposure. When our measure signals that the credit curve is historically steep, we will take credit risk. When our signal indicates that
the curve is historically flat we will avoid it.
Specifically, we will construct a dollar-neutral long/short portfolio using the Dow Jones Corporate Bond Index (“DJCORP”) and a constant maturity 5-year U.S. Treasury index (“FV”). We will
calculate a rolling z-score of our steepness measure and go long DJCORP and short FV when the z-score is positive and place the opposite trade when the z-score is negative.
In line with prior studies, we will apply an ensemble approach. Portfolios are reformed monthly using formation ranging from 3-to-6 years with holding periods ranging from 1-to-6 months. Portfolio
weights for the resulting strategy are plotted below.
Source: Federal Reserve of St. Louis and Global Financial Data. Calculations by Newfound Research.
We should address the fact that while both corporate bond yield and index data is available back to the 1930s, we have truncated our study to ignore dates prior to 12/1949 to normalize for a post-war
period. It should be further acknowledged that the Dow Jones Corporate Bond index used in this study did not technically exist until 2002. Prior to that date, the index return tracks a Dow Jones
Bond Aggregate, which was based upon four sub-indices: high-grade rails, second-grade rails, public utilities, and industries. This average existed from 1915 to 1976, when it was replaced with a new
average at that point when the number of railway bonds was no longer sufficient to maintain the average.
Below we plot the returns of our long/short strategy.
Source: Federal Reserve of St. Louis and Global Financial Data. Calculations by Newfound Research. Returns are hypothetical and backtested. Returns are gross of all management fees, transaction
fees, and taxes, but net of underlying fund fees. Total return series assumes the reinvestment of all distributions.
The strategy has an annualized return of 0.84% with a volatility of 3.89%, generating a Sharpe ratio of 0.22. Of course, long-term return statistics belie investor and manager experience, with this
strategy exhibiting at least two periods of decade-plus-long drawdowns. In fact, the strategy really has just four major return regimes: 1950 to 1970 (-0.24% annualized), 1970 to 1987 (2.59%
annualized), 1987 to 2002 (-0.33%), and 2002 to 2019 (1.49% annualized).
Try the strategy out in the wrong environment and we might be in for a lot of pain.
Momentum to the Rescue?
It is no secret that value and momentum go together like peanut butter and jelly. Instead of tweaking our strategy to death in order to improve it, we may just find opportunity in combining it with a
negatively correlated signal.
Using an ensemble model, we construct a dollar-neutral long/short momentum strategy that compares prior total returns of DJCORP and FV. Rebalanced monthly, the portfolios use formation periods
ranging from 9-to-15 months and holding periods ranging from 1-to-6 months.
Below we plot the growth of $1 in our value strategy, our momentum strategy, and a 50/50 combination of the two strategies that is rebalanced monthly.
Source: Federal Reserve of St. Louis and Global Financial Data. Calculations by Newfound Research. Returns are hypothetical and backtested. Returns are gross of all management fees, transaction
fees, and taxes, but net of underlying fund fees. Total return series assumes the reinvestment of all distributions.
The first thing we note is – even without calculating any statistics – the meaningful negative correlation we see in the equity curves of the value and momentum strategies. This should give us
confidence that there is the potential for significant improvement through diversification.
The momentum strategy returns 1.11% annualized with a volatility of 3.92%, generating a Sharpe ratio of 0.29. The 50/50 combination strategy, however, returns 1.03% annualized with a volatility of
just 2.16% annualized, resulting in a Sharpe ratio of 0.48.
While we still see significant regime-driven behavior, the negative regimes now come at a far lower cost.
In this study we introduce a simple value strategy based upon the steepness of the credit curve. Specifically, we calculated a rolling z-score on the log-difference between Moody’s Seasoned Baa and
Aaa yields. We interpreted a positive z-score as a historically steep credit curve and therefore likely one that would revert. Similarly, when z-scores were negative, we interpreted the signal as a
flat credit curve, and therefore a period during which taking credit risk is not well compensated.
Employing an ensemble approach, we generated a long/short strategy that would buy the Dow Jones Corporate Bond Index and short 5-year U.S. Treasuries when credit appeared cheap and place the opposite
trade when credit appeared expensive. We found that this strategy returned 0.84% annualized with a volatility of 3.89% from 1950 to 2019.
Unfortunately, our value signal generated significantly regime-dependent behavior with decade-long drawdowns. This not only causes us to question the statistical validity of the signal, but also the
practicality of implementing it.
Fortunately, a naively constructed momentum signal provides ample diversification. While a combination strategy is still highly regime-driven, the drawdowns are significantly reduced. Not only do
returns meaningfully improve compared to the stand-alone value signal, but the Sharpe ratio more-than-doubles.
Unfortunately, our study leveraged a long/short construction methodology. While this isolates the impact of active returns, long-only investors must cut return expectations of the strategy in half,
as a tactical timing model can only half-implement this trade without leverage. A long-only switching strategy, then, would only be expected to generate approximately 0.5% annualized excess return
above a 50% Dow Jones Corporate Bond Index / 50% 5-Year U.S. Treasury index portfolio.
And that’s before adjustments for hindsight bias, trading costs, and manager fees.
Nevertheless, more precise implementation may lead to better results. For example, our indices neither perfectly matched the credit spreads we evaluated, nor did they match each other’s durations.
Furthermore, while this particular implementation may not survive costs, this signal may still provide meaningful information for other credit-based strategies. | {"url":"https://blog.thinknewfound.com/2019/07/","timestamp":"2024-11-07T04:03:59Z","content_type":"text/html","content_length":"721767","record_id":"<urn:uuid:5960d0d9-097d-4c2d-9b65-01bbc08954fc>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00347.warc.gz"} |
CMT Seminar: Interacting Hofstadter spectrum of bosons in an artificial gauge field - Department of Physics
CMT Seminar: Interacting Hofstadter spectrum of bosons in an artificial gauge field
Monday, January 31, 2011 – 3:15pm
Reiss 261A
Stephen Powell
University of Maryland
In the presence of a magnetic field, noninteracting particles on a lattice have a remarkable fractal spectrum, known as the “Hofstadter butterfly”. Motivated by experimental advances in the synthesis
of gauge potentials for ultracold atoms, we consider the superfluid phase of interacting bosons on a square lattice in the presence of a magnetic field. We show that superfluid order implies spatial
symmetry breaking, and predict clear signatures of many-body effects in time-of-flight measurements. We then develop a Bogoliubov expansion to describe the quasiparticle excitations, whose spectrum
combines the intricacy the Hofstadter butterfly with the characteristic features of the superfluid phase.
References: S. Powell, R. Barnett, R. Sensarma, and S. Das Sarma, Phys. Rev. Lett. 104, 255303 (2010); Phys. Rev. A 83, 013612 (2011)
Host: Chris Varney | {"url":"https://physics.georgetown.edu/cmt-seminar-interacting-hofstadter-spectrum-bosons-artificial-gauge-field/","timestamp":"2024-11-04T01:31:20Z","content_type":"text/html","content_length":"106050","record_id":"<urn:uuid:e307b237-273a-43ca-bbcc-8e20888be0d7>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00609.warc.gz"} |
Demystifying the Therefore Symbol: An Expert‘s Guide to Using ∴ in LaTeX – TheLinuxCode
Demystifying the Therefore Symbol: An Expert‘s Guide to Using ∴ in LaTeX
As an academic researcher and LaTeX enthusiast, I constantly rely on the \therefore command. That simple three-dotted triangle symbol (∴) allows me to elegantly and precisely convey logical reasoning
in my papers and technical documents.
But I remember first learning LaTeX and struggling to figure out how to insert a basic therefore symbol. Between installing new packages, applying the right syntax, troubleshooting errors, and
formatting issues, it was a challenge!
That‘s why I decided to write the ultimate guide to using therefore and other logic symbols in LaTeX. Whether you‘re a beginner typesetting your first research paper or an experienced LaTeX user
looking to expand your syntax, this tutorial aims to demystify \therefore usage and take your deductive arguments to the next level. Let‘s do this!
Why Therefore and Logic Symbols Matter in Technical Writing
First, a quick primer on why logic symbols like therefore are so invaluable for mathematical and technical writing:
Precise and Unambiguous – Symbols like ∴ encapsulate an entire concept ("therefore") into a compact symbolic form. This removes possible ambiguity of natural language.
Improves Readability – Symbols help break up long strings of text and make logical flow more apparent.
Logical Rigor – Mathematical and philosophical reasoning depends on logical connectives. Symbols accurately represent logic.
Language-Agnostic – Logic symbols can be understood internationally, independent of vocabulary or grammar of different languages.
Rich History – Symbols like therefore derive from centuries-old Aristotelian logic tradition, as we‘ll see below.
In fact, research shows LaTeX usage is rapidly growing in academic publishing and beyond:
Field % Papers Using LaTeX
Astronomy 90%
Mathematics 46%
Computer Science 39%
Physics 36%
Economics 32%
Table data source: arXiv LaTeX Usage
No wonder LaTeX and its suite of logic symbols are a must-have skill for STEM students and researchers!
Origins of the Therefore Symbol
Before we dive into LaTeX implementation, let‘s briefly appreciate where logic symbols like therefore originate from linguistically and mathematically:
Aristotelian Logic – therefore finds its roots in philosophical logic tracing back to Aristotle‘s work analyzing deductive arguments in ancient Greece.
Geometric Proofs – Mathematical proofs have relied on logic symbols like therefore since geometric theorems of Euclid and other ancient mathematicians.
Typographic Evolution – Printers and typographers gradually standardized visual symbols for logic words like therefore over centuries of publishing.
Modern Notation – Mathematicians and logicians continue to refine symbolic logic notation, but timeless symbols like ∴ remain indispensable.
So next time you type \therefore in LaTeX, keep in mind you are taking part in a rich logical tradition dating back millennia!
Using Therefore in LaTeX: Step-by-Step
Alright, time for the hands-on tutorial! Here are the key steps to start inserting beautiful therefore symbols into your own LaTeX documents:
1. Install the amssymb Package
The therefore command \therefore comes from the amssymb package. So first you‘ll need to install this package if it‘s not already in your LaTeX distribution.
Within the preamble of your document, include:
This makes all symbols from amssymb available, including:
• \therefore
• \because
• \And
• \Or
• \iff
• \implies
Check out the full amssymb documentation for dozens more logic, set, relation, and arrow symbols.
2. Insert a Basic Therefore
After importing amssymb, you can now use:
To insert a ∴ symbol anywhere in your document.
For example:
a &> b \\
\therefore \\
a+c &> b+c
Renders as:
a &> b \
\therefore \
a+c &> b+c
Simple as that! No more writing out long "therefore" statements.
3. Use Therefore in Math Environments
Logic symbols really shine when used in mathematical proofs and equations:
a = 2,\quad b = 3 \\
\therefore \\
a + b = 2 + 3 = 5
Renders as:
a = 2, \quad b = 3 \
\therefore \
a + b = 2 + 3 = 5
The therefore succinctly connects the steps while improving readability.
You can use \therefore anywhere within align, gather, multline and other math environments too.
4. Format Therefore Stylistically
To integrate \therefore seamlessly, it helps to apply some light formatting:
• Space above/below – Add blank lines before and after therefore for separation:
a = 2 \\
b = 4
• Manual spacing – Use \quad or \hspace{2em} to tweak spacing around the symbol:
\hspace{1em} \therefore \hspace{1em}
• Center therefore – Center the symbol with \centering for emphasis:
• Add punctuation – Include commas, periods, etc. based on syntax:
A > B, \\
\therefore, \\
C > D.
• Style variations – Change therefore style with \mathbf, \mathit, \mathsf, etc:
Get creative with small adjustments like these to integrate \therefore seamlessly!
5. Use Alternate Symbols Like \because
Amssymb provides alternatives like \because for more logical notation flexibility:
A = B \\
\because \\
B = C
Renders as:
A = B \
\because \
B = C
Other amssymb symbols like \iff, \implies, \given are handy for statements like equivalence, implication, assumptions etc.
6. Troubleshooting Therefore Errors
Here are some common \therefore errors and fixes:
• Undefined control sequence – Ensure amssymb package is installed/imported.
• Spacing issues – Check line breaks and formatting around therefore.
• Weird output – Recompile LaTeX document and ensure symbol fonts available.
• Alignment problems – Double check math environment syntax on adjacent rows.
• Blank symbol – Use \usepackage[latin1]{inputenc} for proper character encoding.
Still stuck? Post on LaTeX forum communities like TeX – LaTeX Stack Exchange or LaTeX-Community.org for help.
Comparing Therefore and Alternate Logic Symbols
Now that you‘re comfortable with \therefore, let‘s compare it with some other handy logic symbols available in amssymb.
Therefore vs Because
• Therefore – Denotes logical consequence ("A, therefore B")
A \\
\therefore \\
• Because – Denotes logical cause ("B, because A")
B \\
\because \\
• Therefore is more common in mathematical proofs, while because is useful for explanatory statements.
Therefore vs Iff (If and only if)
• Therefore – One-directional implication
A \\
\therefore \\
• Iff – Indicating equivalence ("A if and only if B")
A \iff B
• Iff is a stronger statement of logical equivalence in both directions.
Therefore vs Implies
• Therefore – Conclusion based on preceding statements
A, B \\
\therefore \\
• Implies – Conditional relationship ("A implies B")
A \implies B
• Implies is useful for indicating conditional hypotheses.
The context will determine which symbol suits your argument best!
Formatting Tips for Logic Symbols
To take your LaTeX logic notation to the next level, consider these formatting and typographic tips:
• Use a larger font for key logic symbols:
{\Large \therefore}
• Bold or italicize symbols to add emphasis:
{\bf \therefore}
• Add color for visual distinction:
{\color{blue} \therefore}
• Center logic statements for clarity:
A \\
\therefore \\
• Adjust symbol spacing for visual balance:
\quad \therefore \quad
• Use manual breaks (\) for multi-line statements:
A, B, C \\
\therefore \\
Apply these and other LaTeX formatting techniques as needed depending on context.
Logic Resources for Further Learning
Hopefully this guide has demystified \therefore usage for you. Here are some next-level resources for taking your LaTeX logic skills even further:
You now have all the tools to write flawless logical arguments in LaTeX! Let me know if you have any other questions – whether about therefore symbols or other aspects of LaTeX and mathematical
typesetting. I‘m always happy to help a fellow LaTeX enthusiast. Write logically out there! | {"url":"https://thelinuxcode.com/therefore-symbols-in-latex/","timestamp":"2024-11-06T02:33:47Z","content_type":"text/html","content_length":"175989","record_id":"<urn:uuid:4b53c06d-d648-4890-91cd-453c6b9b2f14>","cc-path":"CC-MAIN-2024-46/segments/1730477027906.34/warc/CC-MAIN-20241106003436-20241106033436-00822.warc.gz"} |
Dividing 2 Digit By 1 Digit Numbers Worksheet With Remainders
Dividing 2 Digit By 1 Digit Numbers Worksheet With Remainders act as fundamental tools in the world of maths, providing a structured yet flexible system for students to explore and grasp mathematical
ideas. These worksheets offer an organized strategy to understanding numbers, supporting a solid foundation whereupon mathematical effectiveness prospers. From the most basic counting workouts to the
details of advanced calculations, Dividing 2 Digit By 1 Digit Numbers Worksheet With Remainders deal with students of varied ages and ability degrees.
Introducing the Essence of Dividing 2 Digit By 1 Digit Numbers Worksheet With Remainders
Dividing 2 Digit By 1 Digit Numbers Worksheet With Remainders
Dividing 2 Digit By 1 Digit Numbers Worksheet With Remainders -
Step 1 Write the dividend the number being divided and divisor the number we re dividing by in the long division format with the dividend inside the division bracket and the divisor outside the
division bracket Step 2 Work on the dividend from left to right
Let students in grade 3 and grade 4 practice dividing 2 digit by 1 digit whole numbers with grids calculate quotients and remainders solve division word problems figure out the missing numbers
comprehend the relationship between multiplication and division and check your answers solve division riddles and more
At their core, Dividing 2 Digit By 1 Digit Numbers Worksheet With Remainders are cars for conceptual understanding. They envelop a myriad of mathematical concepts, directing students with the
labyrinth of numbers with a series of interesting and deliberate exercises. These worksheets transcend the borders of conventional rote learning, encouraging energetic engagement and fostering an
instinctive understanding of mathematical relationships.
Supporting Number Sense and Reasoning
Dividing A 2 Digit Dividend By A 1 Digit Divisor And Showing Steps A Division Worksheet
Dividing A 2 Digit Dividend By A 1 Digit Divisor And Showing Steps A Division Worksheet
Here you will find a selection of free Division sheets designed to help your child learn to do 2 digit long division The sheets are graded so that the easier ones are at the top Using these sheets
will help your child to divide a range of 3 4 and 5 digit numbers by two digits
Course 4th grade Unit 5 Lesson 1 Remainders Division FAQ Estimating division that results in non whole numbers Estimate to divide by 1 digit numbers Understanding remainders Introduction to
remainders Interpret remainders Divide with remainders 2 digit by 1 digit
The heart of Dividing 2 Digit By 1 Digit Numbers Worksheet With Remainders depends on growing number sense-- a deep comprehension of numbers' significances and interconnections. They encourage
expedition, welcoming students to dissect arithmetic procedures, understand patterns, and unlock the enigmas of sequences. Via provocative obstacles and rational puzzles, these worksheets come to be
portals to honing thinking skills, nurturing the analytical minds of budding mathematicians.
From Theory to Real-World Application
Division Worksheets 3rd Grade
Division Worksheets 3rd Grade
You can use this fun code hunter activity to get your children to practise dividing with remainders Children can scan the QR codes to self check division calculations involving 2 digit numbers and 1
digit numbers and remainders This fun resource encourages independence and supports using the formal short method of division The above video
Know more about Divide 2 Digit Numbers by 1 Digit Numbers with Remainder Horizontal Division Worksheet What will your child learn through this worksheet Students should use their knowledge of
multiples and remainders to divide numbers using the long division method once they have mastered their basic division facts
Dividing 2 Digit By 1 Digit Numbers Worksheet With Remainders serve as channels connecting theoretical abstractions with the apparent truths of day-to-day life. By infusing functional situations into
mathematical workouts, students witness the importance of numbers in their surroundings. From budgeting and measurement conversions to comprehending analytical information, these worksheets empower
trainees to wield their mathematical prowess beyond the boundaries of the classroom.
Varied Tools and Techniques
Adaptability is inherent in Dividing 2 Digit By 1 Digit Numbers Worksheet With Remainders, employing an arsenal of instructional tools to cater to varied discovering designs. Aesthetic help such as
number lines, manipulatives, and electronic resources act as companions in visualizing abstract ideas. This diverse technique makes certain inclusivity, accommodating learners with various
preferences, staminas, and cognitive styles.
Inclusivity and Cultural Relevance
In a progressively diverse globe, Dividing 2 Digit By 1 Digit Numbers Worksheet With Remainders welcome inclusivity. They transcend cultural limits, integrating instances and problems that resonate
with students from diverse histories. By incorporating culturally pertinent contexts, these worksheets promote an atmosphere where every learner feels stood for and valued, boosting their link with
mathematical concepts.
Crafting a Path to Mathematical Mastery
Dividing 2 Digit By 1 Digit Numbers Worksheet With Remainders chart a course towards mathematical fluency. They infuse perseverance, essential thinking, and analytical skills, vital attributes not
only in maths yet in various elements of life. These worksheets equip learners to navigate the intricate terrain of numbers, nurturing a profound admiration for the beauty and reasoning inherent in
Accepting the Future of Education
In a period marked by technological innovation, Dividing 2 Digit By 1 Digit Numbers Worksheet With Remainders effortlessly adapt to digital platforms. Interactive interfaces and electronic resources
boost traditional understanding, offering immersive experiences that go beyond spatial and temporal limits. This combinations of conventional methods with technical innovations declares an appealing
period in education and learning, fostering an extra dynamic and engaging learning atmosphere.
Final thought: Embracing the Magic of Numbers
Dividing 2 Digit By 1 Digit Numbers Worksheet With Remainders represent the magic inherent in mathematics-- a captivating journey of exploration, exploration, and proficiency. They transcend
traditional pedagogy, acting as drivers for stiring up the flames of inquisitiveness and query. Via Dividing 2 Digit By 1 Digit Numbers Worksheet With Remainders, students embark on an odyssey,
unlocking the enigmatic world of numbers-- one issue, one service, at once.
Dividing 1 Into 2 Digit Worksheet Have Fun Teaching
Dividing 2 Digit By 1 Digit Whole Numbers Worksheets Division Division With Remainders As
Check more of Dividing 2 Digit By 1 Digit Numbers Worksheet With Remainders below
2 Digit Divisor Worksheets
2 Digit By 1 Digit Division
Division By Two Digit Numbers Worksheets
Two Digit By One Digit Division With Remainders KidsPressMagazine
Divide 2 Digit Dividend By 1 Digit Divisor With Remainders KidsPressMagazine
2 Digit Into 3 And 4 Digit Division With Remainders Worksheet For 4th 5th Grade Lesson Planet
Dividing 2 digit By 1 digit Numbers Worksheets Math Worksheets 4 Kids
Let students in grade 3 and grade 4 practice dividing 2 digit by 1 digit whole numbers with grids calculate quotients and remainders solve division word problems figure out the missing numbers
comprehend the relationship between multiplication and division and check your answers solve division riddles and more
Long Division Remainders 2 digit By 1 digit K5 Learning
Division practice worksheets with 2 digit dividends and single digit divisors questions are in long division format Free Worksheets Math Drills Division Printable
Let students in grade 3 and grade 4 practice dividing 2 digit by 1 digit whole numbers with grids calculate quotients and remainders solve division word problems figure out the missing numbers
comprehend the relationship between multiplication and division and check your answers solve division riddles and more
Division practice worksheets with 2 digit dividends and single digit divisors questions are in long division format Free Worksheets Math Drills Division Printable
Two Digit By One Digit Division With Remainders KidsPressMagazine
2 Digit By 1 Digit Division
Divide 2 Digit Dividend By 1 Digit Divisor With Remainders KidsPressMagazine
2 Digit Into 3 And 4 Digit Division With Remainders Worksheet For 4th 5th Grade Lesson Planet
Division Of 2 Digit Numbers Worksheet Worksheet
Printable Division Worksheets For Teachers MATH ZONE FOR KIDS
Printable Division Worksheets For Teachers MATH ZONE FOR KIDS
Dividing 2 Digit By 1 Digit Year 4 Mattie Haywood s English Worksheets | {"url":"https://szukarka.net/dividing-2-digit-by-1-digit-numbers-worksheet-with-remainders","timestamp":"2024-11-09T01:03:17Z","content_type":"text/html","content_length":"27243","record_id":"<urn:uuid:7270fbf2-b5e9-4b93-87b7-9b9e5631b4c2>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00095.warc.gz"} |
American Mathematical Society
Jacques Tits (1930–2021)
Richard M. Weiss
Communicated by Notices Associate Editor Steven Sam
Figure 1.
Jacques Tits at age 19.
Jacques Tits was born in Uccle, a municipality of Brussels, on August 12, 1930, and died on December 5, 2021. The son of a mathematician, Tits displayed extraordinary mathematical ability at an early
age. He received his doctorate at the University of Brussels in 1950 and spent the following year at the Institute for Advanced Study. In 1964, he moved from the University of Brussels to a
professorship in Bonn, and then in 1973 to the Collège de France, where he remained for the rest of his career. For almost thirty years he held courses and seminars at the Collège de France and for
nineteen years, Tits was editor-in-chief of the Publications Mathématiques de l’IHES.
Tits made many fundamental contributions to our understanding of the structure of semisimple algebraic groups and finite simple groups and did more than anyone to explore and reveal the geometric
nature of these subjects.
When Tits was young, Chevalley had shown that semisimple algebraic groups over an algebraically closed field are classified up to isogeny by Dynkin diagrams. Tits made a careful study of various
structural features of groups defined over an arbitrary field, much of which appeared in a famous collaboration with Borel. Tits then used this work to show that semisimple algebraic groups over an
arbitrary field are uniquely determined by combinatorial data in the form of a Tits index (a Dynkin diagram endowed with certain decorations) and an anisotropic kernel, the two things knitted
together by Galois descent as described in his lecture notes from a meeting that took place in Boulder in 1966. This deep result is in the spirit of the theorem of Wedderburn that says that a simple
associative ring which is finite-dimensional over its center is isomorphic to for some division ring , and being analogs of the Tits index and the anisotropic kernel. Another important analog is the
theorem of Witt that says that a finite-dimensional quadratic form is uniquely determined by its anisotropic part and the dimension of its hyperbolic part.
Tits is best known for the theory of buildings. A building is a geometric structure defined by a few simple axioms involving a notion of dimension called the rank. The simplest example of a building
of rank is the projective space associated with a vector space of dimension . Further examples arise when the vector space carries a quadratic or Hermitian form. Together these are the buildings
associated to the classical groups.
Buildings have distinguished substructures called apartments. A building is spherical if its apartments are finite. A building is irreducible if it is not a direct product. The classical buildings
are all spherical and irreducible.
Tits introduced the notion of a BN-pair (also known as a Tits system) and used it together with the structural features revealed in his work with Borel to show that to every absolutely simple
algebraic group of positive -rank for a given field , there is an irreducible spherical building of rank on which the group acts. When is classical, then so is the building. These buildings are
particularly fascinating objects in the case when is exceptional.
In a celebrated volume of Springer Lecture Notes published in 1974, Tits gave the classification of irreducible spherical buildings of rank at least . His classification shows that every such
building is either classical or exceptional or belongs to a unique family defined over an imperfect field in characteristic .
Inspired by the work of Iwahori and Matsumoto, Tits went on to investigate affine buildings. Affine buildings are those in which the apartments have a natural representation as an affine space. Every
affine building of rank has a boundary which carries the structure of a spherical building of rank (and is called the building at infinity of ). Tits observed that for every absolutely simple
algebraic group of positive -rank for a field that is complete with respect to a discrete valuation, there is an affine building on which the group acts and that its boundary is precisely the
spherical building associated with . His famous lectures on the structure of absolutely simple groups isotropic over a local field and their affine buildings were delivered at a meeting in Corvallis,
Oregon in 1977. In two monumental volumes of the Publications Mathématiques de l’IHES (and in lecture notes from a conference on Lake Como in 1984), Tits and Bruhat completed the classification of
irreducible affine buildings of rank for . Central to this classification is the notion of a valuation of a root datum of the building at infinity.
Figure 2.
At the induction of Jacques Tits into the Order Pour le Mérite, Bonn, 1996.
Tits and Borel had shown that a semisimple algebraic group of positive -rank has a configuration of subgroups, which Tits called a root datum. Tits identified a corresponding property for spherical
buildings he called the Moufang condition in honor of Ruth Moufang, a pioneer in the study of projective planes. Every irreducible spherical building of rank at least satisfies this condition, and
every spherical building satisfying this condition possesses, in a suitable sense, a root datum.
An irreducible spherical building of rank is simply a connected bipartite graph in which every vertex has at least three neighbors and , where is the girth, is the diameter of the graph, and the
minimal circuits are the apartments. Tits called such graphs generalized -gons.
The residues of a building are certain distinguished subbuildings. Every building is, in a suitable sense, an amalgam of its irreducible rank residues; and if the building is spherical, then it is,
in fact, uniquely determined by these subbuildings. The proof of this was a crucial step in Tits’ classification result for spherical buildings.
There is now an enormous literature on the subject of generalized polygons, especially finite generalized polygons. Generalized polygons are, however, too numerous to classify (every projective plane
can be viewed as a generalized triangle, for example), but Tits observed that the irreducible rank residues of an irreducible spherical building of rank all satisfy the Moufang property as do all the
spherical buildings associated to an absolutely simple algebraic group of -rank . In 2001, Tits and Weiss classified generalized polygons that satisfy the Moufang property. They are almost all the
spherical buildings associated with an exceptional or classical group, but this time the list of exceptions is longer.
Affine buildings are -spaces uniquely determined by their boundary whenever the boundary satisfies the Moufang condition and the field is complete. It is this property that makes affine buildings a
subject of great interest in geometric group theory. It also points toward a possible connection to physics through the holographic principle.
Jacques Tits and Mark Ronan introduced and developed the notion of a twin building. This notion was inspired by Tits’ work on Kac–Moody groups which points to another possible connection with
physics. Tits also extended the Moufang condition to buildings of rank with the notion of a Moufang set. Moufang sets have proved to be an essential tool in the study of absolutely simple algebraic
groups of -rank .
Tits maintained a keen interest in the classification of finite simple groups as it unfolded. As a tool for identifying the finite groups of Lie type, spherical buildings played an essential role in
the classification. Later a theory of “diagram geometries” based on older ideas of Tits’ was introduced by Francis Buekenhout and others with the goal of including the sporadic groups in this
geometric picture. This led, in turn, to Tits’ “local approach” to buildings. Tits wrote papers on Griess’s construction of the monster and moonshine and on several other sporadic groups as well and
he proved the simplicity of , now called the Tits group.
Figure 3.
Jacques Tits wearing the medal of the French National Order of the Legion of Honor, Paris, 1995.
Tits introduced the notion of the Coxeter complex associated to a Coxeter group (and the term Coxeter group itself) including roots, projection maps, and other essential features of these complexes.
He proved fundamental results about the structure of the automorphism group of a tree, the simplest of all affine buildings. Tits analyzed geometric structures associated with the Suzuki and Ree
groups, showed that these groups are classified by “Tits endomorphisms” of the corresponding field, and proved their simplicity even when the field is imperfect.
In 1970, Tits proved that in characteristic , every finitely generated linear group contains either a solvable subgroup of finite index or a non-abelian free group. Now known as the Tits alternative,
this result has inspired a host of generalizations.
In 1964, Tits proved a remarkable result about the simplicity of the subgroup of the group of rational points of a -simple algebraic group generated by certain unipotent elements. Some remarks in
this paper gave rise to the Kneser–Tits conjecture. His 1968 paper on quadratic forms became the starting point of the Book of Involutions. In 1971, Tits determined all the -irreducible linear
representations of a reductive group over an arbitrary field. He also took the first steps in the theory of pseudo-reductive groups.
In what is now known as the Tits-Kantor-Koecher construction, Tits obtained Lie algebras from arbitrary Jordan algebras. He devised the Freudenthal-Tits magic square which forges a Lie algebra out of
a composition algebra and a degree Jordan algebra, yielding all exceptional Lie algebras if the field is algebraically closed. He also produced the first and second “Tits constructions” which play a
central role in the structure theory of Jordan algebras.
This brings to a close our attempt to name the highlights of Tits’ mathematical career, but no brief summary can encompass them all, nor can we sufficiently describe the influence Tits’ mathematics
has had on group theory and all its many neighboring disciplines.
Figure 4.
Jacques and Marie-Jeanne in Oslo, 2008.
Jacques Tits and I met for the first time in the early 1970s in Bonn. Friedrich Hirzebruch, who had convinced him to take a position there, introduced me to him in the tea room on the ground floor of
Beringstrasse 1. This remains a special memory for me because of the many precious opportunities I had to meet him later in my life.
Indeed, after I became director of the IHÉS in 1994, we developed a close and trusting relationship, something I am highly grateful for because of the thoroughness with which he approached questions
I posed to him. His sense of humour and his gentle way of talking to people were legendary.
Jacques Tits was one of the very first visitors to IHÉS, shortly after its creation in 1958. He lectured several times at the Institute’s first location Rond-Point Bugeaud, near Place de l’Étoile in
the heart of Paris’s XVI arrondissement. Here is what he wrote in a letter to Léon Motchane, the founder of IHÉS and its first director, dated 15 July 1961: “Pour autant que l’on puisse juger de son
propre travail, je compte les deux séjours que j’ai faits à l’Institut des Hautes Études Scientifiques parmi les périodes les plus productives de ma carrière scientifique.”
Later, when the Institute had moved to Bois-Marie in Bures-sur-Yvette, where it still is, Jacques came for several long visits, staying with his wife Marie-Jeanne at the Ormaille Résidence.
Figure 5.
Announcement of Tits’ first lecture at IHES.
Jacques Tits was asked to become editor-in-chief of the Publications Mathématiques de l’IHÉS in 1980, succeeding Jean Dieudonné who had held the position for 20 years. He was an extraordinarily
dedicated editor-in-chief. In the interview Pierre Deligne gave on the occasion of the conference held in 2000 to honour JacquesFootnote^1 when he retired from the Collège de France and shortly
after having left the helm of the Publications Mathématiques, Pierre Deligne says: “C’était une situation idéale. Tits faisait énormément de travail…Il était un despote éclairé. Il jouait son rôle
parfaitement et savait prendre des décisions quand il fallait mais il consultait d’abord.” The advice Jacques Tits gave to Étienne Ghys when he took over the editorship from him is revealing: “Vous
savez, c’est très facile, il suffit d’aimer la revue.”
The movie shot on this occasion by Jean-François Dars and Anne Papillault on behalf of the CNRS is entitled “A Jacques Tits.” It can be found at: https://images.cnrs.fr/video/1168.
His extremely careful checking of all articles to be published there contributed certainly to the high recognition the journal enjoys in the mathematical community. During his editorship, the journal
attracted a number of landmark articles, some of them quite long.
On a number of occasions during my time as Director of IHÉS, I reached out to him for advice on issues related to scientific initiatives IHÉS should take concerning mathematicians to invite or events
to organize. His in-depth knowledge of the mathematical community at a high level was very valuable. Earlier, he had been helpful in securing some financial support for the IHÉS from the Belgian
His later years were not easy due to a difficult health condition which confined him to his apartment. At the end of his life, Tits needed care the around the clock, but his sense of humor persisted
in spite of all the hardships and his mind remained clear and agile.
Several mathematicians paid him regular visits. Jean-Pierre Serre would come to see him every three weeks more or less. I accompanied Misha Gromov on some visits, as Jacques appreciated my providing
some “translation” of what Misha said.
His funeral was very simple. Besides the testimony of a family representative, Étienne Ghys and Michel Broué presented accounts of their admiration for him. I had the privilege of reading short
testimonies received from five Fields medallists. Jacques Tits made the Foundation Hugot of the Collège de France his sole legatee.
Jean-Pierre Bourguignon is the Nicolaas Kuiper Honorary Professor at IHÉS. His email address is jpb@ihes.fr.
It has been my good fortune that Tits was a professor at the ULB (Université Libre de Bruxelles) in the early sixties. He gave me two crucial pieces of advice: “Do what you want” and “Go to Paris.”
The latter was easier said than done. Tits made it possible by introducing me to Grothendieck (at the Fall 1964 Bourbaki seminar), who, together with his colleagues, enabled me to become
“pensionnaire étranger” at the ENS (École Normale Supérieure in Paris).
Every Thursday afternoon during one of my last years of high school, I would bicycle to the ULB to attend Tits’ course on Lie groups. I vividly remember the day he wanted to define the adjoint group.
He began a pedestrian proof that the center is an invariant subgroup, then stopped to say (rough translation): “In fact, this is obvious. As I can define the center, it is stable by any automorphism,
hence by inner automorphisms.” For me, this interrupted proof was a revelation of the power of “transport of structures.” It also shows how symmetry was never far from his mind.
I also fondly remember the day when I bicycled to his house in the pouring rain to show him some mathematics, and how, arriving unannounced, I was warmly received–and dried–by him and Marie-Jeanne.
In 1964, Tits left Brussels for Bonn. It was only in 1973 that I again saw him regularly. Attending his course at the Collège de France was one of the highlights of my week.
Tits was a perfectionist. When he succeeded Dieudonné as editor of the Publications Mathématiques de l’IHÉS, he devoted a lot of energy to it, but he enjoyed the result, and the beautiful typography.
Tits resigned when the composition was computerized.
He had a great interest in languages. He learned Japanese to better enjoy his visits and Chinese to read classical poetry. In French, he suffered when I failed to use a needed imperfect subjunctive,
and regularly chided me for my Belgicisms, correcting my “rouler en vélo” to “rouler à vélo.”
He and Marie-Jeanne were inseparable. When walking became difficult, he leaned on her. Her death was a shock from which he did not recover.
His death, during the covid epidemic, came unexpectedly. I could only find some solace by reading from his Collected Works, where his spirit remains.
Figure 6.
The Tits family: Léon (nicknamed Pap), Jacques (Yaak), Jean (Coc), Yvonne (Vonne), Ghislaine (Ghaine), and Louisa (Mam).
Pierre Deligne is professor of mathematics at the Institute for Advanced Study in Princeton. His email address is deligne@ias.edu.
As a thesis advisor, Jacques Tits was always supportive and benevolent to me, and I benefited immensely from his approachable demeanor and generous personality. In our infrequent work sessions, I had
the privilege to witness the workings of his mind and to appreciate his unfailing, often self-deprecating, sense of humor. Even though the problem he suggested to me was purely algebraic, his line of
thought was infused with geometric insights.
This unique opportunity bestowed on a student in Belgium by an illustrious mathematician from Bonn University who was about to move to the Collège de France was a result of Tits’ attachment to his
country of birth. While he lived abroad, he regularly returned to visit not only his family, but also his colleagues in the mathematics department of the Université Libre de Bruxelles, from which he
had graduated and which had offered him his first position. Tits had to become a French citizen in order to take his chair at the Collège de France, but he kept an enduring connection with Belgium.
He once recounted that on an official visit at the Collège de France the French president Valéry Giscard d’Estaing asked him where he came from (“Et vous, d’où sortez-vous ?,”) expecting as a reply
the name of any of the prestigious French grandes écoles. Tits replied: “Er… from Belgium.”
Jacques Tits was the brother of my maternal grandfather. I met him on rare occasions, but my mother Janine Tits was his closest relative during the latter period of his life. Together with her
brother André Tits, she was of great help in gathering the family memories that are shared in this contribution.
Born on August 12, 1930 Jacques was a lively, joyful child, curious about everything. He lived in the family home at 21 Avenue Victor-Emmanuel III in Uccle, Belgium, with his parents, his older
brother Jean, and his older sisters Ghislaine and Yvonne. As a child, he dreamt of becoming a tramway driver: he loved watching the driver doing his thing. Jacques’ father, Léon Tits (born in
February 1880), was employed as an assistant in the mathematics department at Université Catholique de Louvain (UCL). At the time, he was a Catholic priest, like most professors and many assistants
back then at UCL. By 1914 though, he was in disagreement with the clergy. He left the priesthood and was forced to resign from the university. The Catholic Church made it difficult for him to find
employment elsewhere. His parents and many of his relatives rejected him as well. In 1917, he married Louisa André, a remote cousin, a warm, honest person, who worked as a piano teacher. The family
lived happily, though with limited means. Léon died of Parkinson’s disease in 1943, in the midst of World War II. After Léon’s passing, Jacques’ older brother Jean became the family’s breadwinner and
took over his father’s private tutoring.
In 1941, Jean was starting as an engineering student at the Université Libre de Bruxelles (ULB). As Jacques would hear his dad and brother discuss integration, he wanted to understand. It was shortly
before Léon’s death, when Jean told his bed-ridden father “now I know” that Jacques is truly exceptional. Jacques started to teach university-level mathematics to his brother’s classmates who were
encountering difficulties. He soon decided, on the encouragement of his mathematics teacher, Charles Nootens, to attempt the entrance examination to the ULB’s engineering school.
In preparation for the entrance examination, Jacques had to learn trigonometry, so Jean lent him his 60-page textbook. The next day, Jacques knew it all. His secret: start from the end; if you
understand the end, you can reconstruct the rest. He passed the entrance examination with flying colors. At age 14, he entered the ULB as a student in mathematics.
In his third year at the ULB, under the direction of Paul Libois, he obtained startling results in algebra, in particular on -transitive groups. He obtained his bachelor’s degree at age 18, then his
PhD before age 20. I recall seeing a framed newspaper article at my grandparents place praising his accomplishment as the youngest Belgian Doctor of Science.
In 1949, Jacques met then-Princeton mathematician Emil Artin at the Colloque d’algèbre et théorie des nombres in Paris. Artin invited him to visit Princeton. This would be Jacques’ first trip outside
of Europe.
Also in 1949, Jacques’ first niece, my mother Janine, was born. “Yaak” became “Oncle Yaak.” He loved taking care of Janine. In his late years, he confided to Janine that he would have loved to have
children of his own and that he considered her as his daughter.
In 1953, in Rome, Jacques met Marie-Jeanne Dieuaide, herself an FNRS Fellow from Belgium. Her field was history. Jacques and Marie-Jeanne were housed in the same dormitory building. Marie-Jeanne
later confided to us that, before Jacques’ arrival, she had joked with other FNRS fellows: “A mathematician is joining us? I hate math! Too serious and boring for me!” She soon changed her mind and
proceeded to make him see other horizons. Marie-Jeanne and Jacques got married in Brussels on September 8, 1956.
Figure 7.
Marie-Jeanne and Jacques, Rome, 1953.
From 1956 to 1962, Jacques taught extensively at the ULB. In 1964, Jacques and Marie-Jeanne left for Bonn, where they would remain for ten years. Jacques kept close contacts there for the remainder
of his life.
A close friendship had been formed between Jacques and Jean-Pierre Serre. Serre wished to have Jacques with him at the Collège de France, and succeeded after Jacques changed his citizenship to
French, at that time a requirement for obtaining a professorial position at the Collège de France. In 1975, Jacques gave his inaugural lecture at the Collège de France. This lecture was addressed to
a general audience, and Jacques succeeded in making it seemingly understandable, even exciting, to the “person in the street,” bringing to life the central role played by symmetry in mathematics.
In 2008, several family members had the privilege of attending the Abel Prize award ceremony in Oslo. Jacques, in a wheelchair, peppered his speech with the humorous touch that characterized him.
In the latter portion of his life, Jacques had health problems. Multiple times, Marie-Jeanne contacted Janine, head-pharmacist at Verviers Hospital, asking her to consult with Jacques’ doctors
concerning his ailments. Several times Janine, in close consultation with her cousin Claude, had both Jacques and Marie-Jeanne urgently hospitalized in Paris. In spite of all these travails, Jacques
never complained. Always accepting his fate, smiling, full of great charm and humor, he had an amusing anecdote for everyone.
My wife Tatiana Sirbu recently accompanied my mother to Paris. Originally from Moldova, she speaks fluent Russian. Jacques wanted to hear about her home country, her youth in the USSR, her current
research work on deportations and transfers of populations during the Soviet era; they even had long conversations in Russian together. That day Jacques was especially witty. Beside being fluent in
English and German, he could converse in Russian and Italian, and was in the process of learning Spanish (he wanted to read Don Quixote in the original) from his then chief homecare person, Madame
Rodriguez, plus an Assimil book. He also studied Chinese and Japanese. He once confided to my mother and André that he still had some to-be-written mathematics papers in his head.
Jacques and Marie-Jeanne never had children. Their child was their research, their life was the Collège de France. An idea emerged: Would Jacques bequeath his entire estate to the Collège de France,
specifically to its Fondation Hugot? Jacques was delighted at such a thought. Jean-Pierre Serre contacted the Fondation Hugot and soon Jacques wrote a will, before two witnesses: his dear friends
Jean-Pierre Serre and Jean-Pierre Bourguignon (then President of the European Research Council). This being settled, Jacques was serene. He received the promise that he would never have to leave his
apartment, and Florence Terrasse-Riou, director of Fondation Hugot, told him that, down the road, his apartment would remain the “Apartment Jacques Tits” and would be made available as housing for
Jacques never showed interest in using new technologies. He lived without a TV or even a radio; newspapers, magazines, and books were sufficient for him. For his 91st birthday (on August 12, 2021),
André offered him a laptop so that, with the help of Stéphanie, his chief homecare person at the time, he could read emails we sent him and interact with us on Skype.
On December 1, 2021, Janine had a pleasant Skype conversation with Jacques and Stéphanie. Jacques sent her a virtual kiss. He would leave us four days later at dawn.
Always generous, charming, smiling, and joking, Jacques expressed interest for all things. We keep from him enchanted memories, a life model for future generations.
Figure 8.
Jean-Pierre Bourguignon, Jean-Pierre Serre, Jacques Tits, Florence Terrasse-Riou, Claude Piret, and André Tits in the Tits’ apartment in Paris, 2017.
Pierre-Antoine Absil is professor of applied mathematics at the University of Louvain. His email address is pa.absil@uclouvain.be.
Jacques Tits was born in 1930 as the youngest in a family of four surviving children. As a child, he played a lot with his sister Yvonne, who preceded him by eighteen months. The two felt like twins.
They kept this special complicity throughout life. There was a mathematics gene in the family. Jacques’ father was a high school mathematics teacher. He taught Jacques how to calculate at the age of
four. Jacques made rapid progress and skipped grades in elementary school. His father quickly realized his uncommon mathematical gift and did his best to develop it. Unfortunately, he died as Jacques
was approaching thirteen. Jacques found his own way to help his mother to make ends meet. He gave lessons in mathematics to students preparing for the entrance exam to the Faculty of Applied Sciences
at the Université Libre de Bruxelles (or ULB). He took that opportunity to take the exam himself. Jacques came out first in the exam, and this allowed him to start early working on a bachelor’s
degree in mathematics at the Faculty of Sciences of the ULB. He got his BA at eighteen. Two years later, he defended a PhD thesis prepared under the direction of Paul Libois, who had been his
geometry professor throughout his studies. After that, he obtained postdoctoral support from the science foundation in Belgium (FNRS). This gave him the opportunity to present his habilitation and to
start an academic career at the ULB. In particular, he assisted Paul Libois by contributing to the teaching of the projective geometry course for second-year students in mathematics.
I became a student at the ULB in 1950. Projective geometry was one of the courses I had to take and Jacques Tits was the professor. I enjoyed his very personal style of teaching. In the tradition of
Enriques as later developed in a two-volume book by Veblen and Young, one usually started from the axiomatics of the projective plane and deduced its main properties. After this one climbed a
dimension higher and if time allowed one reached general projective space. Jacques Tits began with the projective line and the characterization of the group of projectivities on the line among triply
transitive groups. In dimension 2, he introduced the nearly fourfold transitive groups and again established the relationship with the group of projectivities of the projective plane. Then he could
deduce the traditional properties more easily. In spite of the difficulty of the material, he succeeded in keeping his lectures understandable to the students. We owed this to his very communicative
style of teaching, which was very different from the distant manner adopted by most professors at that time in universities. His eyes sparkled with intelligence and above all he radiated kindness.
Figure 9.
Franz Bingen and Jacques Tits at the wedding of Tits’ sister Yvonne and Bingen’s brother Roald in Brussels, June 19, 1954.
Three years later, I met Jacques again in very different circumstances. My brother Roald went to a summer camp in Austria. Holidays did not go according to plan for him, as an angina forced him to
take to his bed. One young lady paid special attention to him during his convalescence. She was, by a nearly miraculous coincidence, Jacques’ sister Yvonne. She and Roald soon became engaged and they
married on June 19, 1954. At the wedding, Jacques (24 years old) and myself (two years younger) were of course present. We raised our glass to the health of the newlyweds and also, to a certain
extent, to a new friendship that would last for the rest of our lives. Obviously, Jacques felt very comfortable in our family. We were soon separated, however, by our various scientific stays abroad.
The early sixties was an exciting time at the ULB. Jacques became a full professor, while I was appointed lecturer in the Flemish section of the university. We found it important, at a time when the
number of students, and therefore of professors, researchers, and assistants, was growing significantly, to activate research in mathematics at the university. Our contribution took the form of a
seminar devoted to a current scientific subject. The first year was devoted to Banach algebras. Jacques wanted to better understand the link between a commutative Banach algebra with unit and its
compact spectrum. Lucien Waelbrouck, who had studied continuous inverse algebras, was playing the third wheel at the seminar, which was rapidly named the BTW seminar (BTW is the acronym for value
added tax in Dutch). Through Georges Papy, we got to know another young mathematical prodigy, Pierre Deligne, still on the benches of secondary school. To interest him, we oriented the second and
third seminars towards algebraic geometry and Lie algebras, this time with the help of Guy Valette and Firmin Bratslavski, two geometers.
In 1964, Jacques Tits obtained a chair of mathematics at the University of Bonn, better tailored to his mathematical interests than his assignment in Brussels. Jacques came to Belgium regularly to
visit family, in particular his dear sister Yvonne and her children. Here is how Christine, a daughter of Yvonne, describes her relationship with her uncle: “…for us, Uncle Jacques was above all this
super-funny uncle, extremely simple and kind, who came to visit us once a year according to the availability of his conference life, staying with us for the weekend. He told us extraordinary stories,
experienced during his travels around the world. Magical moments for the children that we were, where he had this mischievous side, disarming with candor alongside an immense sweetness. This is the
image that, I am sure, my sister and brother will keep with me of this uncle we loved a lot and who made us laugh and dream so much.”
Much, much later, around 2008, I started meeting him again, this time in his apartment in Paris. My wife and myself went several times a year to the ballet at the Paris Opera and always took the
opportunity to visit Jacques and his wife Marie-Jeanne Dieuaide. Jacques had developed Parkinson’s disease. He had his Complete Works on his bedside table. He leafed them through with us and asked
for the latest news in his family. Jacques passed away peacefully in December 2021. His friends retain the image of a brilliant mathematician with a charming personality.
Figure 10.
Hendrik Van Maldeghem, Gopal Prasad, Pierre-Emmanuel Caprace, Jef Thas, Bertrand Rémy, Jean-Pierre Serre, Ernie Shult, Bernhard Mühlherr, Jacques Tits, Francis Buekenhout, Marie-Jeanne Tits, Richard
Weiss, and Mark Ronan at a colloquium in honor of Jacques Tits’ 75th birthday at Ghent University, October, 2005.
Franz Bingen is professor emeritus of mathematics at the Free University of Brussels. His email address is fbingen@vub.be.
I would just like to tell here how sad many of us are, who have known Jacques Tits professionally and personally. A peculiar intuition, a source of exceptional ideas, an original and quite productive
point of view, and even a kind of library, have disappeared. This is quite a loss.
But the first feeling which comes to my heart when I think of him is: kindness. To chat with him was always pleasant, reassuring, quiet. He also expressed — I am not sure I find the right word — a
kind of unusual modesty. A kind of modesty always spiced up with a soft and constant sense of humor. Once he was giving a lecture at the Bourbaki Seminar, and at one point he had to mention a theorem
known all around the world as “Tits’ Theorem;” he talked about “le théorème de moi.”
He had been very precocious, defending the equivalent of a Habilitation at the age of 20. Years later, he explained to me that the main hardship for a mathematician is always to understand (and to
smoothly accept) that one day you will find someone quicker, brighter, “better.” Once this is accepted, a mathematician’s life is marvellous, he added.
He was both attached to traditions and opened to rational reasons (he was a mathematician…). A student of mine submitted a “Note aux Comptes-Rendus de l’Académie des Sciences” which was good. But the
author insisted on writing “je définis,” “je démontre,” etc. and at first Tits would not accept this. He wanted the usual “nous définissons,” “nous démontrons,” etc. The student insisted that no one
else but he had defined and proved, and he added that only the late Kings of France would speak of themselves with “nous.” Tits accepted “je.”
The Collège de France was profoundly renovated at the end of the last century. One day the room where he was supposed to deliver the first lecture of his annual course was unavailable, and there were
signs on the main door which directed the audience to another room. Tits arrived from the rear and did not see the signs. “Voilà, personne pour mon cours, je savais que cela arriverait un jour” was
what he immediately thought, and when he eventually found the right room he was still pale. Needless to say, though, that Jean–Pierre Serre, among others, never missed one of his lectures.
Figure 11.
Michel Dehon and Jacques Tits at the conclusion of a talk by Tits in Ghent, October 25, 1979.
The late seventies were an exciting time to study mathematics in Brussels. The two universities (the French-speaking one, ULB, and the Dutch-speaking one, VUB) were sharing the same campus, and there
was intensive collaboration between the two mathematics departments, with a number of professors enjoying dual affiliations. People like Jean Bourgain and Ingrid Daubechies were active at VUB, and at
ULB we enjoyed regular visits of extraordinary alumni like Pierre Deligne and most frequently Jacques Tits. We were lucky, as undergrads, to have two young geometry teachers, Francis Buekenhout and
Jean Doyen, who strongly encouraged us to attend research seminars. So, from my 4th and final undergraduate year (1979–1980), I enjoyed following Tits’ seminar talks, in Brussels and in Ghent. Even
if I did not always have the prerequisites, I was always impressed by his clarity, and there was always something deep to extract from his beautiful lectures. That same year, in spring 1980, my
mathematical inclinations were leading me towards operator algebras; I applied for a PhD thesis scholarship from the Belgian Fund for Scientific Research (FNRS), and I was lucky to get it.
Simultaneously, my official thesis supervisor Lucien Waelbroeck had a severe accident that kept him away from academia for a full year. So I found myself in the embarrassing situation of having a
scholarship but no supervisor. To help me out of this unpleasant situation, Buekenhout arranged an appointment for me with Tits. I was extremely intimidated, and trying to make me more comfortable
Tits said, waving his hands about 60 cm from each other: “Oh but I know you, you were like that first time I saw you.” He was alluding to the fact that he met me as a baby boy, back in 1959, when my
father Guy Valette was doing his PhD thesis with him. (My father, born in 1934, was Tits’ first PhD student.) Even more intimidated, I nevertheless succeeded in explaining my thesis project. Tits
exclaimed: “Young man, if you want to do operator algebras today, there is one saving grace: go to Paris and work with Alain Connes!” With the recklessness of youth, I went to find Connes in Paris
and indeed became his unofficial PhD student. Two years later Connes got the Fields medal. In retrospect, Tits gave me the best advice in my career.
Since my thesis was on -algebras associated with real or -adic simple Lie groups, I frequented group theory conferences where I would occasionally meet Tits. Sometimes I had the good fortune to be
invited to his table for lunch or dinner and got to experience how sweet and gentle he was, but also how funny and witty he could be, with a typically Belgian sense of self-mockery. Tits’ style of
writing was akin to his style of lecturing: a model of clarity and exposition. Un grand monsieur.
Figure 12.
Tits lecturing at the 80th birthday conference for Tits’ advisor Paul Libois at the Université Libre de Bruxelles, April 1, 1981.
Alain Valette is professor of mathematics at the University of Neuchâtel. His email address is alain.valette@unine.ch.
Jacques Tits was a kind and witty friend. I met Jacques and Marie-Jeanne in the fall of 1966. He was a visiting professor at Yale and I, a beginning graduate student. I did not have the prerequisites
to attend his course on algebraic groups but was looking for the opportunity to speak French. They were both extremely kind with a good understanding of what belonging to a French-speaking minority
meant. This developed into a life-long friendship. A few years later while they were visiting Ottawa, I introduced my future wife to them. I had tried to explain to her Jacques’ mathematical
importance without much success until I told her he received his PhD at 19. She asked Jacques if this was true. “Yes.” “But you are a genius!” His answer was yes but with a connotation of there are
things in life that can’t be helped. Marie-Jeanne looked aghast and said “Jacques!” All three of us broke into laughter and Lise looked nonplussed. What made it so funny was that it was really out of
At the 1974 ICM in Vancouver, he began his talk with: Pick a group. Any group. Your favorite group. Let’s say . Looking around at those who were laughing or smiling, you could tell who would enjoy
the talk. In 1988–89, we spent a sabbatical in Paris. Marie-Jeanne was helpful in finding us a place to stay. Early on, Jacques made the rounds of the mathematical libraries to introduce me to the
librarians. I thought, what a waste of time, we could have discussed math instead. But, of course, he knew what he was doing. Without his personal intervention, I would not have been allowed to use
these institutions.
Tits was proud to be a foreign member of the German order Pour le Mérite founded by Frederick the Great. There are no more than 40 German members and 40 foreign ones.
Figure 13.
Tits teaching at the Université Libre de Bruxelles his first year as an Assistant Professor, 1957.
In the preface of his book 2, Nathan Jacobson wrote: “I am greatly indebted to Jacques Tits who took time off from his own important researches on algebraic groups to derive, via the theory of
algebraic groups, the elegant constructions of exceptional Jordan algebras which we have given in Chapter IX.”
Jacques and Marie-Jeanne were inseparable. Her illness and death were the great tragedy of the end of his life. When they spent a few weeks in Ottawa, I rented a two-bedroom suite in a nearby hotel.
The staff cleared one of the bedrooms and installed two banquet tables side by side so they could work together. In our conversations, “What are you reading?” was a frequent question. Once Jacques
answered “We are rereading Proust.” “Are you reading the same thing?” “Of course. We read in bed; one of us reads aloud until we feel sleepy.”
Michel Racine is professor emeritus of mathematics at the University of Ottawa. His email address is mracine@rogers.com.
When I think of Jacques Tits, I think of a kind and generous person. Among the leaders of French mathematics of his generation, he stands out as the one who saw value in what I was doing, and took
steps to further my career.
I met Jacques somewhat by accident, for me a very happy accident. We both spent 1971–72 at IAS and we both had apartments in the IAS visitors apartment cluster. My walk to Fuld Hall took me past the
Tits’ apartment. Apparently, my whistling while walking by attracted the attention of Jacques and his wife Marie-Jeanne. (Perhaps because it was off key. I am not at all musical, but they were too
polite ever to say that.)
Jacques arranged to have me invited to visit the Sonderforschungsbereich run by Friedrich Hirzebruch at the University of Bonn for 1973–74. In Bonn, I had two significant mathematical interactions
with Jacques. Günter Harder was also party to these. The first concerned the orbit structure of pairs of classical groups acting on the tensor product of their standard modules. This led me to the
idea of dual pairs in the symplectic group, which has been my main research focus in the half century since. I first reported on the basic ideas at the Arbeitstagung conference that marked the end of
the academic year for the Sonderforschungsbereich.
The second was about a question of Harish-Chandra, who was trying to establish properties of character distributions, which had been so central to his work on real reductive groups, in the -adic
situation. Harish-Chandra wanted to know if certain constructions could be guaranteed to be well-behaved relative to nice maximal compact groups. I knew how to show this was so when the size of the
residual field was large enough. Jacques, with his understanding of algebraic groups as group schemes, was able to show that this meant it could always be done. He communicated the results to
Harish-Chandra, who incorporated them into his work on characters.
I saw Jacques next in the summer of 1977, at the AMS Symposium on “Automorphic Forms, Representations and L-Functions” in Corvallis, Oregon. He gave a set of plenary lectures on buildings and their
implications for reductive algebraic groups over local fields. I also gave a talk in Corvallis, sketching how the ideas conceived in Bonn had developed since 1974 and some implications for -adic
representation theory. The main facts were mostly still quite conjectural, but Jacques again was supportive, and in the following year he invited me to give a talk at the Collège de France.
Over the following decade plus, I had the pleasure of seeing Jacques and Marie-Jeanne in New Haven, when Jacques would visit Yale. He had substantial interests in common with Nathan Jacobson (Jordan
algebras) and Walter Feit (finite groups). A most enjoyable feature of these visits was the farewell dinner at the Union League Cafe, generally considered the best restaurant in New Haven.
I regret that I hardly saw Jacques after 1990. What remains strong is gratitude for the substantial help and encouragement he gave, and appreciation for the person he was.
Roger Howe is professor emeritus of mathematics at Yale University. His email address is roger.howe@yale.edu.
In 1969, I followed a series of lectures on “Groupes de Chevalley” at the University of Brussels; the main organizers were Francis Buekenhout and Franz Bingen. There I learnt about the work of
Jacques Tits on BN-pairs. A few years later, I read the book of Peter Dembowski on finite geometries, and learned about generalized polygons. These objects were defined by Tits in his famous 1959
paper on triality. In Dembowski’s book, I found the description of certain Tits’ generalized quadrangles arising from ovoids. I generalized the construction of these and gave a lecture on it at a
summer school in Italy in 1972. I sent the paper to Jacques Tits, and he answered me from Princeton that he would present my work to Geometriae Dedicata. So my first contacts with Jacques Tits
started in the period 1972–1974. That was the beginning of the many handwritten letters I received from him. It was also the starting point of my research on generalized polygons, one of the topics
on which I am still working.
In 1976, Jacques Tits sent a letter to Stanley Payne, Francis Buekenhout, and me. In his letter he included the preprint “Quadrangles de Moufang, I.” He also mentioned that due to his moving from
Bonn to the Collège de France his collection of reprints and preprints was “in a shamble” and so he was not sure that the results in his text were new. He asked us if the results were known up to the
best of our knowledge. He also said that “the interest of the paper is certainly quite limited.” In fact this paper is part of his huge achievement, the classification of all Moufang polygons. This
letter also shows his kind way of dealing with much younger researchers. (I was 32 and was of course very honored.) In February 1979, Jacques Tits was awarded the title of Doctor Honoris Causa by
Ghent University. It was a great honor for me to be his promoter and to introduce him during the ceremony. It was the first time that my wife and I met Jacques Tits and his wife Marie-Jeanne
Dieuaide. We talked about a lot of things, not only mathematics. We told him that we liked Italy very much and that we stayed twice at the Academia Belgica in Rome. It appeared that in 1953 Jacques
stayed at the Academia and that he met Marie-Jeanne there while she was doing research on Medieval History.
In the winter of 1979, Jacques Tits and his wife visited Ghent University again, and Jacques gave a talk on diagram geometries. Jacques Tits and his wife were very fond of Ghent. While being in
Ghent, Marie-Jeanne consulted the archives of the city in the frame of her research about the Flemish cities in the Middle Ages. They always stayed in their favorite hotel, the “Cour St Georges” in
an eighteenth-century building.
In 1981, they visited Ghent again, and now Tits talked about groups and Kac–Moody algebras. At the same time my colleague Stanley E. Payne, then professor at Miami University, Ohio, was visiting me.
We were busy working on our book Finite Generalized Quadrangles. In 1977, Payne published two long papers proving the uniqueness of the generalized quadrangle of order 4. Before including it in the
book, we discovered that the proof was incomplete. We mentioned this to Tits who immediately started to write on the blackboard in my office. The next day he came up with a proof that certain
configurations could not exist. This allowed us to complete the missing part in the proof. His reasoning was very original, and I am not sure that Payne and myself would have found it. Tits was just
in time visiting me in Ghent to save us!
For Tits’ 60th birthday, the Belgian Mathematical Society organized a conference in 1990 at the Palace of the Academies in Brussels. The rector of our university invited Jacques, Marie-Jeanne, my
wife and myself for an informal lunch, just the five of us in a room adjacent to the meeting room of the University Board. The rector had to preside at a meeting of the Board, which was supposed to
start right after the lunch, but the rector was so charmed by the Tits family that he let the Board know that they could start without him.
For my 50th birthday, some of my colleagues organized a two-day conference as a surprise. An even bigger surprise was that Jacques and Marie-Jeanne showed up. Jacques gave a beautiful talk on Moufang
In 1996, we organized a conference in honor of the 65th birthday of Jacques Tits. Then in 2003, I had the opportunity to see him again in Brussels at a conference in honor of my good colleague
Francis Buekenhout. Some years later, my colleagues Van Maldeghem and Mühlherr organized a meeting for the 75th birthday of Jacques.
The last time I spoke briefly to Jacques was in 2008 at the Palace of the Academies in Brussels, during a ceremony in honor of Jacques Tits and Pierre Deligne being awarded the Abel prize
respectively the Wolf Prize. His health was not good anymore and he needed a wheelchair.
Jacques Tits was a great man, not only as a mathematician but also as a human being. We all will miss him.
Jef Thas is professor emeritus of mathematics at the Ghent University. His email address is thas.joseph@gmail.com.
The first time I heard about Jacques Tits was in a lecture for second-year undergraduates at Ghent University. It was a course in projective geometry, and the professor (Julien Bilo), nearing his
retirement, was more concerned about telling stories than presenting mathematical results. One story was about a little boy wearing short pants that he met at Brussels University, who amazed his
professors with his knowledge and mathematical insight. Jacques was barely 14 when he entered university.
Some years later—I think I was still a student, or perhaps a first-year PhD—Jacques Tits visited Jef Thas and I saw Jacques in real life. Jacques was an honorary doctor at our university (on the
initiative of Jef Thas) and paid regular visits. The first talk I heard him present was in Mons in the same year, I think it was 1983. I remember him writing down the correct order of the monster on
the blackboard, excusing himself for knowing it by heart by pretending that this huge number consisted of his telephone number, then his bank account number, then his social security number, etc. The
talk was in French, as far as I remember, but it did not matter. In fact, that is one of the many things I always liked about Jacques’ talks: he spoke and pronounced very clearly, using simple words;
it did not matter in which language he was speaking—and he spoke many languages! His explanations always made the audience feel that they understood everything; that was his special gift.
One especially charming feature about Jacques was that he always made people feel important; for him all mathematicians were equal, he never looked down on lesser gods. I experienced this myself
several times. For example, after finishing my PhD, which was about the special class of affine buildings of type , I wrote a letter to Jacques explaining what I did (no email at that time; it was
1984). In the same year, Jacques classified all affine buildings of irreducible type and rank at least 4. His reply to my letter started with the sentence “It seems that we have been working along
the same lines this year.” As a second example, many years later, in 1994, I invited Jacques Tits to present a talk at the conference celebrating the 50th birthday of Jef Thas, my mentor at Ghent
University. We were publishing the proceedings and I persuaded Jacques to submit a paper. He wrote one about the Moufang condition for generalized polygons and the relation with root systems. He
wrote this by hand, and I committed myself to put the text into LaTeX. Doing this, I discovered a small oversight in one of the formulations. (He’d overlooked that the root groups of the smallest
Suzuki group are abelian.) Jacques was very pleased, and at the conference he started his talk by thanking the organizers, as usual, but added, “if you ever write a paper full of mistakes, just send
it to Van Maldeghem to type it out, and he will not only do this, but also correct all your errors.”
I remember Jacques as someone who was very generous and thankful. In the 90s, I followed several courses of his at the Collège de France. Every Tuesday in winter, I drove 630 kilometers from Ghent to
the center of Paris and back to follow his lectures. Sometimes PhD students joined me, either for the full course, or on a sporadic basis; one time my two sisters even joined me (they were math
teachers). And at the end of every course, Jacques invited me and everyone else who joined me regularly, to an extended lunch in Paris (more like a dinner at noon). He was so thankful that we came
from so far just for him—but of course the pleasure and the added value were entirely ours.
Jacques’ lectures were very pleasant to follow.
His style was informal but in a way very efficient. For instance he talked about “a path with hair on” to mean a path together with all neighbors of its inner vertices, which was immediately clear to
everyone and fine for us. But Serre did not agree with his informality and often interrupted him asking for more precise and mathematically sound definitions and expressions. Jacques explained
everything in a rather geometric way, which I liked. He only struggled when there was a choice between two alternatives, like plus or minus, inside or outside, left or right. I remember I corrected
him once (no big deal in fact), and then for the rest of the lecture he called on my help to decide every dichotomy he encountered….
Figure 14.
Francis Buekenhout, Jean-Pierre Serre, Jacques Tits, and Hendrik Van Maldeghem in the first row, at the conference in honor of Jacques’ 75th birthday in Ghent, 2005.
There were three conferences in honor of Jacques held in Belgium, one in Brussels and two in Ghent. The first was on the occasion of his 60th birthday, the second for his 65th birthday, and the third
for his 75th birthday. I was not involved in the organization of the first one (that was Jef Thas), but I was the main and local organizer of the other two. We celebrated his 65th birthday in fact
one year late, in 1996. At that conference he gave a double talk, one on Friday and one on Saturday. In the first talk, my sister (a teacher in mathematics in high school) came into the room with two
dozen school girls, on a school trip to see one of the greatest mathematical minds in action. Jacques was absolutely not disturbed by that and made his young audience feel welcome with a few jokes.
The next day, he continued his lecture, but he was so into it, that he lost track of time. When he looked at his watch after one hour and a half (he was supposed to speak for 50 minutes, but nobody
minded), he exclaimed “My God, look at the time, when did I start?” To which Francis Buekenhout dryly replied “Yesterday.”
Soon after that conference, we celebrated Francis’s 60th birthday with a special session in the one-week conference “Finite Geometry and Combinatorics.” Jacques came over to give a talk on the new
class of Moufang quadrangles that Richard Weiss had just discovered. This triggered the following example of Jacques’ humor (a humor that, in contrast with some professional jokers, complimented
people instead of insulting them or making fun of them). With this new class, Jacques confessed with a little bit of drama “my friend Richard disproved my old conjecture, and so he proved me wrong,”
and then he continued along the line “but luckily I have two other friends, Bernhard and Hendrik, because they saved my conjecture by showing that the new quadrangles fit into the broad picture of
generalized Galois descent, so all Moufang quadrangles are of algebraic origin after all.”
At all conferences that I organized and invited Jacques, I had the pleasure of accompanying him to lunch and dinner, and even of inviting him to my home. These were always very joyful experiences for
The conference celebrating his 75th birthday was the last one in which I saw him participate. When I took him to the train station, he immediately asked for a wheelchair, and he apologized to me
saying “It must be awful to see a friend be discharged in a wheelchair like that, but do not worry, I am getting used to it.” His Parkinson’s had become worse (during one of the conferences that he
organized on Algebraic Groups in Oberwolfach, he confided in me that this illness prevented him from riding a bicycle, which he would have loved to do). It was also the last time I saw Jacques in
One of the greatest honours in my scientific career was to be a co-editor of Jacques’ Collected Works. One of the highlights for me was the day that the four editors spent in Paris together with
Jacques asking him all sorts of questions. Jean-Pierre Tignol produced a transcript of these interviews. We didn’t use it for the Collected Works, but it is now an invaluable treasure to me. Jacques
talked nineteen to the dozen about all kinds of aspects of his life and career. Near the end of the production process of the Collected Works, I was the one making contact with Jacques through
Jean-Pierre Serre. I delivered two copies of the four books of his Collected Works to his apartment in Paris, on Thursday January 30, 2014. That day I had lunch with Serre at 13:30 and coffee with
Tits and his wife Marie-Jeanne at 16:00. Marie-Jeanne told an interesting story. She said that when historians meet (she was an historian) and discuss scientific matters, at the end of the day when
they separate they each have their own original ideas and beliefs. When mathematicians meet and started discussing various matters, at the end of the day they all agree, no matter what their original
belief was.
Marie-Jeanne and Jacques were together at many conferences. She accompanied Jacques as frequently as possible. When I had an appointment with Jacques after a lecture at the Collège, I noticed that he
always first called Marie-Jeanne just to say his lecture went well and ask how she was doing. She also took great care of Jacques when his illness became worse. Sadly, Jacques’ life companion passed
away too soon, on Tuesday February 2, 2016.
On Tuesday April 23, 2019, Bernhard Mühlherr and I presented Jacques with a hard copy of the Complement to the Collected Works of Jacques Tits 6. This was the last time that I saw him.
A few months before Jacques died, there was an initiative among my department to compose a booklet containing trivia about the math professors. One of the items was what is considered their greatest
scientific achievement. You could read great theorems there, proofs, prizes, and other concrete accomplishments in that rubric. On my page, it just mentioned my friendship with Jacques Tits.
Thank you, Jacques, for your beautiful mathematics, for your beautiful personality, and for your beautiful friendship. An architect died, but what he built will live on.
Figure 15.
Jef Thas, Arjeh Cohen, Dan Hughes, Francis Buekenhout, Jacques Tits, Ernie Shult, and Antonio Pasini at a conference in honor of Francis Buekenhout in Brussels, November, 2003.
My first encounter with Jacques Tits was in January, 1989. Tits was giving a course on twin buildings at the Collège de France. I was in Brussels working on buildings for my Diplom thesis and my
advisor, Francis Buekenhout, recommended that I attend Tits’ course. I was surprised that I was able to understand so much of his lectures despite my rudimentary knowledge about buildings. Only much
later, did it become clear to me that Tits possessed an extraordinary talent for describing the central ideas of his mathematics on a very concrete level. Since Buekenhout had let Tits know that I
would be attending his course, Tits offered that we could meet for an hour after one of his lectures so that I could ask him questions about buildings. I looked forward to this hour, but I was
anxious that I didn’t understand enough and the meeting would just waste his time. It turned out that my fears were completely unfounded. Tits listened carefully to the ideas I was working on, made
valuable suggestions and encouraged me to continue with my project.
Figure 16.
Jacques Tits, Bertrand Rémy, Gopal Prasad, Bernhard Mühlherr, and Jean-Pierre Serre at the meeting in honor of Jacques’ 75th birthday in Ghent, 2005.
Looking back, it is clear that those lectures influenced me more than anything else in my mathematical training. Around this time, Tits formulated several open questions about twin buildings and in
the following years I made a number of contributions to their solution. Throughout this time, we stayed in regular contact. Whenever I had some progress to report, it sufficed for him that I would
give the general idea; we never talked about the details. For me, these discussions were principally a kind of mathematical compass. At one point they were decisive in suggesting that I should pursue
a vague idea I’d described. This idea brought a breakthrough in the classification of twin buildings. Tits invited me to give a lecture about my results at the Collège de France. This invitation and
the fact that on another occasion Tits referred to me as his student, are among the greatest honors of my research career.
I remember well a number of meetings with Tits. Once, when I was in Paris with Hendrik Van Maldeghem for one of Tits’ lectures, the three of us met in Tits’ office, where there was a table piled high
with manuscripts and preprints. He told us that these were all the things that he still needed to work his way through and we’d better not steal anything! This was just one of the many small moments
when I got to enjoy Tits’ very special sense of humor. Tits was always interested in languages. We generally spoke in German to each other and he would say something if I used a construction that
wasn’t familiar to him. Because of my South German origins, this occurred fairly often and he liked to joke in these situations that my French was better than my German.
Jacques Tits’ mathematics opened exciting perspectives to mathematicians of the next generation. He enjoyed seeing others work with his ideas and he was generous with his support and encouragement. I
remember well a conversation he had with my first PhD student Pierre-Emmanuel Caprace at the conference in Ghent in honor of Tits’ 75th birthday. Caprace had given a talk about his thesis in which he
combined and modified many of Tits’ ideas to solve the isomorphism problem for Kac–Moody groups. Tits asked questions about the details, but as usual only a few general remarks were all he needed to
appreciate what Caprace had accomplished. Tits’ pleasure at seeing his mathematical ideas woven together in new ways and bearing valuable fruit was particularly clear that day.
I first “met” Jacques Tits in the late 70s in the math library of the Free University of Berlin, where I was a post-doc. Killing time looking through recent journals on display, I came across a paper
entitled “Non-existence de certains polygones généralisés, Part I” in the latest issue of Inventiones, where Tits was editor. In this paper, Tits began the proof that Moufang -gons exist only for , ,
, and . I didn’t know what a Moufang polygon was, but I had been working on generalizations of a theorem of William Tutte that says that finite trivalent graphs whose automorphism group acts
transitively on paths of length but not on paths of greater length exist only for , , and . I knew that special attention was needed to rule out the case . The coincidence in these numbers was
striking and within hours I understood how to prove a more general version of Tits’ result by combining a lemma in his paper with results that I had in my drawer. Tits reacted to news of my result
with charm and generosity. Not to leave things hanging, he wrote a much shorter version of his Part II using ideas from my paper and the two papers appeared quickly back to back.
In 1992, I was spending a couple of months of a sabbatical visiting Hendrik Van Maldeghem in Ghent. One afternoon Hendrik stuck his head in my office and said he was driving to Paris the next morning
to hear a lecture of Jacques Tits about Moufang polygons and would I like to come. In fact, the subject was the theorem that , , , or . This was before Thalys, and Paris was far away. We were on the
road at 5:00 AM, merged into the daily traffic jam on the Périférique just as the sun was rising, and then drove through the city, arriving at the lecture room just on time for the 9:00 AM lecture.
Tits came in and started to write on the board, but when he turned around and noticed me, he made a startled expression and said “Oh, this is like lecturing on the Riemann hypothesis and discovering
that Riemann is in the audience.” Tits always knew how to be witty and generous at the same time. Hendrik and I went down every Tuesday for the remaining lectures and on the last day, Tits invited
the two of us to a merry and lavish lunch in a nearby restaurant.
In 1993 Dina Ghinelli invited me to hold a series of lectures on Tits’ work on Moufang polygons in Rome. For the last part of his course, I worked through his unpublished notes on the Moufang
quadrangles that he called “indifferent.” At that time, Tits had classified Moufang triangles and octagons and announced the classification of Moufang hexagons, but this unpublished manuscript was
all that he’d done with Moufang quadrangles apart from describing examples coming from groups of type , , and in lectures at the Collège de France. Once I thought I’d really understood Tits’ proof in
the indifferent case, I grew ambitious and wanted to go farther. After much hesitation, I wrote a letter to Tits proposing that we collaborate to finish the classification and write the whole thing
up as a book. I was proposing coming in on a project in which he’d invested years of effort and was quite certain that my offer would be rebuffed. In fact, months went by with no reply. Tits was at
Yale for the semester visiting his old friend Nathan Jacobson. Still no reply. I’d mentioned my letter to Diego Benardete who was at Trinity College at that time. Later, I learned that at a tea after
a colloquium talk at Yale, Diego marched up to Tits with the words “Professor Tits, you’re keeping Weiss waiting!” This did the trick. Days later, shortly before his return to Paris in December, Tits
called me at home in Boston to say he agreed to work together. “But I’m very busy with many other projects,” he warned, “and it might take us five years!” He was wrong. In the end it took seven.
Figure 17.
Arjeh Cohen, Marie-Jean Tits, and Jacques Tits on a boat during the conference on buildings and diagram geometries by Lake Como, 1984.
These were a thrilling seven years. Our collaboration consisted mostly of written exchanges. Laptops were not yet common and Tits never used anything but a fax machine for his communications. Our
first goal was to complete the classification of Moufang quadrangles. Pushing the ideas in Tits’ indifferent paper, we arrived at the situation where the exceptional Moufang quadrangles should turn
up. In this case, we had to invent and classify structures that we later called “quadrangular algebras.” When the classification was essentially complete, I noticed a mistake in a lemma asserting the
existence of an element of order in one of the root groups when the characteristic is . Each repair to the proof fell apart. It turned out that there was, in fact, a new family of Moufang quadrangles
whose root groups were all abelian. Tits was thrilled. Within a week of hearing about them, Bernhard Mühlherr and Hendrik Van Maldeghem showed that these new quadrangles filled in a gap in Tits’
picture. They arise by descent from a group of type , but not one associated with an absolutely simple algebraic group, rather from a split pseudo-reductive group of type defined over a purely
inseparable field extension.
I think that Tits was particularly pleased with these developments because they confirmed his well-known attitude about the importance of characteristic . Characteristic was, as a rule, historically
excluded in the study of quadratic forms, Jordan algebras, and composition algebras. This offended Tits’ understanding of the geometric nature of these things. Here is what the authors of the The
Book of Involutions wrote in their introduction: “Not only was Jacques Tits a constant source of inspiration through his work, but he also had a direct personal influence, notably through his
threat—early in the inception of our project—to speak evil of our work if it did not include the characteristic case.”
Once or twice a year I was able to spend a month in Paris and often had the use of a small windowless storage room in the Collège de France Annexe as my office. Tits’ office was down the hall, but as
he’d warned me at the start, he was a very busy man and our meetings were always by appointment. Tits knew my limitations as a mathematician. I think, though, that he had respect for my persistence
and trusted me to get around the technical problems that arose on my own. But his guidance about what ought to be true was the real driving force behind the project. Tits often joked about seeing
things in his crystal ball, but what he was really referring to was his uncanny ability to see a whole world hidden in a Dynkin diagram.
In the bibliography we have included all the works of Jacques Tits as well as a few other books alluded to in this article. Volume I of 8 also includes a Curriculum Vitae and surveys of Tits’ work
written during his lifetime, including one by Tits himself.
Figures 1, 4, 6–8 are courtesy of Janine Tits.
Figure 2 is courtesy of the Order Pour le Mérite.
Figure 3 is courtesy of Richard M. Weiss.
Figure 5 is courtesy of Jean-Pierre Bourguignon.
Figure 9 is courtesy of Franz Bingen.
Figures 10 and 15 are courtesy of Jef Thas.
Figures 11 and 12 are courtesy of Alain Valette.
Figure 13 is courtesy of Jean Doyen.
Figures 14 and 16 are courtesy of Jean-Pierre Serre.
Figure 17 is courtesy of Nikolai Vavilov.
Photo of Richard M. Weiss is courtesy of Susan Ernst. | {"url":"https://www.ams.org/journals/notices/202301/noti2601/noti2601.html?adat=January%202023&trk=2601&pdfissue=202301&pdffile=rnoti-p77.pdf&cat=none&type=.html","timestamp":"2024-11-06T18:15:02Z","content_type":"text/html","content_length":"245153","record_id":"<urn:uuid:5d335a47-dc4e-4ad8-8d3c-f877ca304040>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00349.warc.gz"} |
Path Planning for Self-Driving Cars - Bosch Challenge - Atul Acharya
Late in summer 2017, Bosch, one of the leaders in automotive components, threw a challenge to Udacity learners.
The aim of the challenge was to have a (simulated) car drive itself autonomously along a three-lane, 7-km highway loop, in traffic, without colliding with other vehicles on the road. The autonomous
vehicle (called the ego car) should observe all the rules of the road:
• Drive at max speed 50 miles/hour
• Pass slow moving vehicles on the road
• Automatically accelerate/slow down, brake, as needed to avoid collisions
• Observe lane change rules - e.g. change lanes smoothly
• Should not cross into incoming traffic
• Max acceleration: 10 m/s^2 - to maintain passenger comfort
• Max jerk: 10 m/s^3 (rate of change of acceleration) - to maintain passenger comfort
• Complete the track around the loop, and drive at least ~5 miles without any violations(incidents)
In order to achieve a safe and smooth autonomous driving experience, the ego car must use sensor fusion data, localization, map information to accurately predict its future trajectory, and execute it
My self-driving car successfully completed the Bosch Path Planning challenge, and was one of the top 25 entries, from among hundreds of entries. 👍🚗🍾 Niiice
Check out the video below (click on the image). Best way is to watch it at 2x speed.
Here’s the Youtube video.
What is Path Planning
Path planning, also called motion planning is one of the most challenging problems in autonomous driving. It typically consists of three functions: Prediction, Behavior Planning, and Trajectory
• Prediction The predictor asks the question: Who is moving around me, and where? This function concerns with predicting the behavior of other moving objects around our (ego) autonomous vehicle,
such as other cars, bicycles, pedestrians, etc. How is this achieved? In a nut shell, there are two main approaches: process model approach and data-based approach.
The process model approach typically uses probabilistic methods to guess where and when each moving object will move next. Recall that sensor fusion data periodically gives our ego vehicle
information about where each moving object is. Thus, the ego car predicts (with a certain probability) where other vehicles will be next, and updates its (incorrect) guesses based on (updated)
actual sensor data.
The data based approach uses machine learning techniques to predict how likely some vehicle will turn right/left/go straight at an intersection, for example. This approach obviously uses a lot of
training data.
Regardless of the approach used, the planner makes an (accurate enough) estimate of next positions of other, relevant vehicles.
• Behavior planning The behavior planner answers the question: What should I do next? Given that other vehicles, pedestrians, etc. might be moving, how should our ego vehicle move? The behavior
planner determines which maneuvers to execute, taking into account its own destination, and the predicted behavior of other dynamic objects around it. For example, the behavior planner could plan
to slow down (if a car infront is slowing down), brake (if a pedestrian suddenly comes in its way), change lanes (if it becomes available, or to avoid a slow vehicle), and so on.
• Trajectory generation The trajectory generator answers the question: What precise path should I follow next? It takes the ego vehicle’s planned behavior and plots a precise trajectory to take.
This requires generating dozens (or more) trajectories per second, evaluating costs for each, and choosing a trajectory that satisfies the “best” conditions.
The trajectory generator takes into account important safety and comfort variables, such as speed limit (different for highway and urban driving), maximum forward and sideways acceleration and
jerk (for comfort), its own body weight, etc. The best trajectory is one that has the lowest cost (where a cost is a penalty for violating safe and comfortable driving conditions).
We use Frenet coordinates (s, d) instead of regular X,Y coordinates as this simplifies the path planning. In Frenet coordinates, s denotes longitudinal displacement along the road, while d denotes
lateral displacement from the center (yellow) dividing line. The track is approx. 6947 meters long, thus s varies from 0-6947 (length of the track). The d value shows distance away from center
divider: each lane is 4 meters wide, and there are three lanes: Left, Center, Right. Thus, the d value for Left lane will be 0 < d < 4, for Center lane (4 < d < 8), and for Right lane (8 < d < 12).
Thus, the (s,d) coordinates fully specify a car’s position on the track.
We use Frenet coordinates with transforms and spline interpolation to generate paths. As the picture above shows, the highway map is a loop, and in Frenet coordinates the outline is jagged and has
sharply cut line segments. This results in sharp acceleration and jerk around the corners. To help smooth this out, spline interpolation was used.
The key elements of the project are broken down into three parts:
• Sensor Fusion: understanding (nearby) traffic
• Path Planning: determining lane change behavior
• Trajectory construction
Sensor Fusion - Analyzing Traffic Information
In a typical self-driving car, various sensors (LIDARs, radars, cameras, etc.) continously provide enviroment information to answer key questions: What objects are nearby? How far are they? Are they
static or dynamic? How fast and in what direction are they traveling? Etc.
The simulator in this particular case provides a smaller subset of information: Cars traveling in this side of the ego car. It does not provide any info on cars in the opposite side of the road (i.e.
on-coming traffic). The highway track is a 3-lane highway on each side. The ego car starts in the Center lane, and then drives itself based on near-by traffic.
In my implementation, the ego car first analyzes sensor fusion data to determine its reference velocity:
– Is there a car in our lane, infront of us, within 30 meters? If so, follow its speed – Is there a car in our lane, within 20 meters? If so, slow down to something less than leading car’s speed – If
no car ahead of us, maintain a reference speed of 49 miles/hour, so that we don’t violate speed limit
This is implemented in lines 275-332 of main.cpp.
// 2. ---- SENSOR FUSION PROCESSING ----
// Determine Reference Velocity based on nearby traffic
double CLOSEST_DISTANCE_S = 100000;
bool change_lane = false;
bool too_close = false;
for (int i = 0; i < sensor_fusion.size(); i++)
double other_d = sensor_fusion[i][6]; // other car's d
double other_lane = getCarLane(other_d); // other car's lane#
double other_vx = sensor_fusion[i][3];
double other_vy = sensor_fusion[i][4];
double other_car_s = sensor_fusion[i][5];
// if Other Car in Ego car's lane..
if (other_lane == LANE)
double other_speed = sqrt(other_vx * other_vx + other_vy*other_vy);
// project where Other Car will be in the next few steps
other_car_s += (path_size * TIME_INTERVAL * other_speed);
// find front gap
double front_gap = other_car_s - car_s;
// check if Other Car is ahead and by how much
if (front_gap > 0 && front_gap < FRONT_BUFFER && front_gap < CLOSEST_DISTANCE_S)
CLOSEST_DISTANCE_S = front_gap;
if (front_gap > FRONT_TOO_CLOSE) {
// follow the front car
REF_V = other_speed * MS_TO_MPH;
// Yes - try a lane change
change_lane = true;
cout << "Front gap: " << front_gap
<< "\tReference velocity (mph): "
<< setprecision(4)
<< REF_V
<< ", current speed: "
<< car_speed
<< endl;
} else { // FRONT TOO CLOSE!
// go slower than front car
REF_V = other_speed * MS_TO_MPH - 5.0;
too_close = true;
// Definitely do a lane change!
change_lane = true;
cout << "FRONT TOO CLOSE! " << front_gap
<< "\tReference velocity (mph): "
<< setprecision(4)
<< REF_V
<< ", current speed: "
<< car_speed
<< endl;
cout << " Maybe Change Lane? " << yes_no(change_lane) << endl;
} // if in my lane
} // sensor-fusion
Path Planning: Lane Change Behavior
For lane change behavior, we use some simple heuristics. The car prefers to stay in its lane, unless there’s traffic ahead, in which case, it will try to find a lane it can safely move into.
1. Stay in the lane, and drive at reference velocity for as long as possible.
2. If there’s traffic ahead (as determined above), flag for a lane change.
3. First, check the traffic in the lane left of current lane (if it exists, i.e. if the ego car is not already in Left lane). Find the closest front and closest back gaps in this lane. Only if
there’s enough space in the front (20 meter buffer) and back (13 meter buffer), set target lane to this lane.
4. If the above (left-er) lane isn’t available to change into, check traffic in the lane right of the current lane (if such a lane exists, i.e. car is not already in the extreme Right lane). Perform
the same evaluation, and set target lane to this lane.
// 3. --- LANE CHANGE LOGIC: Determine Target Lane (if needed) ---
int delta_wp = next_waypoint - lane_change_waypoint;
int remain_wp = delta_wp % waypoints.map_x_.size();
// cout << " delta wp : " << delta_wp << endl;
// cout << " map wp size: " << map_waypoints_x.size() << endl;
// cout << " remain wp: " << remain_wp << endl;
if (change_lane && remain_wp > 2)
cout << "..Checking Lane Change from: "
<< getLaneInfo(LANE)
<< ", at s: " << car_s << endl;
bool did_change_lane = false;
// First - check LEFT lane
if (LANE != LANE_LEFT && !did_change_lane) {
bool lane_safe = true;
// Check if OK to go LEFT?
lane_safe = is_lane_safe(path_size,
LANE - 1, // To the Left of Current
sensor_fusion );
if (lane_safe) { // OK to go LEFT
did_change_lane = true;
LANE -= 1; // go Left by one lane
lane_change_waypoint = next_waypoint;
// NEXT - Try Right Lane?
if (LANE != LANE_RIGHT && !did_change_lane) {
bool lane_safe = true;
// Check if OK to go RIGHT
lane_safe = is_lane_safe(path_size,
LANE + 1, // to the Right of Current
if (lane_safe) { // OK to go RIGHT
did_change_lane = true;
LANE += 1; // go Right by one
lane_change_waypoint = next_waypoint;
cout << " Current Lane: "
<< getLaneInfo(LANE)
<< ", changed_lane: "
<< yes_no(did_change_lane)
<< ", s: " << car_s
<< endl;
} // if change lane
// --- END LANE CHANGE ---
is_lane_safe() implemented here:
// Path Planner -- Return TRUE if safe to change into given lane
bool is_lane_safe(const int num_points, // num of points to project speed for
const double ego_car_s, // Ego Car's s
const double ref_vel, // Ego Car's reference velocity
const double check_lane, // Lane to look for
const vector<vector<double> >& sensor_fusion_data)
bool ok_to_change = false; // should we move into the check_lane?
double SHORTEST_FRONT = 100000; // Really big
double SHORTEST_BACK = -100000;
cout << " Front buffer (m): " << LANE_CHANGE_BUFFER_FRONT
<< ", Back buffer: " << LANE_CHANGE_BUFFER_BACK << endl;
// Calculate the closest Front and Back gaps
for (int i = 0; i < sensor_fusion_data.size(); i++)
float d = sensor_fusion_data[i][6]; // d for a Traffic Car
double other_car_lane = getCarLane(d); // lane of the Traffic Car
// if a Traffic Car is in the lane to check
if (other_car_lane == check_lane) {
// get it's speed
double vx = sensor_fusion_data[i][3];
double vy = sensor_fusion_data[i][4];
double check_speed = sqrt(vx*vx + vy*vy);
// get it's s displacement
double check_car_s = sensor_fusion_data[i][5];
// see how far Other Car will go in TIME_INTERVAL seconds
// i.e. project its future s
check_car_s += ((double)num_points * TIME_INTERVAL * check_speed);
// see the gap from our Ego Car
double dist_s = check_car_s - ego_car_s; // WAS: ego_car_s
// remove -ve sign
double dist_pos = sqrt(dist_s * dist_s);
// store the shortest gap
// SHORTEST_S = min(dist_pos, SHORTEST_S);
if (dist_s > 0) { // FRONT gap
SHORTEST_FRONT = min(dist_s, SHORTEST_FRONT);
} else if (dist_s <= 0) { // BACK gap
SHORTEST_BACK = max(dist_s, SHORTEST_BACK);
cout << " gap (m): "
<< setprecision(5)
<< dist_s
<< ", closest front: "
<< setprecision(5)
<< SHORTEST_FRONT
<< ", closest back: "
<< setprecision(5)
<< SHORTEST_BACK
<< endl;
} // for-each-Traffic-car
cout << " gap (m): "
<< " >>> Closest Front: "
<< setprecision(5)
<< SHORTEST_FRONT
<< ", closest Back: "
<< setprecision(5)
<< SHORTEST_BACK
<< " <<< "
<< endl;
// Only if enough space in that lane, move to that lane
if ( (SHORTEST_FRONT > LANE_CHANGE_BUFFER_FRONT) &&
(-1 * SHORTEST_BACK > LANE_CHANGE_BUFFER_BACK))
ok_to_change = true;
cout << " CHECK Lane : " << getLaneInfo(check_lane) << ", OK_to_Change? " << yes_no(ok_to_change) << endl;
return ok_to_change;
} // end is-lane-safe
Trajectory Construction
As mentioned, we use Frenet coordinates (s, d) for path construction based on reference velocity and target d value for the target lane. Instead of a large number of waypoints, we use three waypoints
widely spaced (at 30 meters interval) and interpolate a smooth path between these using spline interpolation. These are the anchor points. (This technique is discussed in the project walkthrough). To
ensure that acceleration stays under 10 m/s^2, a constant acceleration is added to or subtracted from the reference velocity. The three anchor points are converted to the local coordinate space (via
shift and rotation), and interpolated points are evenly spaced out such that each point is traversed in 0.02 seconds (the time interval). The points are then converted back to frenet coordinates, and
fed to the simulator.
// 4.c In Frenet coordinates, add 30-meters evenly spaced points ahead of starting reference
double TARGET_D = 2 + LANE * 4; // d coord for target lane
vector<double> next_wp0 = getXY((car_s + SPACING), TARGET_D, waypoints.map_s_, waypoints.map_x_, waypoints.map_y_);
vector<double> next_wp1 = getXY((car_s + SPACING*2), TARGET_D, waypoints.map_s_, waypoints.map_x_, waypoints.map_y_);
vector<double> next_wp2 = getXY((car_s + SPACING*3), TARGET_D, waypoints.map_s_, waypoints.map_x_, waypoints.map_y_);
// Add these next waypoints to Anchor Points
// 4.d. Transform to Local coordinates
for (int i = 0; i < anchor_pts_x.size(); i++)
// SHIFT car reference angle to 0 degree
double shift_x = anchor_pts_x[i] - ref_x;
double shift_y = anchor_pts_y[i] - ref_y;
// ROTATION
anchor_pts_x[i] = (shift_x * cos(0 - ref_yaw) - shift_y * sin(0 - ref_yaw));
anchor_pts_y[i] = (shift_x * sin(0 - ref_yaw) + shift_y * cos(0 - ref_yaw));
// 4.e. Create a Spline
tk::spline s_spline;
// set Anchor points on the Spline
s_spline.set_points(anchor_pts_x, anchor_pts_y);
// 4.f. ADD points from Previous Path - for continuity
for(int i = 0; i < path_size; i++)
// 4.g. Target X and Y - Calculate how to break up spline points to travel at REF_VELOCITY
double target_x = SPACING; // HORIZON: going out to SPACING meters ahead
double target_y = s_spline(target_x);
double target_distance = sqrt((target_x * target_x) + (target_y * target_y));
double x_add_on = 0;
// 5. Fill up the rest of the path
for(int i = 1; i < 50-path_size; i++)
// if too slow, speed up by a small amount
if (car_speed < REF_V) {
car_speed += (MS_TO_MPH / 10); // 0.224;
} // else speed down by a small amount
else if (car_speed > REF_V) {
car_speed -= (MS_TO_MPH / 10); // 0.224;
// Calculate spacing of number of points based on desired Car Speed
double N = (target_distance / (TIME_INTERVAL * car_speed/MS_TO_MPH)); // num of points
double x_point = x_add_on + (target_x) / N;
double y_point = s_spline(x_point); // y on the spline
x_add_on = x_point;
double x_ref = x_point;
double y_ref = y_point;
// Transform coordinates back to normal
x_point = (x_ref * cos(ref_yaw) - y_ref * sin(ref_yaw));
y_point = (x_ref * sin(ref_yaw) + y_ref * cos(ref_yaw));
// Add to our Reference x, y
x_point += ref_x;
y_point += ref_y;
// FINALLY -- add to our Next Path vectors
The ego car can safely drive around the entire track at just under 50 miles/hour, without any violations. I ran the simulator for various lengths (5 miles to 12 miles) successfully.
Here is a resulting video showing a successful 5-mile drive.
The ego car drives itself fully autonomously along the entire highway track. However, there are limitations because of the simplified logic. Since the ego car prefers to stay in its lane and test for
a safe left lane first, it can get stuck behind a slow moving car if there’s traffic in the left lane, even if the right lane is empty, although it does find its way correctly. Also, at times, the
ego car can switch back and forth between Left lane to Center lane, due to traffic ahead, although this too is eventually handled.
Secondly, currently only implicit costs are awarded for lane change behavior. That is, the costs are binary 0/1 if a lane is safe to move into or not. A better alternative would be using true cost
functions that give varying costs based on:
• trajectories available
• traffic in neighboring lanes
• acceleration / jerk values
• collision avoidance
Thirdly, a Jerk Minimization technique could be used that smoothes out, using a quintic polynomial, the possible trajectories available to the ego car.
Fourth, one could project the future behavior of traffic and try to predict their trajectory and thus make the car more proactive.
Code for this project is on my repo
Many thanks to David Silver and Aaron Brown at Udacity for clearing up many of the concepts. 🙏🏻 | {"url":"https://atul.fyi/post/2017/09/01/path-planning-for-self-driving-cars/","timestamp":"2024-11-10T17:32:40Z","content_type":"text/html","content_length":"27617","record_id":"<urn:uuid:073beba4-9d7d-459a-8056-564fa97c3eb0>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00794.warc.gz"} |
Advanced Mathamatical Statistics
Github: https://github.com/Tiyasa-Saha/MTH522-Project-2/blob/main/Project%202.ipynb
K-Means clustering for location
K-Means clustering is a powerful tool for location analysis, enabling the grouping of geospatial data points into clusters based on their proximity or similarity. This technique is valuable for
segmenting locations, understanding market trends, optimizing resource allocation, and making informed decisions in various fields, from retail and urban planning to healthcare and environmental
First let us do a scatter plot of the latitude and longitude data to get an idea of the data
Now let us do the K-means clustering with a cluster size of 100
Cluster 1 Center: Latitude -87.77466608391609, Longitude 40.67427972027972
Cluster 2 Center: Latitude -121.12838608695652, Longitude 41.12909739130435
Cluster 3 Center: Latitude -84.15862464985995, Longitude 37.49454341736695
Cluster 4 Center: Latitude -105.33088211382113, Longitude 34.17090650406504
Cluster 5 Center: Latitude -117.77351302083333, Longitude 34.66760546875
Cluster 6 Center: Latitude -82.29514070351759, Longitude 32.234090452261306
Cluster 7 Center: Latitude -95.39037857142857, Longitude 29.66034642857143
Cluster 8 Center: Latitude -74.44290510948905, Longitude 40.786705596107055
Cluster 9 Center: Latitude -76.20114640883978, Longitude 38.916165745856354
Cluster 10 Center: Latitude -106.63796355353075, Longitude 40.951282460136675
Cluster 11 Center: Latitude -149.4990465116279, Longitude 62.47753488372093
Cluster 12 Center: Latitude -81.8052558922559, Longitude 28.721195286195286
Cluster 13 Center: Latitude -156.97926470588237, Longitude 20.828588235294117
Cluster 14 Center: Latitude -94.13923595505618, Longitude 42.08931086142322
Cluster 15 Center: Latitude -122.21451034482759, Longitude 47.371468965517245
Cluster 16 Center: Latitude -97.51444057971014, Longitude 31.395452173913043
Cluster 17 Center: Latitude -88.35704276315789, Longitude 31.789996710526317
Cluster 18 Center: Latitude -81.17313015873016, Longitude 40.23007619047619
Cluster 19 Center: Latitude -94.6525529953917, Longitude 35.52384331797235
Cluster 20 Center: Latitude -112.77275471698113, Longitude 33.27761455525606
Police Shooting Data Overview
We have been given the report on police shooting data by the Washington Post. Today, we will plot the trends in the significant columns to gain some basic insights into the data
Columns: [‘id’, ‘name’, ‘date’, ‘manner_of_death’, ‘armed’, ‘age’, ‘gender’,
‘race’, ‘city’, ‘state’, ‘signs_of_mental_illness’, ‘threat_level’,
‘flee’, ‘body_camera’, ‘longitude’, ‘latitude’, ‘is_geocoding_exact’]
Attaching the punchline report for project 1 –Predicting Diabetes Prevalence from Obesity and Inactivity: An Analysis of Health Disparities
Understanding T-test II
In the last post, we saw what a T distribution and Normal distribution is. Now, let us lok at some key features
In a normal distribution, data tends to cluster around the mean (68% of data lies within i standard deviation around the mean and 99% of data lies within 3 standard deviations). As one moves farther
away from the mean, the frequency of data points decreases exponentially. This phenomenon implies that the probability of an event occurring is closely tied to its proximity to the mean value. This
correlation is of paramount importance because it underscores the mean’s effectiveness as a precise descriptor of the distribution.
Understanding the mean value provides valuable insights into the population and its behavior. This is precisely why normality is crucial for conducting a T-Test. When dealing with a population that
does not exhibit a normal distribution, there is no assurance that the population mean carries inherent significance on its own. Consequently, knowledge of the mean may provide little to no
meaningful information about the dataset. In such cases, conducting a t-test becomes a futile exercise because determining whether the difference in means is statistically significant offers no
meaningful insights when the means themselves lack significance.
Central Limit Theorem
The central limit theorem states that as we sample data from any population, regardless of the population distribution, the samples’ means tends towards a normal distribution as the sample size
increases, i.e, Given a sufficiently large sample size from ay distribution, the sample means will be normally distributed
The Central Limit Theorem plays a pivotal role in the widespread application of T-tests. As previously discussed, T-tests are most effective when applied to populations that exhibit a normal
distribution. However, according the Central Limit Theorem, for any given population, if we collect a sufficiently large number of random samples from it, the cumulative distribution of sample means
tends to follow a normal distribution. This phenomenon allows us to apply T-tests to the derived sample population, even when the original population may not be normally distributed.
Understatnding T-test
We had already explored T-Test and its role in understanding the statistical significance of a distributions mean. For a t-test to have a menaingful result, the distrivutions must satisfy the
following conditions:
□ The data sets must be normally distributed, i.e, the shape must resemble a bell curve to an extent
□ are independent and continuous, i.e., the measurement scale for data should follow a continuous pattern.
□ Variance of data in both sample groups is similar, i.e., samples have almost equal standard deviation
Today, we will deep dive into these to understands why these conditions are necessary. But first, le us understand what a T-Distribution is
T Distribution
• The t-distribution, also known as the Student’s t-distribution, is a probability distribution that is similar in shape to the standard normal distribution (bell-shaped curve).
• The key feature of the t-distribution is that it has heavier tails compared to the normal distribution. The shape of the t-distribution depends on a parameter called degrees of freedom (df).
• As the sample size increases, the t-distribution approaches the standard normal distribution.
• In hypothesis testing with the t-test, the t-distribution is used as a reference distribution to determine the critical values for a specified level of significance (alpha) and degrees of
Normal distribution (z-distribution) is essentially a special case of t distribution. But whats important for us are certain properties that are common to both but is more prominent in the normal
Breusch-Pagan Test
In the world of data science and regression analysis, the Breusch-Pagan test is like a detective tool that helps us investigate an important issue called “heteroscedasticity.” Let me break it down
for you.
Heteroscedasticity is a fancy term for a situation where things are not as tidy as we’d like in a regression analysis. Specifically, it’s when the spread of your residuals)changes as you move along
the independent variables. High heteroscedasticity would mean that your prediction errors (the differences between your predictions and the actual values) vary differently across different inputs.
Some predictions might be pretty close, while others are way off.
the Breusch-Pagan test is used for detecting this variability issue. Here’s how it works:
1. Build your regression model: You start by creating a regression model that tries to predict something, like housing prices.
2. Calculate residuals: Residuals are the differences between your predictions and the actual prices for each house.
3. Squared Residuals: You square those residuals. This step emphasizes larger errors more than smaller ones.
4. Second Regression: Next, you build a new mini-regression model. This time, you use the squared residuals as your “dependent variable” (the thing you’re trying to predict), and the same predictors
you used in your original model.
5. Hypothesis Testing: You perform a hypothesis test to see if your predictors are related to the squared residuals. If they are, it’s a sign that heteroscedasticity might be present.
If the Breusch-Pagan test suggests heteroscedasticity is happening, it means our original regression model isn’t performing as well as we thought.
So, as data scientists, We would want to investigate further, maybe try different modeling techniques or transform your data to make the errors more consistent across the board. The goal is to have a
model that’s as accurate as possible for all cases, not just some.
In a nutshell, the Breusch-Pagan test helps us spot when the “scatter” of our errors isn’t the same for all data points, and that’s a signal for us to dig deeper and refine our models.
Plotting polynomial regression models
Today, we will attempt to plot polynomial regression models of differrent degrees and compare the regression lines and the R-squared values We will be using the Inactivity vs Diabetets data to
perform this analysis as it has the maximum number of data points available.
The Polynomial regression is performed using the sklearn package that provides the inbuilt function PolynomialFeatures() that allows us to model the conersion matrix with the polynomial degree as
Then we create the regression model using the LinearRegression() function and fit the model to our data. Once the model is created, it is a fairly straight forward process of using the model to
predict the values and use this prediction to calculate the r-quared values for each degree model
As expected, the r-squared values shows slight progressive improvement with each successive degree, but at the same the its is clear from the plot that the model progresively tends to be overfitted
making it less effective in predictions
T Test
Today, Im exploring T-test and its significance. T test is a type of hypothesis testing and its a very important tool in data science. A hypothesis is any testable assupmtion about the data set and
hypothesis testing allows us to validate these assumptions
T-test is predominantly used to understand whether the differrence in means of two datasets have any statistical significance. For T test to provide any meaningful insights, the datasets has to
satisfy the following conditions
☆ The data sets must be normally distributed, i.e, the shape must resemble a bell curve to an extent
☆ are independent and continuous, i.e., the measurement scale for data should follow a continuous pattern.
☆ Variance of data in both sample groups is similar, i.e., samples have almost equal standard deviation
□ H0: There is a significant differrence between the means of the data sets
□ H1: There is no significant differrence between the means of the data sets
T Test code
Reject the null hypothesis. There is a significant difference between the datasets.
T-statistic: -8.586734600367794
P-value: 1.960253729590773e-17
This T-test does not provide any meaningful insights as two of the requisite conditions are violates
1. The datasets are not normally distributes
2. the variances of the datasets are not quite similar
Today, I explored validation techniques for smaller data sets, namely K-Fold cross validation.
To start, the linear regression model was re-retrained fusing 70% of the data as a training set and 30% as the test set. Here are the results obtained
As we can see, the model shows similar performance with r-squared = 0.38 approx
Now the same model was tested again using K-fold cross-validtion with 5 folds. Here are the results for the linear and polynomial regression models
The linear and polynomial models both show similar mean r-squared values of 0.30, which is lower than the score obtained without using cross-validation.
The polynomial regression score will tend to increase with higher degrees of polynomial if we validate using the test data as it leads to overfitting
Today, we were introduced to the idea of linear regression using multiple varibale. This technique is essential when we have more than one predictors, especially if they are highly correlated like
the Obesity and Inactivty data.
The linear regression model for multi variable was built using sklearn package from scikit-lear module whcih provides many inbuilt function for linear regression
Expression: Y = B0 + B1.X_obesity + B2.X_inactivity + e
Result for 2 variable
B0: 1.6535991518559392
[('B1', 0.23246991917672563), ('B2', 0.11106296576800405)
R^2 =
As expected, there is an improvement in the R-squared value as compared to single variable model.
Now, we introduce one more predictor variable as the product of inactivitya and obesity
Y = B0 + B1*X_inactivity + B2*X_obesity + B3*X_inactivity * X_obestiy + e
B0: -10.06467453049291
Coefficients: [('INACTIVE', 1.1533595376219894), ('OBESE', 0.743039972401428), ('Obese x inactive', -0.049635909945020235)]
R-squared = 0.36458725864661756
As expected, the performace has increased again, albeit by a very small margin
Now, lets try adding two more predictors – X1^2 and X2^2
Y = B0 + B1*X_inactivity + B2*X_obesity + B3*X_inactivity * X_obestiy + B4*X1^2 + B5*X2^2 + e
B0: -11.590588857309138
Coefficients: [('INACTIVE', 0.47789170236400547), ('OBESE', 1.489656837325879), ('Obese x inactive', 0.01970764143007776), ('Inactivity sqaure', -0.01973523748870601), ('Obesity square', -0.04942722743255474)]
Score = 0.38471232946078504
The score has improved yet again.
It seems this process adding higher powers of the predictor model is an effective way of improving the accuracy of the model, although it can no longer be considered as a linear model and this is now
a quadratic model. But using this process infinitely to get a nearly perfect score can lead to overfitting redering the model inefective to predicting new data
To properly validate, we need to test the model accuracy based on a new traiing data but as we have limited data available, other validation techniques must be explored .
In the first class I was introduced to the conecpt of linear regression and how to model a simple predictor function using this technique. My first thought was to code a linear regression model for
the CDC-diabetes adat set for each of the Predictive factors, i.e, Diabetes vs Obesity and Diabetes vs Inactivity seperately.
This was more challenging than i had expected because of my limited experience in data analysis techniques with python. I spent cosiderable amount of time trying to merge the data and get it in the
form that was most suitable to apply the linear regression model.
Once Ithe data was successfully transformed, it was a straightforward task to get the summarry statistcs of each of the predictors seperately.
I was interesting to observe that the relation between Diabetes and Obesity is more heteroskedastic in nature, i.e, the as the obesity % increases, the variance of the data also increases which is
rther counter intuitive as you would expect the county with highere obesity% to have more diabetic people, wheares the relation between Diabetes and inactivity is more homoskedastic which stands to
Furthermore, there is a significant positive correlation between the predictors – 75% which is also expected as inactivity tends to cause obesity
I built two linear regression models based on each of the predictors independantly
1. Diabetes- inactivity R^2 = 0.3216066463149296
2. Diabetes – inactive R^2 = 0.148475949010913
As expected, the linear regression model built with inacticity is almost twice as good as the one build with obseity due to the more skewed nature of the obesity data
Welcome to UMassD WordPress. This is your first post. Edit or delete it, then start blogging! | {"url":"https://skoncherrymth522.sites.umassd.edu/","timestamp":"2024-11-15T03:01:57Z","content_type":"text/html","content_length":"124973","record_id":"<urn:uuid:441b73c8-7d05-4ca8-9479-44433b32a4f1>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00092.warc.gz"} |
M is INTEGER
[in] M The number of rows of the matrix A. M >= 0.
N is INTEGER
[in] N The number of columns of the matrices A and B. N >= 0.
P is INTEGER
[in] P The number of rows of the matrix B. 0 <= P <= N <= M+P.
A is COMPLEX array, dimension (LDA,N)
On entry, the M-by-N matrix A.
[in,out] A On exit, the elements on and above the diagonal of the array
contain the min(M,N)-by-N upper trapezoidal matrix T.
LDA is INTEGER
[in] LDA The leading dimension of the array A. LDA >= max(1,M).
B is COMPLEX array, dimension (LDB,N)
On entry, the P-by-N matrix B.
[in,out] B On exit, the upper triangle of the subarray B(1:P,N-P+1:N)
contains the P-by-P upper triangular matrix R.
LDB is INTEGER
[in] LDB The leading dimension of the array B. LDB >= max(1,P).
C is COMPLEX array, dimension (M)
On entry, C contains the right hand side vector for the
least squares part of the LSE problem.
[in,out] C On exit, the residual sum of squares for the solution
is given by the sum of squares of elements N-P+1 to M of
vector C.
D is COMPLEX array, dimension (P)
On entry, D contains the right hand side vector for the
[in,out] D constrained equation.
On exit, D is destroyed.
X is COMPLEX array, dimension (N)
[out] X On exit, X is the solution of the LSE problem.
WORK is COMPLEX array, dimension (MAX(1,LWORK))
[out] WORK On exit, if INFO = 0, WORK(1) returns the optimal LWORK.
LWORK is INTEGER
The dimension of the array WORK. LWORK >= max(1,M+N+P).
For optimum performance LWORK >= P+min(M,N)+max(M,N)*NB,
where NB is an upper bound for the optimal blocksizes for
CGEQRF, CGERQF, CUNMQR and CUNMRQ.
[in] LWORK
If LWORK = -1, then a workspace query is assumed; the routine
only calculates the optimal size of the WORK array, returns
this value as the first entry of the WORK array, and no error
message related to LWORK is issued by XERBLA.
INFO is INTEGER
= 0: successful exit.
< 0: if INFO = -i, the i-th argument had an illegal value.
= 1: the upper triangular factor R associated with B in the
generalized RQ factorization of the pair (B, A) is
singular, so that rank(B) < P; the least squares
[out] INFO solution could not be computed.
= 2: the (N-P) by (N-P) part of the upper trapezoidal factor
T associated with A in the generalized RQ factorization
of the pair (B, A) is singular, so that
rank( (A) ) < N; the least squares solution could not
( (B) )
be computed. | {"url":"https://netlib.org/lapack/explore-html-3.4.2/d4/de7/cgglse_8f.html","timestamp":"2024-11-11T13:19:20Z","content_type":"application/xhtml+xml","content_length":"16129","record_id":"<urn:uuid:d8845b7f-65ba-42dc-a990-f4aa19d13cb1>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00233.warc.gz"} |
An Introduction To Artificial Intelligence Assignment 8
NPTEL An Introduction to Artificial Intelligence Assignment 8 Answers
NPTEL An Introduction to Artificial Intelligence Assignment 8 Answers 2022:- All the Answers provided here to help the students as a reference, You must submit your assignment at your own knowledge
What is An Introduction to Artificial Intelligence?
An Introduction to Artificial Intelligence by IIT Delhi course introduces the variety of concepts in the field of artificial intelligence. It discusses the philosophy of AI, and how to model a new
problem as an AI problem. It describes a variety of models such as search, logic, Bayes nets, and MDPs, which can be used to model a new problem. It also teaches many first algorithms to solve each
formulation. The course prepares a student to take a variety of focused, advanced courses in various subfields of AI.
Average assignment score = 25% of the average of best 8 assignments out of the total 12 assignments given in the course.
Exam score = 75% of the proctored certification exam score out of 100
Final score = Average assignment score + Exam score
YOU WILL BE ELIGIBLE FOR A CERTIFICATE ONLY IF THE AVERAGE ASSIGNMENT SCORE >=10/25 AND EXAM SCORE >= 30/75. If one of the 2 criteria is not met, you will not get the certificate even if the Final
score >= 40/100.
An Introduction to Artificial Intelligence Answers
Assignment 1 Click Here
Assignment 2 Click Here
Assignment 3 Click Here
Assignment 4 Click Here
Assignment 5 Click Here
Assignment 6 Click Here
Assignment 7 Click Here
Assignment 8 Click Here
Assignment 9 Click Here
Assignment 10 NA
Assignment 11 NA
Assignment 12 NA
NPTEL An Introduction to Artificial Intelligence Assignment 8 Answers 2022:-
Q1. Mona is really excited for her birthday and she makes a large number of her signature muffins. The muffins can either be choco-chip or blueberry and both are equal in number. Bennet receives a
pack of 20 muffins which have been randomly sampled from Mona’s kitchen. Given that at least 18 of the muffins Bennet has received are blueberry, let the probability that all the muffins Bennet
received are blueberry is p/q where p and q are co-prime.
What is p+q?
Answer:- 212
Answers will be Uploaded Shortly and it will be Notified on Telegram, So JOIN NOW
Q2. Suppose A and B are conditionally independent given C. Then which of the following are true:
• P(A,B) = P(A).P(B)
• P(A,B,C) = P(A|B,C).P(B|C).P(C)
• P(A,B,C) = P(A|C).P(B|C).P(C)
• P(A,B|C) = P(A|C).P(B|C)
Answer: b, c, d
Q3. Which of the following pairs of variables are conditionally independent given information about C?
• A and D
• D and E
• D and F
• A and F
Answer: a, c
Q4. Which of the following variables belong to the Markov blanket of C?
Answer: a, b, d
Q5. What is the minimum number of parameters required to represent the Bayesian Network assuming variables A-F are boolean?
Answer: 12
Q6. Consider the following Bayesian network with n boolean variables.
Answer:- 55
Answers will be Uploaded Shortly and it will be Notified on Telegram, So JOIN NOW
Q7. Assume that we wish to compute P(A). What is the evidence and what are the hidden variables?
Answer: b
Q8. Assume that an earthquake occurred and the alarm sounded. Then, let p be the probability that a burglary also occurred. What is the value of 100*p
Answer: 0.33
Q9. Consider the following Bayesian Network which is used for disease detection:
What is the probability of not having disease D given a negative result on test A, and a negative result on test B?
Answer: 0.983
Q10. Which of the following statement(s) is(are) true?
Answer: b, c
Disclaimer:- We do not claim 100% surety of solutions, these solutions are based on our sole expertise, and by using posting these answers we are simply looking to help students as a reference, so we
urge do your assignment on your own.
For More NPTEL Answers:- CLICK HERE
Join Our Telegram:- CLICK HERE
NPTEL An Introduction to Artificial Intelligence Assignment 8 Answers 2022:- All the Answers provided here to help the students as a reference, You must submit your assignment at your own knowledge
1 thought on “NPTEL An Introduction to Artificial Intelligence Assignment 8 Answers”
1. answers not visible
Log in to Reply
Leave a Comment
You must be logged in to post a comment. | {"url":"https://uniquejankari.in/nptel-an-introduction-to-artificial-intelligence-assignment-8-answers-2022/","timestamp":"2024-11-08T10:29:12Z","content_type":"text/html","content_length":"187545","record_id":"<urn:uuid:16ce2142-d353-49ad-a3b7-e02c635cebe6>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00655.warc.gz"} |
Kepler’s Laws of Planetary Motion - First, Second, Third Law and Solved Examples
The Kepler’s Law of planetary motion typically revolves around the periods, orbits, and areas with regard to planet and sun. The principle thus states that the planets revolve around the sun in
elliptical orbits with the sun at one focus in a sun-centred solar system.
Owing to this evaluation of the motions of the planets, Kepler introduced a series of laws, now popularly known as Kepler’s three laws. The three laws by Kepler described the behaviour of planets
depending on their paths through space. The first two Kepler's laws of motion were published in 1609 in The New Astronomy. Their discovery turned out to be an earnest approach to the development of
modern science.
Kepler's First Law
Referred to as the law of ellipses, this law describes that planets are orbiting around the sun in a path explained as an ellipse. This Kepler's first law is also popularly known as the law of
orbits. The law suggests that the orbit of any planet is an ellipse along with the Sun with Sun at one of the two focal points of an ellipse. We are already acquainted with the fact that planets move
around the Sun in a circular orbit. However, as per Kepler, it is beyond doubt that planets revolve around the Sun, but not in a circular orbit. However, it revolves around an ellipse. In an ellipse,
we have two foci. Sun is situated at one of the focus points of the ellipse.
Kepler's Second Law
Known as the law of equal areas- this law defines the speed at which any given planet will revolve while orbiting the sun. The speed at which any planet revolves through space changes constantly. A
planet revolves the fastest when it is closest to the sun and slowest when it is farthest from the sun. Still, if an imaginary line is drawn from the centre of the planet to the centre of the sun
that line would blow off the similar area in equal periods. For example, if an imaginary line is drawn from the earth to the sun, then the area swept aside by the line every 31-day month will be the
same. This is depicted in the diagram below.
Diagrammatic Representation Of Kepler's Second Law
As can be seen from the diagram below, the areas developed when the earth is closest to the sun can be estimated as a broad but short triangle. On the other hand, the areas developed when the earth
is farthest from the sun can be estimated as a slender but long triangle. These areas are of similar size. Seeing that the base of these triangles are shortest when the earth is farthest from the
sun. It is observed that the earth would have to be in motion more slowly for the sake of this imaginary area to be the same size as when the earth is closest to the sun.
[Image will be uploaded soon]
Diagrammatic Representation Of Kepler's Second Law
Kepler’s Third Law
Referred to as the law of harmonies, Kepler’s third law sets a comparison for the orbital period and radius of orbit of a planet against other planets. Distinct from Kepler's 1st and 2nd laws that
explain the motion attribute of a single planet, the 3rd law compares the motion characteristics of different planets. The comparison established is that the ratio of the squares of the periods to
the cubes of their average distances from the sun is similar for every one of the planets.
Solved Examples for Kepler’s Laws of Motion
Problem 1
a. Which scientist is attributed to the gathering of the data relevant to support the planet's elliptical motion?
b. Which scientist is credited with the correct description of the data?
c. Which scientist is acknowledged with the long and complex task of evaluating the data?
1. Tycho Brahe collected the data.
2. Johannes Isaac Newton appropriately described the data.
3. Kepler evaluated the data.
Problem 2
The average orbital distance of Mars is 1.52 times to that of the average orbital distance of the Earth. Having known that the Earth orbits around the sun in about 365 days, use Kepler’s third law
(law of harmonies) to anticipate the time that Mars would require to orbit the sun.
Given that, the ratio of Rmars / Rearth is 1.52
= Tmars will be 684 days
Problem 3
Employ the graphing competencies of your TI calculator to trace out T2 vs. R3 (T2 must be plotted about the vertical axis) and to identify the equation of the line. Express the equation in
slope-intercept form below.
T2 = [3.03 ×10-16] × R3 - 4.62 × 10+9
Given the unpredictability in the y-intercept value, it can be estimated as 0.
Hence, T2 = (3.03 × 10-16) × R3
[Image will be uploaded soon]
FAQs on Kepler’s Laws of Planetary Motion
1. What are Kepler’s Three Laws?
Ans. Kepler’s 3 laws of planetary motion can be explained as follows:
1. The Law of Ellipses: By this postulate, the path of the planets along the sun is elliptical, with the nucleus (centre) of the sun being situated at one focus.
2. The Law of Harmonies: The proportion of the squares of the periods of any two planets is equivalent to the proportion of the cubes of their mean distances from the sun.
3. The Law of Equal Areas: An imaginary segment drawn from the nucleus of the sun to the nucleus of the planet will eradicate equal areas at equal intervals of time.
2. How to manually draw an Ellipse?
Ans. You can easily construct an ellipse using a pencil, string, sheet of paper, piece of cardboard and two tacks and wrap the string tightly around the tacks. Tack the paper sheet to the cardboard
with two tacks. Then tie the string into a loop and around the two tacks. Take your pencil and tug the string until the two tacks and pencil form a triangle. Then start to draw a path with the
pencil. The resulting shape is an ellipse.
An ellipse is a unique curve in which the sum of the total distances from each point on the curve to two other points is constant. For proper understanding, refer to the illustrative image below.
[Image will be uploaded soon] | {"url":"https://www.vedantu.com/jee-main/physics-keplers-laws-of-planetary-motion","timestamp":"2024-11-02T22:05:57Z","content_type":"text/html","content_length":"196475","record_id":"<urn:uuid:879db972-d097-474f-a4f9-4734946bc0af>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00435.warc.gz"} |
A Perceptron in just a few Lines of Python Code
A Perceptron in just a few Lines of Python Code
Content created by webstudio Richter alias Mavicc on March 30. 2017.
The perceptron can be used for supervised learning. It can solve binary linear classification problems. A comprehensive description of the functionality of a perceptron is out of scope here. To
follow this tutorial you already should know what a perceptron is and understand the basics of its functionality. Additionally a fundamental understanding of stochastic gradient descent is needed. To
get in touch with the theoretical background, I advise the Wikipedia article:
Furthermore I highly advise you the book of Schölkopf & Smola. Do not let the math scare you, as they explain the basics of machine learning in a really comprehensive way:
Schölkopf & Smola (2002). Learning with Kernels. Support Vector Machines, Regularization, Optimization, and Beyond.
To better understand the internal processes of a perceptron in practice, we will step by step develop a perceptron from scratch now.
Give Me the Code!
import numpy as np
X = np.array([
[1, 6, -1],
[2, 4, -1],
[6, 2, -1],
y = np.array([-1,-1,1,1,1])
def perceptron_sgd(X, Y):
w = np.zeros(len(X[0]))
eta = 1
epochs = 20
for t in range(epochs):
for i, x in enumerate(X):
if (np.dot(X[i], w)*Y[i]) <= 0:
w = w + eta*X[i]*Y[i]
return w
w = perceptron_sgd(X,y)
Our Ingredients
First we will import numpy to easily manage linear algebra and calculus operations in python. To plot the learning progress later on, we will use matplotlib.
import numpy as np
from matplotlib import pyplot as plt
%matplotlib inline
Stochastic Gradient Descent
We will implement the perceptron algorithm in python 3 and numpy. The perceptron will learn using the stochastic gradient descent algorithm (SGD). Gradient Descent minimizes a function by following
the gradients of the cost function. For further details see:
Wikipedia - stochastic gradient descent
Calculating the Error
To calculate the error of a prediction we first need to define the objective function of the perceptron.
Hinge Loss Function
To do this, we need to define the loss function, to calculate the prediction error. We will use hinge loss for our perceptron:
$c$ is the loss function, $x$ the sample, $y$ is the true label, $f(x)$ the predicted label.
This means the following: $% <![CDATA[ c(x, y, f(x))= \begin{cases} 0,& \text{if } y * f(x)\geq 1\\ 1-y*f(x), & \text{else} \end{cases} %]]>$
So consider, if y and f(x) are signed values $(+1,-1)$:
• the loss is 0, if $y*f(x)$ are positive, respective both values have the same sign.
• loss is $1-y*f(x)$ if $y*f(x)$ is negative
Objective Function
As we defined the loss function, we can now define the objective function for the perceptron:
We can write this without the dot product with a sum sign:
So the sample $x_i$ is misclassified, if $y_i \langle x_i,w \rangle \leq 0$. The general goal is, to find the global minima of this function, respectively find a parameter $w$, where the error is
Derive the Objective Function
To do this we need the gradients of the objective function. The gradient of a function $f$ is the vector of its partial derivatives. The gradient can be calculated by the partially derivative of the
objective function.
This means, if we have a misclassified sample $x_i$, respectively $ y_i \langle x_i,w \rangle \leq 0 $, update the weight vector $w$ by moving it in the direction of the misclassified sample.
With this update rule in mind, we can start writing our perceptron algorithm in python.
Our Data Set
First we need to define a labeled data set.
X = np.array([
[-2, 4],
[4, 1],
[1, 6],
[2, 4],
[6, 2]
Next we fold a bias term -1 into the data set. This is needed for the SGD to work. Details see The Perceptron algorithm
X = np.array([
[1, 6, -1],
[2, 4, -1],
[6, 2, -1],
y = np.array([-1,-1,1,1,1])
This small toy data set contains two samples labeled with $-1$ and three samples labeled with $+1$. This means we have a binary classification problem, as the data set contains two sample classes.
Lets plot the dataset to see, that is is linearly seperable:
for d, sample in enumerate(X):
# Plot the negative samples
if d < 2:
plt.scatter(sample[0], sample[1], s=120, marker='_', linewidths=2)
# Plot the positive samples
plt.scatter(sample[0], sample[1], s=120, marker='+', linewidths=2)
# Print a possible hyperplane, that is seperating the two classes.
[<matplotlib.lines.Line2D at 0x7f2d223415f8>]
Lets Start implementing Stochastic Gradient Descent
Finally we can code our SGD algorithm using our update rule. To keep it simple, we will linearly loop over the sample set. For larger data sets it makes sence, to randomly pick a sample during each
iteration in the for-loop.
def perceptron_sgd(X, Y):
w = np.zeros(len(X[0]))
eta = 1
epochs = 10
for epoch in range(epochs):
for i, x in enumerate(X):
if (np.dot(X[i], w)*Y[i]) <= 0:
w = w + eta*X[i]*Y[i]
return w
Code Description Line by Line
line 2: Initialize the weight vector for the perceptron with zeros
line 3: Set the learning rate to 1
line 4: Set the number of epochs
line 6: Iterate n times over the whole data set. line 7: Iterate over each sample in the data set
line 8: Misclassification condition $y_i \langle x_i,w \rangle \leq 0$ line 9: Update rule for the weights $w = w + y_i * x_i$ including the learning rate
Let the Perceptron learn!
Next we can execute our code and check, how many iterations are needed, until all sampels are classified right. To see the learning progress of the perceptron, we add a plotting feature to our
algorithm, counting the total error in each epoch.
def perceptron_sgd_plot(X, Y):
train perceptron and plot the total loss in each epoch.
:param X: data samples
:param Y: data labels
:return: weight vector as a numpy array
w = np.zeros(len(X[0]))
eta = 1
n = 30
errors = []
for t in range(n):
total_error = 0
for i, x in enumerate(X):
if (np.dot(X[i], w)*Y[i]) <= 0:
total_error += (np.dot(X[i], w)*Y[i])
w = w + eta*X[i]*Y[i]
plt.ylabel('Total Loss')
return w
This means, that the perceptron needed 14 epochs to classify all samples right (total error is zero). In other words, the algorithm needed to see the data set 14 times, to learn its structure.
The weight vector including the bias term is $(2,3,13)$.
We can extract the following prediction function now:
The weight vector is $(2,3)$ and the bias term is the third entry -13.
Lets classify the samples in our data set by hand now, to check if the perceptron learned properly:
First sample $(-2, 4)$, supposed to be negative:
Second sample $(4, 1)$, supposed to be negative:
Third sample $(1, 6)$, supposed to be positive:
Fourth sample $(2, 4)$, supposed to be positive:
Fifth sample $(6, 2)$, supposed to be positive:
Lets define two test samples now, to check how well our perceptron generalizes to unseen data:
First test sample $(2, 2)$, supposed to be negative:
Second test sample $(4, 3)$, supposed to be positive:
Both samples are classified right. To check this geometrically, lets plot the samples including test samples and the hyperplane.
for d, sample in enumerate(X):
# Plot the negative samples
if d < 2:
plt.scatter(sample[0], sample[1], s=120, marker='_', linewidths=2)
# Plot the positive samples
plt.scatter(sample[0], sample[1], s=120, marker='+', linewidths=2)
# Add our test samples
plt.scatter(2,2, s=120, marker='_', linewidths=2, color='yellow')
plt.scatter(4,3, s=120, marker='+', linewidths=2, color='blue')
# Print the hyperplane calculated by perceptron_sgd()
x2x3 =np.array([x2,x3])
X,Y,U,V = zip(*x2x3)
ax = plt.gca()
ax.quiver(X,Y,U,V,scale=1, color='blue')
<matplotlib.quiver.Quiver at 0x7f2d22240400>
Final Thoughts
Thats all about it. If you got so far, keep in mind, that the basic structure is the SGD applied to the objective function of the perceptron. This is just four lines of code. It contains all the
learning magic. Cool isnt it?
I am looking forward for your comments.
Greetings from webstudio, Mavicc | {"url":"https://maviccprp.github.io/a-perceptron-in-just-a-few-lines-of-python-code/","timestamp":"2024-11-05T16:50:59Z","content_type":"text/html","content_length":"39382","record_id":"<urn:uuid:a23d690c-ecc6-4559-b622-7cdeb4e3f258>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00633.warc.gz"} |
IIT JEE advanced Maths -Matrices & Determinants - Study Materials
IIT JEE advanced Maths -Matrices & Determinants – Study Materials
Notes and Study Materials -Matrices & Determinants
Notes and Study Materials
Examples and Exercise
IIT JEE (Advanced) Mathematics ,”Matrices & Determinants” Notes ,Test Papers, Sample Papers, Past Years Papers , NCERT , S. L. Loney and Hall & Knight Solutions and Help from Ex- IITian
About this unit
Matrices as a rectangular array of real numbers, equality of matrices, addition, multiplication by a scalar and product of matrices, transpose of a matrix. Determinant of a square matrix of order up
to three, the inverse of a square matrix of order up to three.Properties of these matrix operations, diagonal, symmetric and skew-symmetric matrices and their properties Solutions of simultaneous
linear equations in two or three variables.
IITian Academy Notes for IIT JEE (Advanced) Mathematics – Matrices & Determinants
The success mantra of the JEE is practice and hard work. Gone are the days when students used to spend hours in attempting one question. Now it is an era of multiple choice questions. The JEE
Mathematics questions test a student’s acquired knowledge as well as his aptitude. We ahve execellent notes prepared by Ex-IITian to best match the requirement of the exam.Focus is given on problem
solving skills and small tips and trciks to do it faster and easier. We , Ex-IITian at https://www.iitianacademy.com. will make sure you understand the concept well.
IIT JEE (Advanced) Mathematics, Matrices & Determinants Solved Examples and Practice Papers.
Get excellent practice papers and Solved examples to grasp the concept and check for speed and make you ready for big day. These Question Papers are prepared by Ex-IITIan for IIT JEE (Advanced)
Mathematics , Matrices & Determinants.
S. L. Loney IIT JEE (Advanced) Mathematics
This book is the one of the most beautifully written book by the author. Trigonometry is considered to be one of the easiest topics in mathematics by the aspirants of IIT JEE, AIEEE and other state
level engineering examination preparation. It would not be untrue to say that most of the sources have taken inspiration from this book as it is the most reliable source. The best part of this book
is its coverage in Heights and Distances and Inverse Trigonometric Functions. The book gives a very good learning experience and the exercises which follow are not only comprehensive but they have
both basic and standard questions.. I will help you online for any doubt / clarification.
Hall & Knight IIT JEE (Advanced) Mathematics
Algebra by Hall and Knight is one of the best books for JEE preparation. Students preparing for IIT JEE and other engineering entrance exams as well as students appearing for board exams should read
this everyday, especially to master Algebra and Probability. Hall and Knight have explained the concepts logically in their book.
IIT JEE (Advanced) Mathematics Assignments
Chapter wise assignments are being given by teachers to students to make them understand the chapter concepts. Its extremely critical for all CBSE students to practice all assignments which will help
them in gaining better marks in examinations. All assignments available for free download on the website are developed by the best teachers having many years of teaching experience in CBSE schools
all over the country. Students, teachers and parents are advised to contact me online incase of any doubt / clarification.
Past Many Years (40 Years) Questions IIT JEE (Advanced) Mathematics Solutions Matrices & Determinants
Past 40 Years Question Papers Solutions for IIT JEE (Advanced) Mathematics Matrices & Determinants are provided here with simple step-by-step explanations. These solutions for Matrices & Determinants
are extremely popular among IIT JEE (Advanced) students for Chemistry . Matrices & Determinants Solutions come handy for quickly completing your homework and preparing for exams. All questions and
answers from the Past Many Years Question Papers Book of IIT JEE (Advanced) Mathematics Chapter Matrices & Determinants are provided here for . I will help you online for any doubt / clarification. | {"url":"https://www.iitianacademy.com/iit-jee-advanced-maths-matrices-determinants-study-materials/","timestamp":"2024-11-09T03:29:47Z","content_type":"text/html","content_length":"266629","record_id":"<urn:uuid:830e5f6d-7a5e-4dc7-9e7e-122ce1c796d8>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00399.warc.gz"} |
Java BigInteger | HackerRank Solutions
Java BigInteger
Problem Statement :
In this problem, you have to add and multiply huge numbers! These numbers are so big that you can't contain them in any ordinary data types like a long integer.
Use the power of Java's BigInteger class and solve this problem.
Input Format
There will be two lines containing two numbers, a and b.
a and bare non-negative integers and can have maximum 200 digits.
Output Format
Output two lines. The first line should contain a+b, and the second line should contain a*b. Don't print any leading zeros.
Solution :
Solution in C :
import java.math.*;
import java.util.Scanner;
public class Solution {
public static void main(String []args){
Scanner input = new Scanner(System.in);
String s1,s2;
System.out.println(new BigInteger(s1).add(new BigInteger(s2)));
System.out.println(new BigInteger(s1).multiply(new BigInteger(s2)));
View More Similar Problems
This question is designed to help you get a better understanding of basic heap operations. You will be given queries of types: " 1 v " - Add an element to the heap. " 2 v " - Delete the element from
the heap. "3" - Print the minimum of all the elements in the heap. NOTE: It is guaranteed that the element to be deleted will be there in the heap. Also, at any instant, only distinct element
View Solution →
Jesse and Cookies
Jesse loves cookies. He wants the sweetness of all his cookies to be greater than value K. To do this, Jesse repeatedly mixes two cookies with the least sweetness. He creates a special combined
cookie with: sweetness Least sweet cookie 2nd least sweet cookie). He repeats this procedure until all the cookies in his collection have a sweetness > = K. You are given Jesse's cookies. Print t
View Solution →
Find the Running Median
The median of a set of integers is the midpoint value of the data set for which an equal number of integers are less than and greater than the value. To find the median, you must first sort your set
of integers in non-decreasing order, then: If your set contains an odd number of elements, the median is the middle element of the sorted sample. In the sorted set { 1, 2, 3 } , 2 is the median.
View Solution →
Minimum Average Waiting Time
Tieu owns a pizza restaurant and he manages it in his own way. While in a normal restaurant, a customer is served by following the first-come, first-served rule, Tieu simply minimizes the average
waiting time of his customers. So he gets to decide who is served first, regardless of how sooner or later a person comes. Different kinds of pizzas take different amounts of time to cook. Also, once
View Solution →
Merging Communities
People connect with each other in a social network. A connection between Person I and Person J is represented as . When two persons belonging to different communities connect, the net effect is the
merger of both communities which I and J belongs to. At the beginning, there are N people representing N communities. Suppose person 1 and 2 connected and later 2 and 3 connected, then ,1 , 2 and 3 w
View Solution →
Components in a graph
There are 2 * N nodes in an undirected graph, and a number of edges connecting some nodes. In each edge, the first value will be between 1 and N, inclusive. The second node will be between N + 1 and
, 2 * N inclusive. Given a list of edges, determine the size of the smallest and largest connected components that have or more nodes. A node can have any number of connections. The highest node valu
View Solution → | {"url":"https://hackerranksolution.in/bigintegerjava/","timestamp":"2024-11-15T03:55:21Z","content_type":"text/html","content_length":"38478","record_id":"<urn:uuid:598b0e01-4eb7-4055-9f9a-215674d4c639>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00012.warc.gz"} |
topological space pdf
1.1 Topological spaces 1.1.1 The notion of topological space The topology on a set Xis usually de ned by specifying its open subsets of X. Every path-connected space is /Rect [138.75 268.769 310.799
277.68] /Border[0 0 0]/H/I/C[1 0 0] We construct an ansatz based on knot and monopole topological vacuum structure for searching new solutions in SU(2) and SU(3) QCD. << /S /GoTo /D [106 0 R /Fit ]
>> ric space. /Filter /FlateDecode /Rect [138.75 336.57 282.432 347.418] Topological Spaces Example 1. /A << /S /GoTo /D (section.1.4) >> N such that both f and f¡1 are continuous (with respect to
the topologies of M and N). �b& L���p�%؛�p��)?qa{�&���H� �7�����P�2_��z��#酸DQ f�Y�r�Q�Qo�~~��n���ryd���7AT_TǓr[`y�!�"�M�#&r�f�t�ކ�`%⫟FT��qE@VKr_$*���&�0�.`��Z�����C �Yp���һ�=ӈ)�w��G�n�;
��7f���n��aǘ�M��qd!^���l���( S&��cϭU"� >> endobj These are the notes prepared for the course MTH 304 to be o ered to undergraduate students at IIT Kanpur. /Length 1047 >> endobj §2. << /S /GoTo /D
(section.1.12) >> << /S /GoTo /D (section.1.10) >> Such properties, which are the same on any equivalence class of homeomorphic spaces, are called topological invariants. (Products \(new spaces from
old, 2\)) By a (topological) ball, we mean the unit ball of a Banach space equipped with a second locally convex Hausdor topology, coarser than that of the norm, in which the norm is lower
semi-continuous. I want also to drive home the disparate nature of the examples to which the theory applies. (Incidentally, the plural of “TVS" is “TVS", just as the plural of “sheep" is “sheep".)
141 0 obj << << /S /GoTo /D (chapter.1) >> Topological Spaces 1. A topology on a set X is a collection Tof subsets of X such that (T1) ˚and X are in T; /Rect [138.75 429.666 316.057 441.621] /A << /S
/GoTo /D (section.1.5) >> … 89 0 obj Suppose fis a function whose domain is Xand whose range is contained in Y.Thenfis continuous if and only if the following condition is met: For every open set Oin
the topological space (Y,C),thesetf−1(O)is open in the topo- For a metric space X, (A) (D): Proof. FINDING TOPOLOGY IN A FACTORY: CONFIGURATION SPACES A. ABRAMS AND R. GHRIST It is perhaps not
universally acknowledged that an outstanding place to nd interesting topological objects is within the walls of an automated warehouse or factory. De nition 1.1.1. /Rect [138.75 280.724 300.754
289.635] /Border[0 0 0]/H/I/C[1 0 0] 124 0 obj << 48 0 obj ��syk`��t|�%@���r�@����`�� >> endobj (Review of Chapter A) Namely, we will discuss metric spaces, open sets, and closed sets. Definition 2.
Show that if A is connected, then A is connected. >> endobj /Rect [123.806 396.346 206.429 407.111] endobj /Border[0 0 0]/H/I/C[1 0 0] (1)Let X denote the set f1;2;3g, and declare the open sets to be
f1g, f2;3g, f1;2;3g, and the empty set. 130 0 obj << stream /Subtype /Link /Type /Annot merely the structure of a topological space. << /S /GoTo /D (section.2.3) >> endobj 140 0 obj << /Subtype /Link
The second part of the course is the study of these topological spaces and de ning a lot of interesting properties just in terms of open sets. �#(�ҭ�i�G�+ �,�W+ ?o�����X��b��:�5��6�!uɋ��41���3�ݩ��^
`�ރ�.��y��8xs咻�o�(����x�V�뛘��Ar��:�� >> endobj /Border[0 0 0]/H/I/C[1 0 0] /Rect [138.75 348.525 281.465 359.374] endobj endobj See Exercise 1.7. /Border[0 0 0]/H/I/C[1 0 0] /Type /Annot /Type /
Annot 76 0 obj << /S /GoTo /D (chapter.3) >> A morphism is a function, continuous in the second topology, that preserves the absolutely convex structure of the unit balls. Then the … A topological
space is an ordered pair (X, τ), where X is a set and τ is a collection of subsets of X, satisfying the following axioms: The empty set and X itself belong to τ. endobj << /S /GoTo /D (section.1.9)
>> 52 0 obj We can then formulate classical and basic theorems about continuous functions in a much broader framework. endobj (4)For each x2Xand each neighborhood V of f(x) in Y there is a
neighborhood Uof x Issues on selection functions, fixed point theory, etc. Definitions & (3.1a) Proposition Every metric space is Hausdorff, in particular R n is Hausdorff (for n ≥ 1). Symmetry 2020,
12, 2049 3 of 15 subspace X0 X in the corresponding topological base space, then the cross‐sections of an automorphic bundle within the subspace form an algebraic group structure. >> endobj A
topology on a set Xis a collection Tof subsets of Xhaving the properties ;and Xare in T. Arbitrary unions of elements of Tare in T. Finite intersections of elements of Tare in T. Xis called a
topological space. Then fis a homeomorphism. /Subtype /Link /Border[0 0 0]/H/I/C[1 0 0] 114 0 obj << 44 0 obj /Border[0 0 0]/H/I/C[1 0 0] /A << /S /GoTo /D (section.3.3) >> [Exercise 2.2] Show that
each of the following is a topological space. Any arbitrary (finite or infinite) union of members of τ still belongs to τ. /A << /S /GoTo /D (section.2.2) >> /A << /S /GoTo /D (section.1.12) >> 133 0
obj << 137 0 obj << /A << /S /GoTo /D (section.1.1) >> (Compactness and products) 105 0 obj new space. This particular topology is said to be induced by the metric. endobj For example, an important
theorem in optimization is that any continuous function f : [a;b] !R achieves its minimum at least one point x2[a;b]. /Type /Annot endobj /Border[0 0 0]/H/I/C[1 0 0] >> endobj >> endobj A finite space
is an A-space. /Border[0 0 0]/H/I/C[1 0 0] (Topological properties) We want to topologize this set in a fashion consistent with our intuition of glueing together points of X. >> endobj >> endobj
(Metrics versus topologies) %PDF-1.4 /Subtype /Link endobj endobj endobj �U��fc�ug��۠�B3Q�L�ig�4kH�f�h��F�Ǭ1�9ᠹ��rQZ��HJ���xaRZ��#qʁ�����w�p(vA7Jޘ5!��T��yZ3�Eܫh Similarly, we can de ne topological
rings and topological elds. Any group given the discrete topology, or the indiscrete topology, is a topological group. /Parent 113 0 R But usually, I will just say ‘a metric space X’, using the
letter dfor the metric unless indicated otherwise. /Subtype /Link endobj space-time has been obtained. /Subtype /Link Topological Properties §11 Connectedness §11 1 Definitions of Connectedness and
First Examples A topological space X is connected if X has only two subsets that are both open and closed: the empty set ∅ and the entire X. Topology of Metric Spaces 1 2. 4 0 obj /Type /Annot << /S
/GoTo /D (section.2.4) >> 108 0 obj << The gadget for doing this is as follows. If Y is a topological space, we could de ne a topology on Xby asking that it is the coarsest topology so that fis
continuous. This paper proposes the construction and analysis of fiber space in the non‐uniformly scalable multidimensional topological 2 ALEX GONZALEZ. /Rect [138.75 372.436 329.59 383.284] endobj /
Rect [138.75 501.95 327.099 512.798] 53 0 obj 1 0 obj endobj 37 0 obj /D [142 0 R /XYZ 124.802 586.577 null] We denote by B the endobj >> endobj << /S /GoTo /D (section.1.3) >> << /S /GoTo /D
(section.2.6) >> /Subtype /Link 101 0 obj 65 0 obj 5 0 obj The way we endobj /A << /S /GoTo /D (section.1.7) >> /Parent 113 0 R Let X be a topological space and A X be a subset. 28 0 obj Appendix A.
/Subtype /Link 29 0 obj Example 1.1.11. 92 0 obj Xbe a topological space and let ˘be an equivalence relation on X. 109 0 obj << 57 0 obj (Compactness) endobj /Rect [246.512 418.264 255.977 429.112]
endobj >> endobj To prove the converse, it will su ce to show that (E) ) (B). 3. The empty set emptyset is in T. 2. A direct calculation shows that the inverse limit of an inverse system of nite T
0-spaces is spectral. /A << /S /GoTo /D (section.1.11) >> 13 0 obj topological space (X, τ), int (A), cl(A) and C(A) represents the interior of A, the closure of A, and the complement of A in X
respectively. It is well known [Hoc69,Joy71] that pro nite T 0-spaces are exactly the spectral spaces. 104 0 obj (B1) For any U2B(x), x2U. The is not an original work of the writer. 135 0 obj << �TY$
�*��vø��#��I�O�� /Type /Annot /Type /Annot endobj << /S /GoTo /D (chapter.2) >> >> endobj (T3) The union of any collection of sets of T is again in T . 138 0 obj << 61 0 obj endobj
c���O�������k��A�o��������{��� ��Bd��0�}J�XW}ߞ6�%�KI�DB �C�]� This is called the discrete topology on X, and (X;T) is called a discrete space. Here are to be found only basic issues on continuity and
measurability of set-valued maps. We then looked at some of the most basic definitions and properties of pseudometric spaces. /Rect [138.75 453.576 317.496 465.531] We claim such S must be closed. A
topological group Gis a group which is also a topological space such that the multi-plication map (g;h) 7!ghfrom G Gto G, and the inverse map g7!g 1 from Gto G, are both continuous. (Review of metric
spaces) /A << /S /GoTo /D (section.1.2) >> A family ˝ IX of fuzzy sets is called a fuzzy topology for Xif it satis es the following three axioms: (1) 0;1 2˝. 142 0 obj << Give ve topologies on a
3-point set. Similarly, we can de ne topological rings and topological elds. 106 0 obj << Explain what is m eant by the interior Int( A ) and the closure A of A . /Border[0 0 0]/H/I/C[1 0 0] 25 0 obj
>> endobj Topological space may or may not have ( e.g topological space pdf one-to-one function ): Proof hence not. Of reasons by B the Another form of Connectedness is path-connectedness QCD in
Minkowski space-time can naturally... Often called open dfor the metric want to topologize this set in a fashion consistent with our intuition about sets... Of set-valued maps, or the indiscrete
topology, that preserves the absolutely convex structure the. Calculation shows that the inverse limit of an inverse system of nite spaces! U⊂ Xis called closed in Xfor every closed set BˆY sort of
topological spaces which are in. Looked at some of the meaning of open and closed sets for,. [ Hoc69, Joy71 ] that pro nite T 0-spaces are exactly the spaces. Which a topological space axioms are
satis ed by the collection of open sets any! We will de ne a topology on X, T ) if it is a powerful in! And so on structure defined on a set system of nite topological spaces ( a ) ne! Is an A-space
if the set U is closed if and only if it is difficult to a. T3 ) the inverse limit of an inverse system of nite topological spaces i ) be a topological may. By Proposition A.8, ( a ) ˆf ( a ) de ne
topological rings and topological elds solitons., which are open in X space ( but cf they are necessary for the starting topology! Accumulation point can de ne topological rings and topological elds
now there is a topological space is Hausdorff for. Particular topology is said to be induced by the distance function don X with due to constraints. Hoc69, Joy71 ] that pro nite T 0-spaces is spectral
properties ” that a is connected, a. Properties ” that a topological space may or may not satisfy topology on Xthat makes Xinto a topological:! Belongs to T the abstraction of the unit balls Xthat
makes Xinto a topological space function, in. Often called open particular topology is an important branch of pure mathematics, we... Two homeomor-phic spaces are homotopy equivalent equivalent but
are not homeomorphic not original. Set of equivalence classes Uof Xis called open in the topological space de nition is intuitive and easy understand. Tvs ) is a topological space determines the
topology generated by the metric unless indicated.! Homotopy equivalence f be a compact subspace of Y. Corollary 9 Compactness is a continuous mappingfromV intoitselfforeacha V.... Equivalent: ( 1 )
fis continuous in T ) topological space pdf of any finite number of definitions and that. Terminology may be somewhat confusing, but it is a point X 2 X such any. Singular knot-like solutions in QCD
in Minkowski space-time can be naturally obtained from knot solitons in integrable CP1 models Hausdor! All its limit points U of X intersects a ] that pro nite it... A fashion consistent with our
intuition of glueing together points of X by N =... Appendix a 3.1 finite dimensional topological vector space ( TVS ) is a compact topological and... Nite T 0-spaces is spectral, topological spaces
can do that metric spaces open! Pure mathematics be naturally obtained from knot solitons in integrable CP1 models for every AˆX subset! And topological elds base of open and closed sets sets as the
topology generated by distance! Open in the definition 2 for any U 1 ; U 2 2B (,. From knot solitons in integrable CP1 models want also to drive home the nature! Domains, perhaps with additional
properties, which are open in the topological space is pro T. Well known [ Hoc69, Joy71 ] that pro nite if it is list! Nite topological spaces can then formulate classical and basic theorems about
continuous in! The definition 2 f 1 ( B ) if a is connected then! Closed in the second topological space pdf, or the indiscrete topology, or the space. Clearly a topological vector space over the field
K of real or numbers! With an opaque “ point-free ” style of argument 8 ( a ) for any U ;! [ Exercise 2.2 ] show that ( E ) ) ( D ) Proof! Uof Xis called open if Uis contained in T. de nition A1.3
let Xbe a topological space and let >! Totality of open sets in any metric space are invariant under homeomorphisms, i.e are a number of definitions issues. U is closed under arbitrary intersections
an original work of the properties that sets., then the following properties spaces which are open in the second topology, that preserves the absolutely structure. They do not in general have enough
points and for all i2I let ( X ) Xin... Formulate classical and basic theorems about continuous functions in a fashion consistent with our intuition about sets... ] that pro nite T 0-spaces is
spectral for every AˆX for topological spaces, and it well... Nature of the meaning of open sets as the totality of open sets, and so on of of... | {"url":"http://portanywhere.com/class/beat-dyslexia-bsysl/viewtopic.php?c6b224=topological-space-pdf","timestamp":"2024-11-05T13:43:51Z","content_type":"text/html","content_length":"22905","record_id":"<urn:uuid:c5e64f70-862b-48ad-a256-0710f55cc52c>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00001.warc.gz"} |
□ edu.princeton.cs.algs4.TransitiveClosure
• public class TransitiveClosure
extends Object
class represents a data type for computing the transitive closure of a digraph.
This implementation runs depth-first search from each vertex. The constructor takes Θ(V(V + E)) in the worst case, where V is the number of vertices and E is the number of edges. Each instance
method takes Θ(1) time. It uses Θ(V^2) extra space (not including the digraph).
For large digraphs, you may want to consider a more sophisticated algorithm. Nuutila proposes two algorithm for the problem (based on strong components and an interval representation) that runs
in Θ(E + V) time on typical digraphs. For additional documentation, see Section 4.2 of Algorithms, 4th Edition by Robert Sedgewick and Kevin Wayne.
Robert Sedgewick, Kevin Wayne
□ Constructor Detail
☆ TransitiveClosure
public TransitiveClosure(Digraph G)
Computes the transitive closure of the digraph G.
G - the digraph
□ Method Detail
☆ reachable
public boolean reachable(int v,
int w)
Is there a directed path from vertex v to vertex w in the digraph?
v - the source vertex
w - the target vertex
true if there is a directed path from v to w, false otherwise
IllegalArgumentException - unless 0 <= v < V
IllegalArgumentException - unless 0 <= w < V
☆ main
public static void main(String[] args)
Unit tests the TransitiveClosure data type.
args - the command-line arguments | {"url":"https://algs4.cs.princeton.edu/code/javadoc/edu/princeton/cs/algs4/TransitiveClosure.html","timestamp":"2024-11-10T10:42:36Z","content_type":"text/html","content_length":"11623","record_id":"<urn:uuid:f5309faa-4f73-4980-bda7-bf32f6e7618a>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00893.warc.gz"} |
How many 60cm pieces can be cut from a ribbon 2.4 meters long? - Answers
If Chester has a ribbon that is 2 meters long he wants to cut it into 5 equal pieces how many centimeters long will each piece be?
Oh, dude, you're really making me do math right now? Okay, fine. So, 2 meters is 200 centimeters, and if Chester cuts it into 5 equal pieces, each piece would be 40 centimeters long. There you go,
Chester's got 5 pieces of ribbon, all 40 centimeters long. Happy now? | {"url":"https://math.answers.com/math-and-arithmetic/How_many_60cm_pieces_can_be_cut_from_a_ribbon_2.4_meters_long","timestamp":"2024-11-08T12:53:12Z","content_type":"text/html","content_length":"162274","record_id":"<urn:uuid:8b354e3d-0836-47c5-b707-239dc3373612>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00407.warc.gz"} |
dam-engineering of Engineering Topics | Question AI
<html lang="en" class="topic-desktop ui-chrome107 ui-chrome"><head></head><body data-leg="D" class="new-topic topic-desktop first-page-true user-ANONYMOUS user-ads md-desktop leg-d"><header id=
"header" class="bg-navy-dark"></header><main><div class="md-page-wrapper"><div id="content" class="md-content"><div class="md-article-container template-desktop infinite-pagination"><div class=
"infinite-scroll-container article last"><article class="article-content container-lg qa-content px-0 pt-0 pb-40 py-lg-20 content md-expanded" data-topic-id="150337"><div class="grid gx-0"><div class
="col-auto"><div class="topic-left-rail md-article-drawer position-relative d-flex border-right-sm border-left-sm open"><div class="drawer d-flex flex-column open"></div></div></div><div class="col">
<div class="h-100 ml-0 pr-sm-10 pr-lg-0 "><div class="h-100 grid gx-0 gx-sm-20"><div class="h-100 col-sm"><div class="h-100 infinite-pagination-container d-flex flex-column position-relative"><div
class="grey-box w-100 grey-box-top"><div class="grey-box-content mx-auto w-100"><div class="page2ref-true topic-content topic-type-REGULAR" data-student-article="false"><div class="reading-channel">
<section data-level="1" id="ref1"><p class="topic-paragraph"><strong><span id="ref592792"></span>dam</strong>, structure built across a stream, a river, or an estuary to retain <span id="ref592793">
</span><span class="md-crosslink" data-show-preview="true">water</span>. Dams are built to provide water for human <span class="md-dictionary-link md-dictionary-tt-off mw" data-term="consumption"
data-type="MW">consumption</span>, for irrigating arid and semiarid lands, or for use in industrial processes. They are used to increase the amount of water available for generating <span class=
"md-crosslink" data-show-preview="true">hydroelectric power</span>, to reduce peak discharge of floodwater created by large storms or heavy snowmelt, or to increase the depth of water in a river in
order to improve navigation and allow barges and ships to travel more easily. Dams can also provide a lake for recreational activities such as swimming, boating, and fishing. Many dams are built for
more than one purpose; for example, water in a single <span class="md-crosslink" data-show-preview="true">reservoir</span> can be used for fishing, to generate hydroelectric power, and to support an
irrigation system. Water-control structures of this type are often designated multipurpose dams.</p><p class="topic-paragraph"><span class="md-dictionary-link md-dictionary-tt-off mw" data-term=
"Auxiliary" data-type="MW">Auxiliary</span> works that can help a dam function properly include <span class="md-crosslink" data-show-preview="true">spillways</span>, movable <span class=
"md-crosslink" data-show-preview="true">gates</span>, and <span class="md-crosslink" data-show-preview="true">valves</span> that control the release of surplus water downstream from the dam. Dams can
also include intake structures that deliver water to a power station or to <span class="md-crosslink" data-show-preview="true">canals</span>, <span class="md-crosslink" data-show-preview="true">
tunnels</span>, or <span class="md-crosslink" data-show-preview="true">pipelines</span> designed to convey the water stored by the dam to far-distant places. Other auxiliary works are systems for
evacuating or flushing out <span class="md-crosslink" data-show-preview="true">silt</span> that accumulates in the reservoir, <span class="md-crosslink" data-show-preview="true">locks</span> for
permitting the passage of ships through or around the dam site, and fish ladders (graduated steps) and other devices to assist fish seeking to swim past or around a dam.</p><p class=
"topic-paragraph">A <span class="md-dictionary-link md-dictionary-tt-off eb" data-term="dam" data-type="EB">dam</span> can be a central structure in a multipurpose scheme designed to conserve water
resources on a regional basis. Multipurpose dams can hold special importance in developing countries, where a single dam may bring significant benefits related to hydroelectric power production,
agricultural development, and industrial growth. However, dams have become a focus of environmental concern because of their impact on migrating fish and riparian <span class="md-crosslink"
data-show-preview="true">ecosystems</span>. In addition, large reservoirs can inundate vast tracts of land that are home to many people, and this has fostered opposition to dam projects by groups who
question whether the benefits of proposed projects are worth the costs.</p><p class="topic-paragraph"><span id="ref3723"></span>In terms of <span class="md-crosslink autoxref" data-show-preview=
"true">engineering</span>, <span class="md-dictionary-link md-dictionary-tt-off eb" data-term="dams" data-type="EB">dams</span> fall into several distinct classes defined by structural type and by
building material. The decision as to which type of dam to build largely depends on the <span class="md-crosslink autoxref" data-show-preview="true">foundation</span> conditions in the valley, the
<span class="md-crosslink autoxref" data-show-preview="true">construction</span> materials available, the accessibility of the site to transportation networks, and the experiences of the engineers,
financiers, and promoters responsible for the project. In modern dam engineering, the choice of materials is usually between <span class="md-crosslink autoxref" data-show-preview="true">concrete</
span>, earthfill, and rockfill. Although in the past a number of dams were built of jointed <span class="md-crosslink autoxref" data-show-preview="true">masonry</span>, this practice is now largely
obsolete and has been supplanted by concrete. Concrete is used to build massive gravity dams, thin <span class="md-crosslink" data-show-preview="true">arch dams</span>, and <span class="md-crosslink"
data-show-preview="true">buttress dams</span>. The development of roller-compacted concrete allowed high-quality concrete to be placed with the type of equipment originally developed to move,
distribute, and consolidate earthfill. Earthfill and rockfill dams are usually grouped together as embankment dams because they <span class="md-dictionary-link md-dictionary-tt-off mw" data-term=
"constitute" data-type="MW">constitute</span> huge mounds of earth and <span class="md-crosslink autoxref" data-show-preview="true">rock</span> that are assembled into imposing man-made embankments.
</p><span class="md-raw-html"><div class="md-drag md-table-wrapper"><table> <caption> World's largest dams </caption> <thead> <tr> <th colspan="6" id="h" scope="col"> By height </th> </tr> <tr>
<th id="nh" scope="col"> name </th> <th id="th" scope="col"> type<sup>1</sup> </th> <th id="dh" scope="col"> date of completion </th> <th id="rh" scope="col"> river </th> <th id="ch" scope="col">
country </th> <th id="hm" scope="col"> height (metres) </th> </tr> </thead> <tfoot> <tr> <td colspan="6" headers="t"> <sup>1</sup>Key: A, arch; B, buttress; E, earth fill; G, gravity; M, multi-arch;
R, rock fill. </td> </tr> <tr> <td colspan="6" headers="t"> <sup>2</sup>Vaiont Dam was the scene of a massive landslide and flood in 1963 and no longer operates. </td> </tr> <tr> <td colspan="6"
headers="t"> <sup>3</sup>Diversion tunnels closed and reservoir filling begun December 2002. </td> </tr> <tr> <td colspan="6" headers="t"> <sup>4</sup>Impounds settling reservoir for fine tailings in
oil sands operation near Fort McMurray, Alberta. </td> </tr> <tr> <td colspan="6" headers="t"> <sup>5</sup>Most of this reservoir is a natural lake. </td> </tr> <tr> <td colspan="6" headers="t">
Source: International Water Power and Dam Construction Yearbook (1996). </td> </tr> </tfoot> <tbody> <tr> <td headers="h nh" scope="row"> Nurek </td> <td headers="h th"> E </td> <td headers="h dh">
1980 </td> <td headers="h rh"> Vakhsh </td> <td headers="h ch"> Tajikistan </td> <td headers="h hm"> 300 </td> </tr> <tr> <td headers="h nh" scope="row"> Grande Dixence </td> <td headers="h th"> G </
td> <td headers="h dh"> 1961 </td> <td headers="h rh"> Dixence </td> <td headers="h ch"> Switzerland </td> <td headers="h hm"> 285 </td> </tr> <tr> <td headers="h nh" scope="row"> Inguri </td> <td
headers="h th"> A </td> <td headers="h dh"> 1980 </td> <td headers="h rh"> Inguri </td> <td headers="h ch"> Georgia </td> <td headers="h hm"> 272 </td> </tr> <tr> <td headers="h nh" scope="row">
Vaiont<sup>2</sup> </td> <td headers="h th"> A </td> <td headers="h dh"> 1961 </td> <td headers="h rh"> Vaiont </td> <td headers="h ch"> Italy </td> <td headers="h hm"> 262 </td> </tr> <tr> <td
headers="h nh" scope="row"> Chicoasen </td> <td headers="h th"> ER </td> <td headers="h dh"> 1980 </td> <td headers="h rh"> Grijalva </td> <td headers="h ch"> Mexico </td> <td headers="h hm"> 261 </
td> </tr> <tr> <td headers="h nh" scope="row"> Tehri </td> <td headers="h th"> ER </td> <td headers="h dh"> 2002<sup>3</sup> </td> <td headers="h rh"> Bhagirathi </td> <td headers="h ch"> India </td>
<td headers="h hm"> 261 </td> </tr> <tr> <td headers="h nh" scope="row"> Mauvoisin </td> <td headers="h th"> A </td> <td headers="h dh"> 1957 </td> <td headers="h rh"> Drance de Bagnes </td> <td
headers="h ch"> Switzerland </td> <td headers="h hm"> 250 </td> </tr> <tr> <td headers="h nh" scope="row"> Guavio </td> <td headers="h th"> ER </td> <td headers="h dh"> 1989 </td> <td headers="h rh">
Guavio </td> <td headers="h ch"> Colombia </td> <td headers="h hm"> 246 </td> </tr> <tr> <td headers="h nh" scope="row"> Sayano-Shushenskoye </td> <td headers="h th"> AG </td> <td headers="h dh">
1989 </td> <td headers="h rh"> Yenisey </td> <td headers="h ch"> Russia </td> <td headers="h hm"> 245 </td> </tr> <tr> <td headers="h nh" scope="row"> Mica </td> <td headers="h th"> ER </td> <td
headers="h dh"> 1973 </td> <td headers="h rh"> Columbia </td> <td headers="h ch"> Canada </td> <td headers="h hm"> 242 </td> </tr> <tr> <td headers="h nh" scope="row"> Ertan </td> <td headers="h th">
A </td> <td headers="h dh"> 1999 </td> <td headers="h rh"> Yalong (Ya-lung) </td> <td headers="h ch"> China </td> <td headers="h hm"> 240 </td> </tr> <tr> <td headers="h nh" scope="row"> Chivor </td>
<td headers="h th"> ER </td> <td headers="h dh"> 1957 </td> <td headers="h rh"> Batá </td> <td headers="h ch"> Colombia </td> <td headers="h hm"> 237 </td> </tr> </tbody> <thead> <tr> <th colspan="6"
id="v" scope="col"> By volume </th> </tr> <tr> <th id="nv" scope="col"> name </th> <th id="tv" scope="col"> type<sup>1</sup> </th> <th id="dv" scope="col"> date of completion </th> <th id="rv" scope=
"col"> river </th> <th id="cv" scope="col"> country </th> <th id="vm" scope="col"> volume (000 cubic metres) </th> </tr> </thead> <tbody> <tr> <td headers="v nv" scope="row"> Syncrude Tailings </td>
<td headers="v tv"> E </td> <td headers="v dv"> N/A </td> <td headers="v rv"> <sup>4</sup> </td> <td headers="v cv"> Canada </td> <td headers="v vm"> 750,000 </td> </tr> <tr> <td headers="v nv" scope
="row"> New Cornelia Tailings </td> <td headers="v tv"> E </td> <td headers="v dv"> 1973 </td> <td headers="v rv"> Ten Mile Wash </td> <td headers="v cv"> U.S. </td> <td headers="v vm"> 209,500 </td>
</tr> <tr> <td headers="v nv" scope="row"> Tarbela </td> <td headers="v tv"> ER </td> <td headers="v dv"> 1977 </td> <td headers="v rv"> Indus </td> <td headers="v cv"> Pakistan </td> <td headers="v
vm"> 106,000 </td> </tr> <tr> <td headers="v nv" scope="row"> Fort Peck </td> <td headers="v tv"> E </td> <td headers="v dv"> 1937 </td> <td headers="v rv"> Missouri </td> <td headers="v cv"> U.S. </
td> <td headers="v vm"> 96,050 </td> </tr> <tr> <td headers="v nv" scope="row"> Lower Usuma </td> <td headers="v tv"> E </td> <td headers="v dv"> 1990 </td> <td headers="v rv"> Usuma </td> <td
headers="v cv"> Nigeria </td> <td headers="v vm"> 93,000 </td> </tr> <tr> <td headers="v nv" scope="row"> Tucurui </td> <td headers="v tv"> EGR </td> <td headers="v dv"> 1984 </td> <td headers="v
rv"> Tocantins </td> <td headers="v cv"> Brazil </td> <td headers="v vm"> 85,200 </td> </tr> <tr> <td headers="v nv" scope="row"> Ataturk </td> <td headers="v tv"> ER </td> <td headers="v dv"> 1990
</td> <td headers="v rv"> Euphrates </td> <td headers="v cv"> Turkey </td> <td headers="v vm"> 84,500 </td> </tr> <tr> <td headers="v nv" scope="row"> Guri (Raúl Leoni) </td> <td headers="v tv"> EGR
</td> <td headers="v dv"> 1986 </td> <td headers="v rv"> Caroní </td> <td headers="v cv"> Venezuela </td> <td headers="v vm"> 77,971 </td> </tr> <tr> <td headers="v nv" scope="row"> Oahe </td> <td
headers="v tv"> E </td> <td headers="v dv"> 1958 </td> <td headers="v rv"> Missouri </td> <td headers="v cv"> U.S. </td> <td headers="v vm"> 66,517 </td> </tr> <tr> <td headers="v nv" scope="row">
Gardiner </td> <td headers="v tv"> E </td> <td headers="v dv"> 1968 </td> <td headers="v rv"> Saskatchewan </td> <td headers="v cv"> Canada </td> <td headers="v vm"> 65,400 </td> </tr> <tr> <td
headers="v nv" scope="row"> Mangla </td> <td headers="v tv"> E </td> <td headers="v dv"> 1967 </td> <td headers="v rv"> Jhelum </td> <td headers="v cv"> Pakistan </td> <td headers="v vm"> 65,379 </
td> </tr> <tr> <td headers="v nv" scope="row"> Afsluitdijk </td> <td headers="v tv"> E </td> <td headers="v dv"> 1932 </td> <td headers="v rv"> IJsselmeer </td> <td headers="v cv"> Netherlands </td>
<td headers="v vm"> 63,430 </td> </tr> </tbody> <thead> <tr> <th colspan="6" id="s" scope="col"> By size of reservoir </th> </tr> <tr> <th id="ns" scope="col"> name </th> <th id="ts" scope="col">
type<sup>1</sup> </th> <th id="ds" scope="col"> date of completion </th> <th id="rs" scope="col"> river </th> <th id="cs" scope="col"> country </th> <th id="rc" scope="col"> reservoir capacity (000
cubic metres) </th> </tr> </thead> <tbody> <tr> <td headers="s ns" scope="row"> Owen Falls </td> <td headers="s ts"> G </td> <td headers="s ds"> 1954 </td> <td headers="s rs"> Victoria Nile </td> <td
headers="s cs"> Uganda </td> <td headers="s rc"> 2,700,000,000<sup>5</sup> </td> </tr> <tr> <td headers="s ns" scope="row"> Kakhovka </td> <td headers="s ts"> EG </td> <td headers="s ds"> 1955 </td>
<td headers="s rs"> Dnieper </td> <td headers="s cs"> Ukraine </td> <td headers="s rc"> 182,000,000 </td> </tr> <tr> <td headers="s ns" scope="row"> Kariba </td> <td headers="s ts"> A </td> <td
headers="s ds"> 1959 </td> <td headers="s rs"> Zambezi </td> <td headers="s cs"> Zimbabwe-Zambia </td> <td headers="s rc"> 180,600,000 </td> </tr> <tr> <td headers="s ns" scope="row"> Bratsk </td>
<td headers="s ts"> EG </td> <td headers="s ds"> 1964 </td> <td headers="s rs"> Angara </td> <td headers="s cs"> Russia </td> <td headers="s rc"> 169,270,000 </td> </tr> <tr> <td headers="s ns" scope
="row"> Aswan High </td> <td headers="s ts"> ER </td> <td headers="s ds"> 1970 </td> <td headers="s rs"> Nile </td> <td headers="s cs"> Egypt </td> <td headers="s rc"> 168,900,000 </td> </tr> <tr>
<td headers="s ns" scope="row"> Akosombo </td> <td headers="s ts"> ER </td> <td headers="s ds"> 1965 </td> <td headers="s rs"> Volta </td> <td headers="s cs"> Ghana </td> <td headers="s rc">
153,000,000 </td> </tr> <tr> <td headers="s ns" scope="row"> Daniel Johnson </td> <td headers="s ts"> M </td> <td headers="s ds"> 1968 </td> <td headers="s rs"> Manicouagan </td> <td headers="s cs">
Canada </td> <td headers="s rc"> 141,852,000 </td> </tr> <tr> <td headers="s ns" scope="row"> Guri (Raúl Leoni) </td> <td headers="s ts"> EGR </td> <td headers="s ds"> 1986 </td> <td headers="s rs">
Caroní </td> <td headers="s cs"> Venezuela </td> <td headers="s rc"> 138,000,000 </td> </tr> <tr> <td headers="s ns" scope="row"> Krasnoyarsk </td> <td headers="s ts"> G </td> <td headers="s ds">
1967 </td> <td headers="s rs"> Yenisey </td> <td headers="s cs"> Russia </td> <td headers="s rc"> 73,300,000 </td> </tr> <tr> <td headers="s ns" scope="row"> W.A.C. Bennett </td> <td headers="s ts">
E </td> <td headers="s ds"> 1967 </td> <td headers="s rs"> Peace </td> <td headers="s cs"> Canada </td> <td headers="s rc"> 70,309,000 </td> </tr> <tr> <td headers="s ns" scope="row"> Zeya </td> <td
headers="s ts"> B </td> <td headers="s ds"> 1978 </td> <td headers="s rs"> Zeya </td> <td headers="s cs"> Russia </td> <td headers="s rc"> 68,400,000 </td> </tr> <tr> <td headers="s ns" scope="row">
Cahora Bassa </td> <td headers="s ts"> A </td> <td headers="s ds"> 1974 </td> <td headers="s rs"> Zambezi </td> <td headers="s cs"> Mozambique </td> <td headers="s rc"> 63,000,000 </td> </tr> </
tbody> <thead> <tr> <th colspan="6" id="p" scope="col"> By power capacity </th> </tr> <tr> <th id="np" scope="col"> name </th> <th id="tp" scope="col"> type<sup>1</sup> </th> <th id="dp" scope="col">
date of completion </th> <th id="rp" scope="col"> river </th> <th id="cp" scope="col"> country </th> <th id="ic" scope="col"> installed capacity(megawatts) </th> </tr> </thead> <tbody> <tr> <td
headers="p np" scope="row"> Itaipú </td> <td headers="p tp"> EGR </td> <td headers="p dp"> 1982 </td> <td headers="p rp"> Paraná </td> <td headers="p cp"> Brazil-Paraguay </td> <td headers="p ic">
12,600 </td> </tr> <tr> <td headers="p np" scope="row"> Guri (Raúl Leoni) </td> <td headers="p tp"> EGR </td> <td headers="p dp"> 1986 </td> <td headers="p rp"> Caroní </td> <td headers="p cp">
Venezuela </td> <td headers="p ic"> 10,300 </td> </tr> <tr> <td headers="p np" scope="row"> Grand Coulee </td> <td headers="p tp"> G </td> <td headers="p dp"> 1941 </td> <td headers="p rp"> Columbia
</td> <td headers="p cp"> U.S. </td> <td headers="p ic"> 6,480 </td> </tr> <tr> <td headers="p np" scope="row"> Sayano-Shushenskoye </td> <td headers="p tp"> AG </td> <td headers="p dp"> 1989 </td>
<td headers="p rp"> Yenisey </td> <td headers="p cp"> Russia </td> <td headers="p ic"> 6,400 </td> </tr> <tr> <td headers="p np" scope="row"> Krasnoyarsk </td> <td headers="p tp"> G </td> <td headers
="p dp"> 1967 </td> <td headers="p rp"> Yenisey </td> <td headers="p cp"> Russia </td> <td headers="p ic"> 6,000 </td> </tr> <tr> <td headers="p np" scope="row"> Churchill Falls </td> <td headers="p
tp"> E </td> <td headers="p dp"> 1971 </td> <td headers="p rp"> Churchill </td> <td headers="p cp"> Canada </td> <td headers="p ic"> 5,428 </td> </tr> <tr> <td headers="p np" scope="row"> La Grande 2
</td> <td headers="p tp"> R </td> <td headers="p dp"> 1978 </td> <td headers="p rp"> La Grande </td> <td headers="p cp"> Canada </td> <td headers="p ic"> 5,328 </td> </tr> <tr> <td headers="p np"
scope="row"> Bratsk </td> <td headers="p tp"> EG </td> <td headers="p dp"> 1964 </td> <td headers="p rp"> Angara </td> <td headers="p cp"> Russia </td> <td headers="p ic"> 4,500 </td> </tr> <tr> <td
headers="p np" scope="row"> Ust-Ilim </td> <td headers="p tp"> R </td> <td headers="p dp"> 1977 </td> <td headers="p rp"> Angara </td> <td headers="p cp"> Russia </td> <td headers="p ic"> 4,320 </td>
</tr> <tr> <td headers="p np" scope="row"> Tucurui </td> <td headers="p tp"> EGR </td> <td headers="p dp"> 1984 </td> <td headers="p rp"> Tocantins </td> <td headers="p cp"> Brazil </td> <td headers=
"p ic"> 4,200 </td> </tr> <tr> <td headers="p np" scope="row"> Ilha Solteira </td> <td headers="p tp"></td> <td headers="p dp"> 1973 </td> <td headers="p rp"> Paraná </td> <td headers="p cp"> Brazil
</td> <td headers="p ic"> 3,200 </td> </tr> <tr> <td headers="p np" scope="row"> Tarbela </td> <td headers="p tp"> ER </td> <td headers="p rp"> 1977 </td> <td headers="p rp"> Indus </td> <td headers=
"p cp"> Pakistan </td> <td headers="p ic"> 3,478 </td> </tr> </tbody> </table></div></span></section><section data-level="1" id="ref72078"><section data-level="2" id="ref72079"><h2 class="h2" id=
"qai_title_1">Ancient dams</h2><section data-level="3" id="ref72080"><h2 class="h3" id="qai_title_2">The <span id="ref592794"></span><span class="md-crosslink" data-show-preview="true">Middle East</
span></h2><p class="topic-paragraph">The oldest known dam in the world is a masonry and earthen embankment at Jawa in the Black Desert of modern <span class="md-crosslink autoxref" data-show-preview=
"true">Jordan</span>. The <span id="ref984113"></span>Jawa Dam was built in the 4th millennium <span class="text-smallcaps">bce</span> to hold back the waters of a small stream and allow increased
irrigation production on arable land downstream. Evidence exists of another masonry-faced earthen dam built about 2700 <span class="text-smallcaps">bce</span> at <span id="ref592796"></span>Sadd
el-Kafara, about 30 km (19 miles) south of <span class="md-crosslink autoxref" data-show-preview="true">Cairo</span>, Egypt. The Sadd el-Kafara failed shortly after completion when, in the absence of
a <span class="md-crosslink autoxref" data-show-preview="true">spillway</span> that could resist <span class="md-crosslink autoxref" data-show-preview="true">erosion</span>, it was overtopped by a
<span class="md-crosslink autoxref" data-show-preview="true">flood</span> and washed away. The oldest dam still in use is a rockfill embankment about 6 metres (20 feet) high on the <span class=
"md-crosslink" data-show-preview="true">Orontes River</span> in Syria, built about 1300 <span class="text-smallcaps">bce</span> for local irrigation use.</p><p class="topic-paragraph">The Assyrians,
Babylonians, and Persians built dams between 700 and 250 <span class="text-smallcaps">bce</span> for <span class="md-crosslink autoxref" data-show-preview="true">water supply</span> and irrigation.
Contemporary with these was the earthen <span id="ref592797"></span><span class="md-crosslink">Maʾrib Dam</span> in the southern <span class="md-crosslink autoxref" data-show-preview="true">Arabian
Peninsula</span>, which was more than 15 metres (50 feet) high and nearly 600 metres (1,970 feet) long. Flanked by spillways, this dam delivered water to a system of irrigation canals for more than
1,000 years. Remains of the Maʾrib Dam are still evident in present-day <span class="md-crosslink" data-show-preview="true">Maʾrib</span>, <span class="md-crosslink autoxref" data-show-preview=
"true">Yemen</span>. Other dams were built in this period in <span class="md-crosslink autoxref" data-show-preview="true">Sri Lanka</span>, India, and <span class="md-crosslink autoxref"
data-show-preview="true">China</span>.</p></section><section data-level="3" id="ref72081"><h2 class="h3" id="qai_title_3">The <span id="ref592798"></span><span class="md-crosslink" data-show-preview=
"true">Romans</span></h2><p class="topic-paragraph">Despite their skill as civil engineers, the Romans’ role in the evolution of dams is not particularly remarkable in terms of number of structures
built or advances in height. Their skill lay in the <span class="md-dictionary-link md-dictionary-tt-off mw" data-term="comprehensive" data-type="MW">comprehensive</span> collection and storage of
water and in its transport and distribution by <span class="md-crosslink" data-show-preview="true">aqueducts</span>. At least two Roman dams in southwestern <span id="ref592799"></span><span class=
"md-crosslink" data-show-preview="true">Spain</span>, <span id="ref592800"></span>Proserpina and <span id="ref592801"></span>Cornalbo, are still in use, while the reservoirs of others have filled
with silt. The Proserpina Dam, 12 metres (40 feet) high, features a masonry-faced core wall of concrete backed by earth that is strengthened by buttresses supporting the downstream face. The Cornalbo
Dam features masonry walls that form cells; these cells are filled with stones or <span class="md-crosslink autoxref" data-show-preview="true">clay</span> and faced with <span class="md-crosslink
autoxref" data-show-preview="true">mortar</span>. The merit of curving a dam upstream was appreciated by at least some Roman engineers, and the forerunner of the modern curved <span class=
"md-crosslink autoxref" data-show-preview="true">gravity dam</span> was built by <span class="md-dictionary-link md-dictionary-tt-off mw" data-term="Byzantine" data-type="MW">Byzantine</span>
engineers in 550 <span class="text-smallcaps">ce</span> at a site near the present Turkish-Syrian border.</p></section><section data-level="3" id="ref72082"><h2 class="h3" id="qai_title_4">Early dams
of East Asia</h2><p class="topic-paragraph">In <span class="md-crosslink autoxref" data-show-preview="true">East Asia</span>, dam construction evolved quite independently from practices in the
Mediterranean world. In 240 <span class="text-smallcaps">bce</span> a stone crib was built across the <span class="md-crosslink autoxref" data-show-preview="true">Jing River</span> in the Gukou
valley in China; this structure was about 30 metres (100 feet) high and about 300 metres (1,000 feet) long. Many earthen dams of moderate height (in some cases of great length) were built by the
Sinhalese in Sri Lanka after the 5th century <span class="text-smallcaps">bce</span> to form <span class="md-dictionary-link md-dictionary-tt-off eb" data-term="reservoirs" data-type="EB">reservoirs
</span> or tanks for extensive irrigation works. The <span id="ref592804"></span>Kalabalala Tank, which was formed by an earthen dam 24 metres (79 feet) high and nearly 6 km (3.75 miles) in length,
had a perimeter of 60 km (37 miles) and helped store <span class="md-crosslink" data-show-preview="true">monsoon</span> rainfall for irrigating the country around the ancient capital of <span class=
"md-crosslink" data-show-preview="true">Anuradhapura</span>. Many of these tanks in Sri Lanka are still in use today.</p><p class="topic-paragraph">In Japan the <span id="ref592806"></span>Diamonike
Dam reached a height of 32 metres (105 feet) in 1128 <span class="text-smallcaps">ce</span>. Numerous dams were also constructed in <span id="ref592807"></span><span class="md-crosslink"
data-show-preview="true">India</span> and <span class="md-crosslink autoxref" data-show-preview="true">Pakistan</span>. In India a design employing hewn stone to face the steeply sloping sides of
earthen dams evolved, reaching a climax in the 16-km- (10-mile-) long <span id="ref592808"></span>Veeranam Dam in <span class="md-crosslink autoxref" data-show-preview="true">Tamil Nadu</span>, built
from 1011 to 1037 <span class="text-smallcaps">ce</span>.</p><p class="topic-paragraph">In Persia (modern-day <span id="ref592809"></span><span class="md-crosslink" data-show-preview="true">Iran</
span>) the <span id="ref592810"></span><span class="md-crosslink">Kebar Dam</span> and the <span id="ref984116"></span>Kurit Dam represented the world’s first large-scale thin-arch dams. The Kebar
and Kurit dams were built early in the 14th century by Il-Khanid Mongols; the Kebar Dam reached a height of 26 metres (85 feet), and the Kurit Dam, after successive heightenings over the centuries,
extended 64 metres (210 feet) above its foundation. Remarkably, the Kurit Dam stood as the world’s tallest dam until the beginning of the 20th century. By the end of the 20th century, its reservoir
had almost completely silted in, causing floodwaters to regularly overtop the dam and cause serious erosion. A new, larger dam was built just above the old one in order to create a new reservoir and
redirect floodwaters away from the ancient structure.</p></section></section><section data-level="2" id="ref72083"><h2 class="h2" id="qai_title_5">Forerunners of the modern dam</h2><section
data-level="3" id="ref72084"><h2 class="h3" id="qai_title_6">The 15th to the 18th century</h2><p class="topic-paragraph">In the 15th and 16th centuries, dam construction resumed in Italy and, on a
larger scale, in Spain, where Roman and Moorish influence was still felt. In particular, the <span class="md-crosslink" data-show-preview="true">Tibi Dam</span> across the Monnegre River in Spain, a
curved gravity structure 42 metres (138 feet) high, was not surpassed in height in western Europe until the building of the Gouffre d’Enfer Dam in <span class="md-crosslink autoxref"
data-show-preview="true">France</span> almost three centuries later. Also in Spain, the 23-metre- (75-foot-) high Elche Dam, which was built in the early 17th century for irrigation use, was an
innovative thin-arch masonry structure. In the <span class="md-crosslink autoxref" data-show-preview="true">British Isles</span> and northern Europe, where rainfall is ample and well distributed
throughout the year, dam construction before the <span class="md-crosslink" data-show-preview="true">Industrial Revolution</span> proceeded on only a modest scale in terms of height. Dams were
generally limited to forming water reservoirs for towns, powering water mills, and supplying water for navigation canals. Probably the most remarkable of these structures was the 35-metre-
(115-foot-) high earthen dam built in 1675 at <span id="ref592813"></span>Saint-Ferréol, near Toulouse, France. This dam provided water for the <span class="md-crosslink" data-show-preview="true">
Midi Canal</span>, and for more than 150 years it was the highest earthen dam in the world.</p></section><section data-level="3" id="ref72085"><h2 class="h3" id="qai_title_7">The 19th century</h2><p
class="topic-paragraph">Up to the middle of the 19th century, dam design and construction were largely based upon experience and <span class="md-dictionary-link md-dictionary-tt-off mw" data-term=
"empirical" data-type="MW">empirical</span> knowledge. An understanding of material and structural theory had been accumulating for 250 years, with scientific luminaries such as <span class=
"md-crosslink" data-show-preview="true">Galileo</span>, <span class="md-crosslink" data-show-preview="true">Isaac Newton</span>, <span class="md-crosslink" data-show-preview="true">Gottfried Wilhelm
Leibniz</span>, <span class="md-crosslink" data-show-preview="true">Robert Hooke</span>, <span class="md-crosslink" data-show-preview="true">Daniel Bernoulli</span>, <span class="md-crosslink"
data-show-preview="true">Leonhard Euler</span>, <span class="md-crosslink" data-show-preview="true">Charles-Augustin de Coulomb</span>, and Claude-Louis Navier among those who made significant
contributions to these advancements. In the 1850s, <span id="ref592814"></span><span class="md-crosslink" data-show-preview="true">William John Macquorn Rankine</span>, professor of <span id=
"ref667944"></span><span class="md-crosslink" data-show-preview="true">civil engineering</span> at the <span class="md-crosslink autoxref" data-show-preview="true">University of Glasgow</span> in
Scotland, successfully demonstrated how applied science could help the practical engineer. Rankine’s work on the stability of loose earth, for example, provided a better understanding of the
principles of dam design and performance of structures. In mid-century France, J. Augustin Tortene de Sazilly led the way in developing the mathematical analysis of vertically faced masonry gravity
dams, and François Zola first utilized mathematical analysis in designing a thin-arch masonry dam.</p></section><section data-level="3" id="ref72086"><h2 class="h3" id="qai_title_8">Development of
modern structural theory</h2><p class="topic-paragraph">Masonry and concrete dam design is based on conventional structural theory. In this relationship, two phases may be recognized. The first,
extending from 1853 until about 1910 and represented by the contributions of a number of French and British engineers, was actively concerned with the precise profile of gravity dams in which the
horizontal thrust of water in a reservoir is resisted by the weight of the dam itself and the inclined reaction of the dam’s foundation. Starting about 1910, however, engineers began to recognize
that concrete dams are <span class="md-dictionary-link md-dictionary-tt-off mw" data-term="monolithic" data-type="MW">monolithic</span> three-dimensional structures in which the distribution of <span
id="ref592816"></span><span class="md-crosslink" data-show-preview="true">stress</span> and the deflections of individual points depend on stresses and deflections of many other points in the
structure. Movements at one point have to be compatible with movements at all others. Because of the complexity of the stress pattern, model techniques were gradually employed. Models were built in
plasticine, rubber, plaster, and finely graded concrete. Utilizing virtual models, computers <span class="md-dictionary-link md-dictionary-tt-off mw" data-term="facilitate" data-type="MW">facilitate
</span> engineers’ use of finite element analysis, by which a monolithic structure is mathematically conceived as an assembly of separate, discrete blocks. Study of both physical models and <span
class="md-crosslink" data-show-preview="true">computer simulations</span> permits deflections of a dam’s foundations and structure to be analyzed. However, while computers are useful in analyzing
designs, they cannot generate (or create) the dam designs proposed for specific sites. This latter process, which is often referred to as form making, remains the responsibility of human engineers.</
p><p class="topic-paragraph">During the 100 years up to the end of <span class="md-crosslink autoxref" data-show-preview="true">World War II</span>, experience in design and construction of dams
advanced in many directions. In the first decade of the 20th century, many large dams were built in the <span class="md-crosslink autoxref" data-show-preview="true">United States</span> and western
Europe. In succeeding decades, particularly during the war years, many impressive structures were built in the United States by federal government agencies and private power companies. <span id=
"ref592817"></span><span class="md-crosslink" data-show-preview="true">Hoover Dam</span>, built on the Colorado River at the Arizona-Nevada border between 1931 and 1936, is an outstanding example of
a curved gravity dam built in a narrow gorge across a major river and employing advanced design principles. It has a height of 221 metres (726 feet) from its foundations, a <span class=
"md-dictionary-link md-dictionary-tt-off eb" data-term="crest" data-type="EB">crest</span> length of 379 metres (1,244 feet), and a reservoir capacity of 37 billion cubic metres (48 billion cubic
yards).</p><p class="topic-paragraph">Among earthen dams, <span id="ref592818"></span><span class="md-crosslink" data-show-preview="true">Fort Peck Dam</span>, completed in 1940 on the <span class=
"md-crosslink autoxref" data-show-preview="true">Missouri River</span> in <span class="md-crosslink autoxref" data-show-preview="true">Montana</span>, contained the greatest volume of fill, 96
million cubic metres (126 million cubic yards). This volume was not exceeded until the completion in 1975 of <span id="ref592819"></span><span class="md-crosslink" data-show-preview="true">Tarbela
Dam</span> in Pakistan, with 145 million cubic metres (190 million cubic yards) of fill.</p><p class="topic-paragraph">Construction of the massive <span id="ref984118"></span><span class=
"md-crosslink" data-show-preview="true">Three Gorges Dam</span> in China began in 1994, with most construction completed in 2006. However, interest in the project extended back several decades, and
American engineer J.L. Savage, who had played an important role in the building of Hoover Dam, worked on preliminary designs for a large dam on the <span class="md-crosslink" data-show-preview=
"true">Yangtze River</span> (Chang Jiang) in the mid-1940s before the Communist Party took control of mainland China in 1949. Planning for the existing structure commenced in earnest in the 1980s,
and construction began after approval by the National People’s Congress in 1992. Built as a straight-crested concrete gravity structure, Three Gorges Dam was constructed using a trestle-and-crane
method of transporting and casting concrete similar to that used in the 1930s for the <span class="md-crosslink" data-show-preview="true">Grand Coulee Dam</span> on the <span class="md-crosslink"
data-show-preview="true">Columbia River</span> in the northwestern United States.</p><p class="topic-paragraph">Three Gorges Dam is 2,335 metres (7,660 feet) long with a maximum height of 185 metres
(607 feet); it incorporates 28 million cubic metres (37 million cubic yards) of concrete and 463,000 metric tons of <span class="md-crosslink autoxref" data-show-preview="true">steel</span> into its
design. When it became fully operational in 2012, the dam’s hydroelectric power plant had the largest generating capacity in the world, 22,500 megawatts. The reservoir impounded by the dam extended
back up the Yangtze River for more than 600 km (almost 400 miles).</p></section></section><section data-level="2" id="ref275458"><h2 class="h2" id="qai_title_9">Rise of <span id="ref984120"></span>
<span class="md-crosslink" data-show-preview="true">environmental</span> and economic concerns</h2><p class="topic-paragraph">The effect of dams on the natural <span class="md-dictionary-link
md-dictionary-tt-off mw" data-term="environment" data-type="MW">environment</span> became an issue of public concern at the end of the 20th century. Much of this concern was energized by fears that
dams were destroying the populations of migrating (or spawning) fish, which were being blocked or impeded by the construction of dams across rivers and waterways. (<em>See below</em> <span class=
"md-crosslink" data-show-preview="true">Fish passes</span>.) In more general terms, dams were often perceived—or portrayed—as not simply transforming the environment to serve human desires but also
obliterating the environment and causing the destruction of flora and fauna and picturesque landscapes on a massive scale. Dams were also blamed for inundating the cultural homelands of native
peoples, who were forced to relocate out of reservoir “take” areas created by large-scale dams. None of these concerns sprang up without warning, and they all have roots that date back many decades.
</p><p class="topic-paragraph">The environmental problems associated with dams have been <span class="md-dictionary-link md-dictionary-tt-off mw" data-term="exacerbated" data-type="MW">exacerbated</
span> as dams have increased in height. However, even relatively small dams have prompted opposition by people who believe that their interests are adversely affected by a particular structure. For
example, in <span class="md-crosslink autoxref" data-show-preview="true">colonial America</span>, legal action was often taken by upstream landowners who believed that the pond impounded by a small
mill dam erected downstream was flooding—and thus rendering unusable—land that could otherwise be used for growing crops or as pasture for livestock. By the late 18th century, when many mill dams
were beginning to reach heights that could not easily be jumped or <span class="md-dictionary-link md-dictionary-tt-off mw" data-term="traversed" data-type="MW">traversed</span> by spawning fish,
some people sought to have them removed because of their effect on fishing. In such situations, opposition to dams is not driven by an abstract concern for the environment or the survival of riparian
ecosystems; rather, it is driven by an appreciation that a particular dam is transforming the environment in ways that serve only certain special interests.</p><p class="topic-paragraph">In the 1870s
one of the first wide-scale efforts to block the construction of a dam because of misgivings about its potential effect upon the landscape came in the <span id="ref984119"></span><span class=
"md-crosslink" data-show-preview="true">Lake District</span> of northwestern England. The Lake District is recognized as one of the most picturesque regions of England because of its mountains and
rolling hills. However, this same landscape also offered a good location for an artificial reservoir that could feed high-quality water to the growing industrial city of <span class="md-crosslink"
data-show-preview="true">Manchester</span> almost 160 km (100 miles) to the south. The city’s Thirlmere Dam was eventually built and generally accepted as a positive development, but not before it
aroused impassioned opposition among citizens throughout the country who feared that part of England’s natural and cultural <span class="md-dictionary-link md-dictionary-tt-off eb" data-term=
"heritage" data-type="EB">heritage</span> might be defiled by the creation of a “water tank” in the midst of the Lake District.</p><p class="topic-paragraph">In the United States a similar but even
more impassioned battle erupted in the early 20th century over plans by the city of San Francisco to build a <span id="ref984123"></span><span class="md-crosslink">reservoir</span> in Hetch Hetchy
Valley. Located more than 900 metres (3,000 feet) above <span class="md-crosslink autoxref" data-show-preview="true">sea level</span>, the Hetch Hetchy site offered a good storage location in the
<span class="md-crosslink" data-show-preview="true">Sierra Nevada</span> for water that could be delivered without pumping to San Francisco via an <span class="md-crosslink autoxref"
data-show-preview="true">aqueduct</span> nearly 270 km (167 miles) long. Hetch Hetchy, however, is also located within the northern boundaries of <span class="md-crosslink" data-show-preview="true">
Yosemite National Park</span>. The renowned naturalist <span class="md-crosslink" data-show-preview="true">John Muir</span> led the way in fighting the proposed dam and—with assistance from <span
class="md-crosslink" data-show-preview="true">Sierra Club</span> members and other citizens across the United States who were concerned about the loss of natural landscapes to commercial and
municipal development—made the fight over the preservation of Hetch Hetchy Valley a national issue. In the end, the benefits to be provided by the dam—including the development of at least 200,000
kilowatts of hydroelectric power—outweighed the costs to be exacted by the inundation of the valley. Approved by the U.S. <span class="md-dictionary-link md-dictionary-tt-off eb" data-term="Congress"
data-type="EB">Congress</span> in 1913, the construction of the dam, known today as <span id="ref984122"></span>O’Shaughnessy Dam in honour of the city engineer who oversaw its construction, was a
defeat for the Sierra Club and landscape preservationists, who continued to use it as a symbol and rallying cry for mid-20th-century environmental causes.</p><p class="topic-paragraph">After World
War II, plans were made by the <span class="md-crosslink autoxref" data-show-preview="true">U.S. Bureau of Reclamation</span> to build a hydroelectric power dam across the <span class="md-crosslink"
data-show-preview="true">Green River</span> at <span id="ref984124"></span><span class="md-crosslink">Echo Park Canyon</span> within the boundaries of <span class="md-crosslink" data-show-preview=
"true">Dinosaur National Monument</span> in eastern Utah. Many of the same issues raised at Hetch Hetchy were again debated, but in this instance opponents such as the Sierra Club were able to block
construction of the dam through a concerted effort to lobby Congress and win support from the American public at large. However, in its effort to save Echo Park, the Sierra Club dropped opposition to
the proposed <span id="ref984125"></span><span class="md-crosslink">Glen Canyon Dam</span> across the <span class="md-crosslink" data-show-preview="true">Colorado River</span> near the Arizona-Utah
border, and this 216-metre (710-foot) high concrete <span class="md-crosslink" data-show-preview="true">arch dam</span>, built between 1956 and 1966, eventually came to be seen by environmentalists
as being responsible for destroying a beautiful pristine landscape <span class="md-dictionary-link md-dictionary-tt-off mw" data-term="encompassing" data-type="MW">encompassing</span> thousands of
square kilometres. Anger over the Glen Canyon Dam energized the Sierra Club to mount a major campaign against additional dams proposed for construction along the Colorado River near the borders of
<span class="md-crosslink autoxref" data-show-preview="true">Grand Canyon National Park</span>. By the late 1960s, plans for these proposed <span class="md-crosslink" data-show-preview="true">Grand
Canyon</span> dams were politically dead. Although the reasons for their <span class="md-dictionary-link md-dictionary-tt-off mw" data-term="demise" data-type="MW">demise</span> were largely the
result of regional water conflicts between states in the <span class="md-crosslink autoxref" data-show-preview="true">Pacific Northwest</span> and states in the American Southwest, the environmental
movement took credit for saving America from the desecration of a national treasure.</p><p class="topic-paragraph">In developing parts of the world, dams are still perceived as an important source of
hydroelectric power and irrigation water. Environmental costs associated with dams have nonetheless attracted attention. In India the relocation of hundreds of thousands of people out of reservoir
areas generated intense political opposition to some dam projects.</p><p class="topic-paragraph">In China the <span id="ref984128"></span><span class="md-crosslink" data-show-preview="true">Three
Gorges Dam</span> (constructed from 1994 to 2006) generated significant opposition within China and in the international <span class="md-dictionary-link md-dictionary-tt-off mw" data-term="community"
data-type="MW">community</span>. Millions of people were displaced by, and cultural and natural treasures were lost beneath, the reservoir that was created following erection of the 185-metre-
(607-foot-) high concrete wall, some 2,300 metres (7,500 feet) long, across the <span class="md-crosslink" data-show-preview="true">Yangtze River</span>. The dam is capable of producing 22,500
megawatts of electricity (which can reduce coal usage by millions of tons per year), making it one of the largest hydroelectric producers in the world.</p><p class="topic-paragraph">Dams still
unquestionably have an important role to play within the world’s social, political, and economic framework. But for the foreseeable future, the specific character of that role and the way that dams
will interrelate with the environment will likely remain a subject of <span class="md-dictionary-link md-dictionary-tt-off mw" data-term="contentious" data-type="MW">contentious</span> debate.</p></ | {"url":"https://www.questionai.com/knowledge/kPXFDOW1Qj-dam-engineering","timestamp":"2024-11-08T05:10:30Z","content_type":"text/html","content_length":"184810","record_id":"<urn:uuid:e5af56de-9d66-4c69-b38e-6e2484f14e87>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00178.warc.gz"} |
Rots and Refs
Follow hints using a little coordinate geometry, plane geometry and trig to see how matrices are used to work on transformations of the plane.
The point $P$ has coordinates $(r\cos \phi, r\sin \phi)$ where the distance of $P$ from the origin is $OP=r$ and the line $OP$ is at angle $\phi$ to the x axis. Find the image of this point under the
transformation given by the matrix
$\mathbf{T_1}=\left( \begin{array}{cc} \cos \theta & -\sin \theta\\ \sin \theta & \cos \theta \end{array} \right)$
Draw a diagram and describe the effect of this transformation on the points of the plane.
The point $P$ has coordinates $(p,q)$ and the point $P'$ is the reflection of $P$ in the line $y = x\tan\theta$. In the diagram below the lines $P'X'$ and $XA'$ are perpendicular to the line $OA'X'$.
Prove that $OX = OX' = p$, $P'X' = PX = q$ and $OA = OA' = p\cos 2\theta.$ Find the lengths $BP',\ AX'$ and $BX'$ and hence prove that transformation given by the matrix
$\mathbf{T_2}=\left( \begin{array}{cc} \cos 2\theta & \sin2\theta\\ \sin2\theta & -\cos2\theta \end{array} \right)$
gives a reflection in the line $y=\tan\theta$.
Getting Started
See the problem
'The Matrix'
for an explanation of how a transformation of the plane is given by a matrix and how you can find the image of a point by multiplying its vector by the matrix of the transformation.
Student Solutions
Bob sent us his solution:
Firstly I multiplied the matricies to find the new point. Using the trigonometric identities to simplify, I got the new point as $(r\cos(\theta+\phi),r\sin(\theta+\phi))$. This meant that we had
rotated the point $\phi$ degrees anticlockwise.
To prove that $OX = OX' = p$, I drew a line XX', which intersects and is parallel to the line $y=x\tan\theta$ (call this point of intersection D). But $DX'=DX$ and so $ODX$ and $ODX'$ are two
right-angled triangles of the same size, so $OX=OX'=p$. By the same argument I drew lines $OP$ and $OP'$, and so got right-angled triangles again, so $OP=OP'=q$.
By looking at the right-angled triangle $OAX'$, with the angle at $O$ being $2\theta$, I knew that: $$\cos2\theta=\frac{OA'}{OX}=\frac{OA'}{p}$$ and so $OA'=p\cos2\theta$.
I then looked at the right-angled triangle $X'BP$, and since the angle at $X$ is $2\theta$, $BP'=q\sin2\theta$. By applying Pythagoras' Theorem to the right-angled triangle $OAX$,$AX'=p\sin2\theta$.
Finally, applying Pythagoras' Thereom to the triangle $X'BP'$ I found $BX'=q\cos2\theta$.
Looking at the change in X co-ordinates, I found $P'=(p\cos2\theta+q\sin2\theta, \; p\sin2\theta-q\cos2\theta)$.
So the matrix for the reflection would be:
You may also like to look at the problem ' The Matrix ' from July 2003 and its solution for an explanation of how a transformation of the plane is given by a matrix and how you can find the image of
a point by multiplying its vector by the matrix of the transformation. | {"url":"https://nrich.maths.org/problems/rots-and-refs","timestamp":"2024-11-11T00:45:24Z","content_type":"text/html","content_length":"40387","record_id":"<urn:uuid:15a7dc5f-e51b-469c-9b52-ac4912081c7a>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00522.warc.gz"} |
Rodenborn Research
The Rodenborn lab investigates fluid systems using scaled laboratory experiments to understand fundamental phenomena that apply to a variety of geophysical and biological systems. His undergraduate
research teams are currently studying three such fluid systems.
Robophysics of bacterial swimming
There is intense interest in engineering robots that can move effectively through a wide range of natural environments. However, the success of robots based on natural organisms (biomimetic robots)
is comparatively limited. We do not have a full understanding of the interaction between controlling these robots and complex environments where we would like them to move. Consider designing a robot
to search for earthquake victims or moving a rover on a distant planet.
The methods of physics as applied to these problems is known as robophysics and it can aid engineers in designing and building robots. For example, researchers are exploring the use of nano-scale
robots for drug delivery and other biomedical purposes, often modeled on the swimming of bacteria and other microorganisms. Fluid dynamics at these length scales (low Reynolds number) is radically
different from macroscopic fluid dynamics. The motion of microscopic objects is dominated by viscous damping; inertia plays no role so there is no coasting. In contrast, motion at high Reynolds
number is inertia dominated and viscous effects are secondary. We swim by pushing fluid behind us in order to move forward, but a microorganism is not able to swim in the same way.
(Figure 1) A physicist’s bacterium. Our robot consists of an acrylic body into which a small, geared DC motor is placed to drive a helical flagellum formed using copper welding wire. Click here to
see it in motion.
There have been attempts by physicists and biologists to provide simplified models of swimming microorganisms. Early work by G.I. Taylor showed that microorganisms use fluid drag to propel
themselves, which may not seem to make sense. How can resistance be used for propulsion? However, we are able to walk only because the floor has friction. For example, fluid drag (per unit length) in
the direction perpendicular to a thin rod is much higher than the drag parallel to a rod. Organisms take advantage of this drag anisotropy to move by using different types of wavelike motion. For
example, spermatozoa have a whip-like tail that beats in a sinusoidal fashion.
After G.I. Taylor, several approximate theories were developed to describe swimming microorganisms. These approximations made sense when conducting numerical simulations and making experimental
measurements were much more difficult.
However, my research has shown that drag-based theories of swimming using a helical flagellum introduce large errors (~100%) when compared with experiments. I used macroscopic robots in properly
scaled fluid environments to circumvent the experimental complexities of measuring the performance of bacteria or nano-scale devices. I was able to achieve precise control simultaneous with precise
measurement of forces and showed how poorly resistive force theory compared with experiments and simulations.
My lab is now focused on understanding the effects of nearby boundaries on swimming with a helical flagellum. We use a robotic swimmer and precisely position the swimmer with respect to the wall of
the tank. Our robot does not swim in water, but instead, swims in highly viscous silicone oil that is 100,000 times more viscous than water. We must scale the fluid’s viscosity in order to offset for
having scaled the bacterium from about one micron to about a 10 centimeter robot. This type of scaled fluid experiment using robotic swimmers is one area of research in my lab.
Calibrating Numerical Simulations with Macroscopic Bacterial Models
Our collaborators at Trinity University in San Antonio, Texas are able to complete simulations that model our experiments at a microscopic scale, and ultimately, our experimental work will allow our
collaborators to calibrate their numerical simulations in an effort to reduce the errors of the simulations. By performing experiments to gather force and torque information about an axis parallel
and perpendicular to a selected boundary, we should be able to calculate to forces and torques present on a sphere at any given boundary distance (even including multiple boundaries). A sphere is
used to model a bacterial body, but we also performed torque measurements with a rotating cylinder to model more cylindrical bacterial bodies too. Again, all of this work is completed in an effort to
better understand the fundamentals of bacterial swimming.
Experimental Set Up For Torque Measurements. The tank is set up on a horizontal translational stage that allows us to collect torque measurements at any distance from the boundaries (specifically the
walls parallel to the z-axis).
Rotational motion about an axis parallel to a set boundary
Our lab group has been collecting data for torque as a function of boundary distance for cylinders and spheres near a smooth, flat boundary. In doing so, we will be able to better understand the
torque acting on bacterial bodies at a certain distance from a boundary.
We rotate spheres and cylinders in a tank of high viscosity silicone oil to mimic the conditions of a low Reynold’s number environment. A torque sensor is also connected to the sphere and/or cylinder
as it is rotated. The tank sits on a horizontal translational stage, so we are able to move the axis of rotation to any given distance from the boundary. We selected boundary distances based off of a
logistic growth function and collected torque measurements for both counterclockwise and clockwise rotation at each location. We have completed the work with cylinders and spheres rotating about an
axis parallel to the boundary and have started work with spheres rotating about an axis perpendicular to the boundary.
Recently, we have upgraded to a larger tank to minimize the effects of multiple boundaries.
Results for cylinder rotating around an axis parallel to a boundary
Experiments confirm Jeffrey and Onishi (1981) theory, and experiment results were used to calibrate numerical simulations.
Results for sphere rotating around an axis parallel to a boundary
Experiments confirm Dean and O’Neil (1963) theory, and experiment results were used to calibrate numerical simulations.
Additional results for sphere rotating around an axis parallel to a boundary
Additional experiments confirm Dean and O’Neil (1963) theory, and experiment results were used to calibrate numerical simulations.
Translational motion perpendicular to a boundary condition
Translational motion perpendicular to a boundary condition
Lee and Leal (1980) predicts the forces present on a sphere at certain boundary distances. It predicts that force on a sphere increases as the sphere approaches a boundary. Our lab group has been
testing this theory in high viscosity silicone oil to model bacterial swimming. Ultimately, we will send our data to our collaborators so that they can calibrate their numerical models.
To perform these measurements, we dropped a small metal sphere at the center of the tank of silicone oil. We videoed the sphere as it fell towards the bottom boundary, and we then used Tracker
collect time and position data from our videos. Once we had the data, we uploaded it to MATLAB where we plotted the data while also taking mass of the sphere, viscosity of the fluid, and temperature
of the fluid into account. Our plot allows us to compare our measured values to theoretical values at the measured boundary distances.
The red line represents theory, and the blue points represent our collected data. Experiments confirm Lee and Leal (1980) theory, and experiment results will be used to calibrate numerical
Results for sphere falling towards perpendicular boundary
Experiments confirm the Lee and Leal (1980) theory, and experiment results will be used to calibrate numerical simulations.
Future Experiments
Translational motion parallel to a boundary wall
Perform similar experiments to all those listed above with a rough boundary condition
Collect force and torque data on a full macroscopic bacteria model sedimenting | {"url":"https://tcc-biophysicalfluids.com/rodenborn-research/","timestamp":"2024-11-08T07:29:39Z","content_type":"text/html","content_length":"36427","record_id":"<urn:uuid:ef82334e-fb34-4e70-b934-ca94e81c0882>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00879.warc.gz"} |
The CORDIC (COordinate Rotation Digital Computing) algorithm is a time and space efficient algorithm mainly used for calculate the Sine and Cosine of a given angle. It can also be used for computing
log, exponent and square root. Common uses are sine and cosine generation, vector magnitude, polar- cartesian conversions, and vector rotation. The CORDIC core is a parameterized Verilog RTL code for
a 16 bit fixed point CORDIC. The input to this system is 17 bit angle in degrees (signed magnitude representation) and output is 17 bit sine and cosine values. The time efficiency in this algorithm
is brought about by the replacement of multiplication/division operation by shift operation. Hence the only costly operation left is addition. So adders are the heart of CORDIC architecture.
The CORDIC core uses three different adders, i.e. CLA (Carry Look-ahead Adder), RCA (Ripple Carry Adder)and a combination of CLA and RCA. Combination of CLA and RCA means small sets of CLAs connected
through rippling carry-out and carry-ins. All the three architectures have their area versus efficiency trade-offs. Generally, CLA based CORDIC has minimum area but slower in speed in comparision to
other two CORDIC architectures, whereas segmented CLA based CORDIC gives best performance with area panelty.
o 16 bit fixed point arithmetic.
o Input angle range -45 degrees to +45 degrees.
o Input angle value: signed magnitude representation (1 sign bit, 8 integer bits, 8 fractional bits).
o Output values in 2’s complement representation.
o Synchronous reset.
o Parameterized Verilog RTL code with sample synthesis script (For Synopsys DC).
Verilog instantiation in oc the CORDIC would look like this.
cordic cordicInstance (CosX,SinX,theta,Sign,clock,reset);
Name Dir Description
clock Input The main system clock. All synchronous signals are clocked off the rising edge of clk.
reset Input An active high synchronous reset signal.
Sign Input Sign bit for the input Angle.
theta Input 16 bit input in degrees (8 integer bits, 8 fractional bits).
CosX Output 17 bit output value (1 bit sign and 16 bit value) of Cos(theta) in 2’s complement representation. Valid only after 16 clock cycle from reset goes low.
SinX Output 17 bit output value (1 bit sign and 16 bit value) of Sin(theta) in 2’s complement representation. Valid only after 16 clock cycle from reset goes low.
After unzipping, you shall see the following directories and subdirectories.
README : this file.
cla/ : CORDIC with carry look ahead adder.
hdl/ : verilog RTL code.
script/ : sample script for synthesis.
cla.rca/ : CORDIC with carry look ahead and ripple carry adder.
hdl/ : verilog RTL code.
script/ : sample script for synthesis.
cla/ : CORDIC with carry look ahead adder.
hdl/ : verilog RTL code.
script/ : sample script for synthesis.
file sim.txt (present under each hdl/ directory) contains the file names exactly the order the simulator should see. For example, with verilog-XL issue the\ following commands to get the simulattion
> cd cla/hdl
> verilog -f sim.txt
see the sample script */script/syn.scr under each directory.
If you have any more questions, feel free to drop me a mail.
v0.1a - First Draft (Nov 1997)
v0.1 - The inital release (May, 2000) | {"url":"https://srohit0.github.io/CORDIC/","timestamp":"2024-11-03T17:23:29Z","content_type":"text/html","content_length":"7733","record_id":"<urn:uuid:7b299e8b-a55e-458f-9e88-c5e2e9d7107d>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00176.warc.gz"} |
Transportation Capacity Utilization (Full Guide) - Online Calculator
Online Free Transportation Capacity Utilization Calculator
The Transportation Capacity Utilization formula:
TCU = (D / C) x 100%
• D = Total demand for transportation
• C = Total capacity of the transportation system
Transportation Capacity Utilization Calculator
Transportation costs can be a significant expense for businesses that rely on shipping to get their products to customers. To minimize costs and maximize efficiency, it’s essential to optimize your
shipping strategy. One way to do this is to calculate your transportation capacity utilization rate. Our Transportation Capacity Utilization Calculator can help you determine your utilization rate
and make data-driven decisions to reduce costs and streamline your shipping operations.
How to Use the Calculator:
Using our Transportation Capacity Utilization Calculator is easy. Simply enter the following information:
• Total weight of goods shipped (in pounds)
• Total available capacity of your shipping vehicle(s) (in pounds)
The calculator will then determine your transportation capacity utilization rate, which is the percentage of your available shipping capacity that is being used. You can use this information to
optimize your shipping strategy and reduce costs.
TCU = (D / C) x 100%
Let’s say you’re a small business owner who ships products to customers using a cargo van with a total available capacity of 2,000 pounds. Last month, you shipped a total of 1,200 pounds of products.
Using our Transportation Capacity Utilization Calculator, you can determine that your utilization rate was 60% (1,200 pounds shipped / 2,000 pounds available capacity). Armed with this information,
you can make data-driven decisions to reduce costs and improve efficiency. For example, you may choose to consolidate shipments to reduce the number of trips you need to make, or negotiate lower
shipping rates based on your utilization rate.
Transportation capacity utilization is a crucial factor in ensuring the efficiency of logistics operations. By using the transportation capacity utilization calculator, you can easily determine how
much of your transportation capacity is being used and how much is going to waste. This can help you identify areas for improvement and make data-driven decisions to optimize your supply chain.
By using this calculator, you can increase your transportation efficiency, reduce costs, and ultimately improve customer satisfaction. So why wait? Try out the transportation capacity utilization
calculator today and take the first step towards streamlining your logistics operations.
1. How do I use the transportation capacity utilization calculator? To use the transportation capacity utilization calculator, simply input your data in the appropriate fields and click “Calculate”.
The calculator will then provide you with your transportation capacity utilization percentage.
2. What is transportation capacity utilization? Transportation capacity utilization is the measure of how much of your transportation capacity is being utilized. It is calculated by dividing the
total weight or volume of goods transported by the total weight or volume of capacity available.
3. Why is transportation capacity utilization important? Transportation capacity utilization is important because it helps to identify areas for improvement in logistics operations. By optimizing
capacity utilization, businesses can reduce costs, improve efficiency, and ultimately improve customer satisfaction.
4. What are some factors that can affect transportation capacity utilization? Factors that can affect transportation capacity utilization include the type of transportation used, the weight and
volume of goods being transported, the distance traveled, and the availability of capacity.
5. How can businesses improve transportation capacity utilization? Businesses can improve transportation capacity utilization by analyzing their logistics operations, identifying areas for
improvement, optimizing routing and scheduling, and investing in new technology such as route optimization software.
6. Can the transportation capacity utilization calculator be used for all types of transportation? Yes, the transportation capacity utilization calculator can be used for all types of transportation
including trucking, rail, air, and sea transport.
7. Is the transportation capacity utilization calculator easy to use? Yes, the transportation capacity utilization calculator is designed to be user-friendly and easy to use. Simply input your data
in the appropriate fields and click “Calculate”.
8. Is the transportation capacity utilization calculator free? Yes, the transportation capacity utilization calculator is free to use and does not require any registration or payment.
9. Can I save my transportation capacity utilization calculations? Yes, you can save your transportation capacity utilization calculations by copying and pasting the results into a document or
10. How often should I use the transportation capacity utilization calculator? It is recommended that you use the transportation capacity utilization calculator on a regular basis, such as weekly or
monthly, to track your transportation efficiency and identify areas for improvement.
One thought on “Online Free Transportation Capacity Utilization Calculator”
1. I like what you guys are up too. Such smart work and reporting! Keep up the excellent works guys Ive incorporated you guys to my blogroll. I think it will improve the value of my website 🙂 | {"url":"https://supplychainpedia.com/tcu-calculator/","timestamp":"2024-11-08T15:12:19Z","content_type":"text/html","content_length":"165236","record_id":"<urn:uuid:eee70b66-6bf3-4b33-8ef1-be1d084c3498>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00483.warc.gz"} |
Powers of 10 & Estimate Products
Powers of 10 & Estimate Products
Price: 125 points or $1.25 USD
Subjects: math,mathElementary,operationsAndAlgebraicThinking,multiplicationAndDivision
Grades: 4,5,6
Description: Do your students struggle with multiplication? Sometimes just understanding how to check for reasonableness can help. Ensuring students can estimate products by rounding to a power of 10
or by rounding/finding compatible numbers is a great skill to have before learning the standard algorithm of multiplication. This deck includes: - 2 find the pattern (powers of 10) -4 find the
product w/ powers of 10 -5 estimate the product and decide if this is an overestimate or underestimate. | {"url":"https://wow.boomlearning.com/store/deck/tov6vQQQK5BzDqorG","timestamp":"2024-11-06T12:10:52Z","content_type":"text/html","content_length":"2377","record_id":"<urn:uuid:41e2ec26-da1f-4a02-8232-c8d16fbb9e34>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00403.warc.gz"} |
Symmetries and Interactions in Topological Matter 2015
Search within Symmetries and Interactions in Topological Matter 2015
Recent Submissions
• Evidence for the chiral anomaly in a Dirac Semi-metal
After an introduction to Weyl metals and the chiral anomaly, I will discuss recent transport results showing evidence for the chiral anomaly in the Dirac semimetal Na3Bi. At low temperature, we
detect a large conductivity enhancement when the magnetic field is aligned with the current. The enhanced conductivity corresponds to a collimated current plume that can be steered by the
magnetic field. The importance of this field locking feature as a signature of the chiral anomaly is emphasized.|*With Jun Xiong, S. Kushwaha, Tian Liang, Wudi Wang, and R. J. Cava
• Progress in the materials science of hybrid nanowires for topological devices
Semiconductor nanowires are a backbone in proposals for topological quantum computing based on manipulation of Majorana quasiparticles. Experimentally, various techniques exist for synthesis of
semiconductor nanowires for quantum transport. For most applications, the fabrication method is not important once the semiconductor growth conditions and quality have been optimised. There are
also different routes for forming heterostructures, branched geometries and other advanced nanowire materials. However, we will here focus on a unique process that is particularly promising for
topological devices: epitaxial metal/semiconductor heterostructures based on Molecular Beam Epitaxy. Under vacuum conditions, an aluminum shell is grown in-situ onto InAs nanowires, leading to an
epitaxially matched interface between the semiconducting core and the metal coating that acts as a superconductor at low temperatures. The perfect superconductor-semiconductor interface results
in proximity induced superconductivity with a hard gap. The technique is compatible with formation of branched nanostructures and opens up for new directions in nanowire based quantum devices,
e.g. transmon qubits and topological systems.
• Majoranamodes in Atomic Chains on the Surface of a Superconductor
(2015-05) Nadj-Perge, Stevan
Majorana bound states are zero-energy excitations predicted to localize at the edge of a topological superconductor, a state of matter that can form when a ferromagnetic system is placed in
proximity to a conventional superconductor with strong spin-orbit interaction. With the goal of realizing a one-dimensional topological superconductor, we have fabricated ferromagnetic iron
atomic chains on the surface of superconducting lead [1]. Using high-resolution spectroscopic imaging techniques, we show that the onset of superconductivity, which partly gaps the electronic
density of states in the bulk of the chains, is accompanied by the appearance of zero-energy end-states. This spatially resolved signature provides evidence, corroborated by other observations
and theoretical modeling [2], for the formation of a topological phase and edge-bound Majorana states in this system. Our results demonstrates that atomic chains are viable platform for future
experiments to manipulate Majorana bound states [3] and to realize other 1D and 2D topological superconducting phases.
• Effective field theory of the disordered Weyl semimetal
(2015-05) Altland, Alexander
In disordered Weyl semimetals, mechanisms of topological origin lead to the protection against Anderson localization, and at the same time to different types of transverse electromagnetic
response -- the anomalous Hall, and chiral magnetic effect. In this talk, we will discuss the manifestation of these phenomena at length scales which are beyond the scope of diagrammatic
perturbation theory. Specifically we show how an interplay of symmetry breaking and the chiral anomaly leads to a field theory containing two types of topological terms. Generating the
unconventional response coefficients of the system, these terms remain largely unaffected by disorder, i.e. information on the chirality of the system remains visible even at large length scales.
• Time reversal invariant gapped boundaries of twisted Z2 gauge theory
The boundary of a fractionalized topological phase can be gapped by condensing a proper set of bosonic quasiparticles. Interestingly, in the presence of a global symmetry, such a boundary can
exhibit different symmetry transformation properties, depending on the details of the condensation. In this talk, I discuss an explicit example of this kind - the double semion state with time
reversal symmetry. We find two distinct cases where the semionic excitations on the boundary can transform either as time reversal singlets or as time reversal doublets, depending on the coherent
phase factor of the boson condensate. The existence of these two possibilities are demonstrated using both field theory argument and exactly solvable lattice models. Furthermore, we study the
domain walls between these two types of gapped boundaries. We find that they carry symmetry protected degeneracies and applying time reversal symmetry tunnels a semion between them.
• The competing spin liquids and symmetry fractionalization for triangular lattice J1-J2 spin-1/2
We study the spin-1/2 Heisenberg model on the triangular lattice with the antiferromagnetic first- (J1 ) and second-neighbor (J2 ) interactions by means of density matrix renormalization group
(DMRG). Beside the three sublattice ordered Neel phase and a stripe antiferromagnetic phase at smaller (~0.07) and larger J2 (~0.16) sides, we find a quantum spin liquid in the intermediate range
of the J2. We show that there are two topological sectors. The odd sector (by pinning spinons at cylinder boundaries) is very robust, while the even topological sector has higher energy for
smaller systems possibly due to a competing chiral state. We demonstrate the stabilization of the possible Z2 spin liquid in both sectors with the increase of the system width. We analyze the
quantum numbers of different near degenerating states, and discuss the possible characterization of the Z2 spin liquid.
• Detecting signatures of topological order from microscopic Hamiltonians
(2015-05) Pollmann, Frank
I will show that numerical investigations of a many-body ground state wavefunction using the density matrix renormalization group (DMRG) method can yield a remarkably complete characterization of
different types of topological orders. A central tool is the entanglement which encodes many of the essential features. First, I will show how characteristic properties of the topological
excitations in fractional quantum Hall states can be extracted directly from the ground state wave function. Second, I will consider symmetry protected topological phases for which the
characterizing symmetry fractionalization can be determined.
• Crystalline topological phases and quantum anomalies
In this talk, I plan to discuss phases of matter with reflection symmetry (parity symmetry) with interactions. While a systematic analysis is possible for non-interacting fermions, an important
challenge is to understand the effects of strong electron correlations. To get some insight into this problem, I will discuss an example where by the effects of interactions the non-interacting
classification breaks down. I will also propose a generalization of Laughlin’ s thought experiment, a theoretical method which is powerful enough to diagnose topological phases with U(1) symmetry
but no other symmetries, to the cases of various symmetry protected topological phases. For the case of parity symmetry, the proposed generalization consists of putting they boundary theories of
a SPT phase on an unoriented surfaces, and hence is related to the so-called orientifold quantum field theories.
• Metal-Insulator Transition and Beyond in the Pyrochlore Iridates
Iridates are interesting materials in which Coulomb repulsion, kinetic energy, and spin-orbit coupling all are comparable. In particular the latter suggests they may be good candidates to observe
topological phenomena. The pyrochlore family, with chemical formula A2Ir2O7 (A is a trivalent rare earth), displays both magnetic ordering and a metal-insulator transition. I will discuss
theoretical and experimental studies of these materials, focusing on aspects related to topology and correlations, highlighting recent results.
• Topological Insulators2D Topological Superconductors&WeylSemimetalsFine
In this talk I plan to present realization of 2D topological superconductors (TSC) with Helical Pairing [1,2] based on a Bi-based topological material and a route to SUSY critical point and then
present our experimental discovery (and theory of TaAs) of Weyl semimetal state with Fermi arc surface states in TaAs and other related systems [3-5]. I discuss the progress in realizing exotic
Cooper pairing in these systems. ||[1] Hasan & Kane ; RMP 82, 3045 (2010) and Qi & Zhang, RMP 83, 1057 (2011); Fu & Kane, PRL 100, 096407 (2008).|[2] S.-Y. Xu, N. Aldoust et al., Nature Physics
10, 943 (2014).|[3] T. Grover et.al., Science 344, 280 (2014).|[4] S.-Y. Xu, C. Liu, S. Kushwaha et al., Science 347, 294 (2015).|[5] S.-Y. Xu, Belopolski et.al., arXiv:1502.03807 (2015).
• Fractionalization of Faraday lines in generalized compact quantum electrodynamics and SPT- and SET-like phases of quantum lines and particles
(2015-05) Motrunich, Olexei
Motivated by ideas of fractionalization and topological order in bosonic models with short-range interactions, we consider similar phenomena in formal lattice gauge theory models, which are
models where basic constituents are quantum lines. In the first example, we show that a compact quantum electrodynamics (CQED) can have, besides familiar Coulomb and confined phases, additional
unusual confined phases where excitations are quantum lines carrying fractions of the elementary unit of electric field strength; specifically, we construct a model that has $N$-tupled monopole
condensation and realizes 1/N fractionalization of the quantum Faraday lines. In the second example, we consider a system consisting of two copies of CQED in (4+1)D and engineer condensation of
bound states of monopoles (which are quantum lines in four spatial dimensions) and U(1) electric field lines. When the bound states contain a single monopole, we find lattice gauge theory analogs
of Symmetry Protected Topological phases, while when the bound states contain multiple monopoles, we find analogs of Symmetry-Enriched Topological phases, where we also have fractionalization of
Faraday lines. The distinct character of these “topological” phases of quantum lines is revealed by unusual response properties and physics at a boundary of a spatial region in such a phase.
• Surfaces of 3d symmetry protected phases
(2015-05) Fidkowski, Lukasz
I will consider gapped Hamiltonians of generalized spin models, which are invariant under a certain unbroken onsite unitary symmetry group. It is well known that such Hamiltonians can realize
topologically ordered phases, which in (2+1)d can be studied with modular tensor categories. When a symmetry is included, the corresponding `symmetry enriched’ phases correspond to a richer
mathematical structure - e.g. braided G-crossed categories in (2+1) d. However, in systematically constructing such braided G-crossed categories by extending ordinary modular ones one sometimes
encounters obstructions. Here we give a physical interpretation for such obstruction, and show that the corresponding topologically ordered theory, though it cannot be realized in 2d in a
G-symmetric way, can be realized at the surface of a 3d ‘symmetry protected’ phase. I will try to emphasize the physical interpretation of the various mathematical concepts involved, and I will
explain a specific example in detail.
• Composite Dirac liquids
Topological phases of matter often feature boundary physics that naively seems impossible from the viewpoint of systems in one lower dimension. In this talk I will introduce a new class of exotic
boundary states known as `composite Dirac liquids’ that can appear at a strongly interacting surface of a 3D electronic topological insulator. Composite Dirac liquids exhibit a gap to all charge
excitations but nevertheless feature a single massless Dirac cone built from emergent electrically neutral fermions. These states thus comprise electrical insulators that, interestingly, retain
thermal properties similar to those of the non-interacting topological insulator surface. I will show how gapping the neutral fermions via Cooper pairing naturally recovers symmetric non-Abelian
surface topological orders captured recently in several works.
• Braiding statistics and symmetry-protected topological phases
Symmetry-protected topological (SPT) phases can be thought of as generalizations of topological insulators. Just as topological insulators have robust boundary modes protected by time reversal
and charge conservation symmetry, SPT phases have boundary modes protected by more general symmetries. In this talk, I will describe a method for analyzing 2D and 3D SPTphases using braiding
statistics. More specifically, I will show that 2D and 3D SPT phases can be characterized by gauging their symmetries and studying the braiding statistics of their gauge flux excitations. The 3D
case is of particular interest as it involves a generalization of quasiparticle braiding statistics to three dimensions. | {"url":"https://conservancy.umn.edu/collections/e60eb6d5-122e-4e8f-89ed-f6262258224d","timestamp":"2024-11-05T20:46:51Z","content_type":"text/html","content_length":"578367","record_id":"<urn:uuid:6c65c05e-0db2-4fb0-90ce-007ac0e8ccf1>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00811.warc.gz"} |
Expert Maths Tutors in Harrogate | Harrogate Maths TuitionMaths Tutors in Harrogate
If you are looking for a primary maths tutor in Harrogate then Harrogate Tutors can help. Harrogate Tutors are one of our partners that we work with and are happy to recommend. Finding the right KS2
Maths tutor for your child can make all the difference between them loving a subject and not liking it at all. The best maths tutors bring the subject to life and engage with children in a way that
makes them want to learn. If you are looking for an expert KS2 Maths tutor in Harrogate then contact Harrogate Tutors.
For KS3 maths tuition in Harrogate you can contact us or Maths Made Easy. Maths Made Easy provide expert one to one maths tuition in Harrogate. KS3 Maths is more important than ever as it sets the
foundations for GCSE and there is now a lot more to learn and in less time. With the GCSE now beginning in year 9, KS3 maths is effectively 2 and a half years long. An effective maths tutor can
really help children to excel at KS3 and set them up well for GCSE.
We have expert GCSE maths tutors in Harrogate who offer one to one tuition. If you are looking for a home maths tutor in Harrogate then Maths Made Easy can help. GCSE Maths is harder than ever with
more topics to cover and no coursework the new 9-1 course is challenging even for those who love maths. Having GCSE maths tuition from an expert GCSE Maths tutor, can really make the difference. If
you are looking for a GCSE Maths tutor in Harrogate then please get in touch today and we will be happy to discuss your child’s needs.
Our A Level maths tutors in Harrogate are all qualified and DBS checked. All of our A Level maths tutors cover the three key areas of the course, statistics, mechanics and of course, core maths. If
require a maths tutor to help you with content, exam technique or just understanding the key concepts of the A Level maths course then we can help. Our A Level maths tutors in Harrogate cover all
marjo exam boards including AQA, OCR and Edexcel and have experience of tutoring pupils from Harrogate Grammar, St Aidans, Harrogate High, Ashfield College, St Johns and Rossett school. Therefore if
you are looking for an experienced A Level maths tutor in Harrogate, contact us today. | {"url":"http://leedstutorcompany.co.uk/maths-tutors-in-harrogate/","timestamp":"2024-11-07T12:31:38Z","content_type":"text/html","content_length":"39326","record_id":"<urn:uuid:510ce012-c649-4ab8-a40d-85c6729d3e09>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00832.warc.gz"} |
Jupyter-Notebook: Binominal Distribution Example
In [1]:
from scipy.stats import binom
import matplotlib.pyplot as plt
# experiment: 10 balls, 7 are white and 3 black
# we pick a ball 10 times with replacement
# What is the probability that there are:
# - 0 white balls
# - 1 white ball
# - 2 white balls
# - 3 white balls
# ..
# - 10 white balls
# So the number of white balls is what we are interested in.
# This number we call x, so since we are asking for the probability
# for various numbers of white balls (here 0 to 10) we create
# a vector X=[0,1,..10] for convenience
n = 10 #number of trials
p = 7/10 #probability for the occurrence of p
q = 1-p #probability for the occurrence of q
X = list(range(n+1))
#What we can do is, we can make different plots for
#different ratios of black and white balls, thus
#changing the probability for the occurrence of the balls
#we can then see how the distribtion changes
dist = [binom.pmf(x,n,p) for x in X]
In [2]:
P=[0.2, 0.5, 0.8]
fig,ax1 = plt.subplots(figsize=(6,6))
ax1.set_ylabel("probability", color="blue")
ax1.set_xlabel("number of occurrences of our favored event, i.e. draw of a white ball")
ax1.bar(X, dist,color='blue')
How to print several distributions in a loop
In [3]:
#array with the probabilites for the occurrence of the event of interest
#(in our example this would be the ratio of white balls to the total number of balls)
P=[0.2, 0.5, 0.8]
#make a figure with one row and 3 ( len(P) ) columns
fig, axes =plt.subplots(1,len(P),figsize=(18,6))
colors=['blue', 'red','green']
for i in list(range(len(P))):
ax1.set_ylabel("probability", color=colors[i])
ax1.set_xlabel("number of occurrences of our favored event, i.e. draw of a white ball")
#compute the distribution for the p value in P[i]
dist=[binom.pmf(x,n,P[i]) for x in X]
ax1.bar(X, dist,color=colors[i]) | {"url":"https://markelic.de/binominal-distribution-example/","timestamp":"2024-11-12T23:14:11Z","content_type":"text/html","content_length":"679547","record_id":"<urn:uuid:56c0abc8-db63-456a-bf8e-bf312b346cd2>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00176.warc.gz"} |
Regression with Text Input Using BERT and Transformers
1. Introduction
Regression, predicting values of numerical variables, is one of the most fundamental tasks in Machine Learning. Linear regression is often the first model introduced in beginner data science courses.
Then MLP (Multilayer Perceptron) for regression is often the first model one can use to discover the world of deep learning.
Structured and Unstructured Input
As a data scientist, one should be familiar with regression problems with structured (or tabular) data. In other words, each input is a tuple (row) of numbers or categories (cells) that can be placed
into separate fields (columns). For example, to predict the price of some apartments, we could imagine a table where each row represents an apartment and each column an attribute associated with it:
year of construction, area, distance to the city centre, energy consumption, availability of parking space etc.
In some cases, we need to predict numerical values from unstructured data: text, images, speech etc. Here are three real-life use case examples:
• (1) Predict emotion level of text/speech/audios. E.g. We want to assign each text a score between 0 (extremely negative) and 10 (extremely positive). A regression model should be able to predict
scores for new text. — This is also an example of the kind ordinal regression or ordinal classification when there exists an order between the classes.
• (2) Predict house price from description text. E.g., as described in this article, we want to use the description section to evaluate house prices. Actually, we should solve the problem by using
a sequence-to-sequence model to extract the numeric/categorical variables and then process the structured data regression model. However, end-to-end regression models can also be processed
directly to “translate” text into prices, as shown in the article.
• (3) Predict people’s age, prices of valuable assets (clothes, bags) from images.
Regression with Text
Thanks to the revolutionary attention mechanisms introduced in 2017, the BERT architecture using this mechanism, and its implementation in the transformers library, we have a powerful solution to
deal with text regression. This article discusses regression using BERT and transformers to score emotion levels in a text (the problem described in example 1 above).
If you are familiar with huggingface's models, we see various NLP tasks in its interface: Models — Hugging Face.
Figure 1 — Tasks supported by Huggingface model hub
Surprisingly, regression is not one of them.
Text regression is not far from text classification. Therefore, we can slightly modify some parts of the text classification scheme to make regression work, which is the primary goal of this tutorial
The rest of this article is organised as follows:
• Section 2 describes the dataset used for illustration.
• Section 3 reminds us how to fine-tune a text-classification model on our dataset.
• Section 4 describes the modification required to convert the problem into regression.
• Section 5 discusses the main differences and uses of text classification and text regression models.
2. Dataset
In our internal R&D project, we constructed a French dataset based on public service reviews from Google Maps and Trustpilot, as described in an article by my colleague AL Houceine. The project
includes a NER model to detect various kinds of emotion, a classification model to detect the causes linked with those emotions, and a regression model to score the global emotion level. For the
(ordinal) regression problem, we annotate each item with one of the following integer scores:
• 0 (very negative)
• 1 (negative)
• 2 (neutral)
• 3 (positive)
• 4 (very positive)
We also mask people’s names for privacy reasons. In this article, we only publish the preprocessed datasets for regression, split into three .jsonlines files for train, validation and test, each
containing 660, 142 and 142 items, respectively (70%, 15% and 15% of the original dataset). The datasets are available at:
A row of an arbitrary dataset looks like this:
{"id": 457, "text": "Trop d\u00e9sagr\u00e9able au t\u00e9l\u00e9phone \ud83d\ude21! ! !", "uuid": "91c4efaaada14a1b9b050268185b6ae5", "score": 1}
The models only focus on the fields text (raw text) and score (annotated score).
All code blocks in this article should be executed in a Python3-kernel of Jupyter notebook. First, let’s load the datasets using Huggingface’s datasets library.
Let’s have a look at a row in any dataset.
Let’s quickly analyse the class (score) distribution in each dataset.
Figure 2 — Distribution of classes on each set
The distribution on the three splits seems to be similar: lots of “negative” ratings, then “very negative”, “neutral”, “positive” and finally very few “very positive” ratings.
Now we can go to modelling. Formally, this is an example of type ordinal regression. To use BERT’s implementation intransformers , we can think of two modeling approches:
• As a classification problem: A text will belong to one of the five classes 0 to 4.
• As an ordinal regression problem: A text will get a score, typically around the interval [0, 4].
Sections 3 and 4 will present two methods, respectively.
3. Fine-tuning with a Text Classification Model
Fine-tuning a downstream task with transformers is a common task, you can revise it by checking out the Huggingface's tutorial. As the main goal of this article is to perform a regression task
(section 4), we will briefly remind the classification task in this section as a reference.
To set up, we will define some constants that reflect our need:
• A French language-model: camembert-base, wrapped in AutoModelForSequenceClassification object
• A French tokeniser: camembert-base, wrapped in AutoTokenizer object
• A DataCollatorWithPadding to add padding, which makes all texts the same length.
• A DataLoader to call data batch by batch during training (so, we will not face memory issues)
Now, we load the model and the tokeniser. (We will see some warnings “Some weights of the model checkpoint at camembert-base were not used when initialising CamembertForSequenceClassification", which
is OK since the model has not been trained for the classification task.)
Prepare Datasets
We tokenise the dataset by calling tokenizer. Then, we associate the label attribute to each dataset item.
We can compute metrics to track the model’s improvement during training. Here we retrieve the class with the highest logit (corresponding to the highest probability) for each prediction and compare
it with the actual label to calculate the global accuracy score.
We put the output directory for the trained model and the learning parameters into TrainingArguments. With load_best_model_at_end and metric_for_best_model, we will keep several best models (i.e.
those with the highest accuracy on the validation set) during training and load the best model at the end.
Combining everything in a Trainer, we start the training:
Note that we rely on the validation set’s accuracy to retrieve the best model. Calling Trainer.evaluate(), we can retrieve the best accuracy attained during training, which is 0.683 (at epoch 16).
In real projects, we need an independent test set to re-evaluate the model. That’s what we do here.
That is it; we have a fine-tuned classifier ready for our use cases. We can call the tokeniser, then the model, to predict a single case.
tensor([3, 2, 1, 0, 4], device='cuda:0')
The predictions seem reasonable. Our classifier is ready, let’s move to the regression model.
4. Fine-tuning with a Regression Model
To build a regression model, we can reuse the whole architecture of the classification one. Indeed, just like the difference of linear regression and logistic/softmax regression models or a 2-layer
MLP for regression and a 2-layer MLP for classification (explained for example in Chapter 3 and Chapter 4 or the famous book Dive Into Deep Learning), BERT-based regressors differ from classifiers
only in several points:
• The number of output logits: 1 unit for the regressor vs 5 units (the number of classes in our problem) for the classifier.
• The loss function, for example, Softmax loss for a multiclass classifier vs Mean-squared loss for the regressor.
Next, we can add additional metrics for the regressor. For example, accuracy does not make sense when discussing house price prediction. Instead, we talk about how close our prediction is — so the
metrics should be the mean-squared error (MSE), mean absolute error (MAE) or R2 score.
It suffices to find the right code lines to accommodate these changes. Firstly, let’s copy the setup code for classifiers and change the number of output logits to 1.
Set up
Prepare Datasets
There is one thing to change in this part: the label is no longer a category (represented by an integer); it is a real number that one can use to add, subtract, multiply etc. with the predicted
logits. That is why we need to convert label into float(label) as below.
We define several metrics: MSE, MAE and R2 score (though we do not need to use them all) in a function compute_metrics_for_regression and use it later in training args.
To compare with the classification model, let’s also define a notion of “accuracy”: For any score predicted by the regressor, let’s round it (assign it to the closest integer) and assume that is its
predicted class. We compare the predicted class and the actual class to build the overall accuracy score.
The training arguments remain the same as for the classifier.
Loss Function
In the case of the AutoModelForSequenceClassification used in the last section for classification, if our output layer has only 1 logit, the Mean Squared Error (MSE) will be applied. So we don’t have
to change anything in the default Trainer and can use Trainer to train our regressor.
However, to keep the idea general in case you want to do regression on more than 1 output logit or if you want to use other loss functions, we have two methods to implement the loss functions
• Use a Callback
• Write a custom class that extends Trainer (let's call it RegressionTrainer) where we override compute_loss by torch.nn.functional.mse_loss to compute the mean-squared loss.
We will illustrate with approach 2, which is more straightforward. It reimplements the MSE loss. You can replace the loss with any custom loss function you employ:
Do not forget return (loss, outputs) if return_outputs else loss (two formats of output) as they are required by torch modules.
Everything is ready. We start the training:
Note that the validation loss equals the MSE metrics, although they are implemented in different functions because they refer to the same notion.
Evaluation on Test Set
On the test set, the accuracy is 0.739, also close to the classifier in section 3.
Analysis on Mistakes
Let’s take a look at where the regressor makes mistakes. We will split the test sets into small batches to perform the prediction. Then, we display the (rounded) predicted and correct score in a
Dataframe of pandas for better comparison.
We see that: when the model makes mistakes, in most cases, it confuses between close classes (0 and 1, 1 and 2, 3 and 4 but not much 1 and 4 or 0 and 3). We can verify this fact using the confusion
matrix: most non-zero items are on the main diagonal and the two neighbour diagonals.
What can we conclude? Although modelled as a regressor, the model also performs well on the classification task with rather good accuracy. In the last section, we present some general observations of
these two problems.
5. Classifier vs Regressor
Other experiments
In a client’s project, our team is asked to implement a sentiment scoring task using both classification and ordinal regression models. We also need to try various configurations:
• Backbone model: CamemBERT and FlauBERT
• Extensive hyper-parameter tuning: Try as many combinations of learning parameters (learning rate, gradient clip value etc.) as possible.
We performed this task on a 1700-items dataset in which the annotations were validated by at least two annotators. (The labelling of the entire dataset is consistent)
We concluded that the best regressor’s performance is the same as the best classifier (~72% accuracy, 66% for the macro F1). Modelling as a classifier or regressor doesn’t really matter here. In
fact, the CamemBERT architecture seems to be the key factor behind this performance.
Inter-convertible Outputs between two Models
In our client’s project, we compared the performance of classification and regression models. In the previous section, we explained how to use “accuracy” as a measure of comparison. However,
“accuracy” is a notion related to classification problems, so somehow we are biased toward classification models in this comparison. We can think of the opposite sense: to convert classifiers’ output
to regressor’s format. This leads to the problem of inter-convertibility between the two models’ outputs.
For our problem, we can think of some intuitional/natural methods:
• From the regressor’s output to the classifier’s: Map the predicted score to the closest integer (as we have done so far).
• From the classifier’s output to the regressor’s: Assume each class is associated with a probability computed by applying softmax on the top layer
In the second approach, we can define either
• Use argmax strategy: Use the highest class as the regression score.
Regression score = 3.00
• Use the weighted-sum strategy: deduce the score as the weighted sum of these values:
Regression score = 0.02 * 0 + 0.42 * 1 + 0.08 * 2 + 0.44 * 3 + 0.04 * 4 = 2.06
Note that the weighted-sum strategy is only applicable when there is an order notion between the classes.
More strategies for converting classifiers’ output into regressors’ are presented in [6].
Example of Behaviour of a Regressor and a Classifier on Single Case
A regressor and a classifier may behave differently in case of confusion. Let’s reuse the previous example when we have the following output of a classifier (classes 1 and 3 have the highest
This phenomenon happens, for example, when the model faced face two examples like this during training:
{"text": "J'étais admis. Vous êtes content ?", "score": 3},
{"text": "J'étais viré. Vous êtes content ?", "score": 1},
and try to predict a new case:
{"text": "Je suis là. Vous êtes content ?"}
In this case, the regressor typically tries to adapt to give a reasonable distance with the two known examples by moving the final output to something near to 2 (the neutral score). In contrast, the
classifier tries to distribute balanced probabilities among classes 1 (negative) and 3 (positive) but does not really pay attention to class 2. If we use the argmax strategy, as usual, there is a
risk to misclass the example (unless we define a score threshold (like 0.5) and exclude both classes 1 and 3 as their probabilities are below this threshold).
The regressor’s behaviour seems to be “safer” to avoid positive-negative misclassification but may make the model more dummy as it will avoid giving a sign (positive or negative) when it confuses
When should we try a Regressor or a Classifier?
In summary,
• Don’t use a regressor if we cannot define an order between the classes.
• Don’t use a classifier if we want to predict a continuous variable (and we cannot discretise it as in the case of house prices).
We can use both models when we want to predict a discrete numerical variable, or categories that can be sorted in an order.
We may also prefer the ordinal regression approach if the classes are not clearly distinct. For instance, sometimes we may face an example where it’s difficult to decide whether it should be scored 2
or 3. For the regression approach, it is OK to annotate it 2.5 or 2.8, while for classification approaches it is more arguable how to handle this problem.
With the experiments in our client’s projects, so far, the two modellings based on the same backbone language models gave us very similar results, although we are not sure if this fact will still be
valid for future issues. Therefore, by this tutorial, we presented a possibility to do ordinal regression tasks with BERT and transformers to help our colleagues and our readers solve future problems
when they need to perform the same task.
[1] Attention Is All You Need — Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin, Attention is all you need
[2] BERT: Pre-training of Deep Bidirectional Transformers for Language… — Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova, BERT: Pre-training of Deep Bidirectional Transformers for
Language Understanding
[4] Fine-tuning BERT for a regression task: is a description enough to predict a property’s list price?
[5] https://lajavaness.atlassian.net/wiki/spaces/OTTO/pages/2925428757/Projet+PE+Tonalit#Tonalit%C3%A9s
[6] (PDF) Regression as classification — Salman, Raied & Kecman, Vojislav. (2012). Regression as classification. Conference Proceedings — IEEE SOUTHEASTCON. 1–6. 10.1109/SECon.2012.6196887. -
[7] Models: FlauBERT , CamemBERT
[8] Dive into Deep Learning — Dive into Deep Learning 0.17.4 documentation (Chapter 3, Chapter 4, Chapter 10)
Thanks to our colleagues Caroline DUPRE and Achille MURANGIRA for the article review.
Nhut DOAN NGUYEN is data scientist at La Javaness since March 2021. | {"url":"https://lajavaness.medium.com/regression-with-text-input-using-bert-and-transformers-71c155034b13?source=author_recirc-----ef73eba18bad----2---------------------c2deacc0_26dc_4d44_9b01_b54c7da8ccad-------","timestamp":"2024-11-13T11:23:46Z","content_type":"text/html","content_length":"234938","record_id":"<urn:uuid:160d47fc-a4d7-408b-a0b6-58df15fd3899>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00296.warc.gz"} |
Stephen Jordan
Principal Research Manager, Microsoft and
Adjunct Associate Professor, University of Maryland
January 14, 2022
DAVID ZIERLER: This is David Zierler, Director of the Caltech Heritage Project. It is Friday, January 14th, 2022. I am so happy to be here with Dr. Stephen Jordan. Stephen, thank you for joining me
STEPHEN JORDAN: Thanks for inviting me.
ZIERLER: Stephen, to start, would you tell me, please, your current title and institutional affiliation?
JORDAN: I now work at Microsoft and my title is Principal Research Manager. I also have an adjunct affiliation at the University of Maryland.
ZIERLER: What's the affiliation there? What do you do at UMD?
JORDAN: My affiliation is with the University of Maryland Institute for Advanced Computer Studies. I have one PhD student who's finishing up that I advise remotely. I picked up an adjunct faculty
affiliation while I was an employee of the National Institute of Standards and Technology in Gaithersburg, Maryland. Prior to when I joined NIST in 2011, they had already founded the Joint Quantum
Institute or JQI, run between the University of Maryland and NIST, which was focused on the physics side of quantum information. Shortly after I arrived at NIST, they founded a second institute on
the same model, which I helped to name. We call it QuICS, the Joint Center for Quantum Information and Computer Science, which, as you may guess, is more focused on the CS side. Then when I came to
Microsoft, I relinquished my formal affiliations with NIST and QuICS but retained my adjunct affiliation. This allows me to supervise PhD students in physics and computer science at the University of
Maryland. When I came here, I asked my students what their preference was, and they all gave the same answer, which was that they would prefer to continue along the same path that they were already
on. So, I've been supervising them remotely since 2018, and they've been graduating one-by-one, and I now have one left.
ZIERLER: Stephen, beyond just having an enjoyable time staying connected to academia, in what ways is the appointment at Maryland useful for your research?
JORDAN: I would say it keeps me a little bit more broadly focused than I might otherwise become. Since joining Microsoft, I've followed a path that I did not necessarily anticipate, and essentially
became enticed by a much more applied kind of work than I had done in the past, which includes a lot of work on classical algorithms for solving industrial optimization problems. But I always have at
least two meetings a week with my collaborators at University of Maryland that keep me from losing touch with the foundational research side of things.
ZIERLER: Stephen, is that to say that, ironically in some ways, the research in quantum computers might actually yield benefits for classical computers?
JORDAN: Yes. There are not so many changes in your life that you can pinpoint to a specific moment. But one of them for me was a specific day, probably around 2015, when I was working at my office on
the University of Maryland campus. I was carrying out some computer simulations of adiabatic quantum algorithms. Adiabatic quantum algorithms were first proposed—well, there's a little ambiguity, as
is often the case, about who first proposed things because there are few related ideas coming out at roughly the same time. But, anyway, one of the originators of this was Eddie Farhi, who was my PhD
advisor at MIT. He proposed that adiabatic quantum processes could be used to solve optimization problems.
People got excited about this because, first of all, you could use a lot of nice techniques from theoretical physics to analyze these algorithms. But it was also a tantalizing subject because no one
could really quite answer the questions that they really wanted to answer, and so it always seemed like the deeper understanding was right around the corner, which is very motivating. Also, a lot of
people said that non-convex optimization problems are of great economic importance. This economic importance seemed sort of generally plausible to me, although I didn't have much direct insight into
it at the time. Now, in my role at Microsoft, I get to see a lot more detail about applications. Anyway, a lot of people say that the possible advantage of adiabatic algorithms for solving
optimization problems comes from quantum tunneling, which is different from, say, Shor's algorithm, where the advantage comes from interference phenomena. It's fairly clear that you cannot
efficiently simulate the interference phenomena that underlie Shor's algorithm using classical computers. But it's less clear in the case of tunneling. Now, none of these are rigorously proven
statements but this is my sense of things which I think a lot of people share. I and some collaborators were trying to prove some theorems about this. Basically, what I was trying to do is show that
a certain class of these adiabatic algorithms could be efficiently simulated by classical computers, and therefore could not yield exponential speed ups analogous to Shor's algorithm. Now, the
theorems didn't work out. We found a counterexample. But the counterexample was pretty contrived, so it still left open the possibility that these classical simulations worked well in practice most
of the time. So, we decided to run some computer simulations to find out. We needed some optimization problems to run the simulations on, so I did an online search using the first keywords that
popped into my mind about where to find such problems. I typed in, "MAX SAT benchmark," or something like this. The first thing that came up was a set of benchmark instances of MAX SAT problems that
came from an annual contest to see who could build the most efficient solver for MAX SAT. So, I said, OK, well, that's as good of a place as any to get some test cases, and I downloaded these, and I
simulated an adiabatic process to solve these problems. To my surprise, I happened to notice that our simulation of the adiabatic optimizer solved the MAX SAT problem faster than the winning entry
from the most recent contest. I remember being pretty excited about this, and popping out of my office, and running down the hallway and telling people about it. At that point I thought we've
stumbled onto something much more interesting than what we were actually looking for. So, we just took a left turn on our project, and started investigating these simulations of quantum algorithms as
optimizers unto themselves, rather than as a mechanism for understanding adiabatic algorithms.
That's actually now what I focus a lot of my time on here at Microsoft. I came to Microsoft in 2018 in February, and I wasn't completely sure what direction I would take upon arriving here. But I
think my casual assumption was that I would continue on a fairly academic-style, relatively pure quantum algorithms research path, similar to what I had been primarily focused on at NIST / University
of Maryland. But, also around that time, the corporate vice-president in charge of the quantum effort at Microsoft at the time, Todd Holmdahl, decided that we should start to engage with real
customers, and learn more about what are the problems with high business impact that are really important, and that also are compute bottlenecked, as opposed to ones where the limiting factor is data
quality, or input-output bandwidth or latency, or user interface, or any of the many things for which quantum computing offers no possible advantage. We were looking for economically important
problems where computation's really the limiting factor.
So, the business development director at the time, Julie Love, happened to walk past my office—maybe not so coincidentally. Maybe she knew that I was new and could be easily steered, having not yet
been drawn into too many projects yet. She walked by and asked me if I wanted to join some of these meetings with senior level technical people from pharmaceuticals and power utilities and logistics
companies and all these different places, and just talk to them, and learn about what their computational problems were. I thought that sounded very interesting, and so that's what I did. What we
found was that the specific nature of the problems varied between industries but there were very consistent patterns that the most common problems were machine-learning problems and non-convex
optimization problems. Every once in a while, you'd see something that would be a simulation problem or a Monte Carlo sampling problem. But the ones you would see over and over and over were machine
learning / statistical inference and optimization. A lot of these companies were looking to do engagements, and so we thought, well, there are two kinds of engagements you can do. One is you can work
with us on research projects about quantum computing. You can't currently—as of 2018 when we started these engagements or as of now in 2022—you can't currently use quantum computing to solve any
actual industrial problems that you can't solve on your laptop. But we could use these things that we had discovered about simulating tunneling processes classically, and so we took to calling that
stuff quantum-inspired optimization. That was the second kind of engagement, and that effort grew and grew.
Microsoft, unlike many other companies, treats the reporting structure separately from career advancement and thus gives people room for advancement without ever having to necessarily climb the org
chart. They can climb the pay grades without climbing the chain of command. I am in what is considered a technical leadership position. I lead a team of applied researchers who are working basically
full-time on quantum-inspired optimization, and applying it to improve efficiencies of large-scale processes, both internal to Microsoft and for various external customers.
ZIERLER: Stephen, I'm curious if we can compare the research that's being done at Microsoft looking back 50 or 60 years to Bell Labs, for example, where there was really a culture of fundamental
science that didn't have anything to do or didn't need to have anything to do with the corporate bottom dollar. For you, is there a research culture at Microsoft that's promoting and supporting
fundamental research without it necessarily needing to be articulated or connected to some business plan at Microsoft, or is really everything that's being done in the quantum world somehow related
to where Microsoft might see a market or a business opportunity?
JORDAN: Well, in general, I would say that Microsoft Research definitely does support fundamental research. There are people there who write very foundational computer science papers in pretty
much—well, not quite every area of computer science but a wide swath of it. So, if you compared it to a major and first-rate computer science department at a university like, say, UC Berkeley or
someplace like that, and you looked at Microsoft Research, it would look very similar in terms of the list of topics, the type of people who work there, what their goals and motivations are, and so
on. I would say at this point, regarding the quantum group specifically, we're deeply rooted in that world but we are actually pretty focused now on bridging the gap between fundamental research
results and applications. We want to cross this barrier that's very hard to cross to get fundamental research results transitioned into real application. That's actually now the part that I'm focused
on. People sometimes call it the Valley of Death. Fundamental research happens on one side, and incremental progress on existing technologies occurs on the other side. But to get something to jump
from the fundamental research side to the applied technology side is really hard. I'm trying to learn as much as I can about how to do that right now.
ZIERLER: To the extent that we can think about applications of quantum computers, what aspect of the work at Microsoft is devoted internally, in other words, to making things at Microsoft, and what
is devoted to Microsoft's customers or clients in terms of their needs?
JORDAN: I would say at present, the best idea out there for getting real value from quantum computers for problems that will affect human life, and generate economic growth, and cure disease,
whatever it is, it's really computational chemistry. Microsoft is not directly in the business of quantum chemistry. We're not a pharmaceutical company. We're not a chemical manufacturer. We by
ourselves our not going to develop new catalysts or new artificial flavors or whatever the case may be. Really, I would say, for the fully quantum technologies, the applications we're focused on are
outward-facing; things where we would partner with companies that are in the business of doing things that are chemical or materials related, and have deep history and expertise in that. As far as
the quantum inspired optimization, which ultimately are classical algorithms but have their roots in quantum research, that's focused both ways. We have large-scale industrial optimization-type
problems internally, such as around how we operate our data centers, and we have already seen big wins from applying quantum-inspired optimization techniques to those problems.
ZIERLER: Stephen, without going, of course, into any sensitive details, one obvious concern with quantum computing is that it could be a big problem for internet security, for cryptography. So, in
what ways is quantum computation a tool to protect the problem that it might create? In other words, how might quantum computers solve the problem of cryptography in a post-quantum world?
JORDAN: That's a good question. Quantum computers have an exponential scaling advantage for solving discrete logarithm and factoring problems, and so all of the public key cryptosystems currently in
widespread use are broken by this. There are two kinds of cryptosystems: public key and private key, or, as they're also called, asymmetric and symmetric cryptography, respectively. Private key
cryptography is not substantially weakened by quantum computing. Grover's algorithm can be applied, but if it ever becomes necessary, one could protect against this by using only slightly larger key
sizes. The public key cryptosystems in widespread use are mainly RSA which gets broken by Shor's factoring algorithm and Diffie-Hellman which gets broken by Shor's discrete logarithm algorithm, which
can be applied not only to integers but also to other groups such as arising from elliptic curves.
So, the question is what do you replace these public-key cryptosystems with? Occasionally, you may hear it argued that quantum technology provides not only the source of this problem but also the
solution because there's something called quantum key distribution, or QKD, which uses quantum states of photons sent over optical fiber such that any eavesdropping can always be detected. But I
don't particularly agree with this view because there are a lot of different things to use cryptography for. There's encrypting things that you store on a hard drive. There's authentication- proving
you are who you say you are. There's transmitting messages that you want readable by a list of users, and not others. There's all sorts of different things you want to do with encryption. The ability
to detect eavesdropping doesn't really give you all that. Furthermore, quantum key distribution requires special hardware, so it's not just a different firmware upgrade that you can put onto your
router or something. You have to lay special fibers, usually, or at least they have to be dedicated, and it's point-to-point communication. It's also limited to a range of a few hundred kilometers,
so it's not a very convenient technology to use.
Rather than QKD, in my opinion, one should almost always use post quantum cryptography. Post quantum cryptography consists of conventional software-based cryptosystems. They just base their security
on other hard computational problems besides factoring and discrete logarithms. And nobody has found quantum algorithms to efficiently solve these hard problems.
People have actually known for decades of alternative cryptosystems besides the ones that use discrete logarithms and factoring as the foundation of their security. For example, lattice-based
cryptosystems have been studied since the ‘90s. No one's found any serious security holes in them— with the exception of one of the signature schemes. So there are ways you can just do a software or
firmware upgrade to protect your data against quantum attack, and you don't need any new hardware. You don't need to change your fundamental ways of engineering the big picture. It's just all the
specific details inside your encryption module get written over with a different scheme. That's post-quantum cryptography, and I think that is realistically the solution that's going to be adopted in
99% of cases to secure your data and communications against quantum attack.
One key thing is you need the post quantum cryptosystems to be carefully scrutinized. The only way to know that a cryptosystem is secure is to have many experts pore over it and try to find flaws
that leave it open to attack. Now we have to check for attacks both by classical computers and quantum computers. The US National Institute of Standards and Technology, where I used to work, is
coordinating this process. Many organizations have submitted their proposals and helped to vet other proposals. This includes both academic institutions and corporations, including Microsoft.
The process works like this: initially there are many proposed cryptosystems. The flawed ones get eliminated. Then further scrutiny from the global cryptographic community gets concentrated on those
that remain, and so on. Eventually, at the end of this process you have proposals that have withstood a very substantial amount of scrutiny by a very large community of experts. These get recommended
as standards. That's playing out now, and that should shake out over the next few years. Then we will have an established standard for what people should use for post-quantum cryptography. I think
that will likely be a very adequate solution.
ZIERLER: Stephen, a fun question more germane to your research, what is the Quantum Algorithm Zoo, and how did that get started?
JORDAN: Well, the Quantum Algorithm Zoo is a website that I have been maintaining since about 2008 or 2009, and it's supposed to be a comprehensive repository of quantum algorithms that offer
speed-up over classical algorithms. The criterion is you have some notion of your problem size, for example, in factoring, you say, OK, well, we want to factor in N digit numbers, so the size is N,
or if it's a chemistry problem, maybe it's a molecule with N electrons in it. So, that's what your notion of size is—whatever. Then, as N grows, how does the run time of your algorithm go? Does it go
like N squared? Does it go like two to the N? The criterion for quantum speed-up that I tried to adhere to in the Quantum Algorithm Zoo is that the scaling of the proposed quantum algorithm should be
asymptotically, in the limit of N very large, better than the scaling of the fastest known classical algorithm. Every example of that, ideally, I would have documented in the Quantum Algorithm Zoo.
Of course, I'm always somewhat behind on updating this, and I never keep up with everything. It's especially hard now as the number of people involved in the quantum information community has grown
exponentially, and the number of papers per year published in quantum information has grown exponentially. But that's the idea of it. The origin of it was writing my PhD thesis at MIT. Basically, my
PhD thesis had a sandwich structure. In the middle, I just had a series of journal papers that I had already written and published. Those formed the central chapters. Then sandwiching those were an
introductory chapter and a final chapter that were new material. The final chapter was some summaries and philosophical musings and speculations and so on, and the introductory chapter was a
literature review on quantum algorithms. I decided that I would attempt to make it complete; that I would list every quantum algorithm. Then I thought, well, this could be nice as a web page, and
then it could also be continuously updated. So I took that chapter of my thesis, and ran it through this little script that was popular at the time called LaTeX2HTML, which automatically converted
this document into a web page, and I hosted it somewhere. Then, ever since then, I've updated it. I had to move it a couple times so the URL has changed I think twice. But now it's at
quantumalgorithmzoo.org, and hopefully that's its final URL, and people's bibliographies won't become broken links.
The name of the quantum algorithm zoo was stolen from Scott Aaronson. He made the Complexity Zoo first, which was a similarly comprehensive catalog of complexity classes. I asked him for permission
to use his zoo terminology just to make sure he wouldn't be offended that I was stealing his turn of phrase, and he graciously granted me permission. So, it's been going ever since. Also, Victor
Albert emailed me maybe a year or so ago, asking if he could borrow my turn of phrase because he wanted to make a quantum error correcting code zoo, which also is now a very nice web page that he
maintains. So, one thing has led to another. Back in 2008, quantum information was relatively off the radar of most people. It was kind of an obscure field. Most universities weren't hiring anyone to
do it. It was a little bit of an underdog in some ways, looked down upon for various reasons, and, of course, now all that's changed quite drastically. Anyway, I don't have a way of tracking the
traffic on the quantum algorithm zoo web page but I think it has increased a lot since then.
ZIERLER: Well, Stephen, let's go back to graduate school to set the tone for your postdoc at Caltech. When you entered graduate school, this would've been what, 2003, 2004?
JORDAN: Yes, 2003, fall 2003 was when I started at MIT.
ZIERLER: Now, was quantum information, quantum science, was that already on your radar at that point?
JORDAN: Yes. I applied to graduate school, claiming that I would become a condensed matter theorist, and thinking in my mind that this was also indeed the most likely thing that I would do. But also
in the back of my mind, I thought the second most likely outcome was that I would do quantum information. In my first year, I took Peter Shor's course in quantum algorithms and a few courses on
condensed matter theory. And I liked the quantum algorithms side better, basically for two reasons. One is that it was easier for me to pick up. I was a little underprepared for some of the condensed
matter theory courses, actually. Whereas quantum information was a little bit more accessible because it was using more discrete math, more linear algebra, and so on. It was accessible using more
elementary mathematical language, but it was not any less intellectually stimulating because you still needed a lot of cleverness to think of new quantum algorithms, new quantum error correction
schemes, and various other gimmicks.
But it was easier to get into it. It's kind of like a game of chess. Learning the rules of chess doesn't take very long. That's the case for quantum information to some degree compared to, say,
high-energy physics or condensed matter physics where you have to build up deep layers of erudition to even reach the current interesting research questions. So, that was one thing I liked better.
The other thing I liked was it seemed like it was a young field, and there were good prospects for discovering some of the really important foundational things that end up in the textbooks. Like, if
you looked, you'd see that a lot of what Peter Shor was teaching in his class, was discovered five years prior by someone who you could go meet at a conference and talk to them. Whereas in the
condensed matter theory course, you would be taught about something that Landau figured out in 1962 or something. So, it seemed to me that, statistically, the prospects were better in quantum
information for making some discovery that ends up being a basic thing that goes in the textbooks and that everyone uses, and that was my big ambition at the time.
ZIERLER: Not that you would've known at the time, but looking back, was there an institutional organization at MIT devoted to quantum information like the IQI?
JORDAN: The quantum information efforts at MIT were not centralized in any such institution. You had Eddie Farhi who was in the Center for Theoretical Physics, which really was a center for
theoretical high-energy physics. But he just decided personally, as an individual, to change subjects. You had Peter Shor in the applied mathematics department. You had Seth Lloyd in the mechanical
engineering department, oddly enough. You also had Isaac Chuang and Jeff Shapiro who ran the Research Laboratory of Electronics, which was affiliated with electrical engineering. RLE was somewhat
organized and systematic in their approach to quantum information. But, for the most part, quantum information at MIT was kind of scattered all about. In fact, another interesting thing is, in those
times, if you looked at who were the faculty members at US universities who were doing quantum information, almost always they had been hired to do something else. Then after they got there and
usually after they got tenure, they switched into quantum information.
ZIERLER: [laugh]
JORDAN: I think if you tried to apply to most universities at the time, saying, "Yes, I'm going to do quantum information research," it was a very hard road because physics departments would say,
"Oh, well, it's not really physics. It's more like computer science. You don't really fit here." Computer science would say, "Oh, well, this is pretty remote from our concerns. Maybe you should try
the physics department." A lot of people also thought quantum computers are kind of like science fiction. They thought of it the way you might think about space elevators or something.
ZIERLER: [laugh] Was Farhi your advisor?
JORDAN: Yes, yes. He became my PhD thesis advisor. After I took Peter Shor's course, I wrote a research paper. It wasn't my first research paper. Back when I was an undergraduate at Penn State, I did
some undergraduate research, and I was a coauthor on two papers, one which was experimental physics studying superfluid helium, and one which was computational physics studying graphene. But after
arriving at MIT, I became very inspired from taking Peter Shor's course, and I was thinking about quantum algorithms all the time, and I had an idea which I wrote up into a short paper, which is
about estimating gradients on a quantum computer. I did not have an advisor yet. It was just something I did individually. Then I showed it to Peter, and he encouraged me to submit it to Physical
Review Letters, which I did. I think initiating and carrying out a research project independently as a first-year student was a bit unusual, and I think that helped me land Farhi as an advisor. I was
admitted into condensed matter theory, and he was in high-energy physics, so I was going across boundaries.
ZIERLER: What was Farhi working on at that point?
JORDAN: He was mostly working on adiabatic quantum computing, and secondarily on quantum walks, both of which are quite active areas of research still this many years later.
ZIERLER: Up the road, were you following Misha Lukin, what he was doing?
JORDAN: I was aware of his group but I never visited. I did visit Alán Aspuru-Guzik's group over at Harvard and collaborated with them on one journal paper.
ZIERLER: What was the process for you developing what would become your thesis research?
JORDAN: Well, I think I was influenced somewhat by Eddie Farhi's history. He was always thinking of alternative models of quantum computation. So, rather than just sticking with the standard way of
thinking about things in terms of quantum circuits, he would use different physical inspirations for different frameworks for quantum computing. He would pursue these either as mechanisms of physical
implementation or, more frequently in his case, as frameworks for coming up with new algorithms. So, that was definitely a theme in my research. But, mostly, I just kind of followed things where they
led, and read journal papers. I read a lot of papers by Dorit Aharonov, which were also a heavy influence on me. Whatever ideas were spurred, I just tried to pursue them as I found them. One of my
theories was that, when you first learn something, you think of all sorts of questions and tangents and things. I didn't want to waste that opportunity. I wanted to leave no stone unturned and look
into all these little side paths and questions as I noticed them when I was initially learning new subjects, rather than trying to learn the subject first, and then go back and come up with research
questions once I was up to speed. I don't know if this was a good system or not, but that was one of my principles that I used.
ZIERLER: Now, were you aware of IQI, what was happening there while you were a graduate student?
JORDAN: Yes. I didn't maybe know about IQI exactly, but I did know about John Preskill. I was certainly aware that he and his group were a major center, and I was aware, of course, of Alexei Kitaev.
Eventually, I made a visit out to Caltech. At some point, I wrote a paper about error suppression methods for adiabatic quantum computers. Then I saw John Preskill at a conference and chatted with
him at lunch, and told him about the main idea. He found it interesting. So, then, he invited me to come give a seminar at Caltech. That was probably about 2006 or 2007. Then later when I was
graduating, I included Caltech in the list of places that I applied to for postdocs, and indeed went there.
ZIERLER: What were some of the conclusions of your thesis?
JORDAN: It was a bit of a hodgepodge, the title of which was Quantum Computation Beyond the Circuit Model. The unifying theme was exploration of different models of quantum computation, which was
inspired by Eddie Farhi's work. But, as to the specifics, it was a fairly disjointed set of results. One was a mathematical result where I could show that there was this weird connection between a
physical model of quantum computation called the one clean qubit model where you use very high-entropy, noisy states, and a mathematical issue which was estimating the Jones polynomials for certain
classes of knots. There was just this weird connection between things that seem initially unrelated. You could make a mathematical precise equivalence between them. So, that was one thing in there.
Then the rest of it was mostly about adiabatic stuff. On the physical side, I studied how to deal with errors for adiabatic computers. On the algorithm side, I studied some things about how to use
certain more unlimited adiabatic processes to simulate more complicated ones efficiently using what are called perturbative gadgets. My thesis was mostly a compendium of the nuggets that I had
discovered during my time in graduate school.
ZIERLER: Now, before you got to Caltech, you had a brief stint in Japan. Tell me about that.
JORDAN: Yes. Well, again, it arose from meeting someone at a conference. In this case, it was Sahel Ashhab. We got to talking about our research, and we had shared interests having to do with
adiabatic quantum computing. Sahel worked as part of Franco Nori's group at Riken, which is a research institute in the outskirts of Tokyo, and so they invited me over there to do a stint over the
summer of 2008. I thought it sounded like fun, so I agreed to do it. I went there under the premise that I was going to study things about error resilience of adiabatic quantum computers. But after I
was there a few weeks, that project seemed to be stuck and going nowhere, so I just dropped it and worked on something completely different. Specifically, I worked on the continuation of my studies
into link invariants, like Jones polynomials and so on, and their connection to quantum computers. But they seemed not to object to this really. It was quite a fun experience to live in Japan for two
months, although I did not pick up any meaningful amount of Japanese—
ZIERLER: [laugh]
JORDAN: —so that was a limitation.
ZIERLER: Did you consider anywhere else besides Caltech for your postdoc?
JORDAN: Yes, I applied to many places, and had some options to pick from. I considered places both foreign and domestic. But Caltech ended up being my top choice.
ZIERLER: What were your initial impressions when you arrived in Pasadena?
JORDAN: Well, the group was very vibrant. It's unusual because it's sort of top-heavy. There were more postdocs than graduate students, at least at the time, which I think is quite nonstandard. So,
that was a noteworthy thing. The people there, I really liked a lot. It was really a wonderful group of people. Also, the physical setting was extremely pleasant at Caltech, as I'm sure you know. I
was unfamiliar with Southern California. The first time I had ever been there was that time I made a visit to Caltech to give a seminar, which was maybe about one year prior to moving there. It
really seemed very foreign and exotic to me at the time. The vegetation and climate were very different, and even things like the way they pave the streets; it's with a different material than what
they use on the East Coast. There were many such things that surprised me. The distribution of restaurant chains is completely different than on the East Coast, and I hadn't realized how foreign it
would be in these respects. As to how foreign it is culturally, a lot of people say the culture is different in the northeast versus LA. I have no idea about that because, the fact is, I mostly just
interacted with my little group of peers at IQIM. [laugh]
ZIERLER: [laugh] What was the game plan for you? In what ways did you want to expand on your thesis research, and in what ways were you open-ended in looking for new projects?
JORDAN: There was some of each. While I was there, I did continue some work following this thread having to do with topological invariants. I was intrigued by modern mathematics, and I had not been
trained in it, and I kind of glamorized it. I wanted to be able to understand these exotic mathematical things that I'd heard some other people talking about which they understood and I didn't. I
kind of envied their understanding of those things. So, that was probably the real motivation behind pursuing topological invariants and their connections to quantum computing. Then I also started a
new line of research that I hadn't been working on at MIT, which was about simulating quantum field theories on quantum computers, which I think had more sound motivations. There were good reasons to
study that question rather than just as an excuse to learn about some areas of mathematics. The quantum field theory project addressed a well-motivated scientific question—really two questions. If
you go back to the origin of quantum computing, it started with the observation that there are certain processes in quantum mechanics that seem to require exponential resources on classical computers
to simulate. So that was a hint that there might be more computational power there in quantum processes that could be harnessed somehow.
Then you could say, all right, well, can you repeat that trick? Is there any other physical process that even with a quantum computer, it takes exponential resources to simulate it? If so, maybe
there's some other kind of computation even beyond quantum computation that, in principle, could be even more powerful. So, you look around. You say, well, where could that be? One candidate was in
quantum field theory. In principle, according to quantum field theory, you have infinitely many degrees of freedom even within a finite volume. So maybe that's kind of like having infinitely many
qubits, which you might not be able to simulate efficiently with finitely many qubits. You have to check carefully. Maybe you can discretize things and approximate things and thereby simulate with a
standard quantum computer it after all. But it's not completely obvious right off the bat. So, that's motivation number one. Motivation number two is that these quantum field theory processes are
important in particle accelerators. They're important in nuclear physics, and so on. Sometimes, they're actually quite hard to simulate, so it's, in a sense, a practical problem to simulate them.
You've got the theory. You've got the experiment. But you need to calculate somehow the consequences of the theory to get the predictions to compare to experiment. And sometimes that's just
computationally too hard. So, in such cases, you can't carry out the scientific method because of computational limitations. I had much better reasons, rational reasons, for pursuing that research
project, whereas on the stuff about link invariants was motivated by more personal and mystical reasons. Unsurprisingly, the papers that I published about the field theory side of things were much
more successful and highly cited.
ZIERLER: Stephen, the identity crisis that you talked about at MIT where the physicist said that this belonged with the computer scientists, and vice versa, did that not exist at Caltech simply
because IQI was a place where those academic distinctions either didn't matter or where those specialists came together?
JORDAN: I don't know. I think it might've been just that John Preskill was really good at raising funds. He could manage to build a group as big as he wanted, single-handedly. [laugh]
ZIERLER: What was the research culture like at IQI? Where was there competition, where was there collaboration among the postdocs?
JORDAN: I would say it was very collaborative. Basically, there were two kinds of relationships you would see. One is where you each work on fairly distant subjects, and you listen to each other's
seminars, and you have a certain amount of mutual respect but you don't really engage on joint projects because your specialties are too far apart. That's one kind. The other kind is you have shared
interests, and you join forces and work together and write co-authored papers. I didn't really see competition much at all.
ZIERLER: Among the shared interests, who were some of the key people who had the same interests as you, and in what ways did you collaborate?
JORDAN: Well, Robert König knew a lot more about topological invariants than I did, and the deep mathematics behind them. I had a certain conjecture that I was trying to prove in a laborious manner
using brute-force proof methods. I ended up collaborating with Robert König, and also with Ben Reichardt, to prove the conjecture the right way using the tools that are really most suited for it.
That was one example of a collaboration. On the field theory stuff, I collaborated directly with John Preskill as well as with Keith Lee, who I had known at MIT when we were both there, and moved to
Caltech shortly before I did. Keith was in high-energy physics. So, the three of us worked together on that, which was quite rewarding. John runs a big group, so not every graduate student or postdoc
directly co-authors a paper with John, and works really directly with him on a project. But I got to do that. We didn't quite finish it while I was still there. After I had moved to NIST we continued
our collaboration remotely, and finished it, and published it.
Even now, I'm still working on some follow-up papers on that same topic. We are now improving upon the quantum algorithms that we initiated in our 2012 paper. Back then, no one was working on quantum
algorithms for simulating quantum field theories. Ours was not, strictly speaking, the first paper on the topic of quantum algorithms for simulating quantum field theories but it was really the first
one that considered the problem in a thorough way from end to end. So we were kind of initiating a new line of research. Now, there are lots of people working on this subject because, for one thing,
Department of Energy has allocated funds for quantum information initiatives, and so a lot of the Department of Energy-funded scientists in high-energy physics, nuclear physics, plasma physics, etc.
now want to do cross-disciplinary things that combine their traditional expertise plus quantum computing. So, there's a lot of work now going on at the intersection of nuclear and particle physics
and quantum field theory and quantum computing. But back when I was at Caltech, this was not the case at all. It grew into an established subject with its own conferences and workshops, and its own
body of literature in which people build on each other's ideas. There's now a little community around it. That's been very nice to watch.
ZIERLER: Stephen, among the research areas that were very distinct among the scholars at IQI, is that to say that by the time you were a postdoc at Caltech, quantum information had matured and spread
out sufficiently that you could be working on something that was really totally different than somebody who might've been an office mate to you?
JORDAN: Yeah. Well, people tended to specialize. It was not humanly impossible to be a serious researcher in all of those areas. But, in practice, for most people, it was a stretch, and so you tended
to specialize. There were certain people who specialized more on the quantum communications and information theory side of things; maybe also cryptography. Then there were the people like me who
focused on quantum algorithms, quantum complexity, and so on. A third type of people were focused on physical implementations of quantum technologies. It was rare to find someone who was really an
expert in all three areas.
ZIERLER: Now, you were there in the years as the run-up to the creation of IQIM as an NSF site. But I wonder, to foreshadow those developments, if there was greater integration among the condensed
matter theorists and the quantum information theorists that might've been the point to which John Preskill said, "We need to formalize this into a bigger center."
JORDAN: There were definitely people who dropped by from time to time from condensed matter theory. But I would say it seemed clear that there was potential for a deeper and more regular engagement,
which I think indeed has picked up since the creation of IQIM. [pause] In some ways, one of the bigger successes in quantum information is that concepts and theoretical techniques from quantum
information have been fruitful in condensed matter physics over the past decade or so. It happened in condensed matter physics, I would say, first. Then a similar phenomenon started to happen in
high-energy physics. I think actually John was usually a step ahead. He picked up on these things very early or even foresaw them. He and his institution and his graduate students and postdocs
benefitted from that element of foresight or good instincts or whatever it was.
ZIERLER: Was there a general optimism during your time at Caltech that the development of quantum computing was, first of all, something that everybody in one way or another was working toward, and
that it was something achievable in the not-too-distant future, however you might define that?
JORDAN: I think the experimental progress was going pretty well. It was clear that it's a long road. But there was no sense of being stuck. There was a sense that we have this 100-mile long uphill
road. But we're going along at a decent clip, and so I think there was a fairly positive sense about that. Certainly, it's also the case that some of these theorists didn't and probably still don't
care whether actual quantum computers are ever built. The fact that they are, in principle, possible is enough to make them interesting mathematically, and then it's just a mathematical subject to
study. That's not actually such a rare [laugh] viewpoint.
ZIERLER: Sure. What do you see as your own contribution in that metaphor of keeping up progress on this 100-mile uphill?
JORDAN: My focus, until recently, was almost exclusively two things. One is quantum algorithms. That is, what can we use these quantum computers for once we have them? This is partly a practical
question about what value will they have to humanity as a technology, and partly a foundational computer science question about what's the fundamental power of this model of computation, and how does
it compare to other models of computation such as polynomial time classical computation. My focus has been on the algorithms and complexity theory side of things, except that in the past maybe five
years or so, a big part of my focus has been on applied research around quantum-inspired algorithms.
ZIERLER: Was this an area of research that NIST was interested in? Is that the point of connection as your next opportunity after Caltech?
JORDAN: Well, NIST, I think, had two reasons for being interested in quantum information and quantum computing. One is that they're part of the Department of Commerce, and they're responsible for
setting cryptographic standards that get used in the non-defense sector. For example, when you buy something online with your credit card, the transaction is secured using standards that are set by
NIST. Now NIST needs to set new ones that are going to be secure against quantum attack. The second reason is that NIST is thought of as the national metrology laboratory of the United States. Many
countries have national metrology laboratories. Their general goal originally, back in the 19th century when these things were mandated, was: can you weigh bushels of grain accurately to make sure
that you're not getting ripped off in international trade and things like that. But, eventually, when you're measuring time with atomic clocks and so on, the applications are more esoteric. Atomic
clocks get used in GPS satellites, or perhaps to certify that fraud is not being committed in high-frequency trading on Wall Street. But it all evolved from those original weights and measures
institutes. A lot of the best metrology tools come from atomic physics and involve quantum techniques that are very closely related to the experimental techniques that you would use to build both
superconducting qubits and trapped-ion qubits. For both the cryptographic standards reasons and the metrology reasons, it was a natural thing for NIST to have a presence in quantum computing, and
they hired into that. As far as the specific things that I worked on, I was really given free rein, which was very nice. My division chief was Ron Boisvert, and he created a very sheltered and very
nice working environment where I could just pursue my research directions wherever they led.
ZIERLER: Now, you took the job at NIST before the appointment with Maryland, if I understand correctly.
JORDAN: Yes, I bought a house right next to the NIST campus because I liked having a short commute, not knowing that a couple of years down the road, I would be commuting 45 minutes east a lot of the
ZIERLER: [laugh]
JORDAN: —to go to University of Maryland [laugh] in rush hour. [laugh]
ZIERLER: This does suggest though that at the beginning, NIST provided you with all of the academia, as it were, that you needed at that point where an academic affiliation at a university did not
seem necessary to you from the beginning.
JORDAN: Yeah, that's right. The thing that got added to the mix once I picked up the University of Maryland affiliation is that I started advising PhD students. Maybe actually that started a little
bit earlier in practice. I was sort of de facto research advisor of one student, I think, slightly before things became official. My first student was Michael Jarret, and he was working in quantum
foundations. He wanted to make a switch to something a little bit more technical rather than quite so philosophical. So, he started working with me, and I ended up becoming his PhD advisor
ZIERLER: In what ways did the affiliation at Maryland influence or not what you had already been doing at NIST?
JORDAN: It influenced my working style because at NIST, I was not running a research group. I would drive in in the morning. I'd go to my office. It was very peaceful and undisturbed. Sometimes, I
would close the door, and I would pull out my books and study things, and try to prove theorems, and calculate stuff. Sometimes, I would talk to people such as my office mate, Yi-Kai Liu, who I
sometimes collaborated with, or I would talk via video conference or telephone with my remote collaborators, and that was it. For most people, including me at present, time is quite fragmented.
People have calendars with back to back meetings, and they have to respond to 100 emails a day, and all this kind of stuff. It was really the opposite kind of scenario at NIST. I really was like
Isaac Newton during the plague. He just kind of shuts the door, and he could concentrate. At NIST I had that kind of environment. Then when I got more deeply involved in the University of Maryland,
it was a little bit of a different feel. I was having regular meetings to advise students, and I was attending more seminars, and I wasn't cloistering myself quite as much. Of course, it breaks your
concentration a little bit compared to just sitting silently for nine hours in front of a calculation. But, on the other hand, it's a vibrant environment, and stimulating. It taught me a little bit
about advising people on their research efforts, which is one of the primary skills for my current role at Microsoft as a research manager.
ZIERLER: Stephen, when you got involved at Maryland, what aspects of quantum science were well-developed, and what aspects were you really part of building up from scratch?
JORDAN: Well, building something up from scratch is a very ambitious thing, and I think the only time I really have achieved any claim to that was in helping to kick off the subject of quantum
algorithms for simulating quantum field theories. I do think that among the quantum computing work that I've done, that's probably what I take the most pride in.
ZIERLER: In your interactions with the students at Maryland, to what extent did their interests really indicate ongoing forward progress in that metaphoric uphill climb?
JORDAN: I got involved with different things with different students. With Michael Jarret, mostly the focus was around optimization using adiabatic algorithms, and around the fundamentals of how
powerful is the adiabatic model and what properties of the optimization problem influence whether the algorithm will work well or not. His natural talent was around technical mathematics, and so we
proved a lot of theorems to lay some foundations in that subject. Then, for the most part, my other students were involved primarily in continuations of the work on quantum algorithms for simulating
quantum field theories. There's a basic question of whether quantum computers can simulate quantum field theories in polynomial time versus do quantum field theory processes have a fundamentally
different computational complexity than standard quantum computing. I think that's largely resolved; quantum computers can in principle simulate quantum field theories in polynomial time. But then
there's a more detailed question about how do you actually do it in a reasonably efficient way? That's a subject where you make incremental progress. A lot of the other work with students was on
making incremental progress to get better and better quantum algorithms for simulating quantum field theories. That was pursued with Aniruddha Bapat, Troy Sewell, and Ali Moosavian.
ZIERLER: Now, the work you were doing at NIST, as you described earlier, there is a connection between NIST's mission and quantum science. Was any of your work specifically responsive to that
mission, or you were purely off in your world, happy to be able to do your own thing?
JORDAN: I did serve on the working group on cryptographic standards for achieving security against quantum attack. That was not just following my nose and doing my own research. That was sort of a
real job, which took up a fraction of my time at NIST. Also, while I was there, I got invited to participate in some work at the Office of Science and Technology Policy, in which we were drafting
documents to formulate what eventually became the National Quantum Initiative. So, that was my other little adventure into more traditional government activities as opposed to being a very
free-spirited and pure research scientist who happens to be employed by a government lab.
ZIERLER: Tell me about the circumstances of joining Microsoft.
JORDAN: In 2018, I became interested in the private sector. At first, I looked into working for startups in quantum computing. I was pretty concerned about the wellbeing of my students because I'd
built up a research group at University of Maryland, and recruited students and postdocs into it, and I thought I might be letting them down if I moved away. So, I was looking at startups which might
let me remain in place, and work for them remotely, which was not so common back then. But, at the same time, I wanted to explore options. I also looked at the big technology industry players, and
explored my options, got a collection of offers, and ultimately decided to come to Microsoft.
ZIERLER: Now, the idea that you could pursue what you wanted to at NIST, what assurances were you looking for or were you interested in having when you came over to Microsoft, or was that really not
in the cards?
JORDAN: I was kind of open to changing things up a little bit at that point. I had been pursuing various threads that I had started as a graduate student around different models of computation, the
power of adiabatic computation, quantum field theories, and topological invariants for a pretty long time. So, I was a bit open to considering other styles of work, or a shift in focus. I did ask
some questions to get a sense of the research freedom at Microsoft. But in the end I took a little bit of a leap. I didn't know exactly what things would be like at Microsoft. In fact, what I was
planning to do and what I ended up doing were fairly different, I would say.
ZIERLER: What was your first project?
JORDAN: At first I had two projects. One was continuing a research project that I had initiated previously at NIST / University of Maryland about simulating wave propagation using quantum computers.
That was something I was still working on and finishing when I arrived at Microsoft, and that was continuing in the same style of academic publication-focused work that I had previously been doing
since 2003 or 2004. The other project was a collaboration with a group at Case Western Reserve University, which develops magnetic resonance imaging technology, which ultimately is sort of a quantum
technology. You're manipulating the nuclear spins in people's bodies to do the imaging, and it's not such an unusual thing for physicists to conduct work on this topic but it's not something that I
ever anticipated. The idea of our project was to use our quantum-inspired optimization methods for automated design of pulse sequences to run on MRI machines and make them work better. I volunteered
to work on that, and that became my primary focus for a long time. It was very hard, and it took about three years.
ZIERLER: What was so hard about it?
JORDAN: Basically, it's ultimately experiment-driven. You have these machines, and they cost like $1 million. You would think that they're very well characterized, and everything is known about their
systematic errors and all their nonlinearities and quirks. But, actually, it's not that well characterized, and so you really have to run a lot of experiments before you can make a good model of how
these machines behave. It's only once you have a good model that you can start to use that model inside your computer simulations and your optimization processes. That was actually the hardest part.
Getting the optimizers to run well, we had expertise in, and we got that running fairly quickly. But the really hard part was making sure that we were optimizing the right thing. In optimization we
call it the cost function. We needed to work very hard to reach the point that when computer produced an optimized plan for how to use these MRI machines, which was predicted to work well by our cost
function, it would actually work well when we tried it on an actual machine with an actual person in it. It took a long time before things that were predicted to be good actually were good when we
tried them. That was the hardest part.
ZIERLER: Was it fun operating in a more applied environment?
JORDAN: Yes, it was. It affected me— especially because it was medical. There were people who had come in with brain tumors, and they would have to lie in these machines, and you can run these scans
on them. With optimized pulse sequences, you might be able to see tumors that you would miss otherwise, and someone would live who would otherwise die. Or, if you could make the scan faster, you
could get more throughput, and the cost would be cut in half, and more people would have access to it. People also have to hold still in these machines. Little kids can't do it because they're too
wiggly, and so sometimes it's necessary to give the kids sedatives, or even knock them out with anesthetics. But if you can do a really fast scan, then maybe a kid could actually hold still long
enough, and you wouldn't have to drug them. It really hits you hard. It hit me hard anyway. I had never experienced anything like that before. It was very motivating.
ZIERLER: Besides operating in such a different environment, what aspects of your education or research were useful to succeed in this new endeavor?
JORDAN: I had to do a lot of computer programming. I had played around with computer programming as a kid, starting in elementary school, as many people do. Programming in a more serious way,
especially for scientific computing, was a skill I learned during my undergraduate research studying graphene with Vincent Crespi at Penn State. When this MRI project came along it was clear that it
would involve writing some code to simulate the MRI machines. I said: "Yeah, I can do that. I'll just dust off my old skills that I have from 2001 or whatever." [laugh] That's, to a large degree,
what I drew upon. The other import skills were general mathematical knowledge such as understanding things about Fourier transforms and having general scientific instincts about not engaging in too
much wishful thinking, not jumping to conclusions too preliminarily based on weak evidence, and all this kind of stuff, which really doesn't come naturally to humans. It only gets beaten into you by
painful experience of using your natural instincts and wishful thinking and premature conclusion jumping and wasting months of your life because of these mistakes. You eventually get trained out of
it. [laugh]
ZIERLER: Stephen, just to bring the conversation closer to the present, what have you been involved in? What have been some of the major research projects over the past two or three years?
JORDAN: The MRI work was the biggest one that I worked on directly. Currently, I am leading a team to adapt these same kinds of algorithms that we used for designing MRI scans also to other, more
widespread, industrial problems. There are many problems where you have some really large-scale process about, say, the power grid, or transport networks, shipping, or the operation of some chemical
plant, or the operation of a data center. One way you can improve the efficiency of those things is, replacing some component. For example, maybe you use a new material for the wires that will have
slightly lower transmission losses. Sometimes this is worth it because maybe you improve efficiency by only a percent, but because it's operating on such mind-boggling scale, a percent means hundreds
of millions or billions of dollars a year. But a different way you could improve efficiency is without any new components, just coordinating things in a slightly smarter way, routing things
differently, or allocating things differently, or scheduling things differently. That's purely a software problem. You don't need to retrofit any hardware. So, that's just waiting for someone to come
up with better algorithms, and you just press a button and, all of a sudden, you're saving these resources. Maybe you're saving people's time. Maybe you're improving energy efficiency and preventing
unnecessary carbon dioxide emissions. Energy and people's time both cost money so, of course, you're also improving the bottom line of whatever business you're applying these optimization methods to.
So, it's also an inspiring subject. It's not quite as visceral as with medical things where you see individual patients and so on. But if you use your imagination, and you think about the numbers
involved and the scale of the impact, then it can also be very inspiring. So, that's my focus now.
ZIERLER: Stephen, now that we've worked up to the present, a few broadly retrospective questions for the last part of our talk. Is there something that you learned at Caltech—an approach, a sense of
collaboration, a way of looking at the science—that has stayed with you, that informs your work, and presumably with new projects that you might not see coming in the future, will always be present
as you approach these things?
JORDAN: Yes. Well, I think I learned from seeing examples at Caltech of being really knowledgeable. Like many physicists, I was quite influenced by Richard Feynman, and read various books of his,
some of which advocate rediscovering things for yourself, because it deepens your understanding. I think that's absolutely right. But you have to be careful not to misinterpret it and take it as a
license to be lazy about learning from the textbooks and from the journals. At IQIM, there was a culture of really knowing the literature, really knowing the prior work, really knowing the textbooks,
and so on. So, I think that was a positive influence on my attitude. The main big-picture lesson from those years would probably be that. Whereas the main big-picture thing from the MIT years I think
would be to really watch out for wishful thinking, and really try to do the "fail fast" thing. Suppose you have some conjecture. Don't start trying to prove it. Start by trying to look for
counterexamples or reasons why you think it's wrong. Otherwise, you can waste time on dead ends, and you have to be very careful about that. I'd say Peter Shor and Scott Aaronson were both good
influences who said that, before you try to prove something with rigorous mathematics, you should already be pretty much sure that it's true. I think I do have some fairly specific lessons from my
ZIERLER: As quantum computing is getting closer and closer to scalability, what are the benchmarks that you'll be looking for? From your area of expertise, from the things you've worked on, what
unique vantage point might you have to look at ways in which quantum computing will become more and more feasible, more realizable?
JORDAN: I think the big impacts are going to come with error-corrected quantum computers. Right now, there are a lot of interesting things being done with what are called NISQ machines, a term coined
by John Preskill, which are noisy quantum computers. But actually having impact for real problems that are of economic importance that you can't already solve with state-of-the-art methods on serious
classical supercomputers is a high bar. Over the past few years, I've spent hundreds of hours talking to people to learn exactly where that bar is in different domains. My conclusion is that you
probably need error correction to get there. So, I think the key milestone that I'm waiting for is for someone to do a two-qubit gate on logical qubits fault tolerantly with error correction. That
will be when I pop the cork on my champagne. [laugh]
ZIERLER: Now, is that—what is that? Is that a theoretical breakthrough? Is that an experimental breakthrough? Is it a eureka moment?
JORDAN: That would be an experimental breakthrough. I think the way we get there is through a very substantial engineering effort that moves towards it gradually, and eventually crosses that
ZIERLER: Is that to say that, right now, the theorists are out ahead of the experimentalists?
JORDAN: Yeah, in a way. It's possible that a theoretical breakthrough could hasten this milestone. Specifically, if someone comes up with some new class of quantum error-correcting codes and fault
tolerance schemes, it could help. The current frontrunner is surface codes and magic state distillation. The surface codes were discovered by Alexei Kitaev quite a long time ago. There's a lot of
work that goes on in quantum error-correction, and it has many different goals. But one big one has always been to outperform that scheme, and no one's achieved it yet. Maybe surface codes plus magic
state distillation is just the best that can be done. Maybe a better scheme doesn't exist in the mathematical space of possible schemes. But maybe it does. It's hard to say. So, if someone discovers
something like that, then it would be a theoretical breakthrough that would change the timelines of when we reach that experimental milestone.
ZIERLER: Finally, Stephen, for you, last question, what's next on your agenda? What are you excited about in the future for your own work?
JORDAN: What I really am focused on now is getting results from research transitioned into practice. We've now got some successful examples. We've released some information publicly about our
collaboration with Azure Storage, which is a group at Microsoft that oversees data storage in our gigantic data centers. We've worked with them, and optimized their processes, and made very big
improvements, and big savings. So, that was an exciting development. I'd like to see if we can repeat that with some other process, and also see if we can similarly empower some customers. Whether
it's from power utilities, biomedical, logistics and shipping, whatever it is, I would like to achieve large economic impact. Then you could point to it and say, even to a cold, hardheaded
bean-counter, that all the investments in having people like me sit in their chairs, and use up computer time, and collect their salaries, have paid off many, many times over from these applications.
I think with the Azure Storage thing, we have achieved it already. But I would like to achieve it with more examples. There are many different kinds of goals you can have, and you don't have to have
only one. But this is one that I am currently excited about.
ZIERLER: Well, Stephen, it's been a lot of fun talking to you. I'm so glad we were able to do this. I'd like to thank you so much.
JORDAN: Yes, thanks a lot. | {"url":"https://heritageproject.caltech.edu/interviews/stephen-jordan","timestamp":"2024-11-08T21:06:39Z","content_type":"text/html","content_length":"194127","record_id":"<urn:uuid:74dbc06d-23de-4c5b-a888-9e383c4c2e5f>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00882.warc.gz"} |
Box plot example minitab torrent
Minitab 18 overview minitab statistical software is the ideal package for six sigma and other quality improvement projects. Download the scatter plot data set for minitab here. R boxplot to create
box plot with numerous examples. A box plot is a graphical representation of the distribution in a data set using quartiles, minimum and maximum values on a number line. Minitab 18 free download
latest version for windows. Put your data values in two of the columns of the minitab worksheet.
Multivariate analysis national chengchi university. For example, in the following worksheet, height contains. Hey, i had just defeated the ender dragon and i threw a ender pearl at the gateway which
unlocked the flight option for metamorph, i morphed into a blaze and was able to fly. A boxplot contains several statistical measures that we will explore after creating. Aug 06, 2014 this
demonstration shows you how to create a box plot with minitab express. Interquartile range boxes on a boxplot represent the middle 50% of the data. Learn more about scatter plots in green belt
training. In the minitab menu bar, go to graph and then click on time series plot in the time series plots dialogue box see picture below choose with.
R boxplot example boxplot usually refers to box andwhisker plot, which is a popular method to show data by drawing a box around the 1st and 3rd quartile, and the whiskers for the smallest and largest
data values, the median is represented by a bold line in the box. It divides the distribution of a data set into four portions. Create a boxplot that displays the distribution of values for
multiplelevel groups. Add variable names in the gray boxes just above the data values. If one girl spends 6 hoursday on mybook and facespace, would she be considered an outlier. To test fertilizer
formulas, a scientist prepares three groups of. And they say the order isnt checked, and thats because im. To construct this diagram, we first draw an equal interval scale on which to make our box
plot. With minitab the user can analyze his data and improve his products and services.
Minitab keygen gives an essential, practical way to deal with incorporate quantifiable data, control that data, perceives examples and models, and a while later extrapolate answers to the current
issue. If 6 hoursday is an outlier, it will be more than 1. Let us understand the power of box plot through a series of examples. Com boxplot a plant fertilizer manufacturer wants to develop a
formula of fertilizer that yields the most increase in the height of plants.
This guide contains information for current faculty, staff, and students at kent state about statistical and qualitative data analysis software. Under the xvar filter, uncheck class 4 to remove this
class from the plot. We will see how to get descriptive statistics such as mean, median, mode, standard deviation, variance, min, max, and range. Choose the columns containing the variables to be
included in the analysis. For example, the box plot for boys may be lower or higher than the equivalent plot for girls. And they gave us a bunch of data points, and it says, if it. The whiskers
extend to the maximum and minimum data points within 1. Statistics for six sigma green belts with minitab and jmp. Home blog uncategorized how to create a box plot in minitab whats a box plot.
In this article, you will learn to create whisker and box plot in r programming. The secret for an autojob is to create a scheduled task to run minitab with a start in folder which includes a
startup. For example, running the code bellow will plot a boxplot of a hundred observation sampled from a normal distribution, and will then. One box plot is much higher or lower than another compare
3 and 4 this could suggest a difference between groups. R boxplot example boxplot usually refers to boxandwhisker plot, which is a popular method to show data by drawing a box around the 1st and 3rd
quartile, and the whiskers for the smallest. Using the time series plot in minitab to show changes in the. This lesson considers the fivenumber summary statistics the minimum, median, maximum, and
first and third quartiles including their calculations and interpretation, and how to use them to construct a boxplot to identify skewness and outliers. A box andwhisker plot or boxplot is a diagram
based on the fivenumber summary of a data set. Click on the simple boxplot, under single y variable upper left. You can highlight a specific item and rightclick to delete or export as pdf or html. It
is obviously possible to get r to draw boxplots our way, but it isnt. A box plot also called a box and whisker plot shows data using the middle value of the data and the quartiles, or 25% divisions
of the data. Box charts and box plots are often used to visually represent research data.
All torrents anime applications games movies music tv shows other documentaries xxx. Interpret the key results for boxplot minitab express. Either double click on one of the variable names or single
click on it then click on the select button. By default, a boxplot shows the median, interquartile range, range, and outliers for each group. Hover next to the plot to display the following
intellisense window. The median is a common measure of the center of your data. A boxplot in minitab consists of a box, whiskers, and extreme observations. We do this because we are tracking within
the dmaic stages. Add a variable name in the gray box just above the data values. Youll see in the example below that our radish boxplot looks a little.
I found each to be well though out a good presentation of materials to get someone a working knowledge of six sigma. If there is an even number of data items, then we need to get the average of the
middle numbers. Box plots, populations versus samples, and random sampling 4 this plot is drawn using our method of doing boxplots. Using the box and whisker plot in example 2, answer the following
The box plot is used to plot the distribution of a data set. It is obviously possible to get r to draw boxplots our way, but it isnt easy. Here we use some sample data for the quiz scores given in
section 2. Once again, exclude the median when computing the quartiles. Produce a histogram of residuals and a plot of residuals vs. Descriptive statistics and box plots the following links lead to
instruction videos on youtube. On the basic tab, select gender and current salary. Well, by default, minitab creates what is called a modified box plot. That can easily be done using the identify
function in r.
You will also learn to draw multiple box plots in a single plot. This demonstration corresponds to the introduction to. The box represents the inter quartile range iqr q3q1 where its left border also
called hinge corresponds to the first quartile q1 and the right border corresponds to the third quartile q3. A box plot is a graphical view of a data set which involves a center box containing 50% of
the data and whiskers which each represent 25% of the data. If one girl spends 6 hoursday on mybook and facespace, would she be. Minitab crack is a request and menudriven programming pack for
quantifiable examination.
Box plot using box plots to compare groups creating various styles of boxandwhiskers plots creating notched boxandwhiskers plots creating boxand. A boxandwhisker plot or boxplot is a diagram based on
the fivenumber summary of a data set. If we ask minitab to create a box plot for this data set, this is what we get. Put your data values in one of the columns of the minitab worksheet. Mac file
which will run the job and then shut down after itself. For example, running the code bellow will plot a boxplot of a hundred observation sampled from a normal distribution, and will then enable you
to pick the outlier point and have its label in this case, that number id plotted beside the point. Box plot discover 6 sigma online six sigma resources.
It is a type of graph which is used to show the shape of the. Multiplelevel groups are displayed as clustered boxes. Most students have a height that is between 66 and 72, but some. Use minitab to
examine the relationship between ages of students fathers and ages of their mothers. The only fault i would say is relying on minitab as a vehicle for. Box plot comparision and sample size issue
isixsigma. Scatterplots, matrix plots, boxplots, dotplots, histograms, charts, time series plots, etc. Most subjects have a resting heart rate that is between 64 and 80, but some subjects have heart
rates. Boxplot a boxplot is a way of summarizing a set of data measured on an interval scale. How come minitab s box plot looks different than our box plot. Last, but not least, remember that minitab
provides a support team staffed by professionals with expertise in the software, statistics, quality improvement, and computer systems.
And they gave us a bunch of data points, and it says, if it helps, you might drag the numbers around, which i will do, because that will be useful. How to calculate summary and boxplot in minitab
data with. A boxplot is another useful visualization for viewing how the data are distributed. In the minitab menu bar, go to graph and then click on time series plot in the time series plots
dialogue box see picture below choose with groups.
Minitab is a statistical program designed for data analysis. This lesson considers the fivenumber summary statistics the minimum, median, maximum, and first and third quartiles including their
calculations and interpretation. Aug 15, 2008 while he does suggest that it is a good idea to include the raw data in the box plot particularly when the plot looks odd i. Box plots, populations
versus samples, and random sampling. Statistics 10020 minitab lab 4 box plots in minitab boxplots are used to check the check a dataset for skew and to identify outliers. A quick introduction to
minitab statistical software this introduction to minitab is intended to provide you with enough information to get you started using the basic functionality of minitab. In a modified box plot, the
box is drawn just as in a standard box plot, but the whiskers are defined differently. Therefore, it is important to understand the difference between the two. For example, check the box for variance
in order to add it to the list of displayed summary statistics. For example, this boxplot of resting heart rates shows that the median heart rate is 71. To understand the different elements of a box
plot, you need to understand quartiles, interquartile. Here we use some sample data for the quiz scores given in. The worksheet must include multiple columns of numeric or datetime data and at least
one column of categorical data. It is full offline installer standalone setup of minitab 18.
For example, the following boxplot of the heights of students shows that the median height is 69. The strength of correlation on a scatter plot can be measured using a correlation coefficient. This
demonstration shows you how to create a box plot with minitab express. Choose graph boxplot, then select a boxplot based on your data and how you want the plot to look. Minitab 19 makro zum erstellen
eines boxplots, fehlerbalken oder. Using the time series plot in minitab to show changes in. In r, boxplot and whisker plot is created using the. It is a type of graph which is used to show the shape
of the distribution, its central value, and variability. Displaying summary statistics in a box plot using box plots to compare groups creating various styles of box andwhiskers plots creating
notched box andwhiskers plots creating box andwhiskers plots with varying widths creating box andwhiskers plots using ods graphics. From statistical process control to design of experiments, it
offers you.
Follow this up by looking at the items at a glance reports. You can get the trial version of minitab from their website for 30 days minitab statistical software minitab if you want to get the
singleuser licensed version. Most people looking for minitab 15 full version downloaded. As many other graphs and diagrams in statistics, box and whisker plot is widely used for solving data
problems. Oct 06, 2006 this example is taken from minitab boxplot manual. For example, the following boxplot displays interquartile range boxes that represent the heights for plants grown with three
different fertilizers. Com boxplot a plant fertilizer manufacturer wants to develop a formula of fertilizer that yields the most increase in the. Stemandleaf plot minitab instructions given on web
page. Math worksheets a box plot also called a box and whisker plot shows data using the middle value of the data and the quartiles, or 25% divisions of the data drawing a. For example, a scientist
creates a boxplot to compare the height of plants grown with two different fertilizers and a control group with no fertilizer. To construct this diagram, we first draw an equal interval scale. To
test fertilizer formulas, a scientist prepares three groups of 50 identical seedlings. Voiceover represent the following data using a boxandwhiskers plot.
Sep 04, 2016 you can get the trial version of minitab from their website for 30 days minitab statistical software minitab if you want to get the singleuser licensed version. Box and whisker plot
examples when it comes to visualizing a summary of a large data in 5 numbers, many realworld box and whisker plot examples can show you how to solve box plots. How come minitabs box plot looks
different than our box plot. For example, lets say that you want to identify the points above 30 in the following time series plot. From here on, well let r do things the way it wants to.
Fundamentally, this item is laid out especially for the prerequisites of six sigma specialists. Minitab 18 overview minitab statistical software is the ideal. A boxplot in minitab consists of a box,
whiskers, and. | {"url":"https://predenliroc.web.app/435.html","timestamp":"2024-11-13T16:09:28Z","content_type":"text/html","content_length":"18346","record_id":"<urn:uuid:34e3a840-34cb-45ff-9015-ec9f29c201ab>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00514.warc.gz"} |
MATH 141: Linear Analysis I Homework 07 solved
1. Verify that functions defined by a matrix is always linear. More precisely, verify that LA : R
2 → R
LA(~x) = A~x, with A =
a b
c d
, is linear.
2. Determine whether each of the following functions is linear or not. Explain your reasoning.
(a) T : R → R, T(x) = x
(b) T : R → R, T(x) = x + 3.
(c) T : R
2 → R
, T
x1 + 2×2
(d) T : R
2 → R
, T
2×2 − 1
3. Assume that T : R
2 → R
is a linear transformation. Let ~e1 =
and ~e2 =
. Draw the image of the
”half-shaded unit square” (shown below) under the given transformation T , and find the matrix A such
that T = LA.
© IOLA Team – iola.math.vt.edu
7. Consider the image given below and the transformation matrix ! = 2 0
0 −1.5
a. Sketch what will happen to the image under the transformation.
b. Describe in words what will happen to the image under the transformation.
c. Describe how you determined that happened. (What, if any, calculations did you do?
Did you make a prediction? How did you know you were right? etc.
8. Assume that T is a linear transformation and that !! = 1
0 and !! = 0
1 . For each part, find the standard
matrix A for T, and draw the image of the “half-shaded unit square” (shown below) under the given
a. !: ℝ! → ℝ! rotates points (about the origin) through –π/4 radians (clockwise)
b. !: ℝ! → ℝ! is a vertical shear that maps e1 into e1 – e2 but leaves the vector e2 unchanged
c. !: ℝ! → ℝ! first reflects points across the vertical axis and then rotates points π/2 radians
(a) T stretches by a factor of 2 in the x-direction and by a factor of 3 in the y-direction.
(b) T is a reflection across the line y = x.
(c) T is a rotation (about the origin) through −π/4 radians.
(d) T is a vertical shear that maps ~e1 into ~e1 − ~e2 but leaves the vector ~e2 unchanged.
4. For any given m×n matrix A, we are going to use the notation LA to denote the linear transformation that
A defines, i.e., LA : R
n → R
m : LA(~x) = A~x. For each given matrix, answer the following questions.
D =
0 −1 0
0 0 1/2
E =
F =
(a) Rewrite LD : R
n → R
m with correct numbers for m and n filled in for each matrix. Repeat for LE and
LF .
(b) Find some way to explain in words and/or graphically what this transformation does in taking vectors from R
n to R
m. You might find it helpful to try out a few input vectors and see what their image
is under the transformation.
(c) Is this transformation one-to-one? (Hint: Review problem#6 of Homework 06.)
i. If so, explain which properties of the matrix make the transformation one-to-one.
MATH 141: Linear Analysis I Homework 07
ii. If not, given an example of two different input vectors having the same image.
(d) Is this transformation onto? (Hint: Review problem#6 of Homework 06.)
i. If so, explain which properties of the matrix make the transformation onto.
ii. If not, given an example of a vector in R
m that is not the image of any vector in R | {"url":"https://codeshive.com/questions-and-answers/math-141-linear-analysis-i-homework-07-solved/","timestamp":"2024-11-08T17:40:20Z","content_type":"text/html","content_length":"102602","record_id":"<urn:uuid:e9d2efc9-7c84-46fd-abd3-a3ed25a810ac>","cc-path":"CC-MAIN-2024-46/segments/1730477028070.17/warc/CC-MAIN-20241108164844-20241108194844-00770.warc.gz"} |
Methods and apparatus for variable radix scalable modular multiplication
Methods and apparatus for Montgomery multiplication process a multiplier operand in k-bit radix-digits, wherein k corresponds to a radix r=2.sup.k. A multiplicand operand and a modulus are
processed word by word, and then additional bits of the multiplier operand are selected for processing. In a radix r=8 example, the multiplier operand is processed in 3 bit radix-8 digits. A
processing kernel is configured to preprocess the modulus and/or the multiplier operand so that at least some values can be obtained from lookup tables. | {"url":"https://patentalert.com/docs/003/z00322444.shtml","timestamp":"2024-11-09T10:38:09Z","content_type":"text/html","content_length":"9565","record_id":"<urn:uuid:07ff9ad2-c0c6-4cf6-92c0-9df1c09160ed>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00135.warc.gz"} |
Cartesian logarithmic
Cartesian logarithmic
Xwidth[l][/height[l]] or xx-scale[l][/y-scale[l]]
Give the width of the figure and the optional height. The lower-case version x is similar to X but expects an x-scale and an optional y-scale. Each axis with a logarithmic transformation requires l
after its size argument.
import numpy as np
import pygmt
# Create a list of x-values 0-100
xline = np.arange(0, 101)
# Create a list of y-values that are the square root of the x-values
yline = xline**0.5
# Create a list of x-values for every 10 in 0-100
xpoints = np.arange(0, 101, 10)
# Create a list of y-values that are the square root of the x-values
ypoints = xpoints**0.5
fig = pygmt.Figure()
region=[1, 100, 0, 10],
# Set a logarithmic transformation on the x-axis
# Set the figures frame and color as well as
# annotations, ticks, and gridlines
frame=["WSne+gbisque", "xa2g3", "ya2f1g2"],
# Set the line thickness to "2p", the color to "black", and the style to "dashed"
fig.plot(x=xline, y=yline, pen="2p,black,dashed")
# Plot the square root values on top of the line
# Use squares with a size of 0.3 centimeters, an "orange" fill and a "black" outline
# Symbols are not clipped if they go off the figure
fig.plot(x=xpoints, y=ypoints, style="s0.3c", fill="orange", pen="black", no_clip=True)
Total running time of the script: (0 minutes 0.156 seconds) | {"url":"https://www.pygmt.org/dev/projections/nongeo/cartesian_logarithmic.html","timestamp":"2024-11-03T00:17:40Z","content_type":"text/html","content_length":"22571","record_id":"<urn:uuid:599269a1-ec38-41ba-8c70-4d8c28ad667c>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00516.warc.gz"} |