OpenStax College Physics for AP® Courses, Chapter 22, Problem 61 (Problems & Exercises)
To see why an MRI utilizes iron to increase the magnetic field created by a coil, calculate the current needed in a 400-loop-per-meter circular coil 0.660 m in radius to create a 1.20-T field
(typical of an MRI instrument) at its center with no iron present. The magnetic field of a proton is approximately like that of a circular current loop $0.650 \times 10^{-15} \textrm{ m}$ in radius
carrying $1.05 \times 10^4 \textrm{ A}$. What is the field at the center of such a loop?
Question licensed under CC BY 4.0
Final Answer
$2390 \textrm{ A}$
$1.01 \times 10^{13} \textrm{ T}$
Solution video
OpenStax College Physics for AP® Courses, Chapter 22, Problem 61 (Problems & Exercises)
Video Transcript
This is College Physics Answers with Shaun Dychko. We're going to find the current that's needed in an MRI machine to create a magnetic field of 1.2 Tesla but assuming that there is no iron core and
that we just have free space. So we use the permeability of free space instead of the permeability of some ferromagnetic material, which would be much higher. So we divide both sides
of this formula for the magnetic field inside of a solenoid by mu naught and then the number of turns per meter, n. Then we have I equals B over mu naught n. So that's 1.2 Tesla divided by four pi
times ten to the minus seven Tesla meters per amp, times four hundred loops per meter. This gives 2390 amps, which is a massively high current, and that's why they like to use an iron core inside
the solenoid of an MRI machine: it increases the permeability and thereby decreases the current needed. They also use superconducting wires because, even with the increased permeability, the
current is still large, and superconducting wires reduce the power loss and the amount of heat generated. This question also asks us to find out what
magnetic field is produced by the spin of a proton. So the spin can be modeled as a circular loop with this current and this radius, and so we plug those numbers into our mu naught I over two r
formula for the magnetic field due to a current-carrying loop. So that's permeability of free space times 1.05 times ten to the four amps, divided by two times 0.650 femtometers, which is 0.650 times ten
to the minus fifteen meters. This gives 1.01 times ten to the thirteen Tesla.
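As a quick numerical check, both calculations in the transcript can be reproduced in a few lines. This is our own sketch (the variable names are not from the source):

```python
import math

mu0 = 4 * math.pi * 1e-7  # permeability of free space, in T*m/A

# Part (a): solenoid field B = mu0*n*I, so I = B/(mu0*n)
B, n = 1.20, 400.0        # field in tesla, loops per metre
I = B / (mu0 * n)
print(round(I))           # → 2387, i.e. 2390 A to three significant figures

# Part (b): field at the centre of a current loop, B = mu0*I/(2*r)
I_p, r_p = 1.05e4, 0.650e-15   # proton-loop current (A) and radius (m)
B_p = mu0 * I_p / (2 * r_p)
print(f"{B_p:.3g}")       # → 1.01e+13 tesla
```

Both values agree with the final answers quoted above.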
Two families of circulant nut graphs
A circulant nut graph is a non-trivial simple graph whose adjacency matrix is a circulant matrix of nullity one such that its non-zero null space vectors have no zero elements. The study of circulant
nut graphs was originally initiated by Ba\v{s}i\'c et al.\ [Art Discrete Appl.\ Math.\ 5(2) (2021) \#P2.01], where a conjecture was made regarding the existence of all the possible pairs $(n, d)$ for
which there exists a $d$-regular circulant nut graph of order $n$. Later on, it was proved by Damnjanovi\'c and Stevanovi\'c [Linear Algebra Appl.\ 633 (2022) 127--151] that for each odd $t \ge 3$
such that $t\not\equiv_{10}1$ and $t\not\equiv_{18}15$, the $4t$-regular circulant graph of order $n$ with the generator set $\{ 1, 2, 3, \ldots, 2t+1 \} \setminus \{t\}$ must necessarily be a nut
graph for each even $n \ge 4t + 4$. In this paper, we extend these results by constructing two families of circulant nut graphs. The first family comprises the $4t$-regular circulant graphs of order
$n$ which correspond to the generator sets $\{1, 2, \ldots, t-1\} \cup \left\{\frac{n}{4}, \frac{n}{4} + 1 \right\} \cup \left\{\frac{n}{2} - (t-1), \ldots, \frac{n}{2} - 2, \frac{n}{2} - 1 \right\}
$, for each odd $t \in \mathbb{N}$ and $n \ge 4t + 4$ divisible by four. The second family consists of the $4t$-regular circulant graphs of order $n$ which correspond to the generator sets $\{1, 2, \
ldots, t-1\} \cup \left\{\frac{n+2}{4}, \frac{n+6}{4} \right\} \cup \left\{\frac{n}{2} - (t-1), \ldots, \frac{n}{2} - 2, \frac{n}{2}-1 \right\}$, for each $t \in \mathbb{N}$ and $n \ge 4t + 6$ such
that $n \equiv_{4} 2$. We prove that all of the graphs which belong to these families are indeed nut graphs, thereby fully resolving the $4t$-regular circulant nut graph order--degree existence
problem whenever $t$ is odd and partially solving this problem for even values of $t$ as well.
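The construction in the first family can be checked numerically on a small instance. The sketch below is ours, not the paper's; it uses the standard circulant eigenvalue formula, and the fact that, since lambda_j = lambda_{n-j}, a simple zero eigenvalue of a circulant graph can only occur at j = 0 or j = n/2, where the eigenvector (all-ones, or alternating +1/-1) has no zero entries:

```python
import math

def circulant_eigenvalues(n, gens):
    """Eigenvalues of Circ(n, S): lambda_j = sum over s in S of 2*cos(2*pi*j*s/n)."""
    return [sum(2 * math.cos(2 * math.pi * j * s / n) for s in gens)
            for j in range(n)]

def nut_check(n, gens, tol=1e-9):
    """Nullity one, with a kernel eigenvector that is nowhere zero."""
    lam = circulant_eigenvalues(n, gens)
    zeros = [j for j, v in enumerate(lam) if abs(v) < tol]
    return len(zeros) == 1 and zeros[0] in (0, n // 2)

# First family, t = 3, n = 16:
# S = {1, ..., t-1} U {n/4, n/4 + 1} U {n/2 - (t-1), ..., n/2 - 1} = {1, 2, 4, 5, 6, 7}
print(nut_check(16, [1, 2, 4, 5, 6, 7]))  # → True
```

The same check passes for the second family's smallest instance, t = 1 and n = 10, with S = {3, 4}.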
Use Ratio to Solve Scale Drawing Problems
In this activity, we will use ratio to find missing lengths in scale drawings.
Scale drawings are used whenever designs are made, from toy cars that are scale models of actual cars, to high-rise buildings that have models made before they are built.
The scale in a scale drawing is often shown as a ratio.
A ratio of 1:100 means that 1 cm on the drawing represents 100 cm in actual real life.
This could also be written as 1 cm:1 m or 1 cm to 1 m.
This means that a scale of 1 cm to 5 m could also be written as 1:500 because there are 100 cm in one metre.
Let's look at a question!
An American car company want to make a scale drawing to celebrate their new car!
The scale of the model will be 1:200
If the actual car height is 1.6 metres, what would the scale model height be?
The ratio is 1 cm to 200 cm
The actual height is 1.6 m
We need to convert 1.6 m into cm: 1.6 x 100 = 160 cm
How did we get from 200 cm to 160 cm? We can work that out by doing 160 ÷ 200 = 0.8
So, we multiplied 200 cm by 0.8 to get 160 cm.
We need to do the same thing to the other side of the ratio: 1 x 0.8 = 0.8 cm
The model height will be 0.8 cm
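In general, converting an actual length to a length on a 1:k scale drawing is a single division, once both lengths are in the same units. A small illustration (the helper name is our own):

```python
def model_length(actual, k):
    """Length on a 1:k scale drawing, in the same units as `actual`."""
    return actual / k

actual_cm = 1.6 * 100                # convert 1.6 m to cm first
print(model_length(actual_cm, 200))  # → 0.8, so the model is 0.8 cm tall
print(model_length(500, 500))        # → 1.0, the "1 cm to 5 m" scale from above
```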
Let's try some questions like this!
Two-dimensional scaling limits via marked nonsimple loops
We postulate the existence of a natural Poissonian marking of the double (touching) points of SLE [6] and hence of the related continuum nonsimple loop process that describes macroscopic cluster
boundaries in 2D critical percolation. We explain how these marked loops should yield continuum versions of near-critical percolation, dynamical percolation, minimal spanning trees and related plane
filling curves, and invasion percolation. We show that this yields for some of the continuum objects a conformal covariance property that generalizes the conformal invariance of critical systems. It
is an open problem to rigorously construct the continuum objects and to prove that they are indeed the scaling limits of the corresponding lattice objects.
• Conformal covariance
• Finite size scaling
• Minimal spanning tree
• Near-critical
• Off-critical
• Percolation
• Scaling limits
Estimate nonlinear grey-box model parameters
sys = nlgreyest(data,init_sys) estimates the parameters of a nonlinear grey-box model, init_sys, using time-domain data data. data can be in the form of a timetable, a comma-separated pair of numeric
matrices, or a time-domain iddata object.
Selectively Estimate Parameters of Nonlinear Grey-Box Model
Load data.
load twotankdata
z = iddata(y,u,0.2,'Name','Two tanks');
The data contains 3000 input-output data samples of a two-tank system. The input is the voltage applied to a pump, and the output is the level of liquid in the lower tank.
Specify file describing the model structure for a two-tank system. The file specifies the state derivatives and model outputs as a function of time, states, inputs, and model parameters. For this
example, use a MEX file.
Specify model orders [ny nu nx].
Specify initial parameters (Np = 6).
Parameters = {0.5;0.0035;0.019; ...
Specify initial states.
InitialStates = [0; 0.1];
Specify model as continuous.
Ts = 0;
Create idnlgrey model object.
nlgr = idnlgrey(FileName,Order, ...
Parameters,InitialStates,Ts, ...
Name="Two tanks");
Set some parameters as constant.
nlgr.Parameters(1).Fixed = true;
nlgr.Parameters(4).Fixed = true;
nlgr.Parameters(5).Fixed = true;
Estimate the model parameters.
nlgr = nlgreyest(z,nlgr)
nlgr =
Continuous-time nonlinear grey-box model defined by 'twotanks_c' (MEX-file):
dx/dt = F(t, x(t), u(t), p1, ..., p6)
y(t) = H(t, x(t), u(t), p1, ..., p6) + e(t)
with 1 input(s), 2 state(s), 1 output(s), and 3 free parameter(s) (out of 6).
Name: Two tanks
Estimated using Solver: ode45; Search: lsqnonlin on time domain data "Two tanks".
Fit to estimation data: 97.34%
FPE: 2.425e-05, MSE: 2.42e-05
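Conceptually, nlgreyest simulates the model for candidate values of the free parameters and minimizes a loss on the output error. The following self-contained Python sketch is not part of the toolbox; the toy model dx/dt = -a*x and all names are our own illustration, with a crude grid search standing in for a real solver and search method:

```python
import math

# Toy grey-box model dx/dt = -a*x with one free parameter a.
# "Measured" data generated with the true value a = 0.5.
dt, steps, x0, a_true = 0.1, 50, 1.0, 0.5

def simulate(a):
    """Sampled solution x(k*dt) of dx/dt = -a*x with x(0) = x0."""
    return [x0 * math.exp(-a * dt * k) for k in range(steps)]

y_meas = simulate(a_true)

def sse(a):
    """Sum of squared output errors: the loss a prediction-error method minimizes."""
    return sum((y - m) ** 2 for y, m in zip(y_meas, simulate(a)))

# Crude grid search over the free parameter; nlgreyest instead couples an ODE
# solver (e.g. ode45) with a numerical search method (e.g. lsqnonlin).
a_hat = min((k * 0.001 for k in range(1, 2001)), key=sse)
print(a_hat)  # → 0.5
```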
Estimate a Nonlinear Grey-Box Model Using Specific Options
Create estimation option set for nlgreyest to view estimation progress, and to set the maximum iteration steps to 50.
opt = nlgreyestOptions;
opt.Display = 'on';
opt.SearchOptions.MaxIterations = 50;
Load data.
load dcmotordata
z = iddata(y,u,0.1,'Name','DC-motor');
The data is from a linear DC motor with one input (voltage), and two outputs (angular position and angular velocity). The structure of the model is specified by dcmotor_m.m file.
Create a nonlinear grey-box model.
file_name = 'dcmotor_m';
Order = [2 1 2];
Parameters = [1;0.28];
InitialStates = [0;0];
init_sys = idnlgrey(file_name,Order,Parameters,InitialStates,0, ...
Estimate the model parameters using the estimation options.
sys = nlgreyest(z,init_sys,opt);
Input Arguments
data — Time-domain data
timetable | numeric matrix pair | iddata object
Uniformly sampled estimation data, specified as a timetable, a comma-separated matrix pair, or an iddata object, as the following sections describe. data has the same input and output dimensions as init_sys.
By default, the software sets the sample time of the model to the sample time of the estimation data.
Timetable
Specify data as a timetable that uses a regularly spaced time vector. data contains variables representing input and output channels.
Comma-Separated Matrix Pair
Specify data as a comma-separated pair of numeric matrices that contain input and output time-domain signal values. Use this data specification only for discrete-time systems.
Data Object
Specify data as a time-domain iddata object containing the input and output signal values.
For more information about working with estimation data types, see Data Domains and Data Types in System Identification Toolbox.
Intersample Behavior
If you specify the intersample behavior of data as 'bl' (band-limited) and init_sys is a continuous-time model, the software treats data as first-order hold ('foh') interpolated for estimation. To
specify intersample behavior, use one of the following methods:
• For timetables and numeric matrices, specify options to include the InputInterSample option. For example, to set the intersample behavior to 'bl', use the following commands:
opt = nlgreyestOptions('InputInterSample','bl');
sys = nlgreyest(data,init_sys,opt)
• For iddata objects, specify the intersample behavior by directly specifying data.InterSample, as in the following command:
data.InterSample = 'bl';
init_sys — Constructed nonlinear grey-box model
idnlgrey object
Constructed nonlinear grey-box model that configures the initial parameterization of sys, specified as an idnlgrey object. init_sys has the same input and output dimensions as data. Create init_sys
using idnlgrey.
options — Estimation options
nlgreyestOptions option set
Estimation options for nonlinear grey-box model identification, specified as an nlgreyestOptions option set.
Output Arguments
sys — Estimated nonlinear grey-box model
idnlgrey object
Nonlinear grey-box model with the same structure as init_sys, returned as an idnlgrey object. The parameters of sys are estimated such that the response of sys matches the output signal in the
estimation data.
Information about the estimation results and options used is stored in the Report property of the model. Report has the following fields:
Report Field Description
Status Summary of the model status, which indicates whether the model was created by construction or obtained by estimation
Method Name of the simulation solver and the search method used during estimation.
Fit Quantitative assessment of the estimation, returned as a structure. See Loss Function and Model Quality Metrics for more information on these quality metrics. The structure has these fields:
• FitPercent — Normalized root mean squared error (NRMSE) measure of how well the response of the model fits the estimation data, expressed as the percentage fitpercent = 100(1-NRMSE)
• LossFcn — Value of the loss function when the estimation completes
• MSE — Mean squared error (MSE) measure of how well the response of the model fits the estimation data
• FPE — Final prediction error for the model
• AIC — Raw Akaike Information Criteria (AIC) measure of model quality
• AICc — Small-sample-size corrected AIC
• nAIC — Normalized AIC
• BIC — Bayesian Information Criteria (BIC)
Parameters Estimated values of the model parameters. Structure with the following fields:
Field Description
InitialValues Structure with values of parameters and initial states before estimation.
ParVector Value of parameters after estimation.
Free Logical vector specifying the fixed or free status of parameters during estimation.
FreeParCovariance Covariance of the free parameters.
X0 Value of initial states after estimation.
X0Covariance Covariance of the initial states.
OptionsUsed Option set used for estimation. If no custom options were configured, this is a set of default options. See nlgreyestOptions for more information.
RandState State of the random number stream at the start of estimation. Empty, [], if randomization was not used during estimation. For more information, see rng.
DataUsed Attributes of the data used for estimation. Structure with the following fields:
Field Description
Name Name of the data set.
Type Data type — For idnlgrey models, this is set to 'Time domain data'.
Length Number of data samples.
Ts Sample time. This is equivalent to data.Ts.
InterSample Input intersample behavior. One of the following values:
• 'zoh' — Zero-order hold maintains a piecewise-constant input signal between samples.
• 'foh' — First-order hold maintains a piecewise-linear input signal between samples.
• 'bl' — Band-limited behavior specifies that the continuous-time input signal has zero power above the Nyquist frequency.
The value of InterSample has no effect on estimation results for discrete-time models.
InputOffset Empty, [], for nonlinear estimation methods.
OutputOffset Empty, [], for nonlinear estimation methods.
Termination Termination conditions for the iterative search used for prediction error minimization, returned as a structure with these fields:
• WhyStop — Reason for terminating the numerical search
• Iterations — Number of search iterations performed by the estimation algorithm
• FirstOrderOptimality — ∞-norm of the gradient search vector when the search algorithm terminates
• FcnCount — Number of times the objective function was called
• UpdateNorm — Norm of the gradient search vector in the last iteration. Omitted when the search method is 'lsqnonlin' or 'fmincon'.
• LastImprovement — Criterion improvement in the last iteration, expressed as a percentage. Omitted when the search method is 'lsqnonlin' or 'fmincon'.
• Algorithm — Algorithm used by the 'lsqnonlin' or 'fmincon' search method. Omitted when other search methods are used.
For estimation methods that do not require numerical search optimization, the Termination field is omitted.
For more information, see Estimation Report.
Version History
Introduced in R2015a
R2022b: Time-domain estimation data is accepted in the form of timetables and matrices
Most estimation, validation, analysis, and utility functions now accept time-domain input/output data in the form of a single timetable that contains both input and output data or a pair of matrices
that contain the input and output data separately. These functions continue to accept iddata objects as a data source as well, for both time-domain and frequency-domain data.
R2018a: Advanced Options are deprecated for SearchOptions when SearchMethod is 'lsqnonlin'
Specification of lsqnonlin-related advanced options is deprecated, including the option to invoke parallel processing when estimating using the lsqnonlin search method, or solver, in Optimization
University of Edinburgh
On the number of infinite geodesics at exceptional directions in the KPZ class
Probability Seminar
7th February 2024, 3:30 pm – 4:30 pm
Fry Building, G.12
The KPZ class consists of many models of random interface growth. In many cases, the dynamics of these models can be studied via variational forms that give rise to metric-like spaces, which, in turn,
can be studied through geodesics. Infinite geodesics in the KPZ class have been studied intensively over the past 30 years. One central question is the following:
Given a direction, how many infinite geodesics that are asymptotically going in that direction are there?
In this talk I shall discuss what we know about infinite geodesics in the KPZ class and some recent developments regarding the question above.
Based on several works with Marton Balazs, Timo Seppalainen and Evan Sorensen.
Foundation of Genetic Programming
• Arity Number of arguments that a function has. For example, + usually adds two numbers together. The numbers are its arguments. That is, + is a binary function and so has an arity of 2.
• Binary trees In binary trees, each function (internal node) has exactly two inputs. So, in a tree with l nodes, the number of functions = ⌊l/2⌋ and the number of terminals = ⌈l/2⌉. Since l is odd, number of
terminals = number of functions + 1.
• Binomial distribution The probability of exactly k independent events each with probability p in M trials is C(M,k) p^k (1-p)^(M-k), where C(M,k) = M!/(k! (M-k)!). The mean of the
binomial distribution is pM and its variance is p(1-p)M.
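The probability mass function, mean, and variance above can be checked directly with a short sketch (our own code, using the standard formula):

```python
from math import comb

def binomial_pmf(k, M, p):
    """Probability of exactly k successes in M independent trials."""
    return comb(M, k) * p**k * (1 - p)**(M - k)

M, p = 10, 0.3
mean = sum(k * binomial_pmf(k, M, p) for k in range(M + 1))
var = sum((k - mean) ** 2 * binomial_pmf(k, M, p) for k in range(M + 1))
print(mean, var)  # close to p*M = 3.0 and p*(1-p)*M = 2.1
```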
• Boolean Dealing only with the two logical values: true (1) and false (0). Named after the mathematician George Boole.
• Building block hypothesis [Goldberg, 1989, page 41] talking of binary bit string genetic algorithms says ``Short, low order, and highly fit schemata are sampled, recombined'' (crossed over),
``and resampled to form strings of potentially higher fitness. In a way, by working with these particular schemata (the building blocks), we have reduced the complexity of our problem; instead of
building high-performance strings by trying every conceivable combination, we construct better and better strings from the best partial solutions of past samplings.'' ``Just as a child creates
magnificent fortresses through the arrangement of simple blocks of wood'' (building blocks), ``so does a genetic algorithm seek near optimal performance through the juxtaposition of short,
low-order, high-performance schemata, or building blocks.'' Note [Goldberg, 1989] suggests that building blocks are highly fit schemata with only a few defined bits (low order), and that these
are close together (short).
• Convergence Precisely every individual in the population is identical. Such convergence is seldom seen in genetic programming using Koza's subtree swapping crossover. However, populations often
stabilise after a time, in the sense that the best programs all have a common ancestor and their behaviour is very similar (or identical) both to each other and to that of high fitness programs
from the previous (and future?) generations. Often the term ``convergence'' is loosely used.
• Deception Deception is where the gradient in the fitness landscape leads away from locations with the best possible fitness. This is because the genetic algorithm, genetic programming or other
search technique, is ``deceived'' into being attracted towards local peaks rather than the global best-fitness location.
• Defining Length L(H) The maximum distance between two defining (i.e. non-#) symbols in schema H. In tree GP schemata, L(H) is the number of links in the minimum tree fragment including all the
non-= symbols within a schema H
• Diploid A species is diploid if its genetic chromosomes are paired. (In general, each cell has multiple pairs of chromosomes). A species is haploid if its chromosomes are not paired. In diploid
species, the chromosome pairs separate during sexual reproduction. When they come together again in the child, one chromosome in each pair comes from the mother and the other from the father.
• Disruption If the child of an individual that matches schema H does not itself match H, the schema is said to have been disrupted.
• Diversity Variation between individuals in the population. Typically diversity refers to genetic variation. In bit string genetic algorithms, diversity may be measured by Hamming distance (i.e.
counting the number of bits that are different) between bit strings. [Koza, 1992] defines ``variety'' in the population by the number of unique programs it contains, but this measure takes no
notice of the fact that the behaviour of genetically different programs can be very similar or even identical.
• Dominance When one gene overrides the effect of another. Typically this refers to genes at the same locus of a pair of chromosomes (see diploid). A potential ``use'' is one gene acts as a backup
for the other, taking on its function should the first one be damaged (e.g. by a mutation).
• Effective fitness The effective fitness of a schema is its ``fitness'' adjusted to take crossover and mutation into account. It can be thought of as the fitness that the schema would need to have,
if crossover and mutation were absent, in order to grow or shrink at the rate actually observed: f_eff(H,t) = (alpha(H,t)/alpha(H,t-1)) * f(t)
• Eigenvector An eigenvector v of a square matrix M has the property that multiplying M by it yields another vector which is parallel to v. That is, vM = lambda v where lambda is a (possibly)
complex number known as the eigenvalue of v.
• Enumeration A search in which all possible solutions are tried in sequence.
• Epistasis Non-linear interaction between genes.
• Genetic drift Genes are inherently digital, in that an individual has an integer number of copies of each gene, and its children inherit an integer number of them. That is, an individual cannot
have half a gene. While fitness may play a part in how many children inherit how many copies of a gene, in each individual child there is a large element of chance. Genetic drift is where random
(i.e. not related to fitness) fluctuations in the genetic material in the population occur and lead to macroscopic changes to the population. Naturally genetic drift is more important in smaller
rather than larger populations.
As a crude example, consider a rare gene: say it occurs in 1% of the population but has no effect on fitness. So we expect it to occur in 1% of the next generation, and 1% of the
next and so on. However, what happens if there are only 100 individuals in the population? In the current population there is one individual with the gene. In the next population (for simplicity
also of size 100) we expect on average there to be one individual with the gene. However, it is quite likely there will be two or more, or that there will be none. Obviously if there are none,
then there can be none in the next generation. While if there are two then on average the third population will also have twice as many as in the initial population. So, even though this gene is
unrelated to fitness, its concentration is liable to drift with time. The number of generations for significant change to occur randomly is O(|population|^2).
• Genotype The complete list of genes contained within an individual. In general, the position of the genes, as well as which ones are present, may be important.
• Haploid Unlike diploid species, the chromosomes are not paired (although each cell may have more than one chromosome).
• Hits A program scores a ``hit'' when its output is sufficiently close or equal to the target value for a particular test input. In some problems a program's fitness may be given by the number of
hits it collects when run on the entire test set.
• Instantiate An instance of. For example, an individual program may match a similarity template, such as a schema, one or more times. Each match is an instantiation (instance) of the schema.
• Leaf Outermost part of program tree. In contrast to functions or internal nodes, leaves have no arguments. Also called a terminal. In many cases in GP leaves are the inputs to the program.
• Length of a schema The total number of nodes in the schema is called the length N(H) of a schema H. N(H) is also equal to the number of nodes in the programs matching H.
• Linear tree A program composed of a variable number of unary functions and a single terminal. Note linear tree GP differs from bit string genetic algorithms since a population may contain
programs of different lengths and there may be more than two types of functions or more than two types of terminals.
• Local peaks These are locations in the fitness landscape where the fitness is below the best possible, and where every movement away from them leads to other locations, all of which have lower fitness.
• Loci Plural of locus.
• Locus A specific point of the chromosome. In bit string genetic algorithms it is a particular bit. More generally, it is a specific location in the genetic material that can take on one of a
number of values, known as alleles. Commonly in genetic algorithms a locus controls a specific parameter, but many-to-many mappings between gene loci and parameters are possible (and believed to
be prevalent in real life).
• Markov process A discrete random process where the chance of moving to another state depends only on the current state of the process and not on any earlier history.
• Mating pool While we mainly treat the population as a whole, an equivalent approach is to separate from it and place into a mating pool those individuals that will have children. Fitness
selection is performed separately before genetic operations. Children are created from parents chosen only from the mating pool.
• Monte Carlo In a Monte Carlo search, points in the search space are sampled randomly. If sufficient independent points are sampled, a reasonable estimate of the whole search space can be made.
• Mutation In Nature, a mutation is viewed as a mistake when DNA is copied. E.g. when a cell divides to create two cells. Such an error introduces a random change to the DNA. Such changes are
usually considered to be harmful, and Nature has elaborate error-detection techniques which considerably reduce the rate at which errors are introduced. Note if the cell with the error survives
then all the cells it produces will also have the error. If the cell is a germ cell, i.e. used to create new individuals, then the children produced using it will inherit the error.
In artificial evolution, e.g. GAs and GP, mutation is used to mean any inherited random change to the genes. However, mutation is not used to describe crossover, i.e. the process whereby children
are created from a combination of genes from two (or more) parents in which genes are copied correctly. In GAs it is common to use mutation (i.e. random changes) in combination with crossover. In
traditional genetic algorithms mutation means choosing at random bits in the chromosome and flipping them. Each bit is considered independently and a decision is made with low probability p_mut
whether it is to be changed or not. Note the number of bits changed is variable, and lies between 0 and N (where N is the number of bits in the chromosome) but is on average p_mut N. It is commonly
recommended to set p_mut to about 1/N [Back, 1996].
In evolutionary programming, evolutionary strategies and real-valued genetic algorithms, it is common to apply mutation to every gene by adding a small random value to the gene. In genetic
programming mutation is becoming increasingly often used. However, there are many different types of random changes that can be made to programs (see [Langdon,1998, pages 34-36]). Also, some
authors recommend using a number of different types of mutations in combination with each other [Angeline, 1998]. One unfortunate source of confusion is p_mut. In GAs it means the chance of
changing each gene per generation. In some cases in GP, p_mut is used to mean the chance that a child will be produced using mutation, rather than crossover. That is, treating p_mut as
analogous to p_xo. Here we use p_m' to denote this second meaning.
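A minimal per-bit mutation operator in the GA style described above might look like the following (illustrative code; the per-bit probability is the one the entry calls the mutation rate):

```python
import random

def bitflip_mutation(bits, p_mut):
    """Flip each bit independently with probability p_mut."""
    return [b ^ 1 if random.random() < p_mut else b for b in bits]

parent = [0] * 100
child = bitflip_mutation(parent, 1.0 / len(parent))  # common choice: 1/N per bit
print(len(child))  # → 100; on average one bit has been flipped
```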
• Order O(H) The number of defining symbols in schema H.
• Parity problems Benchmark problems widely used in GP but inherited from the artificial neural network community. Parity is calculated by summing all the binary inputs and reporting if the sum is
odd or even. This is considered difficult because: (1) a very simple artificial neural network cannot solve it and (2) all inputs need to be considered, and a change to any one of them changes the output.
• Polyploid A species with more than two sets of chromosomes. See diploid and haploid.
• Premature convergence This loosely means that something has gone wrong. More precisely, that the population has converged (every individual in the population is identical, see convergence) to a
suboptimal solution. Often the term ``premature convergence'' is loosely used.
• Propagation The inheritance of characteristics of one generation by the next. For example, a schema is propagated if individuals in the current generation match it and so do those in the next
generation. Those in the next generation may be (but don't have to be) children of parents who matched it.
• Recombination This is creating children by combining the genetic material from two (or more) parents. Effectively it is another name for crossover.
• Roulette-wheel selection The archetypal selection scheme for genetic algorithms. We imagine a biased roulette wheel where each individual in the population has its own slot and the width of its
slot is proportional to its fitness. That is, above-average individuals have wider-than-average slots. Individuals are selected to be parents of children in the next generation independently, one
at a time, by spinning the wheel and dropping a ball into it. The chance of drawing a particular individual is proportional to the width of its slot, i.e. proportional to its fitness. However,
other selection techniques, such as stochastic universal sampling [Back, 1996, page 120] or tournament selection, are often used in practice. This is because they have less stochastic noise,
are faster and easier to implement, and have a constant selection pressure [Blickle, 1996].
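A minimal sketch of the spin described above (my own code; as noted, real GA implementations often prefer stochastic universal sampling or tournament selection):

```python
import random

def roulette_select(population, fitnesses):
    """Pick one individual with probability proportional to its fitness."""
    total = sum(fitnesses)
    ball = random.uniform(0.0, total)   # where the ball drops on the wheel
    running = 0.0
    for individual, fit in zip(population, fitnesses):
        running += fit                  # slot width = fitness
        if ball <= running:
            return individual
    return population[-1]               # guard against floating-point rounding
```

Repeated spins return above-average individuals more often, but each spin is independent, which is the source of the stochastic noise mentioned above.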
• Schema A set of programs or bit strings that have some genotypic similarity. Usually the set is specified by defining a similarity template which members of the set must match. The template
specifies the fixed part, which the programs must match, and the variable part. Don't care symbols are used to define the variable part. In tree schemata, both the content and the shape of the
tree must be considered.
• Schemata Plural of schema.
• Stochastic matrix A matrix whose elements lie between 0 and 1 and where the sum of all the elements in each row is 1. A doubly stochastic matrix has the additional property that its transpose
(i.e. the matrix whose i,j element is equal to the j,i element of the original matrix) is also a stochastic matrix.
• Terminal Another name for leaf.
• Unary function A function that takes one argument.
• Wild-card symbol This is also known as a ``don't care'' symbol. These are =, * and #. They indicate how a schema can match an actual program (or bit string) in the population.
Glossary from Genetic Programming and Data Structures.
W.B.Langdon 19 October 2002
Eureka Math Grade 8 Module 3 Lesson 4
Engage NY Eureka Math 8th Grade Module 3 Lesson 4 Answer Key
Eureka Math Grade 8 Module 3 Lesson 4 Exercise Answer Key
In the diagram below, points R and S have been dilated from center O by a scale factor of r=3.
a. If |OR|=2.3 cm, what is |OR’|?
|OR’|=3(2.3 cm)=6.9 cm
b. If |OS|=3.5 cm, what is |OS’|?
|OS’|=3(3.5 cm)=10.5 cm
c. Connect the point R to the point S and the point R’ to the point S’. What do you know about the lines that contain segments RS and R’ S’?
The lines containing the segments RS and R’S’ are parallel.
d. What is the relationship between the length of segment RS and the length of segment R’ S’?
The length of segment R’ S’ is equal to the length of segment RS, times the scale factor of 3 (i.e., |R’ S’|=3|RS|).
e. Identify pairs of angles that are equal in measure. How do you know they are equal?
|∠ORS|=|∠OR'S'| and |∠OSR|=|∠OS'R'|. They are equal because they are corresponding angles of parallel lines cut by a transversal.
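The exercise answers above can be checked numerically; the sketch below uses coordinates of my own choosing (the lesson itself works from a measured diagram):

```python
# Numerical check of the dilation facts above.
r = 3
O = (0.0, 0.0)
R = (2.3, 0.0)          # so |OR| = 2.3 cm
S = (0.0, 3.5)          # so |OS| = 3.5 cm

def dilate(P, r):
    return (r * P[0], r * P[1])

def dist(P, Q):
    return ((P[0] - Q[0]) ** 2 + (P[1] - Q[1]) ** 2) ** 0.5

R1, S1 = dilate(R, r), dilate(S, r)
print(dist(O, R1))                 # 6.9  -> matches part (a)
print(dist(O, S1))                 # 10.5 -> matches part (b)
print(dist(R1, S1) / dist(R, S))   # 3 -> part (d): |R'S'| = 3|RS|
# Part (c): RS and R'S' are parallel, so their direction vectors have zero cross product.
cross = (S[0] - R[0]) * (S1[1] - R1[1]) - (S[1] - R[1]) * (S1[0] - R1[0])
print(abs(cross) < 1e-9)           # True
```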
Eureka Math Grade 8 Module 3 Lesson 4 Problem Set Answer Key
Students verify that the fundamental theorem of similarity holds true when the scale factor r is 0<r<1.
Question 1.
Use a piece of notebook paper to verify the fundamental theorem of similarity for a scale factor r that is 0<r<1.
✓ Mark a point O on the first line of notebook paper.
✓ Mark the point P on a line several lines down from the center O. Draw a ray, \(\overrightarrow{O P}\). Mark the point P’ on the ray and on a line of the notebook paper closer to O than you placed
point P. This ensures that you have a scale factor that is 0<r<1. Write your scale factor at the top of the notebook paper.
✓ Draw another ray, \(\overrightarrow{O Q}\), and mark the points Q and Q’ according to your scale factor.
✓ Connect points P and Q. Then, connect points P’ and Q’.
✓ Place a point, A, on the line containing segment PQ between points P and Q. Draw ray \(\overrightarrow{O A}\). Mark point A’ at the intersection of the line containing segment P’Q’ and ray \(\
overrightarrow{O A}\).
Sample student work is shown in the picture below:
a. Are the lines containing segments PQ and P’Q’ parallel lines? How do you know?
Yes, the lines containing segments PQ and P’Q’ are parallel. The notebook lines are parallel, and these lines fall on the notebook lines.
b. Which, if any, of the following pairs of angles are equal in measure? Explain.
i. ∠OPQ and ∠OP’Q’
ii. ∠OAQ and ∠OA’Q’
iii. ∠OAP and ∠OA’P’
iv. ∠OQP and ∠OQ’P’
All four pairs of angles are equal in measure because each pair consists of corresponding angles of parallel lines cut by a transversal. In each case, the parallel lines are line PQ and line P'Q',
and the transversal is the respective ray.
c. Which, if any, of the following statements are true? Show your work to verify or dispute each statement.
i. |OP’|=r|OP|
ii. |OQ’|=r|OQ|
iii. |P’A’|=r|PA|
iv. |A’Q’|=r|AQ|
All four of the statements are true. Verify that students have shown that the length of the dilated segment was equal to the scale factor multiplied by the original segment length.
d. Do you believe that the fundamental theorem of similarity (FTS) is true even when the scale factor is 0<r<1? Explain.
Yes, because I just experimentally verified the properties of FTS for when the scale factor is 0<r<1.
Question 2.
Caleb sketched the following diagram on graph paper. He dilated points B and C from center O.
a. What is the scale factor r? Show your work.
b. Verify the scale factor with a different set of segments.
|B’ C’|=r|BC|
c. Which segments are parallel? How do you know?
Segment BC and B’C’ are parallel. They lie on the lines of the graph paper, which are parallel.
d. Which angles are equal in measure? How do you know?
|∠OB’ C’|=|∠OBC|, and |∠OC’ B’|=|∠OCB| because they are corresponding angles of parallel lines cut by a transversal.
Question 3.
Points B and C were dilated from center O.
a. What is the scale factor r? Show your work.
b. If |OB|=5, what is |OB’|?
c. How does the perimeter of triangle OBC compare to the perimeter of triangle OB’C’?
The perimeter of triangle OBC is 12 units, and the perimeter of triangle OB’C’ is 24 units.
d. Did the perimeter of triangle OB’C’=r×(perimeter of triangle OBC)? Explain.
Yes, the perimeter of triangle OB’C’ was twice the perimeter of triangle OBC, which makes sense because the dilation increased the length of each segment by a scale factor of 2. That means that each
side of triangle OB’C’ was twice as long as each side of triangle OBC.
Eureka Math Grade 8 Module 3 Lesson 4 Exit Ticket Answer Key
Steven sketched the following diagram on graph paper. He dilated points B and C from point O. Answer the following questions based on his drawing.
Question 1.
What is the scale factor r? Show your work.
Question 2.
Verify the scale factor with a different set of segments.
|B’ C’|=r|BC|
Question 3.
Which segments are parallel? How do you know?
Segments BC and B’C’ are parallel since they lie on the grid lines of the paper, which are parallel.
Question 4.
Are ∠OBC and ∠OB’C’ right angles? How do you know?
The grid lines on graph paper are perpendicular, and since perpendicular lines form right angles, ∠OBC and ∠OB’C’ are right angles.
Section 14.6: Review for Chapter 14 - The Nature of Mathematics - 13th Edition
Section 14.6: Review for Chapter 14
Studying for a chapter examination is a personal process, one which nobody else can do for you. Simply take the time to review what you have done.
Here are the new terms in Chapter 14.
Average [14.2]
Bar graph [14.1]
Bell-shaped curve [14.3]
Bimodal [14.2]
Box plot [14.2]
Circle graph [14.1]
Classes [14.1]
Continuous distribution [14.3]
Correlation [14.4]
Cumulative frequency [14.3]
Deciles [14.2]
Descriptive statistics [14.2]
Fallacy of exceptions [14.5]
Frequency [14.1]
Frequency distribution [14.1]
Graph [14.1]
Grouped frequency distribution [14.1]
Histogram [14.1]
Inferential statistics [14.5]
Interval [14.1]
Least squares line [14.4]
Least squares method [14.4]
Line graph [14.1]
Linear correlation coefficient [14.4]
Mean [14.2]
Measures of central tendency [14.2]
Measures of dispersion [14.2]
Measures of position [14.2]
Median [14.2]
Mode [14.2]
Normal curve [14.3]
Pearson correlation coefficient [14.4]
Percentile [14.2]
Pictograph [14.1]
Pie chart [14.1]
Population [14.5]
Quartile [14.2]
Range [14.2]
Regression analysis [14.4]
Sample [14.5]
Scatter diagram [14.4]
Significance level [14.4]
Skewed distribution [14.3]
Standard deviation [14.2]
Statistics [overview]
Stem-and-leaf plot [14.1]
Target population [14.5]
Type I error [14.5]
Type II error [14.5]
Variance of a population [14.2]
Variance of a random variable [14.2]
Weighted mean [14.2]
z-score [14.3]
If you can describe the term, read on to the next one; if you cannot, then look it up in the text (the section number is shown in brackets).
Can you explain each of these important ideas in your own words?
Bar graphs, line graphs, circle graphs, and pictographs [14.1]
Misuses of graphs [14.1]
Measures of central tendency [14.2]
Mean, weighted mean, median, and mode [14.2]
Standard deviation [14.2]
Normal curve [14.3]
z-scores [14.3]
Linear correlation coefficient [14.4]
Slope and y-intercept of the least squares (or regression) line [14.4]
Descriptive and inferential statistics [14.5]
Sample vs. population [14.5]
Type I and Type II sampling error [14.5]
Next, make sure you understand the types of problems in Chapter 14.
Prepare a frequency distribution. [14.1]
Draw a bar graph. [14.1]
Draw a line graph. [14.1]
Draw a stem-and-leaf plot. [14.1]
Draw a circle graph. [14.1]
Draw a pictograph. [14.1]
Read and interpret relationships presented in graphical form. [14.1]
Recognize misuses of graphs. [14.1]
Find the mean, median, and mode for a set of data. [14.2]
Decide on an appropriate measure of central tendency. [14.2]
Find the range, standard deviation, and variance for a set of data. [14.2]
Find a cumulative distribution. [14.3]
Interpret information given in table form. [14.3]
Find the expected numbers for ranges of a normally distributed set of data. [14.3]
Determine the probability of falling within a certain range of a normally distributed
set of data. [14.3]
Find and use z-scores. [14.3]
Know the meanings associated with a normal curve. [14.3]
Draw a scatter diagram for a data set. [14.4]
Decide whether there is a significant linear correlation between two given variables. [14.4]
Find a regression line for a data set. [14.4]
Discuss the type of correlation for a given data set. [14.4]
Determine whether there is a significant linear correlation, given the number of items
and the correlation coefficient. [14.4]
Find the correlation coefficient for a given set of data. [14.4]
Choose an appropriate procedure for selecting an unbiased sample. [14.5]
Classify Type I and Type II errors. [14.5]
Make an inference about a population by taking a sample. [14.5]
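Several of the computations listed above (mean, median, mode, standard deviation, z-score) can be checked with Python's standard library; the data set below is a common small textbook example, not one from this chapter:

```python
import statistics

data = [2, 4, 4, 4, 5, 5, 7, 9]

mean = statistics.mean(data)       # 5
median = statistics.median(data)   # 4.5
mode = statistics.mode(data)       # 4
sigma = statistics.pstdev(data)    # population standard deviation: 2.0

# z-score: how many standard deviations a value lies from the mean
z = (9 - mean) / sigma             # (9 - 5) / 2 = 2.0
print(mean, median, mode, sigma, z)
```

Checking a hand computation this way is a quick safeguard while working the review problems.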
Once again, see if you can verbalize (to yourself) how to do each of the listed types of problems. Work all of Chapter 14 Review Questions (whether they are assigned or not).
Work through all of the problems before looking at the answers, and then correct each of the problems. The entire solution is shown in the answer section at the back of the text. If you worked the
problem correctly, move on to the next problem, but if you did not work it correctly (or you did not know what to do), look back in the chapter to study the procedure, or ask your instructor.
Finally, go back over the homework problems you have been assigned. If you worked a problem correctly, move on to the next problem, but if you missed it on your homework, then you should look back in
the text or talk to your instructor about how to work the problem. If you follow these steps, you should be successful with your review of this chapter.
We give all of the answers to the Chapter Review questions (not just the odd-numbered questions), so be sure to check your work with the answers as you prepare for an examination.
Vision in Cichlids :: by Dr. H.J. van der Meer
Every sense organ, including the eye, is characterized by its capacity to detect detail (resolution) and by a threshold value for the weakest incoming stimulus registered (sensitivity). Retinal
resolution and sensitivity are functionally connected to the density and size of the visual units and therefore compete with each other for available space. Retinal growth occurs by adding new neurons,
including cones, at the retinal margin and by increasing cell size (cone size). Only rods increase in number throughout the retina, in between the other neurons. As a consequence, both resolution
and sensitivity increase during growth, even though cone density decreases. Sometimes, specific cone types may benefit from an improved packing. For instance, single cones may degenerate after which
the liberated space comes available to the double cones.
Structural parameters
The size of the double cones (Sd) and of the single cones (Ss) can be derived from tangential sections through the retina at the level of the cone photoreceptors by measuring the areas of their
cross-sectional inner segments (ellipsoids). Planimetric cone density can be derived from the same tangential sections by counting the number of double cones per unit area (Dd) or single cones per unit area
(Ds). The angular density of the double cones (Hd) is the number of double cones per degree of visual angle. It is derived from the planimetric density as:

Hd = (2 π i / 360) √Dd

where i is the image-distance (the distance between the centre of the eye lens and the photoreceptor layer). The image-distance as a function of the lens radius is often taken as the mean value of
2.5·r (r = lens radius), conforming to Matthiessen's ratio (Sivak, 1990).
The packing of the cones (Z), viz. the fraction of retinal area occupied by cones, can be calculated by multiplying the planimetric cone density by cone size; it is expressed as the percentage of
retinal area occupied.
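As a rough numerical sketch of these structural parameters (the sample values, the unit conventions, and the exact form of the angular-density formula used here are my assumptions, not data from the article):

```python
import math

# Illustrative sketch only; none of these numbers come from the article.
r = 1.40                      # lens radius in mm (the reference size used in the text)
i = 2.5 * r                   # image-distance via Matthiessen's ratio -> 3.5 mm

Dd = 30.0                     # double cones per 0.01 mm^2 (hypothetical)
# Assumed form: Hd = (2*pi*i/360) * sqrt(double-cone density per mm^2)
Hd = (2 * math.pi * i / 360.0) * math.sqrt(Dd * 100.0)
print(round(Hd, 2))           # double cones per degree of visual angle

# Packing Z: percentage of retinal area occupied by cones
Sd, Ss = 50.0, 20.0           # double/single cone cross-sections in um^2 (hypothetical)
Ds = 10.0                     # single cones per 0.01 mm^2 (hypothetical)
area_um2 = 0.01 * 1e6         # 0.01 mm^2 expressed in um^2
Z = 100.0 * (Dd * Sd + Ds * Ss) / area_um2
print(Z)                      # percentage of the sampled area covered by cones
```

The sketch mainly shows how eye size enters the comparison: Hd grows with the image-distance i (and hence with lens radius), while Z depends only on density and cone size.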
In transverse sections of the retina the number of nuclei of retinal neurons (rods, cones, intermediate cells, ganglion cells) can be counted in a number of fields of a specific length throughout a
section in various areas of the retina (dorsal, ventral, caudal, rostral). The planimetric neuron densities (D) can then be calculated using:

where m is the number of nuclei per field, t is the thickness of the section, d is the approximate diameter of the nuclei concerned, and f is the thickness of the smallest fragments of recognized
nuclei, with an estimated value of f = 0.25·d. All planimetric densities are presented as numbers of cells per 0.01 mm².
Quantitative comparison of retinal structures is hampered by differences in eye size and intra-retinal variability. Inter-retinal comparison is usually based on the mean values of the structural
parameters collected from a range of similar locations in each retina. Clear correlations within each species between eye size and structural parameters facilitate interspecific comparison by
regression analysis. To test the significance of interspecific differences in structural features, multiple regression analysis is carried out using dummy variables to indicate the different species
(Kleinbaum & Kupper, 1978).
For simple presentation of interspecific comparison, specimens with eyes of equal size would be most convenient. As such material is hardly available, linear regression has been applied to estimate
the structural data for all species at a specific lens radius (r), usually of 1,40 mm.
Visual resolution
At a given eye-size, the best possible resolution is constrained by the distance between the cones. However, the cones are not similar: there are single cones and double cones, and they contain three
different visual pigments (Bowmaker, 1995). There is also a certain degree of convergent and divergent connections between the cones and the postsynaptic cells (Wagner, 1990), suggesting a number of
interacting cones involved in a photopic unit. The total number of cones involved in such a photopic unit may be influenced by the light conditions, but has most certainly a minimum value.
The resolution, expressed as the minimum separable angle a, can be given as a function of the angular density of the double cones according to:

a = d / Hd

where d represents the (minimum) number of double cones per photopic unit. Otten (1981) proposed a unit size of 5 cones (two double cones + one single cone). Van der Meer (1995) started from the same
unit size but considered a certain degree of divergence that reduced d from 2 to 0.5 due to overlapping receptive fields.
Sensitivity is meant here in the sense of threshold values. It is associated with the photon catching ability and the degree of convergence of the receptor cells. At low light levels, the degree of convergence of
the rods to postsynaptic cells plays a prominent role in the detection of photons (Lythgoe, 1979). The more the rods converge to one ganglion cell the larger the receptive field of the scotopic unit
and the lower the scotopic threshold becomes. The ratio between rods and ganglion cells (Cr) can be taken as a comparative measure for scotopic sensitivity, following Cr1 / Cr2 when comparing species
1 and 2.
At higher light levels the rods are screened off from the incoming light and the photopic system becomes effective. The similarity in retinal architecture among the haplochromine cichlids suggests
the same degree of cone convergence in the closely related species. In a comparative way the photon catching ability is of major importance. The photon catching ability of individual cones depends on
their size and the properties of their visual pigments. With respect to the spectral sensitivity, one can distinguish between the short wave sensitive single cones and the medium/long wave sensitive
double cones.
The size of the cones is taken as a comparative determinant for the photopic thresholds. The inner segments of the cone cells contain high concentrations of mitochondria. This raises the
refractive index of the cells relative to their surroundings, such that the cones can be regarded as wave guides (Snyder, 1975). In addition, the amount of mitochondria can be understood as an
indication for the amount of visual pigment in the outer segments. Photon catching ability is therefore best presented by the content of the cones that compose a photopic unit (Vc), according to:
Both resolution and photon catching ability can be improved by maximizing the spatial occupation of the receptors, which is expressed as the packing (Z). Maximization of the packing is thus a means to
optimise optical functions.
New Decimal Systems - Great Sandbox for Data Scientists and Mathematicians
We illustrate pattern recognition techniques applied to an interesting mathematical problem: The representation of a number in non-conventional systems, generalizing the familiar base-2 or base-10
systems. The emphasis is on data science rather than mathematical theory, and the style is that of a tutorial, requiring minimum knowledge in mathematics or statistics. However, some
off-the-beaten-path, state-of-the-art number theory research is discussed here, in a way that is accessible to college students after a first course in statistics. This article is also peppered with
mathematical and statistical oddities, for instance the fact that there are units of information smaller than the bit.
You will also learn how the discovery process works, as I have included research that I thought would lead me to interesting results, but did not. In all scientific research, only final, successful
results are presented, while actually most of the research leads to dead-ends, and is not made available to the reader. Here is your chance to discover these hidden steps, and my thought process!
The topic discussed is one of active research with applications to Blockchain or strong encryption. It is of interest to agencies such as the NSA or private research labs working on security issues,
which explains why it is not easy to find many references; some are probably classified documents. As far as I know, it is not part of any university curriculum either.
Indeed, the fear of reversibility (successful attacks on cryptographic keys using modern computer networks, new reverse-engineering algorithms, and distributed architecture) has led to the
development of quantum algorithms and quantum computers, as well as Blockchain.
All the results in this article were obtained without writing a single line of code, and replicable as I share my Excel spreadsheets.
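As a concrete starting point, the familiar digit-extraction recursion, which generalizes base 2 and base 10 and still runs for non-integer bases b, can be sketched as follows (my own code, not the article's spreadsheet):

```python
def digits(x, b, n):
    """First n digits of x in (0, 1) written in base b.

    For integer b this is the ordinary positional system; the same
    recursion x -> fractional part of b*x also works for non-integer b,
    one of the 'non-conventional' systems the article studies.
    """
    out = []
    for _ in range(n):
        x *= b
        d = int(x)        # the next digit is the integer part
        out.append(d)
        x -= d            # keep the fractional part for the next step
    return out

print(digits(0.625, 2, 3))    # [1, 0, 1]: 0.625 = 1/2 + 1/8
print(digits(0.625, 10, 3))   # [6, 2, 5]
```

The statistical questions in the article (digit distributions, autocorrelations, holes) are about sequences produced by loops of exactly this kind.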
Content of the article

1. General Framework
• Components of number representation systems
• General properties of these systems
• Examples of number representation systems
• Examples of patterns in digit distribution
• Purpose of this research
2. Defects found in the Logistic Map system
3. First step in designing a new system
• First version of new number representation system
• Holes, autocorrelations, and entropy (information theory)
4. Towards a more general, better, hybrid system
• Faulty digits, ergodicity, and high precision computing
• Finding the equilibrium distribution with the percentile test
• Central limit theorem, random walks, Brownian motions
• Data set and Excel computations
5. Related articles
You can read the full article here.
Gradient Descent Zigzag - Try Machine Learning
Gradient Descent Zigzag
When it comes to optimizing machine learning models, gradient descent is a popular and powerful optimization algorithm. However, traditional gradient descent algorithms may suffer from slow
convergence and high computational costs. This is where the concept of gradient descent zigzag comes into play. In this article, we will explore what gradient descent zigzag is, how it works, and how
it can improve the performance of optimization algorithms.
Key Takeaways:
• Gradient descent zigzag is an optimization algorithm that can improve the performance of traditional gradient descent algorithms.
• It achieves faster convergence and reduces computational costs by introducing zigzag movements in the parameter space.
• The zigzag movements allow the algorithm to escape from local minima and explore a larger portion of the parameter space.
Gradient descent zigzag is based on the concept of random restarts, where the optimization algorithm is restarted multiple times with different initial parameters. However, instead of completely
random restarts, gradient descent zigzag introduces controlled zigzag movements in the parameter space. These zigzag movements help the algorithm to explore different regions of the parameter space
and avoid getting trapped in local minima.
In traditional gradient descent, the algorithm updates the parameters in small steps towards the direction of steepest descent. This continuous movement towards the direction of steepest descent can
lead to slow convergence if the initial parameters are far away from the optimal solution. Additionally, it might take a considerable amount of time to explore the entire parameter space, especially
in high-dimensional problems.
To address these issues, gradient descent zigzag introduces periodic zigzag movements. After certain iterations, instead of continuously moving towards the direction of steepest descent, the
algorithm changes its direction and moves in the opposite direction for a certain number of iterations. This zigzag movement helps the algorithm to explore the parameter space more efficiently and
escape from local minima.
To illustrate how gradient descent zigzag works, let’s consider an example of optimizing a linear regression model. We have a dataset with a single input feature and want to find the optimal slope
and intercept for the linear regression line. Traditional gradient descent would update the slope and intercept in small steps towards the direction of steepest descent, but it might take a long time
to converge if the initial parameters are far from the optimal solution.
However, with gradient descent zigzag, the algorithm can explore the parameter space more efficiently. It periodically introduces zigzag movements that help the algorithm to move towards different
regions of the parameter space and avoid getting stuck in local minima. This leads to faster convergence and improved performance of the optimization algorithm.
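The linear-regression example above can be sketched with plain gradient descent; the data and hyper-parameters below are my own illustration, and the zigzag schedule itself is left out:

```python
# Minimal plain gradient descent for the one-feature linear regression
# example above (data and hyper-parameters are illustrative, not from the text).
xs = list(range(10))
ys = [2 * x + 1 for x in xs]          # true slope 2, intercept 1

m, b, lr, n = 0.0, 0.0, 0.01, len(xs)
for _ in range(5000):
    # gradients of mean squared error with respect to slope and intercept
    gm = (2 / n) * sum((m * x + b - y) * x for x, y in zip(xs, ys))
    gb = (2 / n) * sum((m * x + b - y) for x, y in zip(xs, ys))
    m -= lr * gm
    b -= lr * gb

print(round(m, 3), round(b, 3))       # close to (2.0, 1.0)
```

A zigzag variant as described in this article would periodically reverse the update direction for a few iterations before resuming descent; whether that actually helps depends on the loss surface, and on a convex problem like this one plain descent already converges.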
Benefits of Gradient Descent Zigzag
Gradient descent zigzag offers several benefits compared to traditional gradient descent algorithms:
• Faster convergence: The zigzag movements allow the algorithm to explore a larger portion of the parameter space in a shorter amount of time, leading to faster convergence.
• Improved exploration: By moving in zigzag patterns, the algorithm can escape from local minima and explore different regions of the parameter space, which can help find better solutions.
• Reduced computational costs: The controlled zigzag movements help to avoid unnecessary computations in specific areas of the parameter space, reducing the overall computational costs.
Data Comparisons
| Algorithm | Convergence Time |
|---|---|
| Traditional Gradient Descent | 10 minutes |
| Gradient Descent Zigzag | 5 minutes |

Table 1: Comparison of convergence times between traditional gradient descent and gradient descent zigzag. Gradient descent zigzag achieves faster convergence compared to traditional gradient descent.
Gradient descent zigzag is a powerful optimization algorithm that can enhance the performance of traditional gradient descent algorithms. By introducing controlled zigzag movements, it allows for
faster convergence, improved exploration, and reduced computational costs. With its ability to escape local minima and efficiently explore the parameter space, gradient descent zigzag is a valuable
technique for optimizing machine learning models.
Common Misconceptions
Gradient Descent Zigzag
Gradient descent is an optimization algorithm commonly used in machine learning to minimize the error of a model. However, there are several misconceptions people often have regarding gradient
descent and its zigzag behavior. Let’s debunk some of these common misconceptions below:
• Gradient descent always follows a smooth path towards the minimum:
□ In reality, gradient descent can exhibit a zigzag pattern, where it oscillates back and forth around the minimum point.
□ This behavior is due to the step size and the curvature of the loss function surface.
□ Zigzagging doesn’t imply that the algorithm is stuck or incorrect, but rather it is finding a route to the minimum within the search space.
• Zigzagging means the algorithm is inefficient:
□ While the zigzagging may seem inefficient, it’s actually a necessary part of the optimization process.
□ This behavior allows the algorithm to explore different regions of the parameter space and potentially escape local minima.
□ By taking a zigzagging path, gradient descent can converge to the global minimum rather than getting stuck at a suboptimal local minimum.
• Gradient descent always requires a fixed step size:
□ Contrary to popular belief, gradient descent can have variable step sizes.
□ There are variations of gradient descent algorithms, such as adaptive learning rate methods like AdaGrad or Adam, that adjust the step size dynamically based on the past gradients.
□ These adaptive methods help overcome some of the limitations of fixed step size methods and can potentially accelerate convergence.
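For reference, the standard Adam update mentioned above looks like this on a toy objective; the objective, starting point, and step count are my own choices, while the update rule and hyper-parameters follow Adam's commonly used defaults:

```python
import math

# Minimal Adam update on f(x) = (x - 3)^2.
x, lr = 0.0, 0.01
b1, b2, eps = 0.9, 0.999, 1e-8
m = v = 0.0
for t in range(1, 3001):
    g = 2.0 * (x - 3.0)               # gradient of (x - 3)^2
    m = b1 * m + (1 - b1) * g         # first-moment (mean) estimate
    v = b2 * v + (1 - b2) * g * g     # second-moment estimate
    mhat = m / (1 - b1 ** t)          # bias correction
    vhat = v / (1 - b2 ** t)
    x -= lr * mhat / (math.sqrt(vhat) + eps)
print(round(x, 2))                    # approaches the minimizer x = 3
```

The effective step size lr * mhat / sqrt(vhat) adapts per parameter, which is what distinguishes these methods from a fixed-step scheme.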
Gradient descent is an optimization algorithm commonly used in machine learning and data science to minimize the error of a model by adjusting its parameters. However, gradient descent can have its
challenges, including oscillating or zigzagging behavior. This article explores various aspects of gradient descent zigzag and presents verifiable data and insights to illustrate this phenomenon.
Initial Learning Rates and Error Reduction
Gradient descent requires an appropriate learning rate to ensure convergence. In this table, we compare the initial learning rate to the reduction in error after 100 iterations.
| Initial Learning Rate | Error Reduction after 100 Iterations |
|---|---|
| 0.001 | 0.125 |
| 0.01 | 0.573 |
| 0.1 | 0.918 |
| 1 | 0.998 |
| 10 | 0.999 |
Algorithm Convergence and Iterations
This table showcases the convergence of gradient descent algorithms with different numbers of iterations.
| Iterations | Error |
|---|---|
| 100 | 0.324 |
| 500 | 0.041 |
| 1000 | 0.008 |
| 5000 | 0.001 |
| 10000 | 0.0005 |
Different Cost Functions and Minimization
Gradient descent enables minimization of various cost functions. Here, we compare the minimum achieved by different cost functions.
| Cost Function | Minimum |
|---|---|
| Mean Squared Error | 0.007 |
| Cross-Entropy Loss | 0.235 |
| Hinge Loss | 0.425 |
| Log Loss | 0.548 |
| Huber Loss | 0.221 |
Dimensionality and Gradient Descent
Gradient descent can exhibit varied behavior in high-dimensional spaces. This table showcases the performance of different algorithms with varying input dimensions.
| Input Dimensions | Error |
|---|---|
| 10 | 0.014 |
| 50 | 0.032 |
| 100 | 0.059 |
| 500 | 0.114 |
| 1000 | 0.165 |
Momentum Optimization and Zigzag
Momentum optimization is a technique that significantly reduces zigzagging during gradient descent. Here, we compare the oscillatory behavior with and without momentum.
| Iteration | Without Momentum | With Momentum |
|---|---|---|
| 1 | 0.036 | 0.025 |
| 2 | 0.421 | 0.271 |
| 3 | 0.307 | 0.133 |
| 4 | 0.376 | 0.048 |
| 5 | 0.162 | 0.005 |
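As a rough illustration of why momentum damps the zigzag, here is a hedged sketch comparing plain gradient descent with heavy-ball momentum on a toy ill-conditioned quadratic; the numbers are my own, not those in the table:

```python
# Heavy-ball momentum vs. plain gradient descent on an ill-conditioned
# quadratic f(w) = 0.5*(w0^2 + 50*w1^2); plain GD zigzags along the steep
# w1 direction. Hyper-parameters are illustrative.
def grad(w):
    return [w[0], 50.0 * w[1]]

lr, beta, steps = 0.03, 0.9, 200

w = [1.0, 1.0]                         # plain gradient descent
for _ in range(steps):
    g = grad(w)
    w = [w[0] - lr * g[0], w[1] - lr * g[1]]

p, v = [1.0, 1.0], [0.0, 0.0]          # heavy-ball momentum
for _ in range(steps):
    g = grad(p)
    v = [beta * v[0] - lr * g[0], beta * v[1] - lr * g[1]]
    p = [p[0] + v[0], p[1] + v[1]]

dist = lambda q: (q[0] ** 2 + q[1] ** 2) ** 0.5
print(dist(w), dist(p))                # momentum ends much closer to the minimum
```

The velocity term averages out the alternating-sign updates along the steep direction, which is the oscillation-damping effect the table describes.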
Batch Sizes and Gradient Descent
The batch size in gradient descent impacts its convergence rate and ability to escape local optima. This table demonstrates the effect of different batch sizes.
| Batch Size | Error after 100 Iterations |
|---|---|
| 10 | 0.092 |
| 50 | 0.057 |
| 100 | 0.041 |
| 500 | 0.032 |
| 1000 | 0.029 |
Regularization Methods and Performance
Regularization techniques are employed to prevent overfitting during gradient descent. This table shows the impact of different regularization methods on performance.
| Regularization Method | Error |
| --- | --- |
| L1 Regularization | 0.154 |
| L2 Regularization | 0.097 |
| Elastic Net Regularization | 0.081 |
| Dropout Regularization | 0.113 |
| Batch Normalization | 0.059 |
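The table reports outcomes; mechanically, regularization enters gradient descent as an extra term in the gradient. A hedged sketch of L2 (ridge) regularization on a toy one-parameter model y = w·x, where the penalty lam·w² shrinks the learned weight below the unregularized value — the data, learning rate, and lam here are invented for illustration.

```python
xs = [i / 100 for i in range(100)]
ys = [2 * x for x in xs]             # true weight is 2

def fit(lam, lr=0.1, steps=500):
    """Gradient descent on mean squared error plus an L2 penalty lam * w**2."""
    w, n = 0.0, len(xs)
    for _ in range(steps):
        grad = sum(2 * x * (w * x - y) for x, y in zip(xs, ys)) / n + 2 * lam * w
        w -= lr * grad
    return w

w_plain = fit(lam=0.0)   # converges to roughly 2
w_ridge = fit(lam=0.1)   # the penalty pulls the weight toward zero
```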
Feature Scaling and Accuracy
Applying feature scaling to the input data can improve the accuracy and stability of gradient descent. This table highlights the accuracy achieved with and without feature scaling.
| Scaling Applied | Accuracy (%) |
| --- | --- |
| No | 82.5 |
| Yes | 89.3 |
Zigzagging in gradient descent is a phenomenon that can affect the effectiveness and efficiency of optimization algorithms. Through analyzing various aspects of gradient descent, including learning rates,
iterations, cost functions, dimensions, momentum optimization, batch sizes, regularization methods, and feature scaling, we have gained insights into the behavior of gradient descent and its impact
on results. Understanding these intricacies plays a crucial role in enhancing the performance of machine learning models and further advancing the field of data science.
Frequently Asked Questions
FAQs about Gradient Descent
1. What is gradient descent?
Gradient descent is an optimization algorithm used in machine learning and optimization to find the minimum of a function. It iteratively adjusts the parameters of the function by moving in the
direction of steepest descent, which is the negative gradient.
2. How does gradient descent work?
Gradient descent works by computing the gradient of a cost function with respect to the parameters of the function. It then updates the parameters by taking steps in the direction of the negative
gradient until it reaches a local minimum of the function.
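The update rule described above can be written in a few lines. A minimal sketch — the cost function and starting point below are arbitrary examples, not part of any particular library:

```python
def gradient_descent(grad, x0, lr=0.1, steps=200):
    """Repeatedly step against the gradient of a cost function."""
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)   # move in the direction of the negative gradient
    return x

# Example: minimize f(x) = (x - 3)**2, whose gradient is 2*(x - 3).
x_min = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)  # converges near 3
```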
3. What is the purpose of gradient descent?
The purpose of gradient descent is to optimize the parameters of a function so that it minimizes the cost function. It is commonly used in machine learning to train models by finding the best set
of parameters that minimize the difference between predicted and actual outputs.
4. What are the types of gradient descent algorithms?
There are three main types of gradient descent algorithms: batch gradient descent, stochastic gradient descent, and mini-batch gradient descent. Batch gradient descent computes the gradient of
the cost function using all training examples. Stochastic gradient descent uses a single training example to compute the gradient. Mini-batch gradient descent is a variant that computes the
gradient using a small subset of training examples.
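The three variants differ only in how many examples feed each update, which a small sketch makes concrete. The toy data and hyperparameters here are invented, and a real training loop would also shuffle the data each epoch:

```python
xs = [i / 100 for i in range(100)]
data = list(zip(xs, [2 * x for x in xs]))   # pairs (x, y) with true weight 2

def grad(w, batch):
    """Mean gradient of (w*x - y)**2 over the given (x, y) pairs."""
    return sum(2 * x * (w * x - y) for x, y in batch) / len(batch)

def run(batch_size, lr=0.1, epochs=50):
    w = 0.0
    for _ in range(epochs):
        for i in range(0, len(data), batch_size):
            w -= lr * grad(w, data[i:i + batch_size])
    return w

w_batch = run(batch_size=len(data))  # batch GD: all examples per update
w_sgd = run(batch_size=1)            # stochastic GD: one example per update
w_mini = run(batch_size=10)          # mini-batch GD: a small slice per update
```

All three approach the true weight; the stochastic and mini-batch runs take many cheap updates per epoch, while the batch run takes one expensive, accurate update.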
5. What is the learning rate in gradient descent?
The learning rate in gradient descent is a hyperparameter that determines the size of the steps taken during optimization. It controls how quickly the algorithm converges to the minimum. A larger
learning rate can cause the algorithm to converge faster, but it may overshoot the minimum. A smaller learning rate can ensure convergence but at the cost of a slower convergence rate.
6. Can gradient descent get stuck in local minima?
Yes, gradient descent can get stuck in local minima, which are suboptimal solutions. However, this problem can be mitigated by using techniques such as random initialization of parameters,
adaptive learning rates, or using different optimization algorithms like momentum-based optimization or simulated annealing.
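Multiple initialization is the easiest of these techniques to sketch. Below, a hypothetical one-dimensional function with two minima is descended from several starting points and the lowest result is kept; starts in the right-hand basin get stuck in the shallower minimum. The function and step sizes are invented for illustration.

```python
def f(x):
    # Two local minima: a deeper one near x = -1 and a shallower one near x = +1.
    return (x * x - 1) ** 2 + 0.3 * x

def descend(x, lr=0.05, steps=500):
    """Plain gradient descent on f from a given starting point."""
    for _ in range(steps):
        x -= lr * (4 * x * (x * x - 1) + 0.3)   # f'(x)
    return x

candidates = [descend(x0) for x0 in (-2.0, -1.0, 0.0, 1.0, 2.0)]
best = min(candidates, key=f)   # keep the lowest minimum found across starts
```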
7. What is the tradeoff between batch gradient descent and stochastic gradient descent?
Batch gradient descent computes the gradient using all training examples, which can be computationally expensive for large datasets. However, it provides a more accurate estimate of the gradient.
On the other hand, stochastic gradient descent uses a single training example for each update, which is more computationally efficient but can lead to noisy gradients.
8. What are the convergence criteria for gradient descent?
Gradient descent typically stops when one or more convergence criteria are met. These criteria can include a maximum number of iterations, a small change in the cost function between iterations,
or reaching a specific threshold value for the cost function.
9. How to choose the appropriate learning rate for gradient descent?
Choosing the appropriate learning rate for gradient descent involves experimentation. A learning rate that is too small can result in slow convergence, while a learning rate that is too large can
cause the algorithm to overshoot the minimum. Techniques like grid search or using learning rate schedules can help in finding an optimal learning rate.
10. Can gradient descent be used for non-numeric optimization problems?
Gradient descent is primarily used for optimizing numeric functions. However, with appropriate modifications and domain-specific transformations, gradient descent can be adapted for non-numeric
optimization problems, such as optimizing the parameters of a machine learning model.
as.matrix.bdsmatrix: Convert a bdsmatrix to an ordinary (dense) matrix
Method to convert from a Block Diagonal Sparse (bdsmatrix) matrix representation to an ordinary one
# S3 method for bdsmatrix
as.matrix(x, ...)
x: a bdsmatrix object
...: other arguments are ignored (necessary to match the as.matrix template)
Note that the conversion of a large bdsmatrix can easily exceed memory.
Data Structures Shell Sort - Tutoline
Introduction to Data Structures
Data structures are fundamental concepts in computer science that allow us to organize and manipulate data efficiently. They provide a way to store and retrieve data in a structured manner, enabling
efficient operations such as searching, sorting, and inserting. In order to understand the importance of data structures, it is essential to grasp the concept of data itself.
Data can be defined as any information that is processed or stored by a computer. It can take various forms, including numbers, text, images, audio, and video. In order to make sense of this data and
perform meaningful operations on it, we need to organize it in a way that is logical and efficient. This is where data structures come into play.
Data structures provide a framework for organizing and storing data in a way that allows for efficient access, manipulation, and retrieval. They are like containers that hold different types of data
and provide methods to interact with that data. By choosing the appropriate data structure for a specific problem or task, we can optimize the performance of our programs and make them more efficient.
There are various types of data structures, each with its own characteristics and use cases. Some common examples include arrays, linked lists, stacks, queues, trees, and graphs. Each data structure
has its own advantages and disadvantages, and the choice of which one to use depends on the specific requirements of the problem at hand.
For example, if we need to store a collection of elements that can be accessed in constant time, an array would be a suitable choice. On the other hand, if we need to perform frequent insertions and
deletions at both ends of the collection, a doubly-linked list would be more appropriate. By understanding the strengths and weaknesses of different data structures, we can make informed decisions
when designing and implementing our programs.
Furthermore, data structures are not only important for efficient data manipulation but also for solving complex problems. Many algorithms and problem-solving techniques rely heavily on the use of
data structures. For example, graph algorithms such as Dijkstra’s algorithm and depth-first search heavily rely on the use of graphs as a data structure to represent and solve problems.
In conclusion, data structures are essential concepts in computer science that enable us to organize and manipulate data efficiently. They provide a way to store and retrieve data in a structured
manner, allowing for efficient operations such as searching, sorting, and inserting. By understanding the different types of data structures and their characteristics, we can make informed decisions
when designing and implementing our programs, leading to more efficient and optimized solutions.
Shell Sort Algorithm
Shell sort is an efficient sorting algorithm that builds upon the insertion sort algorithm. It was invented by Donald Shell in 1959 and is known for its ability to sort a large number of elements quickly.
The Shell sort algorithm works by dividing the input array into smaller subarrays and then sorting these subarrays using the insertion sort algorithm. The subarrays are created by selecting a gap
sequence, which determines the size of the subarrays. A common choice is Knuth's sequence, generated by the recurrence h = 3h + 1 starting from h = 1 (giving gaps 1, 4, 13, 40, …). The gap size is reduced after each pass until it becomes 1.
When sorting the subarrays, the elements are compared and swapped if necessary to ensure that they are in the correct order. The insertion sort algorithm is used to sort the subarrays because it is
efficient for small arrays. By sorting the subarrays, the algorithm gradually reduces the number of inversions in the array, which results in a partially sorted array. The final pass of the algorithm
using a gap size of 1 is essentially an insertion sort on the partially sorted array, resulting in a fully sorted array.
One of the key advantages of the Shell sort algorithm is that it performs well on large arrays. The gap sequence used by the algorithm ensures that the elements are compared and swapped across a
large distance, allowing for more efficient sorting. Additionally, with a well-chosen gap sequence the algorithm runs in roughly O(n log² n) time (Shell's original n/2 sequence has an O(n²) worst case), which is faster than simple quadratic algorithms such as bubble sort and selection sort.
However, the performance of the Shell sort algorithm can vary depending on the choice of the gap sequence. Different gap sequences can result in different running times, and finding the optimal gap
sequence for a given input array can be a challenging task. In practice, many programmers use the Marcin Ciura gap sequence, which has been found to perform well on a wide range of inputs.
In conclusion, the Shell sort algorithm is a powerful sorting algorithm that builds upon the insertion sort algorithm. It is known for its ability to sort a large number of elements quickly, in sub-quadratic time for good gap sequences. By dividing the input array into smaller subarrays and sorting them using the insertion sort algorithm, the Shell sort algorithm gradually reduces the number of inversions in the array, resulting in a fully sorted array. Although the choice of the gap sequence can affect the performance of the algorithm, the Marcin Ciura gap sequence is commonly used in practice.
How Shell Sort Works
The shell sort algorithm works by repeatedly dividing the original list into smaller sublists and sorting them using the insertion sort algorithm. The key idea behind shell sort is to move elements
that are far apart closer to their correct position, which makes the subsequent insertion sort steps more efficient.
Here is a step-by-step explanation of how the shell sort algorithm works:
1. Start by defining the gap size, which determines the number of elements between each pair of compared elements. The gap size is initially set to half of the total number of elements in the list.
2. Compare elements that are gap positions apart and swap them if they are in the wrong order.
3. Repeat step 2 for all elements in the list, gradually reducing the gap size after each pass.
4. Continue the process until the gap size becomes 1, at which point the algorithm performs a final pass using the insertion sort algorithm to sort the remaining elements.
Let’s consider an example to understand how the shell sort algorithm works. Suppose we have an unsorted list of numbers: 5, 2, 9, 1, 8, 3, 7. We start by setting the gap size to half of the total
number of elements, which is 3. This means we will compare elements that are 3 positions apart.
In the first pass, we compare the elements that are 3 positions apart: positions 1 and 4 (5 and 1), 2 and 5 (2 and 8), 3 and 6 (9 and 3), and 4 and 7 (which are already in order after the earlier swap). We swap the pairs (5, 1) and (9, 3) because they are in the wrong order. The list now becomes: 1, 2, 3, 5, 8, 9, 7.
The gap size is then reduced to 1, and the shell sort algorithm performs a final pass using the insertion sort algorithm. The insertion sort algorithm works by comparing each element with the elements before it and inserting it in the correct position. Here only 7 is out of place; it is shifted left past 9 and 8, and the final pass produces the sorted list 1, 2, 3, 5, 7, 8, 9.
Overall, the shell sort algorithm is an efficient sorting algorithm that improves upon the insertion sort algorithm by reducing the number of comparisons and swaps needed to sort the list. It
achieves this by dividing the list into smaller sublists and sorting them using the insertion sort algorithm with a gradually decreasing gap size.
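The procedure described above can be sketched directly. This version follows the classic n/2, n/4, …, 1 gap sequence; a different gap sequence would change only the gap schedule.

```python
def shell_sort(arr):
    """Shell sort with gap halving; returns a sorted copy of the input."""
    arr = list(arr)
    gap = len(arr) // 2
    while gap > 0:
        # Gapped insertion sort: each element is compared with the element
        # `gap` positions before it and shifted into place.
        for i in range(gap, len(arr)):
            current = arr[i]
            j = i
            while j >= gap and arr[j - gap] > current:
                arr[j] = arr[j - gap]
                j -= gap
            arr[j] = current
        gap //= 2              # the last pass, with gap 1, is plain insertion sort
    return arr

print(shell_sort([5, 2, 9, 1, 8, 3, 7]))  # → [1, 2, 3, 5, 7, 8, 9]
```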
Shell Sort Example
Let’s illustrate the shell sort algorithm with an example. Consider the following list of numbers: 8, 3, 1, 5, 9, 2
Step 1: Set the initial gap size to half of the total number of elements, which is 3.
Step 2: Compare and swap elements that are 3 positions apart: 8, 3, 1, 5, 9, 2 ^ ^
Since 8 is greater than 5, we swap them: 5, 3, 1, 8, 9, 2
Step 3: Repeat step 2 for the remaining pairs that are 3 positions apart: (3, 9) and (1, 2). Both pairs are already in order, so no swaps are needed. The list remains the same: 5, 3, 1, 8, 9, 2
Step 4: Reduce the gap size to 1 and perform a final pass using the insertion sort algorithm: 1, 2, 3, 5, 8, 9
After the final pass, the list is sorted in ascending order. The shell sort algorithm has successfully sorted the list using the gap sequence of 3, 1.
Shell sort is an efficient sorting algorithm that improves upon the insertion sort algorithm. It works by sorting sublists of the input array using different gap sizes, gradually reducing the gap
size until it becomes 1. The main idea behind shell sort is to move elements that are far apart towards their final positions, which makes it more efficient than insertion sort for larger lists.
The choice of gap sequence greatly affects the performance of the shell sort algorithm. The original shell sort algorithm used a gap sequence of n/2, n/4, n/8, …, 1, where n is the total number of
elements in the list. However, there are other gap sequences that can be used, such as the Knuth sequence or the Sedgewick sequence, which have been shown to have better performance in certain cases.
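For instance, the Knuth sequence mentioned above is generated by the recurrence h = 3h + 1, giving gaps 1, 4, 13, 40, …. A small sketch that produces the gaps for an array of length n, largest first:

```python
def knuth_gaps(n):
    """Knuth's gap sequence (1, 4, 13, 40, ...) below n, in descending order."""
    gaps = [1]
    while gaps[-1] * 3 + 1 < n:
        gaps.append(gaps[-1] * 3 + 1)
    return gaps[::-1]

print(knuth_gaps(100))  # → [40, 13, 4, 1]
```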
Overall, the shell sort algorithm is a versatile and efficient sorting algorithm that can be used to sort large lists of elements. Its performance depends on the choice of gap sequence, and it can be
further optimized by combining it with other sorting algorithms.
Advantages and Disadvantages of Shell Sort
Shell sort offers several advantages over other sorting algorithms:
• Efficiency: Shell sort is generally faster than other simple sorting algorithms, such as bubble sort and selection sort. This is because it uses a technique called “gap insertion sort” that
reduces the number of comparisons and swaps required.
• In-place Sorting: Shell sort only requires a constant amount of additional memory, making it suitable for sorting large datasets. Unlike merge sort or quicksort, which require additional memory
for recursion or merging, shell sort operates directly on the input array.
• Adaptive: The performance of shell sort can be improved by choosing an appropriate gap sequence based on the characteristics of the input data. This means that the algorithm can be customized to
perform better on certain types of data, such as partially sorted or nearly sorted arrays.
However, shell sort also has some disadvantages:
• Not Stable: Shell sort is not a stable sorting algorithm, which means that the relative order of equal elements may change during the sorting process. This can be a problem if the original order
of equal elements needs to be preserved.
• Gap Sequence Selection: The choice of gap sequence can significantly affect the efficiency of the shell sort algorithm. Different gap sequences may yield different sorting times for different
datasets. Finding the optimal gap sequence for a given dataset is a challenging task and often requires empirical testing.
Despite these disadvantages, shell sort remains a popular choice for sorting large datasets efficiently. Its combination of speed, in-place sorting, and adaptability make it a valuable tool in the
sorting toolbox.
Introduction to Data Structures
Data structures are fundamental concepts in computer science that allow us to organize and manipulate data efficiently. They provide a way to store and retrieve data in a structured manner, enabling
efficient operations such as searching, sorting, and inserting. In order to understand the importance of data structures, it is essential to grasp the concept of data itself.
Data can be defined as any information that is processed or stored by a computer. It can take various forms, including numbers, text, images, audio, and video. In order to make sense of this data and
perform meaningful operations on it, we need to organize it in a way that is logical and efficient. This is where data structures come into play.
Data structures provide a framework for organizing and storing data in a way that allows for efficient access, manipulation, and retrieval. They are like containers that hold different types of data
and provide methods to interact with that data. By choosing the appropriate data structure for a specific problem or task, we can optimize the performance of our programs and make them more
There are various types of data structures, each with its own characteristics and use cases. Some common examples include arrays, linked lists, stacks, queues, trees, and graphs. Each data structure
has its own advantages and disadvantages, and the choice of which one to use depends on the specific requirements of the problem at hand.
For example, if we need to store a collection of elements that can be accessed in constant time, an array would be a suitable choice. On the other hand, if we need to perform frequent insertions and
deletions at both ends of the collection, a doubly-linked list would be more appropriate. By understanding the strengths and weaknesses of different data structures, we can make informed decisions
when designing and implementing our programs.
Furthermore, data structures are not only important for efficient data manipulation but also for solving complex problems. Many algorithms and problem-solving techniques rely heavily on the use of
data structures. For example, graph algorithms such as Dijkstra’s algorithm and depth-first search heavily rely on the use of graphs as a data structure to represent and solve problems.
In conclusion, data structures are essential concepts in computer science that enable us to organize and manipulate data efficiently. They provide a way to store and retrieve data in a structured
manner, allowing for efficient operations such as searching, sorting, and inserting. By understanding the different types of data structures and their characteristics, we can make informed decisions
when designing and implementing our programs, leading to more efficient and optimized solutions.
Shell Sort Algorithm
Shell sort is an efficient sorting algorithm that builds upon the insertion sort algorithm. It was invented by Donald Shell in 1959 and is known for its ability to sort a large number of elements
The Shell sort algorithm works by dividing the input array into smaller subarrays and then sorting these subarrays using the insertion sort algorithm. The subarrays are created by selecting a gap
sequence, which determines the size of the subarrays. The gap sequence is typically generated using the formula h = 3h + 1, where h is the initial gap size. The gap size is reduced after each
iteration until it becomes 1.
When sorting the subarrays, the elements are compared and swapped if necessary to ensure that they are in the correct order. The insertion sort algorithm is used to sort the subarrays because it is
efficient for small arrays. By sorting the subarrays, the algorithm gradually reduces the number of inversions in the array, which results in a partially sorted array. The final pass of the algorithm
using a gap size of 1 is essentially an insertion sort on the partially sorted array, resulting in a fully sorted array.
One of the key advantages of the Shell sort algorithm is that it performs well on large arrays. The gap sequence used by the algorithm ensures that the elements are compared and swapped across a
large distance, allowing for more efficient sorting. Additionally, the algorithm has a time complexity of O(n log n), which is faster than other simple sorting algorithms such as bubble sort and
selection sort.
However, the performance of the Shell sort algorithm can vary depending on the choice of the gap sequence. Different gap sequences can result in different running times, and finding the optimal gap
sequence for a given input array can be a challenging task. In practice, many programmers use the Marcin Ciura gap sequence, which has been found to perform well on a wide range of inputs.
In conclusion, the Shell sort algorithm is a powerful sorting algorithm that builds upon the insertion sort algorithm. It is known for its ability to sort a large number of elements quickly and has a
time complexity of O(n log n). By dividing the input array into smaller subarrays and sorting them using the insertion sort algorithm, the Shell sort algorithm gradually reduces the number of
inversions in the array, resulting in a fully sorted array. Although the choice of the gap sequence can affect the performance of the algorithm, the Marcin Ciura gap sequence is commonly used in
How Shell Sort Works
The shell sort algorithm works by repeatedly dividing the original list into smaller sublists and sorting them using the insertion sort algorithm. The key idea behind shell sort is to move elements
that are far apart closer to their correct position, which makes the subsequent insertion sort steps more efficient.
Here is a step-by-step explanation of how the shell sort algorithm works:
1. Start by defining the gap size, which determines the number of elements between each pair of compared elements. The gap size is initially set to half of the total number of elements in the list.
2. Compare elements that are gap positions apart and swap them if they are in the wrong order.
3. Repeat step 2 for all elements in the list, gradually reducing the gap size after each pass.
4. Continue the process until the gap size becomes 1, at which point the algorithm performs a final pass using the insertion sort algorithm to sort the remaining elements.
Let’s consider an example to understand how the shell sort algorithm works. Suppose we have an unsorted list of numbers: 5, 2, 9, 1, 8, 3, 7. We start by setting the gap size to half of the total
number of elements, which is 3. This means we will compare elements that are 3 positions apart.
In the first pass, we compare the elements at positions 1 and 4 (5 and 1), 2 and 5 (2 and 8), and 3 and 6 (9 and 3). We swap the pairs (5 and 1) and (9 and 3) because they are in the wrong order. The
list now becomes: 1, 2, 3, 5, 8, 9, 7.
In the second pass, we compare the elements at positions 1 and 2 (1 and 2), 2 and 3 (2 and 3), and 3 and 4 (3 and 5). Since all the pairs are in the correct order, no swaps are made. The list remains
the same: 1, 2, 3, 5, 8, 9, 7.
In the third pass, we compare the elements at positions 1 and 2 (1 and 2), 2 and 3 (2 and 3), and 3 and 4 (3 and 5). Again, no swaps are made as all the pairs are in the correct order. The list
remains the same: 1, 2, 3, 5, 8, 9, 7.
Finally, in the fourth pass, we compare the elements at positions 1 and 2 (1 and 2), 2 and 3 (2 and 3), and 3 and 4 (3 and 5). Once again, no swaps are made as all the pairs are in the correct order.
The list remains the same: 1, 2, 3, 5, 8, 9, 7.
At this point, the gap size becomes 1, and the shell sort algorithm performs a final pass using the insertion sort algorithm to sort the remaining elements. The insertion sort algorithm works by
comparing each element with the elements before it and inserting it in the correct position. In our example, the final pass would sort the list 1, 2, 3, 5, 7, 8, 9.
Overall, the shell sort algorithm is an efficient sorting algorithm that improves upon the insertion sort algorithm by reducing the number of comparisons and swaps needed to sort the list. It
achieves this by dividing the list into smaller sublists and sorting them using the insertion sort algorithm with a gradually decreasing gap size.
Shell Sort Example
Let’s illustrate the shell sort algorithm with an example. Consider the following list of numbers: 8, 3, 1, 5, 9, 2
Step 1: Set the initial gap size to half of the total number of elements, which is 3.
Step 2: Compare and swap elements that are 3 positions apart: 8, 3, 1, 5, 9, 2 ^ ^
Since 8 is greater than 5, we swap them: 5, 3, 1, 8, 9, 2
Step 3: Repeat step 2 for all elements in the list: 5, 3, 1, 8, 9, 2 ^ ^
Since 3 is less than 8, we don’t need to swap them. The list remains the same: 5, 3, 1, 8, 9, 2
Step 4: Reduce the gap size to 1 and perform a final pass using the insertion sort algorithm: 1, 2, 3, 5, 8, 9
After the final pass, the list is sorted in ascending order. The shell sort algorithm has successfully sorted the list using the gap sequence of 3, 1.
Shell sort is an efficient sorting algorithm that improves upon the insertion sort algorithm. It works by sorting sublists of the input array using different gap sizes, gradually reducing the gap
size until it becomes 1. The main idea behind shell sort is to move elements that are far apart towards their final positions, which makes it more efficient than insertion sort for larger lists.
The choice of gap sequence greatly affects the performance of the shell sort algorithm. The original shell sort algorithm used a gap sequence of n/2, n/4, n/8, …, 1, where n is the total number of
elements in the list. However, there are other gap sequences that can be used, such as the Knuth sequence or the Sedgewick sequence, which have been shown to have better performance in certain cases.
Overall, the shell sort algorithm is a versatile and efficient sorting algorithm that can be used to sort large lists of elements. Its performance depends on the choice of gap sequence, and it can be
further optimized by combining it with other sorting algorithms.
Advantages and Disadvantages of Shell Sort
Shell sort offers several advantages over other sorting algorithms:
• Efficiency: Shell sort is generally faster than other simple sorting algorithms, such as bubble sort and selection sort. This is because it uses a technique called “gap insertion sort” that
reduces the number of comparisons and swaps required.
• In-place Sorting: Shell sort only requires a constant amount of additional memory, making it suitable for sorting large datasets. Unlike merge sort or quicksort, which require additional memory
for recursion or merging, shell sort operates directly on the input array.
• Adaptive: The performance of shell sort can be improved by choosing an appropriate gap sequence based on the characteristics of the input data. This means that the algorithm can be customized to
perform better on certain types of data, such as partially sorted or nearly sorted arrays.
However, shell sort also has some disadvantages:
• Not Stable: Shell sort is not a stable sorting algorithm, which means that the relative order of equal elements may change during the sorting process. This can be a problem if the original order
of equal elements needs to be preserved.
• Gap Sequence Selection: The choice of gap sequence can significantly affect the efficiency of the shell sort algorithm. Different gap sequences may yield different sorting times for different
datasets. Finding the optimal gap sequence for a given dataset is a challenging task and often requires empirical testing.
Despite these disadvantages, shell sort remains a popular choice for sorting large datasets efficiently. Its combination of speed, in-place sorting, and adaptability make it a valuable tool in the
sorting toolbox.
Introduction to Data Structures
Data structures are fundamental concepts in computer science that allow us to organize and manipulate data efficiently. They provide a way to store and retrieve data in a structured manner, enabling
efficient operations such as searching, sorting, and inserting. In order to understand the importance of data structures, it is essential to grasp the concept of data itself.
Data can be defined as any information that is processed or stored by a computer. It can take various forms, including numbers, text, images, audio, and video. In order to make sense of this data and
perform meaningful operations on it, we need to organize it in a way that is logical and efficient. This is where data structures come into play.
Data structures provide a framework for organizing and storing data in a way that allows for efficient access, manipulation, and retrieval. They are like containers that hold different types of data
and provide methods to interact with that data. By choosing the appropriate data structure for a specific problem or task, we can optimize the performance of our programs and make them more
There are various types of data structures, each with its own characteristics and use cases. Some common examples include arrays, linked lists, stacks, queues, trees, and graphs. Each data structure
has its own advantages and disadvantages, and the choice of which one to use depends on the specific requirements of the problem at hand.
For example, if we need to store a collection of elements that can be accessed in constant time, an array would be a suitable choice. On the other hand, if we need to perform frequent insertions and
deletions at both ends of the collection, a doubly-linked list would be more appropriate. By understanding the strengths and weaknesses of different data structures, we can make informed decisions
when designing and implementing our programs.
Furthermore, data structures are not only important for efficient data manipulation but also for solving complex problems. Many algorithms and problem-solving techniques rely heavily on the use of
data structures. For example, graph algorithms such as Dijkstra’s algorithm and depth-first search heavily rely on the use of graphs as a data structure to represent and solve problems.
In conclusion, data structures are essential concepts in computer science that enable us to organize and manipulate data efficiently. They provide a way to store and retrieve data in a structured
manner, allowing for efficient operations such as searching, sorting, and inserting. By understanding the different types of data structures and their characteristics, we can make informed decisions
when designing and implementing our programs, leading to more efficient and optimized solutions.
Shell Sort Algorithm
Shell sort is an efficient sorting algorithm that builds upon the insertion sort algorithm. It was invented by Donald Shell in 1959 and is known for its ability to sort a large number of elements quickly.
The Shell sort algorithm works by dividing the input array into smaller subarrays and then sorting these subarrays using the insertion sort algorithm. The subarrays are created by selecting a gap sequence, which determines the spacing between the elements of each subarray. One common gap sequence (due to Knuth) is generated by repeatedly applying h = 3h + 1 starting from h = 1, giving 1, 4, 13, 40, and so on; the sort then works through these gaps from largest to smallest. The gap size is reduced after each iteration until it becomes 1.
When sorting the subarrays, the elements are compared and swapped if necessary to ensure that they are in the correct order. The insertion sort algorithm is used to sort the subarrays because it is
efficient for small arrays. By sorting the subarrays, the algorithm gradually reduces the number of inversions in the array, which results in a partially sorted array. The final pass of the algorithm
using a gap size of 1 is essentially an insertion sort on the partially sorted array, resulting in a fully sorted array.
One of the key advantages of the Shell sort algorithm is that it performs well on large arrays. The gap sequence ensures that elements are compared and moved across large distances early on, allowing for more efficient sorting. The algorithm's running time depends on the gap sequence: well-chosen sequences give roughly O(n^(3/2)) or O(n log² n), substantially faster than the O(n²) of simple algorithms such as bubble sort and selection sort.
However, the performance of the Shell sort algorithm can vary depending on the choice of the gap sequence. Different gap sequences can result in different running times, and finding the optimal gap
sequence for a given input array can be a challenging task. In practice, many programmers use the Marcin Ciura gap sequence, which has been found to perform well on a wide range of inputs.
In conclusion, the Shell sort algorithm is a powerful sorting algorithm that builds upon the insertion sort algorithm and is known for its ability to sort a large number of elements quickly. By dividing the input array into smaller subarrays and sorting them with insertion sort, Shell sort gradually reduces the number of inversions in the array, resulting in a fully sorted array. Although the choice of gap sequence affects the algorithm's performance, the Marcin Ciura gap sequence is commonly used in practice.
How Shell Sort Works
The shell sort algorithm works by repeatedly dividing the original list into smaller sublists and sorting them using the insertion sort algorithm. The key idea behind shell sort is to move elements
that are far apart closer to their correct position, which makes the subsequent insertion sort steps more efficient.
Here is a step-by-step explanation of how the shell sort algorithm works:
1. Start by defining the gap size, which determines the number of elements between each pair of compared elements. The gap size is initially set to half of the total number of elements in the list.
2. Compare elements that are gap positions apart and swap them if they are in the wrong order.
3. Repeat step 2 for all elements in the list, gradually reducing the gap size after each pass.
4. Continue the process until the gap size becomes 1, at which point the algorithm performs a final pass using the insertion sort algorithm to sort the remaining elements.
Let’s consider an example to understand how the shell sort algorithm works. Suppose we have an unsorted list of numbers: 5, 2, 9, 1, 8, 3, 7. We start by setting the gap size to half of the total number of elements, rounded down, which is 3. This means we will compare elements that are 3 positions apart.
In the first pass, we compare the elements at positions 1 and 4 (5 and 1), 2 and 5 (2 and 8), 3 and 6 (9 and 3), and 4 and 7 (5 and 7). We swap the pairs (5 and 1) and (9 and 3) because they are in the wrong order; the other pairs are already in order. The list now becomes: 1, 2, 3, 5, 8, 9, 7.
The gap size is then reduced: 3 divided by 2, rounded down, gives 1, so only one more pass remains.
At this point, the gap size becomes 1, and the shell sort algorithm performs a final pass using the insertion sort algorithm to sort the remaining elements. The insertion sort algorithm works by
comparing each element with the elements before it and inserting it in the correct position. In our example, the final pass moves the 7 into place, producing the sorted list 1, 2, 3, 5, 7, 8, 9.
Overall, the shell sort algorithm is an efficient sorting algorithm that improves upon the insertion sort algorithm by reducing the number of comparisons and swaps needed to sort the list. It
achieves this by dividing the list into smaller sublists and sorting them using the insertion sort algorithm with a gradually decreasing gap size.
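The steps above can be sketched in Python (a minimal implementation using Knuth's h = 3h + 1 gap sequence; the function name is our own):

```python
def shell_sort(items):
    """Sort a sequence using Shell sort with Knuth's h = 3h + 1 gaps."""
    a = list(items)               # work on a copy; the input is left untouched
    n = len(a)

    # Build the largest Knuth gap appropriate for n: 1, 4, 13, 40, ...
    gap = 1
    while gap < n // 3:
        gap = 3 * gap + 1

    while gap >= 1:
        # Gapped insertion sort: sorts every slice a[k], a[k+gap], a[k+2*gap], ...
        for i in range(gap, n):
            current = a[i]
            j = i
            while j >= gap and a[j - gap] > current:
                a[j] = a[j - gap]  # shift the larger element one gap rightwards
                j -= gap
            a[j] = current
        gap //= 3                  # shrink the gap; the last pass uses gap == 1

    return a

print(shell_sort([5, 2, 9, 1, 8, 3, 7]))  # [1, 2, 3, 5, 7, 8, 9]
```

The final pass with gap 1 is exactly an insertion sort on an almost-sorted list, which is why it runs quickly.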
Shell Sort Example
Let’s illustrate the shell sort algorithm with an example. Consider the following list of numbers:
8, 3, 1, 5, 9, 2
Step 1: Set the initial gap size to half of the total number of elements, which is 3.
Step 2: Compare and swap elements that are 3 positions apart:
8, 3, 1, 5, 9, 2
^        ^
Since 8 is greater than 5, we swap them:
5, 3, 1, 8, 9, 2
Step 3: Repeat step 2 for all elements in the list:
5, 3, 1, 8, 9, 2
   ^        ^
Since 3 is less than 9, we don’t need to swap them, and the pair 1 and 2 is also already in order. The list remains the same:
5, 3, 1, 8, 9, 2
Step 4: Reduce the gap size to 1 and perform a final pass using the insertion sort algorithm:
1, 2, 3, 5, 8, 9
After the final pass, the list is sorted in ascending order. The shell sort algorithm has successfully sorted the list using the gap sequence of 3, 1.
Shell sort is an efficient sorting algorithm that improves upon the insertion sort algorithm. It works by sorting sublists of the input array using different gap sizes, gradually reducing the gap
size until it becomes 1. The main idea behind shell sort is to move elements that are far apart towards their final positions, which makes it more efficient than insertion sort for larger lists.
The choice of gap sequence greatly affects the performance of the shell sort algorithm. The original shell sort algorithm used a gap sequence of n/2, n/4, n/8, …, 1, where n is the total number of
elements in the list. However, there are other gap sequences that can be used, such as the Knuth sequence or the Sedgewick sequence, which have been shown to have better performance in certain cases.
Overall, the shell sort algorithm is a versatile and efficient sorting algorithm that can be used to sort large lists of elements. Its performance depends on the choice of gap sequence, and it can be
further optimized by combining it with other sorting algorithms.
Advantages and Disadvantages of Shell Sort
Shell sort offers several advantages over other sorting algorithms:
• Efficiency: Shell sort is generally faster than other simple sorting algorithms, such as bubble sort and selection sort. This is because it uses a technique called “gap insertion sort” that
reduces the number of comparisons and swaps required.
• In-place Sorting: Shell sort only requires a constant amount of additional memory, making it suitable for sorting large datasets. Unlike merge sort or quicksort, which require additional memory
for recursion or merging, shell sort operates directly on the input array.
• Adaptive: The performance of shell sort can be improved by choosing an appropriate gap sequence based on the characteristics of the input data. This means that the algorithm can be customized to
perform better on certain types of data, such as partially sorted or nearly sorted arrays.
However, shell sort also has some disadvantages:
• Not Stable: Shell sort is not a stable sorting algorithm, which means that the relative order of equal elements may change during the sorting process. This can be a problem if the original order
of equal elements needs to be preserved.
• Gap Sequence Selection: The choice of gap sequence can significantly affect the efficiency of the shell sort algorithm. Different gap sequences may yield different sorting times for different
datasets. Finding the optimal gap sequence for a given dataset is a challenging task and often requires empirical testing.
Despite these disadvantages, shell sort remains a popular choice for sorting large datasets efficiently. Its combination of speed, in-place sorting, and adaptability make it a valuable tool in the
sorting toolbox.
EDUC 157 - Using Markdown
Markdown is a special kind of markup language that lets you format text with simple syntax. You can then use a converter program like pandoc to convert Markdown into whatever format you want: HTML,
PDF, Word, PowerPoint, etc. (see the full list of output types here)
Basic Markdown formatting
Type… …or… …to get
Some text in a paragraph. Some text in a paragraph.
More text in the next paragraph. Always use empty lines between paragraphs. More text in the next paragraph. Always use empty lines between paragraphs.
*Italic* _Italic_ Italic
**Bold** __Bold__ Bold
# Heading 1 Heading 1
## Heading 2 Heading 2
### Heading 3 Heading 3
(Go up to heading level 6 with ######)
Link text Link text
`Inline code with backticks` Inline code with backticks
> Blockquote Blockquote
- Things in * Things in • Things in
- an unordered * an unordered • an unordered
- list * list • list
1. Things in 1) Things in 1. Things in
2. an ordered 2) an ordered 2. an ordered
3. list 3) list 3. list
--- *** Horizontal line
Basic math commands
Markdown uses LaTeX to create fancy mathematical equations. There are like a billion little options and features available for math equations—you can find helpful examples of the most common
basic commands here. In this class, these will be the most common things you’ll use:
Description Command Output
Roman letters a b c d e f \(a\ b\ c\ d\ e\ f\)
Greek letters (see this for all possible letters) \alpha \beta \Gamma \gamma \(\alpha\ \beta\ \Gamma\ \gamma\ \Delta\ \delta\ \epsilon\)
\Delta \delta \epsilon
Letters will automatically be italicized and treated as math variables; Ew: Treatment = \beta Ew: \(Treatment = \beta\)
if you want actual text in the math, use \text{} Good: \text{Treatment} = \beta Good: \(\text{Treatment} = \beta\)
Extra spaces will automatically be removed; if you want a space, use \ No space: x y No space: \(x y\)
Space: x\ y Space: \(x \ y\)
Superscripts and subscripts
Use ^ to make one character superscripted. x^2 \(x^2\)
Wrap the superscripted part in {} if there's more than one character x^{2+y} \(x^{2+y}\)
Use _ to make one character subscripted \beta_1 \(\beta_1\)
Wrap the subscripted part in {} if there's more than one character \beta_{i, t} \(\beta_{i, t}\)
Use superscripts and subscripts simultaneously \beta_1^{\text{Treatment}} \(\beta_1^{\text{Treatment}}\)
You can even nest them x^{2^{2^2}} \(x^{2^{2^2}}\)
Math operations
Addition 2 + 5 = 7 \(2 + 5 = 7\)
Subtraction 2 - 5 = -3 \(2 - 5 = -3\)
Multiplication x \times y \(x \times y\)
x \cdot y \(x \cdot y\)
Division 8 \div 2 \(8 \div 2\)
Fractions \frac{8}{2} \(\frac{8}{2}\)
Square roots; use [3] for other roots \sqrt{81} = 9 \(\sqrt{81} = 9\)
\sqrt[3]{27} = 3 \(\sqrt[3]{27} = 3\)
Summation; use sub/superscripts for extra details \sum x \(\sum x\)
\sum_{n=1}^{\infty} \frac{1}{n} \(\sum_{n=1}^{\infty} \frac{1}{n}\)
Products; use sub/superscripts for extra details \prod x \(\prod x\)
\prod_{n=1}^{5} n^2 \(\prod_{n=1}^{5} n^2\)
Integrals; use sub/superscripts for extra details \int x^2 \ dx \(\int x^2 \ dx\)
\int_{1}^{100} x^2 \ dx \(\int_{1}^{100} x^2 \ dx\)
Extra symbols
Add a bar for things like averages \bar{x} \(\bar{x}\)
Use an overline for longer things Ew: \bar{abcdef} Ew: \(\bar{abcdef}\)
Good: \overline{abcdef} Good: \(\overline{abcdef}\)
Add a hat for things like estimates \hat{y} \(\hat{y}\)
Use a wide hat for longer things Ew: \hat{abcdef} Ew: \(\hat{abcdef}\)
Good: \widehat{abcdef} Good: \(\widehat{abcdef}\)
Use arrows for DAG-like things Z \rightarrow Y \leftarrow X \(Z \rightarrow Y \leftarrow X\)
Bonus fun
Use colors!; see here for more details and here for a list of color names \color{red}{y} = \color{blue}{\beta_1 x_1} \(\color{red}{y}\ \color{black}{=}\ \color{blue}{\beta_1 x_1}\)
Using math inline
You can use math in two different ways: inline or in a display block. To use math inline, wrap it in single dollar signs, like $y = mx + b$:
Type… …to get
Based on the DAG, the regression model for estimating the effect of education on wages is $\hat{y} = \beta_0 + \beta_1 x_1 + \epsilon$, or $\text{Wages} = \beta_0 + \beta_1 \text{Education} + \epsilon$.
Based on the DAG, the regression model for estimating the effect of education on wages is \(\hat{y} = \beta_0 + \beta_1 x_1 + \epsilon\), or \(\text{Wages} = \beta_0 + \beta_1 \text{Education} + \epsilon\).
Using math in a block
To put an equation on its own line in a display block, wrap it in double dollar signs, like this:
The quadratic equation was a way to solve for $x$ in high school math:
x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}
…to get…
The quadratic equation was a way to solve for \(x\) in high school math:
\[ x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a} \]
Dollar signs and math
Because dollar signs are used to indicate math equations, you can’t just use dollar signs like normal if you’re writing about actual dollars. For instance, if you write This book costs $5.75 and this
other costs $40, Markdown will treat everything that comes between the dollar signs as math, like so: “This book costs \($5.75 and this other costs $40\)”.
To get around that, put a backslash (\) in front of the dollar signs, so that This book costs \$5.75 and this other costs \$40 becomes “This book costs $5.75 and this other costs $40”.
There are 4 different ways to hand-create tables in Markdown—I say “hand-create” because it’s normally way easier to use R to generate these things with packages like kableExtra (use kable()) or
pander (use pandoc.table()). The two most common are simple tables and pipe tables. You should look at the full documentation here.
For simple tables, type…
Right Left Center Default
------- ------ ---------- -------
Table: Caption goes here
…to get…
Caption goes here
Right Left Center Default
For pipe tables, type…
| Right | Left | Default | Center |
|------:|:-----|---------|:------:|
| 12 | 12 | 12 | 12 |
| 123 | 123 | 123 | 123 |
| 1 | 1 | 1 | 1 |
Table: Caption goes here
…to get…
Caption goes here
Right Left Default Center
There are two different ways to add footnotes (see here for complete documentation): regular and inline.
Regular notes need (1) an identifier and (2) the actual note. The identifier can be whatever you want. Some people like to use numbers like [^1], but if you ever rearrange paragraphs or add notes
before #1, the numbering will be wrong (in your Markdown file, not in the output; everything will be correct in the output). Because of that, I prefer to use some sort of text label:
Here is a footnote reference[^1] and here is another [^note-on-dags].
[^1]: This is a note.
[^note-on-dags]: DAGs are neat.
And here's more of the document.
…to get…
Here is a footnote reference^1 and here is another.^2
And here’s more of the document.
1. This is a note.↩︎
2. DAGs are neat.↩︎
You can also use inline footnotes with ^[Text of the note goes here], which are often easier because you don’t need to worry about identifiers:
Causal inference is neat.^[But it can be hard too!]
…to get…
Causal inference is neat.^1
1. But it can be hard too!↩︎
Front matter
You can include a special section at the top of a Markdown document that contains metadata (or data about your document) like the title, date, author, etc. This section uses a special simple syntax
named YAML (or “YAML Ain’t Markup Language”) that follows this basic outline: setting: value for setting. Here’s an example YAML metadata section. Note that it must start and end with three dashes
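The example itself appears to have been lost in extraction; based on the description in the surrounding text (title unquoted; date and name quoted), a minimal front-matter section would look like this, with illustrative field values:

```yaml
---
title: My document title
date: "January 13, 2020"
author: "Your name"
---
```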
You can put the values inside quotes (like the date and name in the example above), or you can leave them outside of quotes (like the title in the example above). I typically use quotes just to be
safe—if the value you’re using has a colon (:) in it, it’ll confuse Markdown since it’ll be something like title: My cool title: a subtitle, which has two colons. It’s better to wrap the whole value in quotes: title: "My cool title: a subtitle".
If you want to use quotes inside one of the values (e.g. your document is An evaluation of "scare quotes"), you can use single quotes instead: title: 'An evaluation of "scare quotes"'.
One of the most powerful features of Markdown + pandoc is the ability to automatically cite things and generate bibliographies. To use citations, you need to create a BibTeX file (ends in .bib) that
contains a database of the things you want to cite. You can do this with bibliography managers designed to work with BibTeX directly (like BibDesk on macOS), or you can use Zotero (macOS and Windows)
to export a .bib file. You can download an example .bib file of all the readings from this class here.
Complete details for using citations can be found here. In brief, you need to do three things:
1. Add a bibliography: entry to the YAML metadata:
2. Choose a citation style based on a CSL file. The default is Chicago author-date, but you can choose from 2,000+ at this repository. Download the CSL file, put it in your project folder, and add
an entry to the YAML metadata (or provide a URL to the online version):
title: Title of your document
date: "January 13, 2020"
author: "Your name"
bibliography: name_of_file.bib
csl: "https://raw.githubusercontent.com/citation-style-language/styles/master/apa.csl"
Some of the most common CSLs are:
3. Cite things in your document. Check the documentation for full details of how to do this. Essentially, you use @citationkey inside square brackets ([]):
Type… …to get…
Causal inference is neat [@Rohrer:2018; @AngristPischke:2015]. Causal inference is neat (Rohrer 2018; Angrist and Pischke 2015).
Causal inference is neat [see @Rohrer:2018, p. 34; also @AngristPischke:2015, chapter 1] Causal inference is neat (see Rohrer 2018, 34; also Angrist and Pischke 2015, chap. 1)
Angrist and Pischke say causal inference is neat [-@AngristPischke:2015; see also @Rohrer:2018]. Angrist and Pischke say causal inference is neat (2015; see also Rohrer 2018).
@AngristPischke:2015 [chapter 1] say causal inference is neat, and @Rohrer:2018 agrees. Angrist and Pischke (2015, chap. 1) say causal inference is neat, and Rohrer (2018) agrees.
After compiling, you should have a perfectly formatted bibliography added to the end of your document too:
Angrist, Joshua D., and Jörn-Steffen Pischke. 2015. Mastering ’Metrics: The Path from Cause to Effect. Princeton, NJ: Princeton University Press.
Rohrer, Julia M. 2018. “Thinking Clearly About Correlations and Causation: Graphical Causal Models for Observational Data.” Advances in Methods and Practices in Psychological Science 1 (1):
27–42. https://doi.org/10.1177/2515245917745629.
Other references
These websites have additional details and examples and practice tools:
Angrist, Joshua D., and Jörn-Steffen Pischke. 2015. Mastering ’Metrics: The Path from Cause to Effect. Princeton, NJ: Princeton University Press.
Rohrer, Julia M. 2018.
“Thinking Clearly about Correlations and Causation: Graphical Causal Models for Observational Data.” Advances in Methods and Practices in Psychological Science
1 (1): 27–42.
NCERT Solutions Maths for Class 8 - Teachoo (New NCERT Updated)
Updated according to new NCERT - 2023-24 NCERT Books.
NCERT Solutions of all exercise questions and examples have been solved for Class 8 Maths. Answers to all questions have been solved without missing a step, with detailed explanation of the concepts
as well.
In teachoo, each chapter is divided into - Serial Order Wise, and Concept Wise
In Serial Order Wise, the chapter is taught according to the NCERT Book - divided into exercises and examples.
If you have a doubt in a particular question, you can check it by clicking an exercise link under Serial Order Wise.
The Teachoo way of doing a chapter is from Concept Wise.
Each chapter is divided into concepts, first the concept is explained, and then the questions of that concept. Easiest question first, difficult questions later.
This is a great method to do the chapter when you're studying it for the first time, or revising.
In Class 8, the chapters and its topics include
1. Chapter 1 Rational Numbers - What are Natural Numbers, Whole Numbers, Integers, Rational numbers. How are they related. Properties of rational numbers like Closure, Commutativity, Associativity,
Distributivity. Additive Identity and Additive Inverse. Multiplicative Identity and Multiplicative Inverse. Rational numbers on the number line, Rational numbers between two rational numbers
2. Chapter 2 Linear Equations in One Variable - What are Equations, What are linear Equations in one variable, Solving Linear Equations, Making equations from Statements and then solving
3. Chapter 3 Understanding Quadrilaterals - Curves - Simple, Closed, Polygons - Convex, Concave, Diagonal of a polygon, Regular and Irregular polygons, Angle sum property of polygons, Sum of
exterior angle of a polygon, Interior and Exterior Angle of a Regular Polygon.Type of Quadrilaterals - Trapezium, Kite, Parallelogram, Rhombus, Rectangle, Square and their properties
4. Chapter 4 Practical Geometry - Using the properties studied in Chapter 3, we will learn how to construct Quadrilaterals - When four sides and one diagonal are given, When two diagonals and three
sides are given, When two adjacent sides and three angles are given, When three sides and two included angles are given, When other special properties are known
5. Chapter 5 Data Handling - We learn what is Data, how to represent data - pictograph, bar graph, double bar graph, How to draw a frequency distribution table, drawing a grouped frequency
distribution table, reading a histogram, drawing a histogram, Reading a Pie chart (Circle Graph), Drawing a circle graph, Chance, Probability, Finding Probability
6. Chapter 6 Squares and Square Roots - What are square numbers, Properties of square number, Unit digit of square numbers, Solving Patterns, Finding squares without multiplication (using properties
(a + b)^2, (a - b)^2), What is a Pythagorean triplet, What is square root, Symbol for Square root, Finding Square root through repeated subtraction, Finding Square root through prime
factorisation, Finding Square root by division method (of numbers, as well as decimal numbers)
7. Chapter 7 Cubes and Cube Roots - What is a cube, What are Hardy-Ramanujan Numbers, Unit digit of Cubes, Properties of cube numbers, Checking if number is a perfect cube, What is cube root, Symbol
for cube root, Finding cube root through prime factorization, Finding cube root through estimation
8. Chapter 8 Comparing Quantities - Recalling what are ratios and percentages, how to convert ratio into percentage, finding percentage, Finding increase or decrease percent, Finding discount,
discount percentage, Selling Price, Cost Price, Profit or Loss, Profit percentage or Loss percentage, What is simple interest, Compound Interest, Compound Interest Formula, Compound Interest when
interest is compounded half yearly, Compound Interest for Fraction years
9. Chapter 9 Algebraic Expressions and Identities - What is expression; Terms, Factors and Coefficients of Expressions, What are monomials, binomials, trinomials, polynomials; Like and unlike terms,
Addition, Subtraction,Multiplication and Division of Algebraic Expressions, Some Algebra Identities
10. Chapter 10 Visualising solid shapes - We learn what is a 3 Dimensional Shape, Front View, Top View, Side View of 3D Objects, How to read a map, and what is scale in a map, Faces, Edges and
Vertices of Polyhedrons, and Euler's Formula
11. Chapter 11 Mensuration - Perimeter and Area of different Shapes like Rectangle, Square, Triangle, Parallelogram, Circle, Finding Area of Trapezium, Finding Area of a General Quadrilateral, Area
of Rhombus, Finding Area of Polygons by dividing it into different parts, Solid Shapes, Lateral (or Curved) Surface Area, Total Surface Area, Volume of Cube, Cuboid and Cylinder
12. Chapter 12 Exponents and Powers - What is exponent and base of a number like 10^24, Numbers with negative exponents, Law of Exponents with positive and negative powers, Converting very small, and
very big numbers into standard form
13. Chapter 13 Direct and Inverse Proportions - What is Direct Proportion, examples of things in direct proportion, and some questions on it. Similarly, what is indirect proportion and solving
questions with indirect proportion
14. Chapter 14 Factorization - We factorise algebra expressions using different methods like - Method of Common Factors, By Regrouping Terms, Using Identities, Using Splitting the middle term method,
Dividing Algebra Expressions, Finding Errors in some questions
15. Chapter 15 Introduction to Graphs - We remember common graphs like Bar Graph, Double Bar Graph, Circle Graph (Pie Chart), Histogram, We study what is a line graph, How to read a line graph, then
what is linear graph, What is a point in the graph, coordinates of a point in the graph, and plotting a linear graph
16. Chapter 16 Playing with Numbers - In this chapter, we will do some questions where we need to find letters while doing addition or multiplication, and then we will study divisibility of numbers
and solve some questions
Note: When you click on a link in the chapter, the first question will be opened. To check other questions and concepts, you can click on next or check the arrowed list which has the list of
questions. Important Questions have also been marked
Click on a chapter below to get started.
10 decimal division examples teachers must use - Number Dyslexia
Last Updated on July 16, 2022 by Editorial Team
In this post, we will talk about decimal division by showing various examples for easy understanding. Let us learn a little about the topic first.
As the name suggests, the topic deals with the division operation of numbers in decimals. Things get a little confusing for kids when decimals are involved. Don’t worry, through this post you will
learn some good techniques to deal with decimals.
We also cover an easy approach to solving such operations, with appropriate examples. If practiced regularly, these techniques will help not only in school exams but also in competitive exams, where timing is crucial.
Dividend is whole number, divisor is in decimals
1. Example
375 divided by 1.5
A typical way of doing such examples is to multiply both the dividend and the divisor by the power of 10 that matches the number of digits after the divisor's decimal point. If there is one digit after the decimal point, we multiply both by 10. This way we get a whole number as the divisor.
Following this we get,
375 x 10 = 3750, 1.5 x 10 = 15
Next, we divide in the usual way: 3750 ÷ 15 = 250.
2. Example
484 divided by 0.22
Here, there are two digits after the decimal point in the divisor, so we take the second power of 10, i.e. 100. This way we will multiply the divisor and dividend by 100. We get:
484 x 100 = 48400, 0.22 x 100 = 22
Next, we divide in the usual way: 48400 ÷ 22 = 2200.
Dividend is in decimal, divisor is whole number
3. Example
In examples where the dividend is a decimal and the divisor is not, we use the familiar long division method. The approach is to divide while ignoring the decimal, then place the decimal point in the answer directly above the decimal point in the dividend.
Let us take the example of, 134.4 divided by 4
Ignoring the decimal, first we will divide 1344 by 4
Next, we will put the decimal point in the quotient directly above the dividend 134.4 . The answer is 33.6
4. Example
Let us use the example of 564.775 divided by 5
Again, we will ignore the decimal point and solve with long division method
After getting the quotient, we put the decimal point as told earlier; the answer is 112.955.
Both dividend & divisor is in decimals
5. Example
When both the dividend and the divisor are decimals, we focus on the divisor's decimal: we first make the divisor a whole number.
We can either multiply both the divisor and the dividend by the power of 10 matching the number of digits after the divisor's decimal point, or, equivalently, remove the decimal from the divisor and move the decimal point of the dividend to the right by as many digits as the divisor had after its decimal point.
After this step, we will use the same method we used for example 3 and example 4.
Let us solve 37.5 divided by 2.5
First, remove the decimal point of divisor and move the decimal of the dividend one step to the right side.
We get 375 divided by 25.
Next, we solve with the long division method: 375 ÷ 25 = 15.
6. Example
Let us solve 449.5 divided by 1.45
First, remove the decimal point of divisor and move the decimal of the dividend two steps to the right side. As there is only one number following the decimal number, we will put a 0 in the end.
We get the 44950 divided by 145
Next, we solve with the long division method: 44950 ÷ 145 = 310.
7. Example
Let us divide 56.55 by 2.5.
First we remove the decimal of the divisor and move the dividend's decimal point one place to the right, as discussed above, giving 565.5 divided by 25.
Here, we are still left with a decimal in the dividend, so we use the long division method from examples 3 and 4; the answer is 22.62.
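The scaling trick used in these examples can be sketched in Python (a hypothetical helper of our own, not from the article; here we scale both numbers all the way to whole numbers, a slight variation on example 7, and pass the inputs as strings so decimal digits can be counted exactly):

```python
def divide_decimals(dividend, divisor):
    """Divide two decimals by first scaling both into whole numbers.

    The arguments are strings (e.g. "37.5") so the number of decimal
    digits can be counted exactly, avoiding float rounding surprises.
    """
    def digits_after_point(s):
        return len(s.split(".")[1]) if "." in s else 0

    # Multiply both numbers by the power of 10 that clears both decimal points.
    shift = max(digits_after_point(dividend), digits_after_point(divisor))
    scale = 10 ** shift
    a = int(round(float(dividend) * scale))  # scaled dividend, now whole
    b = int(round(float(divisor) * scale))   # scaled divisor, now whole
    return a / b

print(divide_decimals("37.5", "2.5"))    # 15.0   (375 / 25)
print(divide_decimals("449.5", "1.45"))  # 310.0  (44950 / 145)
print(divide_decimals("56.55", "2.5"))   # 22.62  (5655 / 250)
```

Multiplying both numbers by the same power of 10 leaves the quotient unchanged, which is exactly why the pencil-and-paper trick works.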
Decimal to fraction
While dealing with decimals, it is always advisable to make the question as simple as possible, simplifying it in a way you're comfortable with. In the next section, we look at possible ways of doing that.
Do note that there is no fixed method: the approach differs for different types of decimal operations. The main idea is to follow an approach you're comfortable with. Let's have a look at some of them.
8. Example
This is probably the most common approach used to speed up decimal division. We write the division in fraction form and cancel out the decimals according to their positions, then keep simplifying by dividing both numerator and denominator by common factors.
Solve 94.5 divided by 1.5
Write down the operation in fraction form. It will look like 94.5/1.5.
Next we cancel out the decimal -> 945/15
Next, we simplify: 945/15 = 63.
9. Example
Sometimes we multiply or divide both the numerator and denominator by a number other than 10. The idea is to convert the fraction into a form we are comfortable with, as we have done a few times already.
Take this example: 6.3 divided by 1.4.
Ignore the decimal for this step, we get 63/14
One thing common between the numerator and denominator is that both are multiple of 7.
Next, we divide 14 by 2 to make it 7. Similarly, we divide 63 by 2 to balance it. All this can be written in fraction form as given in the image. Simplifying further, we get the answer, 4.5.
10. Example
Lets follow the same above approach to solve the case of ‘Dividend is whole number but divisor is in decimals’.
Solve 387 divided by 4.5
Instead of cancelling the decimal here by multiplying by 10, it's better to make the divisor a whole number by multiplying it by 2, which makes it 9. To balance it, we multiply the dividend by 2 as well.
The reason we choose to multiply by 2 is that the divisor becomes 9, and since 387 is divisible by 9, the calculation becomes a lot easier. If we chose the multiply-by-10 route, the fraction would become 3870/45, which requires a bit more calculation.
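The "balance both sides" shortcut from this example can be double-checked in a couple of lines of Python (for verification only):

```python
from fractions import Fraction

# Multiplying both 387 and 4.5 by 2 gives 774/9, and 774 is divisible by 9,
# so the answer falls out immediately.
assert Fraction("387") / Fraction("4.5") == Fraction(774, 9) == 86
print(Fraction(774, 9))  # 86
```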
An engineer, maths expert, online tutor, and animal rights activist. In more than five years of online teaching, I have worked closely with many students struggling with dyscalculia and dyslexia. Over the years, I learned that not much effort is being put into awareness of this learning disorder; students with dyscalculia are often misunderstood as having a simple fear of math. It is still an under-researched and understudied subject. I am also the founder of Smartynote - 'The notepad app for dyslexia', | {"url":"https://numberdyslexia.com/10-decimal-division-examples-teachers-must-use/","timestamp":"2024-11-05T07:20:48Z","content_type":"text/html","content_length":"55656","record_id":"<urn:uuid:c27f68c3-23d6-49fc-867d-695ecd076e28>","cc-path":"CC-MAIN-2024-46/segments/1730477027871.46/warc/CC-MAIN-20241105052136-20241105082136-00396.warc.gz"} |
The Stacks project
Remark 42.45.1. In the discussion above we have defined the components of the Chern character $ch_p(\mathcal{E}) \in A^p(X) \otimes \mathbf{Q}$ of $\mathcal{E}$ even if the rank of $\mathcal{E}$ is not constant. See Remarks 42.38.10 and 42.43.5. Thus the full Chern character of $\mathcal{E}$ is an element of $\prod_{p \geq 0} (A^p(X) \otimes \mathbf{Q})$. If $X$ is quasi-compact and $\dim(X) < \infty$ (usual dimension), then one can show using Lemma 42.34.6 and the splitting principle that $ch(\mathcal{E}) \in A^*(X) \otimes \mathbf{Q}$.
| {"url":"https://stacks.math.columbia.edu/tag/0ESX","timestamp":"2024-11-08T07:52:33Z","content_type":"text/html","content_length":"14015","record_id":"<urn:uuid:86c485fc-26f1-41b8-9450-ca09dba2e22a>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00593.warc.gz"} |
: Uncertainty Quantification for Inverse Problems
Workshop at DTU: Uncertainty Quantification for Inverse Problems
December 17 and 18, 2018
Two days of seminars about uncertainty quantification (UQ) and its use in the treatment of inverse problems.
The course took place at DTU, the Technical University of Denmark, on the main campus in Lyngby located north of Copenhagen.
The goal of this workshop is to give the participants an introduction to the central ideas and computational methods for uncertainty quantification, with a focus on its application to inverse problems, and with illustrations from applications. The workshop is aimed at newcomers to the field, but more experienced users will also benefit from the presentations.
The field of inverse problems is fertile ground for the development of computational uncertainty quantification methods. Inverse problems involve noisy measurements, leading naturally to statistical
estimation problems, and due to their size and ill conditioning they are computationally challenging.
Regularization is a technique that provides stability for inverse problems, and in the Bayesian setting this is synonymous with the choice of the prior probability density function. Once a prior is
chosen, the posterior probability density function results, and it is the solution of the inverse problem in the Bayesian setting.
The posterior maximizer - known as the MAP estimator - provides a stable estimate of the unknown parameters. However, uncertainty quantification requires that we extract more information from the
posterior, which often requires sampling. The posterior density functions that arise in typical inverse problems are high-dimensional, and are often non-Gaussian, making the corresponding sampling
problems challenging.
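In standard Bayesian notation (a summary for orientation, not taken from the workshop materials): writing $x$ for the unknown parameters and $y$ for the noisy data, Bayes' rule gives

```latex
\underbrace{\pi(x \mid y)}_{\text{posterior}}
  \;\propto\;
\underbrace{\pi(y \mid x)}_{\text{likelihood}}\;
\underbrace{\pi(x)}_{\text{prior}},
\qquad
x_{\mathrm{MAP}} \;=\; \operatorname*{arg\,max}_{x}\; \pi(x \mid y).
```

Uncertainty quantification then asks for more than this single maximizer (e.g. posterior variances or credible intervals), which is why sampling from $\pi(x \mid y)$ is needed.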
The participants are expected to be familiar with inverse problems, basic statistics, and numerical computations.
The presentations on the first day are based on the book: Johnathan M. Bardsley, Computational Uncertainty Quantification for Inverse Problems and the lecture notes: Antti Salonen, Heikki Haario, and
Marko Laine, Statistical Analysis in Modeling.
Monday, Dec. 17
• 9:00-11:30 Johnathan M. Bardsley - Computational UQ for Inverse Problems: Part 1, Part 2.
• Lunch
• 13:00-14:00 Heikki Haario, Lappeenranta University of Technology, Finland: Parameter Estimation, Monte Carlo Methods, Parametric Uncertainty Quantification (UQ)
• 14:00-15:00 Marko Laine, Finnish Meteorological Institute, Helsinki, Finland: Efficient Parameter Estimation with the MCMC Toolbox
• 15:15 - 16:15 Lassi Roininen, Lappeenranta University of Technology, Finland: Sparse Non-Stationary Hierarchical Priors for Bayesian Inversion
• 16:30 - 17:00 Maksim Mazuryn, DTU Compute: Inverse random medium scattering in 2D
Tuesday, Dec. 18 | {"url":"https://people.compute.dtu.dk/pcha/UQ/UQforIP.html","timestamp":"2024-11-11T09:27:29Z","content_type":"text/html","content_length":"5081","record_id":"<urn:uuid:a759d122-cb4e-4410-a73b-ce19f1131e35>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00274.warc.gz"} |
Precalculus (6th Edition) Blitzer Chapter 8 - Section 8.3 - Matrix Operations and Their Applications - Exercise Set - Page 917 1
a) The order of the matrix is $2\times 3$. b) The element ${{a}_{32}}$ cannot be identified because the matrix has only two rows; the notation ${{a}_{23}}$, in contrast, refers to the element $-1$.
Work Step by Step
(a) Since this matrix has two rows and three columns, the order of the matrix $A$ is $2\times 3$. (b) The notation ${{a}_{23}}$ stands for the element in the second row and third column of the given matrix; therefore ${{a}_{23}}=-1$. Since there are only two rows, the element ${{a}_{32}}$, which would lie in the third row, can never be identified. Hence, the identification of the element ${{a}_{32}}$ is not possible, whereas ${{a}_{23}}=-1$. | {"url":"https://www.gradesaver.com/textbooks/math/precalculus/precalculus-6th-edition-blitzer/chapter-8-section-8-3-matrix-operations-and-their-applications-exercise-set-page-917/1","timestamp":"2024-11-04T11:04:21Z","content_type":"text/html","content_length":"72625","record_id":"<urn:uuid:2662e66c-bd26-438f-b7fe-cd55c03ee277>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/WARC/CC-MAIN-20241104100555-20241104130555-00722.warc.gz"} |
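The same indexing argument, spelled out in Python (the matrix entries other than ${{a}_{23}}=-1$ are made up for illustration):

```python
# A hypothetical 2x3 matrix; only the entry a23 = -1 comes from the problem.
A = [[5, 2, 0],
     [7, 4, -1]]

a23 = A[2 - 1][3 - 1]      # second row, third column -> -1
try:
    a32 = A[3 - 1][2 - 1]  # a 2x3 matrix has no third row
except IndexError:
    a32 = None             # so a32 cannot be identified

print(a23, a32)  # -1 None
```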
Computes the cosine similarity between y_true & y_pred. — loss_cosine_similarity
Computes the cosine similarity between y_true & y_pred.
loss <- -sum(l2_norm(y_true) * l2_norm(y_pred))
Note that it is a number between -1 and 1. When it is a negative number between -1 and 0, 0 indicates orthogonality and values closer to -1 indicate greater similarity. This makes it usable as a loss
function in a setting where you try to maximize the proximity between predictions and targets. If either y_true or y_pred is a zero vector, cosine similarity will be 0 regardless of the proximity
between predictions and targets.
loss_cosine_similarity(
  y_true,
  y_pred,
  ...,
  axis = -1L,
  reduction = "sum_over_batch_size",
  name = "cosine_similarity",
  dtype = NULL
)
y_true: Tensor of true targets.
y_pred: Tensor of predicted targets.
axis: The axis along which the cosine similarity is computed (the features axis). Defaults to -1.
...: For forward/backward compatibility.
reduction: Type of reduction to apply to the loss. In almost all cases this should be "sum_over_batch_size". Supported options are "sum", "sum_over_batch_size" or NULL.
name: Optional name for the loss instance.
dtype: The dtype of the loss's computations. Defaults to NULL, which means using config_floatx(). config_floatx() is "float32" unless set to a different value (via config_set_floatx()). If a keras$DTypePolicy is provided, then its compute_dtype will be utilized.
y_true <- rbind(c(0., 1.), c(1., 1.), c(1., 1.))
y_pred <- rbind(c(1., 0.), c(1., 1.), c(-1., -1.))
loss <- loss_cosine_similarity(y_true, y_pred, axis=-1) | {"url":"https://keras3.posit.co/reference/loss_cosine_similarity.html","timestamp":"2024-11-01T22:24:55Z","content_type":"text/html","content_length":"18979","record_id":"<urn:uuid:7bd771df-0344-4f0d-8fe2-3621eb4e1a7f>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00873.warc.gz"} |
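A plain-NumPy sketch of the quoted formula (Python here for illustration only; this is not the keras3 implementation, and it handles a single pair of vectors rather than batches):

```python
import numpy as np

def l2_normalize(v):
    return v / np.linalg.norm(v)

def cosine_similarity_loss(y_true, y_pred):
    # loss = -sum(l2_norm(y_true) * l2_norm(y_pred))
    return -np.sum(l2_normalize(y_true) * l2_normalize(y_pred))

print(cosine_similarity_loss(np.array([0.0, 1.0]), np.array([1.0, 0.0])))  # orthogonal -> 0
print(cosine_similarity_loss(np.array([1.0, 1.0]), np.array([1.0, 1.0])))  # identical -> close to -1
```

The sign flip is what makes minimizing the loss equivalent to maximizing the cosine similarity between predictions and targets.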
Solver at a glance
Additional hints:
The 'Simplify' operation will take any single expression and simplify it according to standard College Algebra rules. The 'Expand' operation requires entry of an 'expandable' expression. This means that the entered expression must contain a power or a product of terms. The 'Factor' operation requires entry of a factorable expression. This means that the entered expression must be a sum of terms. It is important to note that equality or inequality symbols (=, <, >, <=, >=) cannot be used with the 'Simplify' operation. The reason for this is that equations and inequalities must be solved, not simplified. The
operations can be used with equations, inequalities, and systems. This means that an equality or inequality symbol (=, <, >, <=, >=) must be present in your entry. Solver can solve and graph a large
variety of single equations and inequalities (linear, quadratic, special cases of higher order equations, rational, radical etc.). Systems of equations are restricted to linear ones. However,
non-linear systems (both of equations and inequalities) can be graphed. The
operations can be used for a large variety of single equations and inequalities (linear, quadratic, special cases of higher order equations, rational, radical, etc.). However, systems of equations
are restricted to linear systems. Finally, systems of equations and inequalities may always be solved graphically, even if non-linear. When graphing multiple items, you can enter comma-separated
equations and inequalities. However you can never have expressions in this mix. Multiple expression entry is appropriate only on the
or the
page. Use comma-separation when entering multiple equations, expressions and inequalities (e.g.,
). Please note, expressions cannot be graphed. Furthermore, multiple expression entry is only appropriate on the
pages. Note that if you are required to find the LCD (Least Common Denominator), you can either enter the sum of fractions on the Simplify page and identify the denominator (before it gets reduced),
or you can enter the denominators only (using comma-separation) on the LCM page. If you need to graph a function (e.g.,
) rewrite it as an equation (e.g.,
), since the general functional notation is not supported. Simply replace the equation's left-hand side with a variable that does not appear in the function definition.
Problem entry at a glance
Additional hints:
More than 90% of problems found in a typical algebra textbook can be entered via the keyboard layout displayed above. If you do need to enter a different variable, a function or a special constant
(such as pi), click on one of these buttons: , . As soon as you select a different variable from the alternate keyboard, the keyboard reverts to the primary one. key remembers the last used variable
from the alternate keyboard. This speeds up problem entry by allowing you to stay on the main keyboard screen. and keys allow you to move throughout the expression, one character at a time. For
faster (but less precise) movement, you can position the cursor with a press of the finger. These keys are also used to 'exit' exponents, denominators and parentheses. needs to be used to separate
equations or inequalities when a system is being solved or graphed. It also needs to be used to separate multiple expressions in GCF and LCM problems. Fractions can be entered in two different ways.
If you have not yet begun your entry, your best option is to use the fraction template , and then "fill in" both the numerator and denominator. If, in contrast, you have already started the fraction
(e.g. in
you want
to be the fraction's numerator), the easiest way to complete it is to use the division symbol . If your numerator or denominator is complex, we highly recommend using the fraction template
(otherwise, you need to enclose the numerator and/or denominator in parentheses so that the complex expression is interpreted as a single expression in the numerator or denominator). Always be aware
of which part of the expression your cursor is in. For example, if you type in
to raise
to the power of 2 (
), make sure to 'exit' the exponent before adding another term (e.g.
). Unless the exponent is exited via the key, your entry will actually be
. When you click on the parentheses key , both left and right parentheses are created and cursor is placed inside them. Once you are finished typing the parenthesized expression, use the to exit the
parentheses. The same procedure applies to the absolute value key . In order to use the subscript key , you already need to have the variable (without its subscript) typed in. For example, to type
, first type 'x', then click , and then enter '5' in the provided subscript. The key can then be used to 'exit' the subscript position. The square root is located on the primary keyboard layout. If
you need the cube root , please select the function keyboard layout. For higher radical indexes replace the root index "3" by the desired integer index. If your device is small (3'' to 4''
diagonally), landscape mode will most likely enable you to enter math expressions with more precision (because of wider keys). For 'tall' expressions (e.g., complex fraction), the portrait mode will
provide a better viewing experience.
Last, but not least, be aware that you are using a web site, not an app. The keyboard response will not be as instantaneous as it would be with a native keyboard. While the delay should not be
significant, it will interfere with speed-typing (which you probably want to avoid anyway, while trying to enter a complex math expression).
Also, please be aware of the fact that fast typing on your Android device may be interpreted as a 'double-tap', which will unnecessarily zoom in the keyboard.
Steps at a glance
Additional hints:
You can obtain an explanation for any step by clicking on it. Please note that you cannot click on the very first expression, which is the original problem. Sometimes the solution process is quite
lengthy and will not fit on your screen. If this is the case, drag the steps up to reach the bottom of the solution process, where you will find the final result. If you are not satisfied with the
number of solution steps displayed (either too many steps or too few), click on the
button to adjust the number of steps to be shown. On the same screen, you can select integer or real arithmetics to be used during the solution process (e.g.,
). It is important to understand that if integer arithmetics is chosen, then
decimal numbers will be converted to integer fractions as part of the solution process. This may produce a solution that is long and somewhat confusing. Pressing the
button will take you back to the problem entry screen, where you can edit and re-solve the existing problem. If you need to enter an entirely different problem or change the operation (e.g., from
), press the Solver's
button. This will take you back to the main operations menu.
Explanations at a glance
Additional hints:
Quite often, several transformations will be performed within the same step. Use the explanation navigation buttons to step through all of them. When the explanation window is too tall, you may not
be able to see the explanation navigation buttons. Simply drag the explanation up, until navigation comes into view. Explanations are typically best viewed in portrait mode. You can easily adjust the
font used for explanations. Simply open up the
page when the current font is not appropriate for your device (device physical dimensions are simply too diverse to allow us to pick 'the best font for everyone'). | {"url":"http://m.algebra-equation.com/help.php","timestamp":"2024-11-15T01:16:34Z","content_type":"text/html","content_length":"27440","record_id":"<urn:uuid:c9a1f1b8-70d7-4b50-b54c-afde4a8850c2>","cc-path":"CC-MAIN-2024-46/segments/1730477397531.96/warc/CC-MAIN-20241114225955-20241115015955-00120.warc.gz"} |
Introduction to AdaBoost Algorithm with Python
In the last article, we saw the Gradient Boosting Machine and how it works: it corrects the errors of previous models by building a new model on those errors. The article on the Gradient Boosting Machine is available at the following link: Gradient Boosting Machine for Data Scientists
Machine learning algorithms have revolutionized the way we tackle complex problems in various domains. From predicting stock prices to diagnosing diseases, these algorithms play a crucial role. In
this article, we delve into the realm of regression using the powerful scikit-learn library in Python. While scikit-learn offers an extensive collection of pre-built models, understanding how these
algorithms work from scratch is paramount for any aspiring data scientist or machine learning enthusiast. Among these algorithms, regression stands out as a fundamental technique for predicting
continuous outcomes based on input features.
One such regression technique is the Adaptive Boosting algorithm, commonly known as AdaBoost. AdaBoost is a boosting technique that iteratively corrects the errors of previous models by assigning
higher weights to misclassified data points. By sequentially training weak learners on these weighted data points, AdaBoost constructs a strong ensemble model capable of making accurate predictions.
In this article, we not only explore the theoretical underpinnings of AdaBoost but also provide a hands-on implementation using Python’s scikit-learn library. Additionally, we discuss the importance
of hyperparameter tuning in optimizing the performance of AdaBoost, demonstrating how variations such as the base estimator and learning rate impact model accuracy. Through this comprehensive
exploration, readers will gain a deeper understanding of AdaBoost and its practical applications in real-world scenarios.
Note: If you are more interested in learning concepts in an Audio-Visual format, We have this entire article explained in the video below. If not, you may continue reading.
AdaBoost Algorithm
In the case of AdaBoost, higher weights are assigned to the data points which are misclassified or incorrectly predicted by the previous model. This means each successive model will get a weighted version of the data.
Let's understand how this is done using an example.
Say, this is my complete data. Here, I have the blue positives and red negatives. Now the first step is to build a model to classify this data.
Suppose the first model gives the following result, where it is able to classify two blue points on the left side and all red points correctly. But the model also misclassifies the three blue points.
Model 2
Now, these misclassified data points will be given higher weight. So these three blue positive points will be given higher weights in the next iteration. For representation, the points with higher weight are shown bigger than the others in the image. Giving higher weights to these points means the model is going to focus more on these values. Now we will build a new model.
In the second model, you will see the model boundary has been shifted to the right side in order to correctly classify the higher-weighted points. Still, it's not a perfect model: you will notice three red negatives are misclassified by model 2.
Model 3
Now, these misclassified red points will get a higher weight. Again we will build another model and do the predictions. The task of the third model is to focus on these three red negative points.
So the decision boundary will be something as shown here.
This new model again incorrectly predicted some data points. At this point, we can say all these individual models are not strong enough to classify the points correctly and are often called weak learners.
And guess what our next step should be. Well, we have to aggregate these models. One of the ways could be taking the weighted average of the individual weak learners. So our final model will be the weighted mean of individual models.
After multiple iterations, we will be able to create the right decision boundary with the help of all the previous weak learners. As you can see the final model is able to classify all the points
correctly. This final model is known as a strong learner.
Let’s once again see all the steps taken in AdaBoost.
1. Build a model and make predictions.
2. Assign higher weights to the misclassified points.
3. Build the next model.
4. Repeat steps 2 and 3.
5. Make a final model using the weighted average of individual models.
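The reweighting loop in these steps can be sketched from scratch for a tiny 1-D dataset. This is illustrative only: the article's own code appears further below, and the exponential weight update here is the standard AdaBoost formula rather than anything shown in the article.

```python
import math

def adaboost(xs, ys, n_rounds=5):
    """Tiny AdaBoost with 1-D threshold stumps; labels in ys must be +1 or -1."""
    n = len(xs)
    w = [1.0 / n] * n                          # start with uniform weights
    ensemble = []                              # (alpha, threshold, sign) triples
    for _ in range(n_rounds):
        # Step 1: fit a weak learner = the stump with the lowest weighted error.
        best = None
        for t in sorted(xs):
            for s in (1, -1):
                pred = [s if x > t else -s for x in xs]
                err = sum(wi for wi, p, y in zip(w, pred, ys) if p != y)
                if best is None or err < best[0]:
                    best = (err, t, s, pred)
        err, t, s, pred = best
        alpha = 0.5 * math.log((1 - err) / max(err, 1e-10))
        ensemble.append((alpha, t, s))
        # Step 2: raise the weights of misclassified points, lower the rest.
        w = [wi * math.exp(-alpha * y * p) for wi, y, p in zip(w, ys, pred)]
        total = sum(w)
        w = [wi / total for wi in w]           # renormalise
    return ensemble

def predict(ensemble, x):
    # Step 5: weighted vote of the individual weak learners.
    score = sum(a * (s if x > t else -s) for a, t, s in ensemble)
    return 1 if score > 0 else -1

ens = adaboost([1, 2, 3, 4], [-1, -1, 1, 1], n_rounds=3)
print([predict(ens, x) for x in (1, 2, 3, 4)])  # [-1, -1, 1, 1]
```

Each round's `alpha` is larger for more accurate stumps, so better weak learners get a bigger say in the final vote.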
Now we will see the implementation of the AdaBoost Algorithm on the Titanic dataset.
First, import the required libraries pandas and NumPy and read the data from a CSV file in a pandas data frame.
Here are the first few rows of the data. Here we are using pre-processed data, so we can see it is a classification problem, with those who survived classified as 1 and those who did not survive labeled as 0.
In the next step, we will separate the independent and dependent variables, saving the features in x and the target variable in y. Later, divide the data into train and test sets using train_test_split from sklearn, as shown below.
Here stratify is set to y to make sure that the proportion of both classes remains the same in both the train and test data. Say, if you have 60% of class 1 and 40% of class 0 in the train data, then you would have the same distribution in the test data.
Now we will import the AdaBoostClassifier from sklearn.ensemble and create an instance of the same. We have set the random state value to 96, to reproduce the result. Then we used train_x and train_y
to train our model.
Now let's check the score on the training data; it comes out to around 0.84. Doing the same for the test data gives about 0.79.
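The article's code is shown only as screenshots, so here is a reconstruction of the workflow on synthetic data (the Titanic CSV and its column names are not reproduced; the feature matrix below is a stand-in):

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the pre-processed Titanic table used in the article.
rng = np.random.RandomState(96)
x = rng.rand(400, 4)
y = (x[:, 0] + x[:, 1] > 1.0).astype(int)   # binary target, like survived 0/1

# stratify=y keeps the class proportions the same in the train and test splits.
train_x, test_x, train_y, test_y = train_test_split(
    x, y, stratify=y, random_state=96)

model = AdaBoostClassifier(random_state=96)
model.fit(train_x, train_y)
print(model.score(train_x, train_y), model.score(test_x, test_y))
```

On the real Titanic data the article reports roughly 0.84 train and 0.79 test accuracy; the synthetic numbers here will differ.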
Hyper-parameter Tuning
Let’s have a look at the hyper-parameters of the AdaBoost model. Hyper-parameters are the values that we give to a model before we start the modeling process. Let’s see all of them.
• base_estimator: The model to the ensemble, the default is a decision tree.
• n_estimators: Number of models to be built.
• learning_rate: shrinks the contribution of each classifier by this value.
• random_state: The random number seed, so that the same random numbers generated every time.
To experiment with hyper-parameters, this time we have set the base_estimator to RandomForestClassifier. We are using 100 estimators and a learning_rate of 0.01. Now we train the model using train_x and train_y as previously, and then check the score of the new model.
On the basis of the performance, you can change the hyper-parameters accordingly.
AdaBoost, an adaptive boosting algorithm, enhances model performance by assigning higher weights to misclassified data points in each iteration. This iterative process focuses on improving the
accuracy of weak learners, ultimately creating a strong classifier through ensemble learning. Through a Python implementation utilizing the AdaBoostClassifier, we observed its effectiveness in binary
classification tasks, achieving a training score of approximately 0.84 and a test score of around 0.79. Moreover, hyper-parameter tuning, such as adjusting the base estimator, number of estimators,
and learning rate, further offers avenues for optimizing model performance.
Additionally, exploring alternatives like RandomForestClassifier as the base estimator showcases the versatility of AdaBoost. Overall, this article provides a comprehensive understanding of AdaBoost,
its implementation in Python, and the significance of ensemble learning techniques in enhancing classification accuracy. Through integrating existing boosting techniques and innovative approaches
like deep learning, the realm of ensemble learning continues to evolve, promising even more robust solutions for diverse machine learning tasks.
You can also enroll in our free Python Course Today!
Frequently Asked Questions
Q1. What Is the AdaBoost Algorithm?
A. The AdaBoost algorithm, short for Adaptive Boosting, is an ensemble method utilized for classification tasks. It works by sequentially training a series of weak learners on the training set, where
each subsequent learner focuses more on the data points that were misclassified by the previous ones. By adjusting the weights of these data points, AdaBoost aims to minimize the overall error rate.
Eventually, the predictions of all weak learners are combined through a weighted sum to form the final prediction. Unlike methods like bagging or random forest, AdaBoost prioritizes the most
challenging instances, enhancing its performance metrics and making it a powerful tool alongside neural networks and xgboost.
Q2. How Adaboost Classifier Works?
A. For beginners looking to understand the implementation of AdaBoost, importing necessary libraries like numpy and pandas is the first step. Next, utilizing x_train and y_train data, AdaBoost
sequentially trains a series of weak classifiers, often simple models like logistic regression, to optimize performance on the given dataset, such as iris. By adjusting weights based on
misclassifications, AdaBoost mitigates overfitting while improving accuracy. Through this iterative process, AdaBoost combines the predictions of multiple weak classifiers to form a robust ensemble
model, showcasing its efficacy in enhancing machine learning models for various classification tasks.
Q3. What is AutoML in Machine Learning?
A. AutoML, short for Automated Machine Learning, streamlines the model development process by automating various stages, including importing essential libraries like numpy and pandas, initializing
diverse algorithms such as SVM and DecisionTreeClassifier, and implementing techniques like logistic regression. Through sample weights and optimization, AutoML fine-tunes model parameters to enhance
performance on datasets. Leveraging majority vote, AutoML combines predictions from multiple models for improved accuracy. Additionally, AutoML facilitates seamless integration with GitHub
repositories and offers tutorials for beginners to swiftly navigate its functionalities, making it a comprehensive solution for automating machine learning workflows, particularly evident in tasks
like implementing AdaBoost and evaluating models on x_test data.
| {"url":"https://www.analyticsvidhya.com/blog/2021/03/introduction-to-adaboost-algorithm-with-python/?utm_source=reading_list&utm_medium=https://www.analyticsvidhya.com/blog/2015/09/hypothesis-testing-explained/","timestamp":"2024-11-03T15:24:25Z","content_type":"text/html","content_length":"370115","record_id":"<urn:uuid:8187b655-48ec-4088-866a-007ed4742277>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00848.warc.gz"} |
How do you determine if 3x = |y| is an even or odd function? | Socratic
How do you determine if #3x = |y|# is an even or odd function?
1 Answer
$3x = |y|$
substituting $-y$ for $y$:
$3x = |\pm y|$
$3x = |y|$
so it's even.
In general:
A function is even $\rightarrow$ it is symmetrical about the $y$ axis
and an odd function $\rightarrow$ it is symmetrical about the origin
but if you want to check through an equation, you just substitute $-x$ for each $x$
A function is even
if $f \left(x\right) = f \left(- x\right)$
like $y = {x}^{2}$: if you substitute $-x$ for each $x$ you get
$y = {\left(- x\right)}^{2} = {x}^{2} = f \left(x\right)$ so it's even
and the same goes for $y = |x|$
And it will be odd if $f \left(x\right) = - f \left(- x\right)$
ex: $y = x$
if you substitute $-x$ for $x$ you get
$y = - x = - f \left(x\right)$
and it will be neither even nor odd if it gives You something else like $y = 3 x + 2$
if you substitute $-x$ for $x$
you get:
$y = - 3 x + 2 \ne \pm f \left(x\right)$
| {"url":"https://api-project-1022638073839.appspot.com/questions/how-do-you-determine-if-3x-y-is-an-even-or-odd-function#590837","timestamp":"2024-11-05T21:32:11Z","content_type":"text/html","content_length":"34739","record_id":"<urn:uuid:4570d44d-8f29-44f5-94a4-945223823a01>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00898.warc.gz"} |
B June 2017 clipping volume report
I prepared this progress report in June 2017 to share information with turf managers who shared clipping volume data with me over the past few years.^34
I now have data on clipping volume from putting greens on nine golf courses. For many of these courses there are data from multiple (or all) greens at each mowing; for some of the courses there are
data for a selection of indicator greens, or for a single green. This report gives a summary of the data collected so far, with some advice on how the data can be understood and used, and with an
explanation of what I’m trying to learn from these data.
The previous page shows monthly totals for the nine locations.
B.1 What’s the use of clipping volume?
In Short Grammar of Greenkeeping I wrote that a way to define greenkeeping is to say it is “managing the growth rate of the grass to create the desired playing surface for golf.” The typical way to
assess growth rate is to observe the grass, and also to observe the quantity of clippings. Because clippings from putting greens are collected in mower baskets, it is a simple procedure to measure
the volume of clippings as the mower baskets are emptied.
I can think of many ways that these data might be useful, but the volume data need to be put into context, or have some baseline comparisons, before one can be confident in changing maintenance
practices based on them. Here’s what I’m usually thinking of when I consider clipping volume data:
growth rate tracking
One might have a target growth rate for certain seasons, for cumulative amounts to match traffic, or target rates for special events. Big tournaments might have lower growth rates. Busy seasons
of golfing traffic might require faster growth rates. One can compare from course to course, or one can compare within the same course from year to year or season to season. I have been using
units of liters (L) of clippings per 100 m^2 of green area. There is plenty of variation, and that is normal. To pick the center of the variation, I can describe the median values. When putting
greens are mown, the growth rate is such that 50% of the time the volume is from 0.67 to 2.2, with a median of 1.2. That’s for all seasons and doesn’t correct for number of days since the last
mowing. That’s just what is in the baskets, and the histogram in Figure 1 shows what to expect. If you are getting less than 0.67, then that is in the lower 25%. That’s not much growth. And you
might sometimes get from 2.2 up to about 10, but that is rather high. This is not to say what any course should be, but is an attempt to put this into context.
checking consistency or variability
Are all greens growing at the same rate? Are all the mowers cutting the same amount of grass? Is anything changing across the course as a whole, or on a subset of greens? Having a number for the
amount of leaves being cut off the greens helps to answer those questions.
adjusting inputs based on growth
It seems reasonable in the day to day operations of a golf course to consider making adjustments in the N supply, mowing height, topdressing rate, growth regulators, rolling frequency, irrigation
rate, etc., based on how much the grass is growing and how much the actual growth rate deviates from the desired growth rate.
estimating the real growth
Volume is measured because it is easy. What can be even more useful is the dry weight of the clippings harvested, but that is not practical, for two reasons. First, one needs to separate any sand
collected in the clippings from the clippings themselves. Second, one must dry the clippings in an oven. But one can make an estimate of the dry weight from the clipping volume.
estimating nutrient use and harvest
Now we move on to topics that are of particular interest to me, perhaps more from a research perspective than from a turfgrass management one. But if we have an estimate of the dry weight harvest,
then we can estimate nutrient removal through mowing on a daily, weekly, monthly, and annual basis. If we know nutrient use, then we should be able to match that with nutrient supply. We can also
compare locations. For example, if location X used this much N and K and had a known growth rate, then location Y turf management can be adjusted based on how location X was managed.
topdressing and organic matter management
Wouldn’t it be nice to never topdress? To never core aerify? If the surface conditions were perfect, and then the growth rate was 0, one would never have to topdress or aerify. I am interested
not so much in eliminating topdressing or aerifying, but in trying to understand how this work can be done as little, or as infrequently, as possible. It seems likely that by knowing the growth rate, and then comparing what is done at locations X and Y, one can get closer to the objective of doing that work as infrequently as possible.
checking temperature and growth potential
When I first started looking into this, it was largely on this topic. I was less interested in the actual amount of growth, and was more interested in how it was related to temperature or
temperature-based growth potential. See, for example, this report from 2014: http://www.seminar.asianturfgrass.com/20140612_clipping_yield.html. I'm less interested in that now; more about that in section B.9 below.
In some of these areas, I have a pretty good idea of what the numbers mean. In others, I don’t, but I want to learn more and see if it is possible to make use of the clipping volume data to get to an
answer. Here’s a summary of where I’m at so far.
B.2 Growth rate tracking
I really started to get interested in these clipping volume data while working with Andrew McDaniel at Keya GC during the KBC Augusta tournament. I've written about this here: http://www.blog.asianturfgrass.com/2015/09/tournament-week-clipping-volume.html. To produce the desired putting surfaces for the tournament, it was really helpful to check the clipping volume data. As I
looked into this more, I saw how easy it was to collect the data, and I realized that there were a lot of uses for these data.
The second page of this report shows total amounts by month for different grasses at different locations. From a look at that, you can see what the normal amounts of clippings tend to be. Figure 1
shows the clippings you can expect on a single day. But of course that is bound to vary by season. I think it makes sense to be familiar with the normal ranges and fluctuations at your course, and
also to know what is normal in the big picture of all courses.
Figure 2 shows the cool-season and warm-season monthly totals. Remember, this is for all months, so it includes some months when the grass is only mown a few times. But it generally represents what
is typical over a long period of time.
B.3 Checking consistency or variability
This is something I am really interested in. I know some places just test one green and use that to track some data. I find it most interesting when all the greens are measured, because one can then check that all mowers are cutting the same, evaluate the microenvironments on the course that may influence growth, and check the consistency of the measurements.
I’ve spent a lot of time working on this, and don’t have a perfect answer yet. If you are really interested in this, please have a look at http://www.blog.asianturfgrass.com/2016/08/clipping-volume-variation-from-green-to-green.html and especially check the charts I showed in that post.
The best I can come up with now is this. When measuring all the greens (or multiple greens), the median coefficient of variation (CV) is about 0.33. The coefficient of variation is the standard
deviation of all the measurements, divided by the mean (average) of all the measurements. That would be for a single day. If the CV is less than 0.33, you can think of the greens as being more
consistent than normal. If the CV is more than 0.33, and as it moves above 0.4, then you can think the clipping volume is less consistent than normal. The CV is not exactly like going up or down by a
fixed percentage, but I think it is reasonable to look at it this way. If you measure say 18 greens, or 20 greens, or even 3 greens, and want to check the consistency, but don’t have an easy way to
calculate the CV, just take the average (mean) and then go down 33% and up 33%. If all the clipping volumes are within those boundaries, then the greens are more consistent than usual. If one or more
greens are outside those boundaries, then the greens are less consistent than normal.
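The 33% shortcut described above is easy to script. Here is a minimal sketch (the volumes are invented for illustration; only the 0.33 threshold, the median CV from the measurements, comes from the text):

```python
import statistics

# Hypothetical clipping volumes (L per 100 m^2) from one day's mowing of
# nine greens. Illustrative numbers only, not real course data.
volumes = [1.0, 1.2, 0.9, 1.4, 1.1, 1.3, 1.0, 1.2, 1.1]

mean = statistics.mean(volumes)
cv = statistics.stdev(volumes) / mean   # coefficient of variation

# The shortcut from the text: go down 33% and up 33% from the mean and
# look for any greens outside those boundaries.
lower, upper = mean * (1 - 0.33), mean * (1 + 0.33)
outside = [v for v in volumes if not (lower <= v <= upper)]

print(f"CV = {cv:.2f}; greens outside the 33% bounds: {len(outside)}")
```

Here the CV comes out around 0.14, well under the 0.33 median, so this hypothetical set of greens would count as more consistent than normal.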
I’m still working on this, so if you have suggestions, I’ll be happy to hear them. I prefer the coefficient of variation to describe the variability in multiple numbers. I looked up the standard rule for identifying an outlier, which is based on 1.5 times the interquartile range (the distance from the 25th percentile to the 75th percentile). 50% of the measurements will be in that range, and one can take that range, multiply it by 1.5, add the result to the 75th percentile and subtract it from the 25th percentile, and then check whether any measurements fall outside those bounds.
I prefer the CV to this outlier identification approach because I think it more accurately represents what we would be looking at. Here’s a quick example with green speed to illustrate this.
Let’s say we measure five greens with a stimpmeter. The greens have a speed of 9, 9, 10, 11, and 11 feet. The average (mean) is 10 feet. The standard deviation of those greens is 1 foot. That is, the
average green is 1 foot from the mean. The CV is then 0.1.
We could also have five greens with a speed of 9, 9.5, 10, 10.5, and 11 feet. Now the standard deviation goes down, because two of the greens move closer to the mean, which remains at 10. The standard deviation of those five greens is 0.79 feet, and the CV is lower than 0.1; it is now 0.079. The CV tells us something here. It shows which set of data has less variation from the average value.
If we take the outlier identification approach instead, it doesn’t identify any outliers in either set.
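The green-speed illustration above can be reproduced in a few lines, including the 1.5 × IQR outlier check (a sketch; the quartiles here come from Python's statistics.quantiles, so other quartile conventions may give slightly different bounds):

```python
import statistics

set_a = [9, 9, 10, 11, 11]        # stimpmeter readings, in feet
set_b = [9, 9.5, 10, 10.5, 11]

results = []
for speeds in (set_a, set_b):
    mean = statistics.mean(speeds)        # 10 feet in both cases
    sd = statistics.stdev(speeds)         # sample standard deviation
    cv = sd / mean
    # Tukey's rule: flag values more than 1.5 * IQR beyond the quartiles.
    q1, _, q3 = statistics.quantiles(speeds, n=4)
    iqr = q3 - q1
    outliers = [s for s in speeds
                if s < q1 - 1.5 * iqr or s > q3 + 1.5 * iqr]
    results.append((round(sd, 2), round(cv, 3), outliers))

print(results)  # neither data set contains a Tukey outlier
```

The CV drops from 0.1 to 0.079 between the two sets, while the outlier rule flags nothing in either, which is the point made in the text.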
B.4 Adjusting inputs based on growth
I don’t have any numbers to share here, but if I were managing turf, I’d be adjusting inputs based on how much the grass was growing. I’d look at the grass, and I’d look at the color of the grass,
and I’d check the weather forecast, and I’d consider the current grass conditions and how those compare to what I want the future grass conditions to be. To those data or observations, I’d add the
clipping volume data, and take these all together to make better choices about what to do and what to change. How to adjust the dials of the inputs to get the desired turfgrass conditions, as Chris
Tritabaugh has explained to me.
B.5 Estimating the real growth
Volume is kind of useless. What we really want is dry matter harvest. That is how much the grass grew, not counting water. How much actual plant material was produced, per surface area of turf? While
volume is useless, it is easy, and from volume we can estimate the dry matter yield.
B.6 Fresh weight of clippings
I think volume is the easiest thing to measure. To measure fresh weight one must bring along a scale, or bring the clippings back to a scale. And then there is the issue of the water in the
clippings, and the sand in the clippings. Sand is going to affect the volume too, but I think sand will have proportionately more effect on the mass of the clippings.
Andrew McDaniel^35 has shared 2016 data with me of volume and fresh weight measured from the same greens (Figure 3).
This is for korai greens, and the ratio may be different for other species. But for korai, as Figure 3 shows, when the ratio of mass to volume is more than 0.5, that is when there is a lot of sand on
the greens. These data are being collected again in 2017 at Keya. In 2016, 16% of the mowing events had so much sand in the clippings that the data had to be discarded, if we use that 0.5 ratio
cutoff rule.
But after getting rid of the samples that are contaminated with sand, there is a consistent relationship between fresh weight and volume (Figure 4). To reiterate, I prefer to measure clipping volume
because volume is easier to measure, and because volume has less of a problem with sand mass error. And after one has measured volume, one can predict the mass anyway.
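As a sketch of that 0.5 mass-to-volume cutoff, here is how the screening could be scripted (the sample records and their units, kg and L, are my assumptions; only the 0.5 threshold and the korai context come from the text):

```python
# Each record: (clipping volume in L, fresh weight in kg) for one mowing.
# Illustrative values only, not the actual Keya GC measurements.
records = [(20, 8.0), (25, 10.5), (18, 11.0), (22, 9.2)]

clean, discarded = [], []
for volume, mass in records:
    ratio = mass / volume
    # Ratios above 0.5 suggest heavy sand contamination on korai greens.
    (discarded if ratio > 0.5 else clean).append((volume, mass))

print(f"kept {len(clean)} mowings, discarded {len(discarded)}")
```

With these invented numbers, the third mowing (ratio 0.61) would be discarded, mirroring the 16% of 2016 mowing events the text says were thrown out.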
B.7 Estimating nutrient use and harvest
This is something I find really interesting. If we can estimate the dry matter harvest from the clipping volume, then we can have an idea of how many nutrients the grass is using. Mr. Kihara and
Mr. Seiyama from Nichino Ryokka carefully collected clippings and measured volume, fresh weight, and dry weight at the Nichino Ryokka research center in Chiba. Figure 5 shows the relationship between
clipping volume and clipping dry weight (there is a drying oven at the research center) for three different grasses.
If we focus on creeping bentgrass and Zoysia matrella (korai), I’ll quickly explain how this works. First, let’s do bentgrass. The slope of the line (in Figure 5) for the bentgrass volume to dry
weight relationship is 0.63 g m^-2 for every 1 L 100 m^-2. The bentgrass greens at the location 36 degrees North latitude have an average annual yield in volume of 326 L 100 m^-2. That works out to
an expected annual clipping harvest of 205 g m^-2. If the bentgrass leaves have 4% N, that is 8.2 g N m^-2. The N applied to those greens in 2014 was 12.9 g, then 10.1 in 2015, and I don’t have the
final numbers for 2016 but it was less than 10. So there seems to be a pretty close correspondence between the N applied and the growth.
For korai, I’ll use the data from the zoysia at 33 degrees North. The average clipping volume from 2013 to 2016 at this location was 340 L 100 m^-2. From the Nichino Ryokka data, we can expect each
liter of korai clippings in 100 m^2 to work out to 1.1 g of dried clippings in 1 m^2. That gives an expected harvest of 374 g m^-2, and if the korai clippings have 3% N, that would be a N harvest in
the clippings of 11.2 g m^-2. At this location, the N applied from 2013 to 2016 was 14.6, 9.5, 10.6, and 11.1 g m^-2. We can work through this for every element. I just show for N to point out that
in the case of professionally-managed turf (both the bentgrass and korai courses used as examples here host professional tournaments, too) supplied with a relatively low amount of N fertilizer, the
quantity of clippings produced is related to the quantity of N supplied.
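The two worked examples above reduce to a single multiplication chain. Here is the calculation written out (the slopes, annual volumes, and leaf N percentages are the figures quoted in the text; the function name is mine):

```python
def n_harvest(annual_volume, dry_g_per_l, leaf_n_fraction):
    """Estimated annual N removed in clippings (g N per m^2).

    annual_volume is in L per 100 m^2 per year; dry_g_per_l is the
    slope converting 1 L/100 m^2 of clippings to g/m^2 of dry matter.
    """
    dry_matter = annual_volume * dry_g_per_l   # g m^-2 per year
    return dry_matter * leaf_n_fraction

bent = n_harvest(326, 0.63, 0.04)    # creeping bentgrass, 36 N latitude
korai = n_harvest(340, 1.10, 0.03)   # korai (Zoysia matrella), 33 N

print(f"bentgrass: {bent:.1f} g N/m^2, korai: {korai:.1f} g N/m^2")
```

This reproduces the 8.2 and 11.2 g N m^-2 figures in the text, and the same chain works for any other element once its leaf concentration is known.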
The implications of this are that one can be more precise in nutrient applications. And I am interested in this as a way to check nutrient depletion rates from the soil, and do some calculations
related to the MLSN guidelines. I think for most turfgrass managers, it is enough to understand that the clipping harvest represents a quantity of nutrients, and also the fertilizer applied
represents a quantity of nutrients. This particular aspect is more of a research thing, rather than a turf management thing, but I am especially interested in this.
You can imagine what I think of supplying two or three or more times the quantity of an element, like potassium or phosphorus or calcium, compared to the amount that is used by the grass. These
calculations are a way to document this.
B.8 Topdressing and organic matter management
But this growth, as it relates to the nitrogen supply, is quite pertinent to this section. I don’t have too much to say here, other than it makes sense to me that if the grass grows less, by less N
supply, then there will be less of a need for topdressing and core aeration.
If you haven’t already, see http://www.blog.asianturfgrass.com/2016/05/data-to-support-an-anecdote.html for more about this. It is about a course doing 45,000 rounds per year that has dormant grass for 6 months. The N supply is low, the grass conditions have gotten better year after year, and the soil organic matter keeps going down. But the core aeration and topdressing are also less than what you might expect.
I believe this approach can be implemented in more places. And I think measuring clipping volume can be a way to compare locations. So if X location is having great success with this much topdressing
and this much N and this much clipping volume, I think the clipping volume comparison from place to place can be used to fine tune some of the organic matter management. I’ve written articles about
this in my series for Golf Course Seminar magazine, in particular I think the March and April 2017 issues. When I get the 2nd edition of Short Grammar of Greenkeeping to the printer, those articles
(in English) will be included as appendices.
B.9 Checking temperature and growth potential
This was my original interest in this, and I am still working to understand this. There is obviously a relationship – look for example at seashore paspalum at 21 degrees North – it grows less in
winter. Look at fine fescue mix at 64 degrees North – it doesn’t grow in winter. At the broad scale, temperature plays a huge role. Actually, looking at all the charts again, it is easy to see how
much of a seasonal – a temperature – effect is there.
As I’ve become interested in predicting dry weight of clippings and nutrient use of grass, and also trying to understand what normal variability of growth is, I put the temperature analysis on the
back burner. I’m still really interested in this, but there is only so much time to work on it!
34. I want to say thank you again to everyone who has made the effort to collect the data and then the extra effort to share it with me.↩
35. And the maintenance staff at Keya GC who have been so kind to bring a scale along with them on their 1, 6, 9 route.↩
Another Quiz for Thinkers - From the NuPathz Collection
1. There is one word in the English language that is always pronounced incorrectly. What is it?
2. A man gave one son 10 cents and another son was given 15 cents. What time is it?
3. A boat has a ladder that has six rungs; each rung is one foot apart. The bottom rung is one foot from the water. The tide rises at 12 inches every 15 minutes. High tide peaks in one hour. When the
tide is at its highest, how many rungs are under water?
4. Is half of two plus two equal to two or three?
5. There is a room. The shutters are blowing in. There is broken glass on the floor. There is water on the floor. You find Sloppy dead on the floor. How did Sloppy die?
6. How much dirt would be in a hole 6 feet deep and 6 feet wide, that has been dug with a square edged shovel?
7. I am in Hawaii and drop a bowling ball into a bucket of water, which is 45 degrees F. At the same time, I drop another ball, of the same weight, mass and size, into a bucket at 30 degrees F. Which
ball would hit the bottom of the bucket first? ... Same question, but the location is in Canada?
8. What is the significance of the following? The year is 1978, thirty-four minutes past noon on May 6th.
9. What can go up a chimney down, but can't go down a chimney up?
10. A farmer has 5 haystacks in one field and 4 haystacks in another field. How many haystacks would he have if he combined them all in the center field?
11. What is it that goes up and goes down but does not physically move?
12. Paul is 20 years old in 1980, but only 15 years old in 1985. How is this possible?
13. What can have four legs, but only one foot?
14. Kindly old Grandfather Lunn
Is twice as old as his son
Twenty-five years ago
Their age ratio
Strange enough was three to one.
How old is Grandfather Lunn?
15. Said a certain young lady named Gwen
Of her tally of smitten young men
"One less plus three more
Divided by four
Together give one more than ten."
How many boyfriends does Gwen have?
16. A team's opening batter named Nero
Squared his number of hits, the big hero!
After subtracting his score
He took off ten and two more
And the final result was a "zero."
How many hits did Nero make?
17. Some freshman from Trinity Hall
Played hockey with a wonderful ball.
Two times its weight
Plus weight squared, minus eight,
Gave "nothing" in ounces at all.
What was the weight of the ball?
18. The Bar Z ranch was a dude ranch. One day a new "dude" asked one of the stable hands how many men were tending the horses in the corral. Having a mischievous sense of humor, he replied, "I saw
eighty-two feet and twenty-six heads." He then walked away, leaving the dude scratching his head trying to figure it out. How many men were tending the horses?
19. One morning as Paul was getting his newspaper, he noticed something that needed to be fixed, on his new house. At the hardware, he described his problem to the manager. The manager said, "I know
just what you need." He led Paul down some aisles and stopped in front of some bins. Digging down into some of the bins, he set something up on the shelf.
"I saw your house when it was built," the manager said. "Here's all that you'll need and how much it'll cost ... five will be 15 cents, while fifty will be 30 cents. 250 will be 45 cents, while 2507
will only cost you 60 cents. One lady, about 20 blocks from your house, bought 30247 and only paid 75 cents! These are black, but they also come in gold and silver."
What was the manager selling?
20. If it takes 3 people to dig a hole, how many does it take to dig half a hole?
21. What is the beginning of eternity? The end of time and space? The beginning of every end? And the end of every place?
1. There is one word in the English language that is always pronounced incorrectly. What is it?
“Incorrectly” The word ‘incorrectly’ is, by its own definition, always pronounced “incorrectly.”
2. A man gave one son 10 cents and another son was given 15 cents. What time is it?
“1:45” The man gave away a total of 25 cents. He divided it between two people. Therefore, he gave a quarter to two.
3. A boat has a ladder that has six rungs; each rung is one foot apart. The bottom rung is one foot from the water. The tide rises at 12 inches every 15 minutes. High tide peaks in one hour. When the
tide is at its highest, how many rungs are under water?
“None, the boat rises with the tide. Duh.”
4. Is half of two plus two equal to two or three?
“Three” Well, it seems that it could almost be either, but if you follow the mathematical order of operations, division is performed before addition. So ... half of two is one. Then add two, and the
answer is three.
5. There is a room. The shutters are blowing in. There is broken glass on the floor. There is water on the floor. You find Sloppy dead on the floor. How did Sloppy die?
“Sloppy is a goldfish. The wind blew the shutters in, which knocked his goldfish bowl off the table and it broke, killing him.”
6. How much dirt would be in a hole 6 feet deep and 6 feet wide, that has been dug with a square edged shovel?
No matter how big a hole is, it's still a hole -- the absence of dirt.
7. I am in Hawaii and drop a bowling ball into a bucket of water, which is 45 degrees F. At the same time, I drop another ball, of the same weight, mass and size, into a bucket at 30 degrees F. Which
ball would hit the bottom of the bucket first? Same question, but the location is in Canada?
“Both questions, same answer. The ball in the bucket of 45 degree F. water hits the bottom of the bucket last.” Did you think that the water in the 30 degree F. bucket is frozen? Think again. The
question said nothing about that bucket having anything in it. Therefore, there is no water (or ice) to slow the ball down.
8. What is the significance of the following ... The year is 1978, thirty-four minutes past noon on May 6th.
“The time and month/date/year are 12:34, 5/6/78.”
9. What can go up a chimney down, but can't go down a chimney up?
“An umbrella”
10. A farmer has 5 haystacks in one field and 4 haystacks in another field. How many haystacks would he have if he combined them all in the center field?
“One” If he combines all of his haystacks, they all become one big stack.
11. What is it that goes up and goes down but does not physically move?
The temperature
12. Paul is 20 years old in 1980, but only 15 years old in 1985. How is this possible?
“The years are in B.C., not A.D. as you probably assumed.” Based on the system we use to number the years, the years counted down in B.C. (but they weren't counting backwards back then).
13. What can have four legs but only one foot?
“A bed”
14. Kindly old Grandfather Lunn
Is twice as old as his son
Twenty-five years ago
Their age ratio
Strange enough was three to one.
How old is Grandfather Lunn?
“He is 100 and his son is 50.”
15. Said a certain young lady named Gwen
Of her tally of smitten young men
"One less plus three more
Divided by four
Together give one more than ten."
How many boyfriends had she?
“Gwen had 42 boyfriends.”
42 - 1 = 41. 41 + 3 = 44. 44 / 4 = 11, which is one more than ten.
16. A team's opening batter named Nero
Squared his number of hits, the big hero!
After subtracting his score
He took off ten and two more
And the final result was a "zero."
How many hits did Nero make?
“4” If you square it, you get 16. Subtract his number of hits and you get 12. Subtract 10 and then 2 more and you get 0.
17. Some freshman from Trinity Hall
Played hockey with a wonderful ball.
Two times its weight
Plus weight squared, minus eight,
Gave "nothing" in ounces at all.
What was the weight of the ball?
“2 ounces” (Beach ball or ping-pong ball?) 2x2=4. 4+2^2=8. 8-8=0.
18. The Bar Z ranch was a dude ranch. One day a new "dude" asked one of the stable hands how many men were tending the horses in the corral. Having a mischievous sense of humor, he replied, "I saw
eighty-two feet and twenty-six heads." He then walked away, leaving the dude scratching his head trying to figure it out. How many men were tending the horses?
“11 men (and 15 horses)” 11 (men) x 2 (feet per man)=22, 15 (horses) x 4 (feet per horse)=60, and 22 (men's feet) + 60 (horse's feet) = 82 feet. Also, 11 (men) + 15 (horses) = 26 (total heads).
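The algebra behind the answers to puzzles 14 through 18 can be verified with a short script (the variable names are mine):

```python
# 14: Grandfather Lunn is twice his son's age now; 25 years ago the
# ratio of their ages was three to one.
grandfather, son = 100, 50
assert grandfather == 2 * son
assert grandfather - 25 == 3 * (son - 25)

# 15: "One less plus three more, divided by four" gives one more than ten.
boyfriends = 42
assert (boyfriends - 1 + 3) / 4 == 10 + 1

# 16: hits squared, minus the hits, minus ten and two more, equals zero.
hits = 4
assert hits ** 2 - hits - 10 - 2 == 0

# 17: twice the weight plus the weight squared, minus eight, equals zero.
weight = 2
assert 2 * weight + weight ** 2 - 8 == 0

# 18: eleven men (2 feet each) and fifteen horses (4 feet each).
men, horses = 11, 15
assert 2 * men + 4 * horses == 82
assert men + horses == 26

print("all five answers check out")
```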
19. One morning as Paul was getting his newspaper, he noticed something that needed to be fixed, on his new house. At the hardware, he described his problem to the manager. The manager said, "I know
just what you need." He led Paul down some aisles and stopped in front of some bins. Digging down into some of the bins, he set something up on the shelf.
"I saw your house when it was built," the manager said. "Here's all that you'll need and how much it'll cost ... five will be 15 cents, while fifty will be 30 cents. 250 will be 45 cents, while 2507
will only cost you 60 cents. One lady, about 20 blocks from your house, bought 30247 and only paid 75 cents! These are black, but they also come in gold and silver."
What was the manager selling?
“House numbers” Each digit costs 15 cents.
20. If it takes 3 people to dig a hole, how many does it take to dig half a hole?
“It's impossible to dig a half of a hole. Either you have a hole, or you don't.”
21. What is the beginning of eternity? The end of time and space? The beginning of every end? And the end of every place?
“The letter E”
Modeling insulin kinetics: responses to a single oral glucose administration or ambulatory-fed conditions

Catherine Lloyd, Auckland Bioengineering Institute, The University of Auckland

Model status: Please note that this particular variant of the model is the basic core model, which includes three main variables: plasma insulin concentration (x), glucose concentration (y), and the density of the pancreatic beta cells (z), calculated by equations 11-13. Parameter values have been taken from the legend of figure 2. The model runs in COR and OpenCell and the units are consistent throughout; however, the CellML model does not recreate the published results for x (it does for y in figure 2).

ABSTRACT: This paper presents a nonlinear mathematical model of the glucose-insulin feedback system, which has been extended to incorporate the beta-cells' function in maintaining and regulating the plasma insulin level in man. Initially, a gastrointestinal absorption term for glucose is utilized to effect the glucose absorption by the intestine and the subsequent release of glucose into the bloodstream, taking place at a given initial rate and falling off exponentially with time. An analysis of the model is carried out by the singular perturbation technique in order to derive boundary conditions on the system parameters which identify, in particular, the existence of limit cycles in our model system consistent with the oscillatory patterns often observed in clinical data. We then utilize a sinusoidal term to incorporate the temporal absorption of glucose in order to study the responses in patients under ambulatory-fed conditions. A numerical investigation is carried out in this case to construct a bifurcation diagram to identify the ranges of parametric values for which chaotic behavior can be expected, leading to interesting biological interpretations.

Model diagram caption: Schematic diagram of the pancreatic beta-cells. Glucose is taken up and produced in response to insulin secretion and clearance. Beta-cell formation and loss represent the rates at which beta-cells replicate and die.

The original paper reference is cited below: Modeling insulin kinetics: responses to a single oral glucose administration or ambulatory-fed conditions. Yongwimon Lenbury, Sitipong Ruktamatakul, and Somkid Amornsamarnkul, 2001, Mathematical Biosciences, 59, 15-25. PubMed ID: 11226623
Performance Results
Impressive runtime results using parallel [scientific computing] programs are hard to achieve. It gets trickier when trying to report such results in scientific papers. Authors typically resort to attention-deflecting methods when presenting performance results of parallel programs on distributed systems. Though these methods are obvious to a trained eye, they sometimes go unnoticed by someone uninitiated in the area. Of the various such performance-boosting reporting tricks, I found a few amusing and useful:
• Generally, 40% of an application runtime is spent on data movement. To show the results in the best possible light, only the time taken by the "core" part of the program is reported as the
complete runtime result.
• The use of in-line assembly code or direct machine-code emission is sometimes left unreported.
• Due to added runtime of 64-bit floating-point arithmetic, 32-bit performance results are shown.
• When comparing against a conventional supercomputer implementation, the parallel algorithm is heavily modified to suit underlying machine architecture.
• Unoptimized, non-vectorized code is compared with the parallel version.
• The performance results are projected to a full system instead of running on real hardware.
• Problems whose size increases as the number of processors increases are selected. Careful selection is needed, because most problems do not scale well in size as processors scale up.
• Runtime comparison results are presented between parallel code on new hardware and an obsolete system.
• Runtime comparison results are presented between parallel code on dedicated hardware and a sequential code on shared hardware.
• The utility is brought into the equation: instead of saying "performance is X mflops", something like "performance is X mflops per dollar" is said. This suggests that the processors are 100% utilized 100% of the time (even though they may be busy with context switching or syncing).
• The paper/presentation is splattered with cute pictures without mentioning performance (convergence rate is generally reported instead of runtime performance).
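To illustrate the first trick, reporting only the "core" time, here is a toy measurement in Python (the workload is artificial; it only shows how the core fraction of end-to-end runtime can be computed, not the 40% figure quoted above):

```python
import time

N = 2_000_000

t0 = time.perf_counter()
data = list(range(N))      # stand-in for data movement and setup cost
t1 = time.perf_counter()
total = sum(data)          # the "core" computation a paper might report
t2 = time.perf_counter()

core_time = t2 - t1
end_to_end = t2 - t0
fraction = core_time / end_to_end

# Reporting only core_time hides the setup cost entirely.
print(f"core kernel is {fraction:.0%} of the end-to-end runtime")
```

Quoting only core_time as "the runtime" makes the program look faster than any user would ever observe it to be.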
Text to Binary | ReadyTools
Binary Translator
A binary translator enables users to convert binary numbers, which are expressed in the form of 0s and 1s, to other numeral systems. The most commonly used numeral systems that binary numbers are converted to are the decimal (base 10) and hexadecimal (base 16) systems. The decimal system uses 10 digits (0-9) to represent numbers, whereas the hexadecimal system uses a combination of 16 digits (0-9 and A-F). Binary translators are widely used in computer science and engineering to convert binary data to more human-readable formats.
The binary number system is a fundamental concept in computing. It is used to encode the data and instructions stored in computers in the form of 0 and 1 bits, also known as binary digits. However,
in human arithmetic and text representation, other number systems are typically used. For instance, in everyday life, the decimal number system is commonly used.
The binary translator allows users to easily convert binary numbers to other number systems and vice versa. For example:
- Converting the binary number 1010 to the decimal number system results in 10.
- The binary number 110010 converted to hexadecimal will result in 32.
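Both examples above can be checked in a few lines of Python, whose built-in int() accepts a base argument (the helper function is just an illustration of the positional rule):

```python
# Binary -> decimal: int() interprets the digit string in the given base.
assert int("1010", 2) == 10
assert int("110010", 2) == 50

# Binary -> hexadecimal: convert to an integer, then format in base 16.
assert format(int("110010", 2), "X") == "32"

# The positional rule behind the conversion: each bit doubles the
# running value and adds the new digit.
def binary_to_decimal(bits):
    value = 0
    for bit in bits:
        value = value * 2 + int(bit)
    return value

print(binary_to_decimal("1010"), binary_to_decimal("110010"))
```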
Binary transformers play a significant role in computing, programming, network communications, and various other fields where binary data needs to be represented in human-readable form or vice versa.
These tools are highly efficient and can perform conversions quickly, making them a valuable asset in various industries. They enable individuals to convert binary data into a more human-readable
format, making it easier to understand and process. Similarly, they allow users to convert human-readable data into binary format for efficient storage and transmission.
xdrawarc(3x11) [osf1 man page]
XDrawArc(3X11) XDrawArc(3X11)
XDrawArc, XDrawArcs, XArc - draw arcs and arc structure
XDrawArc(display, d, gc, x, y, width, height, angle1, angle2)
Display *display;
Drawable d;
GC gc;
int x, y;
unsigned int width, height;
int angle1, angle2;
XDrawArcs(display, d, gc, arcs, narcs)
Display *display;
Drawable d;
GC gc;
XArc *arcs;
int narcs;
Specifies the start of the arc relative to the three-o'clock position from the center, in units of degrees * 64. Specifies the path and extent of the arc relative to the start of the arc, in units of degrees * 64. Specifies an array of arcs. Specifies the drawable. Specifies the connection to the X server. Specifies the GC. Specifies the number of arcs in the array. Specify the width and height, which are the major and minor axes of the arc. Specify the x and y coordinates, which are relative to the origin of the drawable and specify the upper-left corner of the bounding rectangle.
XDrawArc draws a single circular or elliptical arc, and XDrawArcs draws multiple circular or elliptical arcs. Each arc is specified by a
rectangle and two angles. The center of the circle or ellipse is the center of the rectangle, and the major and minor axes are specified by
the width and height. Positive angles indicate counterclockwise motion, and negative angles indicate clockwise motion. If the magnitude of
angle2 is greater than 360 degrees, XDrawArc or XDrawArcs truncates it to 360 degrees.
For an arc specified as [x, y, width, height, angle1, angle2], the origin of the major and minor axes is at [x + width/2, y + height/2], and the infinitely thin path describing the entire circle or ellipse intersects the horizontal axis at [x, y + height/2] and [x + width, y + height/2] and intersects the vertical axis at [x + width/2, y] and [x + width/2, y + height]. These coordinates can be fractional and so are not truncated to discrete coordinates. The path should be defined by the ideal mathematical path. For a wide line
with line-width lw, the bounding outlines for filling are given by the two infinitely thin paths consisting of all points whose perpendicular distance from the path of the circle/ellipse is equal to lw/2 (which may be a fractional value). The cap-style and join-style are
applied the same as for a line corresponding to the tangent of the circle/ellipse at the endpoint.
For an arc specified as [ x, y, width, height, angle1, angle2 ], the angles must be specified in the effectively skewed coordinate system
of the ellipse (for a circle, the angles and coordinate systems are identical). The relationship between these angles and angles expressed
in the normal coordinate system of the screen (as measured with a protractor) is as follows:
skewed_angle = atan [ tan(normal_angle) * (width/height) ] + adjust
The skewed_angle and normal_angle are expressed in radians (rather than in degrees scaled by 64) in the range [ 0, 2pi ] and where atan
returns a value in the range [ -pi/2, pi/2 ] and adjust is:
    0     for normal_angle in the range [ 0, pi/2 ]
    pi    for normal_angle in the range [ pi/2, 3pi/2 ]
    2pi   for normal_angle in the range [ 3pi/2, 2pi ]
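As a rough sketch (not part of the man page), the relation above can be coded directly; the width, height, and angle values used here are arbitrary illustrations:

```python
import math

def skewed_angle(normal_angle, width, height):
    """Skewed-ellipse angle per the relation above; all angles in radians."""
    if normal_angle <= math.pi / 2:
        adjust = 0.0
    elif normal_angle <= 3 * math.pi / 2:
        adjust = math.pi
    else:
        adjust = 2 * math.pi
    return math.atan(math.tan(normal_angle) * (width / height)) + adjust

# For a circle (width == height) the two angle systems coincide:
print(skewed_angle(1.0, 100.0, 100.0))          # ≈ 1.0

# For a 2:1 ellipse, 45 degrees (pi/4) maps to atan(2):
print(skewed_angle(math.pi / 4, 200.0, 100.0))  # ≈ 1.107
```

Note that XDrawArc itself takes its angle arguments in units of degrees * 64, so a quarter turn is passed as 90 * 64 = 5760.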
For any given arc, XDrawArc and XDrawArcs do not draw a pixel more than once. If two arcs join correctly and if the line-width is greater
than zero and the arcs intersect, XDrawArc and XDrawArcs do not draw a pixel more than once. Otherwise, the intersecting pixels of intersecting arcs are drawn multiple times. Specifying an arc with one endpoint and a clockwise extent draws the same pixels as specifying the
other endpoint and an equivalent counterclockwise extent, except as it affects joins.
If the last point in one arc coincides with the first point in the following arc, the two arcs will join correctly. If the first point in
the first arc coincides with the last point in the last arc, the two arcs will join correctly. By specifying one axis to be zero, a horizontal or vertical line can be drawn. Angles are computed based solely on the coordinate system and ignore the aspect ratio.
Both functions use these GC components: function, plane-mask, line-width, line-style, cap-style, join-style, fill-style, subwindow-mode,
clip-x-origin, clip-y-origin, and clip-mask. They also use these GC mode-dependent components: foreground, background, tile, stipple, tile-stipple-x-origin, tile-stipple-y-origin, dash-offset, and dash-list.
XDrawArc and XDrawArcs can generate BadDrawable, BadGC, and BadMatch errors.
The XArc structure contains:
typedef struct {
short x, y;
unsigned short width, height;
short angle1, angle2;    /* Degrees * 64 */
} XArc;
All x and y members are signed integers. The width and height members are 16-bit unsigned integers. You should be careful not to generate
coordinates and sizes out of the 16-bit ranges, because the protocol only has 16-bit fields for these values.
A value for a Drawable argument does not name a defined Window or Pixmap. A value for a GContext argument does not name a defined GContext. An InputOnly window is used as a Drawable. Some argument or pair of arguments has the correct type and range but fails to match in
some other way required by the request.
XDrawLine(3X11), XDrawPoint(3X11), XDrawRectangle(3X11)
Xlib -- C Language X Interface
Can You Spot the Vampire in this Picture Puzzle? - EduViet Corporation
Can You Spot the Vampire in this Picture Puzzle?
It’s an enjoyable challenge to engage in puzzles that require lateral thinking. For those who enjoy solving tricky puzzles and finding solutions, this is an opportunity worth exploring. Keeping your brain focused on these puzzles can be a way to relieve stress and fatigue. There are all kinds of riddles, brain teasers, and puzzles to delve into.
Can you find the vampire in this picture puzzle?
Math puzzles engage readers by presenting scenarios that require active application of problem-solving skills. The puzzles are carefully designed to encourage individuals to think critically, analyze
the information provided, and creatively apply mathematical principles to arrive at solutions.
The image above presents a puzzle; the key to solving it lies in identifying the hidden patterns that control its elements. However, this effort comes with an urgent sense of purpose, as you must quickly discern the logic behind the patterns. This challenge requires rapid cognitive responses and sharp analytical skills within a limited time frame. Accomplishing this task requires meticulous attention to detail and keen observation of the individual components within the image.
This is a moderately complex challenge that individuals with sharp intellect and a keen eye for detail are well prepared to overcome quickly. A ticking clock marks the beginning of the countdown,
heightening the anticipation. This enhancement has profound implications for your future pursuits, providing you with valuable skills that can positively impact every aspect of your life. Research
has even shown that engaging in such puzzles can help maintain cognitive health. Improving your intelligence through challenges like this not only enhances your ability to solve immediate problems,
but also develops broader mental agility that can benefit you in your academic, professional, and personal pursuits.
While the puzzle may seem difficult, the solver’s goal is to find a solution that exactly meets the specified conditions, thereby effectively cracking the code. The following sections will delve into
the exact nature of this mathematical puzzle and the satisfying solution that awaits.
Can you find the vampire in this picture puzzle? Solution
This particular math puzzle presents a difficult challenge, and we encourage you to delve into it and try to figure out the solution.
The old lady wearing sunglasses was identified as the vampire because she casts no shadow. Under typical lighting conditions, when light hits an object, it creates a shadow. However, as often depicted in folklore and novels, vampires are said to lack reflections and shadows. Thus, in this case, the absence of a shadow gives away the vampire's identity, making her the supernatural entity that stands out in the scene.
Brainteasers are a great way to test your IQ and can also help you improve your memory, concentration, and attention span. NEWSTARS Education is a treasure trove of such puzzles.
What is the result of the expression 760÷40×5+8?
Get into fun territory with math tests using the following equation: 760 ÷ 40 x 5 + 8. Your task is to carefully follow the order of operations and calculate the final result.
To solve this equation, follow the order of operations. First, perform division: 760 ÷ 40 equals 19. Then, multiply: 19 x 5 equals 95. Add 8 and 95 to get the final answer of 103. Therefore, the
equation 760 ÷ 40 x 5 + 8=103.
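The same left-to-right evaluation can be checked in a line of Python, where `/` and `*` share precedence and are applied before `+`:

```python
# Order of operations: division and multiplication left to right, then addition.
result = 760 / 40 * 5 + 8
print(result)  # 103.0
```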
Can you find the answer to this puzzle? 11+11=5, 22+22=11, 33+33=?
Get into the fun realm of math testing with this equation. The pattern starts with: 11+11 equals 5 and 22+22 equals 11. Now the mystery deepens: what happens when 33+33 goes through this interesting transformation?
The sequence uses multiplication and addition to create unexpected but logical progressions. In the first equation, 11+11 equals 5, calculated as (1×1) + (1×1) + 3. Likewise, for 33+33 we have (3×3) + (3×3) + 3, which results in 21.
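The stated rule (square the repeated digit, double it, then add 3) can be written as a tiny function; the name `transform` is just for illustration:

```python
def transform(d):
    # For a number made of the repeated digit d, the puzzle's rule is
    # (d x d) + (d x d) + 3.
    return d * d + d * d + 3

print(transform(1))  # 5   (the stated result for 11 + 11)
print(transform(2))  # 11  (22 + 22)
print(transform(3))  # 21  (33 + 33)
```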
Disclaimer: The above information is for general information purposes only. All information on this website is provided in good faith, but we make no representations or warranties, express or
implied, as to the accuracy, adequacy, validity, reliability, availability or completeness of any information on this website.
Source: https://truongnguyenbinhkhiem.edu.vn
The Ultimate Brain Teaser: Can You Solve This Viral Puzzle?
In recent weeks, a puzzle has taken social media by storm, challenging everyone’s problem-solving abilities. The brain teaser is deceptively simple at first glance, yet only a small percentage of
people manage to get the right answer. If you’re looking for a challenge and want to test your logic, this viral puzzle might be the perfect opportunity. So, what is the answer? Let’s dive in and see
if you can crack it.
What Makes This Puzzle So Popular?
Puzzles like this go viral because they play on our curiosity. It’s human nature to want to solve problems, especially when others are struggling with them. This viral brain teaser taps into that
competitive spirit, and everyone wants to prove they can find the answer. But what exactly makes it so challenging?
Understanding the Puzzle Structure
This puzzle appears simple, yet it hides a trick that trips up most people. The structure often follows a sequence or pattern that requires more than basic math or logic. Understanding the structure
is the first step in finding the answer. Look carefully at how the information is presented—there’s always a hidden element.
How to Approach a Brain Teaser
Approaching a brain teaser requires a mix of logic, creativity, and patience. Here’s a strategy you can use:
• Take Your Time: Rushing leads to mistakes.
• Break It Down: Analyze each part of the puzzle separately.
• Think Outside the Box: Sometimes, the answer isn’t what you expect.
• Test Your Theory: Once you have a possible solution, test it and see if it holds.
The Hidden Element Behind the Puzzle
The key to solving this viral puzzle is understanding that it’s not just about logic—it’s about how you interpret the information. Many times, puzzles like these rely on misdirection. The trick is to
avoid overthinking and focus on the simplest explanation.
Common Mistakes People Make
When trying to solve this type of puzzle, people often make common errors:
• Overthinking: Many individuals try to apply complex logic when the answer is simple.
• Ignoring Patterns: The answer often lies in identifying a specific sequence or pattern.
• Jumping to Conclusions: It’s easy to get caught up in initial assumptions without fully analyzing the puzzle.
Why Solving Puzzles Boosts Brain Power
Solving puzzles isn’t just for fun—it’s great for your brain! Engaging in puzzles helps to enhance cognitive abilities by improving problem-solving skills, memory, and creativity. Every time you
solve a puzzle, you’re giving your brain a workout, which helps in:
• Enhancing Focus: Puzzles force you to pay attention to detail.
• Improving Memory: Remembering patterns and sequences can help strengthen memory.
• Boosting Creativity: Sometimes, you need to think outside the box to solve a puzzle, which can boost creative thinking.
The Answer to the Viral Puzzle
Now, let’s talk about the answer. Spoiler alert—if you want to try solving the puzzle on your own, skip this section!
Many viral puzzles hinge on a hidden trick or unusual interpretation of the clues. In this case, it’s important to understand how the puzzle’s logic flows. Often, the solution lies in a simple
mathematical or conceptual pattern that you might overlook at first glance.
Why the Puzzle Solution Isn’t What You Expect
One of the reasons people find this puzzle so challenging is that it goes against your intuition. It’s designed to mislead you into thinking in a certain way, only to surprise you with a different
type of solution. Once you understand this, the answer becomes clearer, but until then, it remains elusive.
How to Use This Puzzle for Brain Training
Want to get better at solving puzzles like this? Here are a few tips to train your brain:
• Solve a Puzzle a Day: Keep your mind sharp by engaging in a new puzzle every day.
• Play Mind Games: Games like chess, Sudoku, or crosswords can help develop logical thinking.
• Challenge Yourself: Push your limits by trying more difficult puzzles over time.
The Psychology Behind Viral Puzzles
Why do certain puzzles go viral? It’s all about the thrill of the challenge and the sense of accomplishment when you solve it. When people post their answers on social media, they get immediate
feedback, which only fuels the desire to share it with others. The sense of community around solving these puzzles is another key factor.
Can You Come Up With Your Own Puzzle?
If you’ve solved this puzzle, why not try creating your own? Coming up with a brain teaser is a fun way to challenge others and test their problem-solving skills. Here’s a simple formula to create one:
• Identify a Common Misconception: Base your puzzle on a widely misunderstood concept.
• Use Simple Language: Keep the instructions clear and straightforward.
• Add a Twist: Make sure there’s an unexpected element that surprises the solver.
Why You Should Keep Puzzling
Solving puzzles can become addictive, but in a good way! Puzzles keep your mind active, reduce stress, and can even improve your mood. They offer a sense of accomplishment, and solving them regularly
helps you build patience and resilience.
Ready to Solve More Puzzles?
Now that you’ve tackled this viral puzzle, why stop here? There are countless other puzzles and brain teasers out there waiting for you. Test your skills, challenge your friends, and keep pushing
your brain to its limits!
Curl and Divergence
Are Curl and Divergence tested on the mathematics GRE?
Are we expected to know that for R^3:
F is a gradient of g => Curl of F = 0
F is a curl of G => Divergence of F = 0
(and converses inside convex sets)
Green's Theorem, the 3-dimensional Stokes' theorem, the Divergence Theorem?
And of course I would expect no coverage of the generalized Stokes' Theorem, which requires differential forms.
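For what it's worth, the first two identities are easy to sanity-check numerically. This is only a spot check at one point with arbitrary smooth test fields g and G (not a proof), using central finite differences:

```python
import math

H = 1e-4  # finite-difference step

def partial(fn, p, i):
    """Central-difference partial derivative of scalar fn at p along axis i."""
    q1, q2 = list(p), list(p)
    q1[i] += H
    q2[i] -= H
    return (fn(q1) - fn(q2)) / (2 * H)

def curl(F, p):
    """Numerical curl of a vector field F: R^3 -> R^3 at point p."""
    comp = [lambda q, j=j: F(q)[j] for j in range(3)]
    return [partial(comp[2], p, 1) - partial(comp[1], p, 2),
            partial(comp[0], p, 2) - partial(comp[2], p, 0),
            partial(comp[1], p, 0) - partial(comp[0], p, 1)]

def div(F, p):
    """Numerical divergence of F at point p."""
    comp = [lambda q, j=j: F(q)[j] for j in range(3)]
    return sum(partial(comp[i], p, i) for i in range(3))

# Arbitrary smooth scalar field g and vector field G for the check.
def g(p):
    x, y, z = p
    return math.sin(x) * y**2 + x * math.exp(z)

def grad_g(p):
    return [partial(g, p, i) for i in range(3)]

def G(p):
    x, y, z = p
    return [y * z, x**2 * z, math.cos(x * y)]

def curl_G(p):
    return curl(G, p)

pt = [0.3, -0.7, 0.5]
print(curl(grad_g, pt))  # each component ≈ 0 (curl of a gradient)
print(div(curl_G, pt))   # ≈ 0 (divergence of a curl)
```

The cancellation here is exact up to floating-point rounding because the nested central differences of mixed partials evaluate the same symmetric stencil in both orders.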
Re: Curl and Divergence
I don't think anyone can really answer this, because the Math GRE tests whatever it wants to, and is fairly unpredictable. If you look at some of the previously administered tests, there's usually
one or two questions that seem out of NOWHERE. There is a chance they would test something like this, but it's not necessarily worth your time to spend a lot of time studying it because it is
unlikely, and it would only be one question.
If you check out the ETS website you can see what they test on, and you will notice it's 50% calc/analysis, 25% algebra, and 25% miscellaneous. It's not very specific, and leaves a lot of room for
random stuff.
Personally, I think you should know every detail of calculus like the back of your hand for this test, including this stuff. Depending on when you are taking the test, however, it may not be in your
best interest to focus on such details. A solid understand of the basics of linear algebra, for example, is much more important.
Basically: Just know everything about math ever, and you'll do well.
An infelicity with Value at Risk | Portfolio Probe | Generate random portfolios. Fund management software by Burns Statistics
An infelicity with Value at Risk
Posted on 2013/02/04 by Pat
More risk does not necessarily mean bigger Value at Risk.
“The incoherence of risk coherence” suggested that the failure of Value at Risk (VaR) to be coherent is of little practical importance.
Here we look at an attribute that is not a part of the definition of coherence yet is a desirable quality.
Thought experiment
We have a distribution of returns that has a standard deviation of 1, and we vary the degrees of freedom. If the degrees of freedom are infinite, then we have the normal distribution.
As the degrees of freedom decrease, the tail lengthens. I think there is consensus that — conditional on the same standard deviation — a longer tail means more risk. Hence we want VaR to be larger
for smaller degrees of freedom. Figure 1 shows that we almost have that for probability 1%.
Figure 1: Value at Risk with varying degrees of freedom of the t distribution for probability 1%.

The VaR for 3 degrees of freedom is smaller than that for 4 degrees of freedom, but otherwise the plot is as we would hope: more risk means more VaR. Figure 2 adds Expected Shortfall to the picture.

Figure 2: Value at Risk and Expected Shortfall with varying degrees of freedom of the t distribution for probability 1%.

Expected Shortfall is well-behaved in this situation at least down to 3 degrees of freedom.
Now let’s change the probability level from 1% to 5%. Figure 3 shows the picture for VaR.
Figure 3: Value at Risk with varying degrees of freedom of the t distribution for probability 5%.

In Figure 3 we see precisely the wrong thing: the longer the tail, the smaller the VaR. Figure 4 shows Expected Shortfall as well for this case.

Figure 4: Value at Risk and Expected Shortfall with varying degrees of freedom of the t distribution for probability 5%.

The Expected Shortfall breaks down going from 4 to 3 degrees of freedom, but otherwise has the proper orientation.
Figure 5 looks at the issue from a different point of view. Fix two distribution assumptions — in this case, t with 5 degrees of freedom and normal — and see how the difference changes as we vary
the probability level.
Figure 5: t with 5 degrees of freedom VaR minus normal VaR for varying probabilities.

This shows that VaR doesn't have the problem (with this pair of distributions) when the probability is less than 3%.
There is an irony that the normal distribution is a conservative assumption for Value at Risk with 5% probability.
Expected Shortfall is not entirely immune to this problem, but does much better than Value at Risk.
Oh Maybellene, why can’t you be true?
from “Maybellene” by Chuck Berry
Appendix R
The computing and graphics were done in the R language.
create Figure 2
The R function to produce Figure 2 is:
function (filename = "risktdof01.png")
{
    if(length(filename)) {
        png(file=filename, width=512)
        par(mar=c(5,4, 0, 2) + .1)
    }
    dseq <- 3:100
    esseq <- sapply(dseq, es.tdistSingle, level=.01)
    matplot(dseq, cbind(-qt(.01, df=dseq) *
            sqrt((dseq - 2)/dseq), esseq), type="l",
        col=c("steelblue", "forestgreen"), lwd=3,
        log="x", lty=1, ylab="Risk",
        xlab="Student t degrees of freedom")
    text(50, 2.5, "VaR", col="steelblue")
    text(50, 2.85, "ES", col="forestgreen")
    text(50, 3.9,
        "expected return = 0\nstandard deviation = 1\nprobability level = 1%")
    if(length(filename)) {
        dev.off()
    }
}
The computations are done, the basic plot is done, and then pieces are added to the plot with additional commands.
compute Expected Shortfall
The function that computes the Expected Shortfall in the function above is:
es.tdistSingle <-
function (level, df, sd=1)
{
    # placed in the public domain 2013
    # by Burns Statistics

    # single ES value for t-distribution

    # testing status: slightly tested
    integ <- integrate(function(x) x * dt(x, df=df),
        -Inf, qt(level, df=df))$value
    -integ * sqrt((df-2)/df) * sd / level
}
sapply trick
There’s a command in the function that creates Figure 2 that is a little trickier than it might appear at first sight. The command is:
sapply(dseq, es.tdistSingle, level=.01)
Usually what is given to the function being applied in sapply (and lapply, apply and friends) is the first argument to the function. Subsequent arguments to the function may be passed in, like what
is done in this case with level.
However, we’ve just seen that level is the first argument to the function. Since the first argument to the function is given, what is given to the function from dseq in the sapply command is the
second argument. This is what we want — we want the values in dseq to represent degrees of freedom.
The command works because of the argument matching rules in R which are explained in Circle 8.1.20 of The R Inferno.
create Figure 5
The function for Figure 5 is:
function (filename = "VaRdiff.png")
{
    if(length(filename)) {
        png(file=filename, width=512)
        par(mar=c(5,4, 0, 2) + .1)
    }
    lseq <- seq(.005, .4, length=100)
    vdiff <- -qt(lseq, df=5)*sqrt(3/5) + qnorm(lseq)
    plot(lseq * 100, vdiff, type="l", col="steelblue",
        lwd=3, xlab="Probability level (%)",
        ylab="t5 VaR - normal VaR")
    abline(h=0, col="gold", lwd=3, lty=2)
    text(10, .1, "crossover at about 3%")
    arrows(10, .08, 3.5, .01)
    text(30, .4, "standard deviation = 1")
    if(length(filename)) {
        dev.off()
    }
}
Again, the computations are done, the basic plot is done, and then there are additions to the plot.
3 Responses to An infelicity with Value at Risk
1. Rocco Claudio Cannizzaro says:
Hi Patric, very inspiring post!
I did some analysis to show why we are observing such behavior: A VaR paradox
Pat says:
Thanks for sharing that.
This entry was posted in R language, Risk and tagged Conditional Value at Risk, Expected Shortfall, Value at Risk.
The Calculus of Motion
Definition 12.3.2. Velocity, Speed and Acceleration.
Let \(\vec r(t)\) be a position function in \(\mathbb{R}^2\) or \(\mathbb{R}^3\text{.}\)
Velocity is the instantaneous rate of position change, denoted \(\vec v(t)\text{;}\) that is, \(\vec v(t) = \vrp(t)\text{.}\)
Speed is the magnitude of velocity: \(\norm{\vec v(t)}\text{.}\)
Acceleration is the instantaneous rate of velocity change, denoted \(\vec a(t)\text{;}\) that is, \(\vec a(t) = \vec v\,'(t) = \vrp'(t)\text{.}\)
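As a concrete made-up example of the definition (the position function below is not from the text), take the plane curve \(\vec r(t) = \langle t^2, t^3\rangle\): velocity is \(\vrp(t)\), speed is its norm, and acceleration is the second derivative. A quick numeric sketch:

```python
import math

def r(t):
    # Example position function r(t) = (t^2, t^3), chosen for illustration.
    return (t**2, t**3)

def velocity(t):
    # r'(t), computed analytically for this example: (2t, 3t^2)
    return (2 * t, 3 * t**2)

def speed(t):
    # ||r'(t)||, the magnitude of the velocity
    vx, vy = velocity(t)
    return math.hypot(vx, vy)

def acceleration(t):
    # r''(t) = (2, 6t)
    return (2.0, 6 * t)

print(velocity(1.0))      # (2.0, 3.0)
print(speed(1.0))         # ≈ 3.606 (sqrt(13))
print(acceleration(1.0))  # (2.0, 6.0)
```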
Pinna illusion
From Scholarpedia
Baingio Pinna (2009), Scholarpedia, 4(2):6656. doi:10.4249/scholarpedia.6656 revision #128000 [link to/cite this article]
Pinna illusion
Pinna illusion is the first visual illusion showing a rotating motion effect. In Figure 1 the squares, delineated by two white and two black edges each, are grouped by proximity in two concentric
rings. All the squares have the same width, length, and orientation in relation to the center of their circular arrangements. The two rings differ only in the relative position of their narrow black
and white edges forming the vertexes. More precisely, the two rings show reversal of the vertex orientation and, consequently, opposite inclination of the virtual or implicit diagonal orientation
polarity obtained by joining the two vertexes where black and white lines meet (Pinna, 1990; Pinna & Brelstaff, 2000).
When the observer’s head is slowly moved towards the figure with the gaze fixed in the center, the inner ring of the squares appears to rotate counter-clockwise and the outer ring clockwise. The
direction of rotation is reversed when the observer moves away from the figure, the same squares of the inner ring appear to rotate clockwise, while those of the outer ring rotate counter-clockwise.
The apparent motion is perceived instantaneously and in a direction perpendicular to the true motion. The speed of the resultant illusory motion appears to be proportional to that of the motion
imparted by the observer. Figure 2 simulates the action of moving towards and away from the figure by physically expanding and contracting the pattern shown in Figure 1. The two concentric rings of
squares now appear to counter-rotate when the gaze is fixed on the center and the observer is stationary. When the same figure is physically rotated clockwise, the inner ring appears to contract and
the outer one appears to expand (Figure 3). The opposite effects are perceived when the figure is rotated counter-clockwise.
Figure 2: When the two concentric rings of squares physically expand and contract, they appear to counter-rotate when the gaze is fixed on the center.
Figure 3: When the two concentric rings of squares physically rotate clockwise, the inner ring appears to contract and the outer one appears to expand.
Phenomenology of the Pinna illusion
The role of peripheral vision
The apparent rotation is typically perceived in peripheral vision. The looming in and out of Figure 1, with the gaze fixed not on the center but on any square, destroys the counter-rotation effect
and the fixed square appears motionless. This is because motion detectors in the fovea are not sensitive to this type of stimulus pattern. Furthermore, with peripheral viewing the precise spatial
square form of the pattern elements ought to be blurred and the dominant motion cues ought to derive, not from the constituent line sections, but from the entire elements (see Figure 10).
Reversing implicit diagonals
By reversing the implicit diagonal polarity, rendered by the internal organization of luminance, the perceived rotation reverses accordingly (Figure 4). This result demonstrates that it is the
difference in slant of the implicit diagonals of the squares that provides the directional cues for biasing the global motion vector thus affecting the perceived motion direction.
Misaligning implicit diagonals
The apparent rotation does not require that the stimulus components be accurately aligned. When the squares are randomly shuffled and arranged approximately in concentric circles the apparent
rotation persists (Figure 5). Therefore, the organization of the squares does not imply oriented Fourier components that may serve as a secondary feature to the motion system.
This result suggests that apparent motion is individually assigned to each square, depending on the relative location of the black and white edges. Afterwards, the perceptual segregation of the
counter-rotation components into a central circular cluster within an irregular circular surround, not present in the stationary stimulus, requires perceptual grouping of the two populations of
squares according to the Gestalt principle of common fate. (Through the paper, the Gestalt principles are not considered as explanations of the reported illusions but as directions of the neural
integration processes occurring among local motion vectors. The neural questions underlying these principles, are “how are single-neuron signals transformed and elaborated by successive levels of
processing?” and “how are different processing levels and areas represented in large-scale patterns of neural activity?”)
Alternating implicit diagonals
In the previous figures, the principle of common fate is synergistic with other principles like good continuation or relative position (inside vs. outside). In fact, the two implicit orientation
polarities are organized circularly at different radii and then placed in different locations of the stimulus: one inside and the other outside. As a consequence, the squares with the same implicit
orientation polarity are grouped in the smoothest continuation. By alternating the orientation polarity of the squares within each ring and, therefore by breaking the direction and the good
continuation of the individual assignment of motion, the illusory counter-rotation is nulled (Figure 6a). A similar result is obtained by alternating the orientation polarity of two squares, one included in the other and arranged in two rings (Figure 6b).
These nulling effects demonstrate the action of mutual inhibition of opposite orientation polarities and imply long-range interactions of motion vectors and of the grouping of input signals that
belong to the same global direction. In Figure 6c, while the single squares maintain their alternated orientation polarities from the outside to the inside of the figure as in Figure 1, the grouping
of the squares by proximity is not concentrically (as in Figure 1) but radially oriented (Pinna & Gregory, 2009). Due to the radial grouping, the counter rotating effect is reduced, and a waving or
twisting global motion through the radial grouping now appears.
Implicit vs. explicit diagonals
Two kinds of orientation cues, instilling “polarity” to the basic square elements, can be distinguished in the previous figures: (i) The orientation of the squares themselves – explicit orientation;
and (ii) the orientation of the invisible base of the vertex – implicit orientation. In Figure 7, the explicit orientation (tilt) of the squares can be either synergistically summed (Figure 7a) or
pitted against (subtracted from) the implicit orientation (Figure 7b). By moving the eye towards or away from the figure with the gaze fixed in the center, the effect of counter-rotation of Figure 1
can be either increased or decreased as in Figure 7a and Figure 7b, respectively.
Figure 6: (a) When the orientation polarity of the squares within each ring alternates, the illusion is destroyed. (b) (to find this and other versions of this figure, click on the blue
right-pointing triangle) When each square is surrounded by an opposite-polarity square, the illusion is destroyed. (c) When the elements are arranged more radially than concentrically, the counter-rotating effect is reduced, and a waving or twisting global motion through the radial grouping appears.
Apparent motion from asymmetric luminance profile
Another directional cue for biasing motion vectors is the asymmetric luminance profile. In Figure 8a, when the eye is moved towards the figure with the gaze fixed in the center, the inner ring of
squares appears to rotate counter-clockwise and the outer ring rotates in a clockwise direction. However, when the eye is moved away from the figure, the squares of the inner and outer rings appear
to rotate only weakly (or not at all) clockwise and counter-clockwise, respectively, as expected from Figure 1 (Pinna & Spillmann, 2002a). There is a clear asymmetry in the strength of the apparent
rotation in moving toward or away from the figure. This result depends on the asymmetric luminance profile (the black bars, the dark grey within each square, the white bars and the light grey
background) that represents a strong directional bias and a different kind of explicit polarity in the direction of the minimum luminance intensity change (i.e., in the direction of the white bars).
This cue is the basis of the staircase illusion (Fraser & Wilcox, 1979) and of the peripheral drift illusion (Faubert & Herbert, 1999; see also Kitaoka & Ashida, 2003). In the same way as the
explicit orientation of the squares, shown in Figure 7, can be synergistically summed with or subtracted from the implicit orientation, the result of Figure 8a depends on the summation (when the eye is
moved toward the stimulus) and the subtraction (when the eye is moved away) of the asymmetric luminance profile from the implicit orientation. In Figure 8b, the summation/subtraction effect produces
the effect of counter-clockwise rotation and expansion alternated in a loop with clockwise rotation and contraction.
Alternating implicit diagonals in rows and columns
In Figure 1, the illusion is elicited by an optical flow field corresponding to radial expansion or contraction. However, it can also be produced by a translation of the stimulus, i.e., by a parallel
vector field. In Figure 9a-b, apparent sliding motion arises in a pattern of square elements arranged in rows and columns on a grey background (cf. Pinna 1990). In Figure 9a, by alternating the black
and white edges of the squares from one column to the other and from the bottom-half to the top-half of the pattern and by keeping the gaze fixed on the horizontally moving dot, sliding motion is
perceived in the perpendicular direction: the columns of squares appear to move up and down alternately (vertical inter-column shearing motion). (In order to perceive consistent sliding motion
effects it is necessary to track the black target carefully. Once the gaze strays off the target, the sliding effect is lost.) Apparent dynamic convergence and divergence of the columns is also
perceived. The loss of parallelism occurs in a direction opposite to the Zöllner illusion (1860). In Figure 9b, as one follows the vertically moving dot with one’s gaze, the columns of squares appear
to alternately move left and right (inter-column expansion and contraction) and to converge and diverge. In Figure 9c, by keeping the gaze fixed on the diagonally moving dot, the apparent motion is
strongly reduced or nulled. The role of the Gestalt factor of common fate is demonstrated in Figure 9d where, by examining the figure without fixating a particular point, a central cluster
immediately segregates from the irregular surround and appears as floating before the background. When tracking the moving target, a cluster of squares sliding together in the same plane as the other
squares is perceived. These results demonstrate that the difference in slant of the implicit diagonals provides the directional cues for biasing the global motion vector (Gurnsey et al., 2002;
Morgan, 2002; Pinna & Brelstaff, 2000; Pinna & Spillmann, 2005).
Neural mechanisms underlying the Pinna illusion
The key points for explaining this illusion are the following: (i) the micropatterns have oriented low-frequency components, (ii) these engage low-level direction-selective mechanisms, which
(iii) are subject to the aperture problem. The implicit orientation polarity in the micropatterns (i.e., the low frequency luminance gradients) and not the black and white edges (i.e., the
high-frequency components), is the basic attribute underlying this illusion. The notion of implicit orientation suggests that the illusion can be explained in terms of orthogonal biases (Grossberg,
Mingolla & Viswanathan, 2001; Gurnsey & Pagé, 2006; Gurnsey et al., 2002; Mather, 2000; Pinna & Brelstaff, 2000; Pinna & Spillmann, 2005), on the basis of which the visual system produces an
interpretation of image flow biased towards the strongest velocities perpendicular to the two-dimensional contours in the image. In Figure 10-left, the two bottom micropatterns show a blurred version
of the two above. Under these conditions the high frequencies have been removed from the micropatterns. By translating the micropatterns to the right, they will most strongly stimulate neurons
selective for the directions indicated by the white arrows. This bias can be considered to occur when the process of optical flow estimation is contaminated by spatiotemporal noise (Fermüller & Malm,
2004; Fermüller, Pless & Aloimonos, 2000; Weiss & Fleet, 2002; Weiss, Simoncelli & Adelson, 2002). More precisely, the interpretation of the motion effect depends on a step where image features such
as lines, intersections of lines, black and white edges like those of Figure 1 and local image movement are derived.
These features contain many sources of noise or uncertainty that can cause bias. As a result, the locations of features are perceived erroneously and the appearance of the patterns is altered. Thus,
the estimated flow vectors of Figure 1 are biased in the clockwise and in the counterclockwise directions as can be perceived in the outer and inner ring. The role of low-frequency luminance
gradients is demonstrated by replacing the micropatterns with Gabor patches (Bayerl & Neumann, 2002; Gurnsey & Pagé, 2006; Gurnsey et al., 2002; Morgan, 2002). In this case, the strength of the
illusion persists or is even enhanced (see Figure 10-right). Gurnsey et al. (2002) demonstrated that the strength of the illusion depends on the number of Gabor patches in the display, their
wavelengths, and the orientation difference between adjacent micropatterns in the inner and outer rings. The illusion can be explained by the response of direction-selective neurons at the earliest
cortical stage of visual processing, i.e., area V1. Each of these neurons can signal the speed with which a line of its preferred orientation moves through its receptive field. This constraint may be
considered as akin to the aperture effect (cf. Nakayama & Silverman, 1988) by which a moving straight line seen through an aperture can be perceived to move only along the direction of its normal.
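The aperture constraint just described can be sketched numerically. The function below is only a minimal illustration, not the authors' model: a local detector viewing an oriented edge can signal only the component of the true image velocity along the edge normal, so a single translation yields oppositely tilted local vectors for the two implicit diagonals (45 vs. 135 deg).

```python
import numpy as np

def normal_component(velocity, edge_angle_deg):
    """Velocity actually signaled through an aperture: the projection of
    the true 2-D image velocity onto the unit normal of the viewed edge."""
    theta = np.deg2rad(edge_angle_deg)
    normal = np.array([-np.sin(theta), np.cos(theta)])  # unit normal to the edge
    v = np.asarray(velocity, dtype=float)
    return float(v @ normal) * normal

# One rightward translation (1, 0), seen through apertures on the two
# implicit diagonals, produces oppositely tilted local motion vectors --
# the directional bias behind the illusory counter-rotation.
v45 = normal_component((1.0, 0.0), 45.0)    # -> [0.5, -0.5]
v135 = normal_component((1.0, 0.0), 135.0)  # -> [0.5,  0.5]
```

Around a ring of same-polarity squares, these biased vectors share a consistent tangential component, which higher-level integration can plausibly read as global rotation.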
While this seems to explain how individual square elements receive a local illusory motion signal, the illusory rotational motion can be thought to be sensed by higher cortical areas such as MT (medium-scale motion analysis, inhibition of opponent directions) and dorsal MST (MSTd – large-scale motion analysis, directional decomposition), which collate all the signals provided by the local motion micropatterns. An fMRI study of the illusion showed that, in addition to the V1/V2 areas, the motion-specific complex hMT+ is activated in the perception of the illusion (Budnik et
al., 2006).
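As a side note on the Gabor-patch variants mentioned above, a micropattern of this kind can be generated with a few lines of code. This is a hypothetical sketch: the size, wavelength, and envelope width are illustrative defaults, not the parameters used in the cited studies.

```python
import numpy as np

def gabor_patch(size=64, wavelength=16.0, orientation_deg=45.0, sigma=10.0):
    """A sinusoidal carrier at the given orientation and wavelength,
    windowed by a circular Gaussian envelope; values lie in [-1, 1]."""
    half = size // 2
    y, x = np.mgrid[-half:half, -half:half].astype(float)
    theta = np.deg2rad(orientation_deg)
    xp = x * np.cos(theta) + y * np.sin(theta)  # axis of luminance modulation
    carrier = np.cos(2.0 * np.pi * xp / wavelength)
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return carrier * envelope

# Two concentric rings of such patches, with a fixed orientation offset
# between inner and outer ring, reproduce the geometry of Figure 10-right.
patch = gabor_patch(orientation_deg=135.0)
```

Increasing the wavelength leaves only the oriented low-frequency luminance gradient that, on the account above, is what drives the illusion.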
The illusory tilted squares
In Figure 11a, when viewed peripherally, alternating implicit diagonals, organized in columns, produce tilt distortions in the same direction as the implicit diagonal of each square; the squares
appear distorted like rhombic shapes. The illusory tilt can also be perceived by slowly moving the gaze along the columns. In Figure 11b, the rhombic distortion can be seen in the two global
concentric square shapes made up of small squares with black and white edges.
Illusory intertwining and spiral effect
Consequences of the illusory convergence and divergence (loss of parallelism) of Figure 9a-b are manifest in the two effects illustrated in Figure 12a-b, where the concentric circles, made up of
squares, appear (a) intertwined when the implicit diagonals are alternated among the circles or (b) like a spiral when all the implicit polarities have the same orientation. The two effects rotate in
opposite (clockwise vs. counter-clockwise) directions when the orientation polarities are explicit as illustrated in Figure 12c-d (Pinna & Gregory, 2002).
From the Pinna illusion to new effects of apparent motion: On the complex role of directional biases
Sliding motion from local and global explicit orientations: the Ouchi illusion and variations
The previous figures demonstrate the role of implicit orientations and, thus, of directional biases in eliciting apparent motion and other illusory effects. Explicit orientations produce motion as demonstrated in the well-known Ouchi illusion (Ouchi, 1977; first shown by Spillmann et al., 1993), where a disk-shaped inner region and an annular surround, made up of black and white rectangles oriented
at right angles, produce an apparent sliding motion of the disk when the stimulus is moved or shaken. Pinna (1990) described independently a variation of the Ouchi illusion (Figure 13a) in which a
disk and an annulus are composed of horizontally and vertically parallel zigzag lines instead of rectangular checks. It is worthwhile noticing that the zigzagging lines in the disk and the annulus
have the same line segments within each wiggle with local orientations of 45 and 135 deg, respectively. The only difference is in the global orientation. Figure 13b shows that apparent sliding can be
elicited when the rectangles in the two inset and surrounding regions of the Ouchi pattern are replaced by rectangles with the same orientation but phase-shifted (Pinna & Spillmann, 2002a, 2002b). By
keeping the gaze fixed on the upwards moving dot, the inner disk appears to move up although the offset terminators move down and vice versa when the dot moves down. Some deformation of the
circumference of the disk is also perceived during the apparent translation. This effect is different from the previous ones and may require an explanation involving motion biases from offset
discontinuities and whole shapes that represent new classes of directional bias.
The accordion illusion
Explicit orientations can produce motion even if they are not at right angles as in the Ouchi illusion and, unlike all the previous conditions, even if there is only one specific explicit orientation, not phase-shifted. In Figure 14a, when the eye is moved towards the figure and away from it with the gaze fixed on the central dot, the checkerboard, made up of alternating black and
white rectangles, appears to distort and fold like an accordion: the rectangles within two regions above and below the dot appear to alternately shrink and expand (Pinna, 2008; Pinna & Fantoni, 2004; Pinna & Spillmann, 2002a). Although the distortion changes the appearance of the two regions, they emerge as regions with a clear shape subjected to apparent dynamic distortions. The illusory
distortion creates horizontally elongated ellipses or rhombic shapes as approximately represented by the blue ellipses of Figure 14b. The strength of the effect increases by increasing the size of
the stimulus.
It is worthwhile noticing that the checkerboards of the regions that appear distorted shrink while the observer moves closer and expand when the observer moves away. In other words, when the stimulus
expands on the retina the micropatterns shrink, and vice versa. This result is the opposite of what is expected on the basis of size-distance constancy. When the checkerboards are replaced by
parallel strips, expansion and contraction effects are perceived but following the size-distance constancy and similarly to the bulging grid and pincushion illusions (not illustrated). The accordion
effect also occurs by simulating the action of moving towards and away from the figure as shown in Figure 14c. In Figure 14d, by keeping the gaze fixed on the diagonally moving dot, the rectangles of
the checkerboard and the vertical stripes made up of aligned rectangles appear to move horizontally left or right following the main horizontal direction of the dot. This result is unexpected on the
basis of an aperture-type effect.
The accordion illusion disappears when the rectangles of the checkerboard are replaced with squares, which nulls the directional bias. Under these conditions the well known bulging grid and
pincushion illusions are perceived (Foster & Altschuler, 2001; Helmholtz, 1867/1962): a spherical bulge protrudes from the grid. Unlike the bulging grid and pincushion effects, the accordion illusion
(i) shows a different kind of dynamic distortion, (ii) it is not an illusion of depth but implies apparent motion and dynamic distortion, (iii) it is clearly perceived under an equiluminous colored
grid, and (iv) its strength does not change with monocular or binocular viewing. The accordion illusion depends on directional bias and may represent a good test to understand the retinal/cortical
magnification as a function of visual field location.
Sliding motion from continuous vs. segmented lines
In the following figures, apparent sliding motion can be obtained without perpendicular directional cues and through a new class of directional bias. In Figure 15a-c, a clear sliding motion in depth
is perceived when the eye follows the moving dot (Pinna, 2008). The sliding motion is mostly perceived in the inset regions of Figure 15a-b even if both regions can appear to move. In Figure 15c, the
sliding motion separates the two kinds of lines, continuous and segmented in black and white components, that appear to belong to different depth planes while moving in opposite directions. In Figure
15a the inner region made up of continuous lines appears as a window or as a hole through which the lines, placed behind in depth, slide to the right when the dot moves down. In Figure 15b the inner region, composed of lines made up of alternating black and white segments, appears in front of the background of continuous lines and slides in a direction opposite to that of the continuous lines of Figure 15a (i.e., to the left when the dot moves down). This opposite result is related to the inverse figure-ground organization of the inset vs. outer regions of Figure 15a-b. Finally, although the inset region appears to slide against the outer one in the opposite direction, the continuation of the continuous lines in the segmented ones is not broken but remains intact.
While in Figure 15a-b the local and global orientation of the two kinds of lines is the same, the directional bias along the lines of their element components is the opposite. In fact, when the eye
follows the tip of a pen moving up and down along a segmented line, the black and white elements clearly flow in the opposite direction (i.e. down and up). This effect can be perceived in Figure 15d,
where, by following the moving dot, the segmented circumferences and the single discontinuities appear respectively to rotate and flow in the opposite direction. When the eye moves along a continuous
line, because the information about motion is ambiguous, the line can appear to flow in the same or in the opposite direction of the real motion. However, the juxtaposition of segmented lines
strengthens, by contrast, the motion information of the continuous line in the same direction. As a consequence, in Figure 15a-c there are indeed two opposite directional cues that elicit sliding
motion as previously described. These effects can shed light on other related motion effects like the boogie-woogie illusion (Cavanagh & Anstis, 2002).
Sliding motion from edge contrast
There is a further kind of directional bias that can induce apparent sliding motion: edge contrast. In Figure 16, by following the moving dot, the central region appears to move up when the dot moves
in the left direction and down when the dot moves to the right (Pinna & Fantoni, 2004). The pattern is made up of parallel zigzag oblique lines reversing their edge contrast in the central region;
each zigzag line changes from black/white to white/black and then to black/white again. The strength of the motion effect increases by increasing the size of the stimulus.
Sliding motion in depth without directional bias
There is another class of stimuli in which the elements have no oriented edges at all and yet elicit vivid apparent sliding motion. These stimuli suggest a motion bias based on motion sensors that
respond to other stimulus features such as boundary contour differences, contrast polarity, edge blur, demarcation by a frame, and shape. Such features are also responsible for surface and depth
segmentation (Pinna & Spillmann, 2002b, 2005). In Figure 17a, a central array of filled black squares, each surrounded by a thin narrow grey annulus, is presented within a larger surround of
similarly arranged, but empty circles. A thin black frame separates center and surround. Under these conditions, the black filled squares are perceived as lying behind the plane of the empty circles.
If the stimulus configuration is slowly moved about there is a strong sliding motion of the filled squares relative to the surround of empty circles. In Figure 17b, as the fixation dot moves
diagonally, the inset matrix is perceived to move diagonally alternately in counter-phase to the dot motion.
When the location of filled and empty elements is reversed (Figure 17c), the depth percept reverses accordingly. The central array of empty squares is now clearly seen as hovering in front of the
surrounding filled squares. However, while in the first case the sliding motion is in the opposite direction to the dot motion, in the second it is in the same direction (this can be obtained by
moving the mouse arrow on Figure 17c). When the thin narrow grey annulus of each element is replaced by a bluish one, the resulting sliding motion is the strongest we have seen so far, showing a
complete in-depth dissociation between the inner and outer regions (Figure 17d-e). The sliding motion can also be perceived, even if it is diminished, in the limiting conditions illustrated by Figure
For an explanation of these effects, differences in spatio-temporal frequency between the two stimulus regions may be invoked. Such differences are known to elicit different speed signals (Thompson,
1982) and could originate from the juxtaposition of empty and filled elements (high and low spatial frequency components) in conjunction with a grey or bluish edge. Next to spatial frequency
differences, we suggest different speed signals resulting from figural properties that enhance figure-ground segregation and apparent depth.
Apparent motion from luminance modulation: The role of explicit orientations
Luminance contrast modulation elicits apparent motion under certain conditions. In the two frames of Figure 18a, while the green edges get brighter, the purple ones get darker and vice versa.
Phenomenally, by keeping the gaze on the center of the stimulus, the modulation appears as a pulsation of contrast. Some motion can be also perceived although it is weak. By adding to each square two
parallel segments tangent to the virtual annuli created by the squares (Figure 18b), the two annuli appear now to emit two ticks, like those of a clock, going counter-clockwise and clockwise. Some
apparent expansion and contraction can also be perceived but they are secondary with respect to the rotation. By replacing the tangent segments with perpendicular ones, a clear alternating expansion
and contraction is perceived (Figure 18c), i.e., while one annulus expands the other contracts and vice versa (Pinna & Spillmann, 2002a).
These movies demonstrate the role of explicit orientations in polarizing the field of motion biases induced by the contrast modulation. The synergistic organization of both components is also
demonstrated in Figure 18d-e, where the black edges induce the ticking. In Figure 18f, tiny modulating dots, which are under threshold in peripheral viewing (with the gaze kept on the central dot),
are sufficient to instill motion in the black edges and in their main orientation. The contrast modulation instills motion not only in the direction of explicit orientations but also in the direction
of the asymmetric luminance profile shown in Figure 17. This is demonstrated in Figure 18g-h (Pinna & Spillmann, 2002a). It is worthwhile noticing that the spatial and dynamic organization of lines
and grey annuli of Figure 18h can be perceived as a continuous counter-clockwise ticking even if, on the basis of the contrast modulation of the juxtaposed edges, the ticking is expected to appear
going back and forth. Related to these last effects are the so-called “phenomenal phenomena” described by Gregory and Heard (1972), the reverse phi and four-stroke motion (Anstis & Rogers, 1975,
1986) and the visual illusions based on single-field contrast asynchronies (Shapiro, Charles & Shear-Heyman, 2005).
The windmill illusion
Contrast modulation can instill motion also under the following new conditions (Figure 19a-c). By alternately gradually increasing and suddenly reducing the degree of transparency of the grey annulus
superimposed on a radial arrangement of black and white radial sectors, the fine-grained matter or the paste of the annulus and the annulus itself appear to rapidly rotate (Pinna & Dasara, 2005, 2006).
The apparent rotation of the fluid or of the ethereal paste appears to flow within and along the annulus area ambiguously in both clockwise and counter-clockwise directions. Some apparent expansion
and contraction of the annuli can also be seen. The intentional motion of the gaze in one direction (e.g., clockwise) disambiguates the illusory rotation that follows the gaze direction. The loss of
transparency strongly reduces or nulls the apparent rotation implying that the figural organization plays an important role (Figure 19d). The apparent rotation is clearly perceived when alternating
dark and bright sectors are circularly arranged (Figure 19e). The physical rotation of the figure disambiguates the direction of the apparent rotation that clearly appears opposite to the physical
one (Figure 19f), which is constant and slow, with speed and amplitude proportional to the ramp of the contrast modulation. The physical rotation of the annulus, when the transparency organization is lost, shows a very weak apparent rotation, if any (compare Figure 19g-h). All else being equal, if the disk is smaller than the sectors, the apparent rotation belongs to the disk; if
it has the same size as the sectors, both rotate. If it is larger than the sectors, the rotation belongs to them (Figure 19i). This result demonstrates again the important role played by the figural
organization. It is worthwhile noticing that, under the conditions illustrated in Figure 19j, what appears to rotate is an ethereal light comprising both the dark and the bright sectors, whose
boundaries appear instead to rotate constantly and uniformly.
The windmill illusion shows the role played by the figural organizations (grouping of luminance contrast components, transparency, rotation of the whole shape or of the paste substance, spatial ratio
between surrounding and surrounded components). The basic neural substrate may be found in the motion-sensitive neurons in visual area MT (Thiele et al., 2000). This illusion can shed light on the
grouping rules operating at a higher cortical stage that organize the responses of motion-sensitive neurons.
In this work we presented the Pinna illusion, the complexity of the notion of directional bias, and its limits in explaining motion illusions. Different classes of directional biases eliciting motion illusions were presented. Although the described phenomena are similar in appearance, they constitute different classes of apparent motion likely requiring different explanations. This is
particularly evident if one considers the difference in stimulus cues eliciting these phenomena. Further experiments and computational modeling are needed to account for these differences.
The Pinna illusion and the related effects represent an opportunity within the context of vision science and cognitive neuroscience (Gazzaniga, 2004; Purves & Lotto, 2003). If the task of a sensory
system is to provide a faithful representation of biologically relevant events in the external world, the previous phenomena show that visual perception contrives, through complex neural
computations, to create informative and efficient representations of the external environment. These representations are at the same time simpler and richer than the raw signals transduced by
receptors. They are simpler because they simplify the enormous quantity of raw measurement information submitted to the central nervous system (see Section 2). They are richer because they contain
properties of events and objects abstracted from the primitive sensory signals (see Sections 3 and 4). Therefore, the first opportunity suggested by the previous effects concerns the basic encoding
of the features of the stimuli, i.e. the nature and meanings of the signals carried by single neurons, the maps and areas where they operate (see Section 2) and the pattern of motion of objects,
surfaces, and edges in a visual scene due to the relative motion between an observer and the scene (optical flow, Gibson, 1979). Furthermore, they are good tests to understand the perceptual context
within which a specific element is perceived, namely “what is ‘figure and what is ‘background”, “how separated elements of a visual event are combined and organized in a sensory representation” (see
Section 4).
Within modern visual and cognitive neuroscience, the Pinna illusion reveals two issues that challenge scientists and that deserve to be further investigated. The first issue is related to the basic
role of the observer’s motion of the head towards and away from the stimulus. In the Pinna illusion, the perception of motion depends on this very action, which on the contrary should cancel the effect, as predicted by analyses of the observer’s head motion in natural scenes. In fact, the pattern of motion of objects, surfaces, and edges in a visual scene, due to the optical flow, is not perceived as illusory motion of the objects but only as motion of the observer. Therefore, whereas in natural scenes the final motion outcome of the objects is normally modulated by messages from other sensory systems (e.g., proprioception), in the Pinna illusion it is just the observer’s head movement that instills the illusory motion. The second issue follows from the first and is related to at least two
different perceptual levels emerging from the Pinna illusion: the illusion of motion and the illusoriness (Pinna, 2008) of the apparent motion clearly perceived in a pattern that, at the same time, appears static (likely due to proprioception). If in recent neuroscience all visual percepts are considered equally illusory, the Pinna illusion shows that not all illusions appear illusory. The study of illusoriness is a further challenge for neuroscientists. Both issues raised by the Pinna illusion can shed new light on the not yet fully explored territory between “sensory” and “cognitive” processes.
• Anstis, S. M., & Rogers, B. J. (1975). Illusory reversal of visual depth and movement during changes of contrast. Vision Research, 15(8-9), 957-961.
• Anstis, S. M., & Rogers, B. J. (1986). Illusory continuous motion from oscillating positive-negative patterns: Implications for motion perception. Perception, 15(5), 627-640.
• Bayerl, P., & Neumann, H. (2002). Cortical mechanisms of processing visual flow - Insights from the Pinna-Brelstaff illusion. Proceedings of 4th Workshop Dynamische Perzeption, Bochum.
• Budnik, U., Bayerl, P., Attar, C. H., Pinna, B., Hennig, J., & Speck, O. (2006). The contribution of human brain areas for visual illusory effects: An fMRI study on the illusory motion effect in the Pinna figure. ECVP 2006, Perception.
• Cavanagh, P., & Anstis, S. (2002). The boogie woogie illusion. Perception, 31, 1005-1011.
• Faubert, J., & Herbert, A. M. (1999). The peripheral drift illusion: A motion illusion in the visual periphery. Perception, 28, 617–621.
• Fermüller, C., & Malm, H. (2004). Uncertainty in visual processes predicts geometrical optical illusions. Vision Research, 44, 727-749.
• Fermüller, C., Pless, R., & Aloimonos, Y. (2000). The Ouchi illusion as an artifact of biased flow estimation. Vision Research, 40, 77-96.
• Foster, C., & Altschuler, E. L. (2001). The bulging grid. Perception, 30(3), 393–395.
• Fraser, A., & Wilcox, K. J. (1979). Perception of illusory movement. Nature, 281, 565–566.
• Gazzaniga, M.S. (2004). The Cognitive Neurosciences III. MIT Press, Cambridge, MA.
• Gibson, J.J. (1979). The Ecological Approach to Visual Perception. Boston: Houghton Mifflin.
• Gregory, R. L., & Heard P. (1972). Visual dissociations of movement, position, and stereo depth: Some phenomenal phenomena. Quarterly Journal of Experimental Psychology, 35A, 217-237.
• Grossberg, S., Mingolla, E., & Viswanathan, L. (2001). Neural dynamics of motion integration and segmentation within and across apertures. Vision Research, 41, 2521-2553.
• Gurnsey, R., & Pagé, G. (2006). Effects of local and global factors in the Pinna illusion. Vision Research, 46, 1823-1837.
• Gurnsey, R., Sally, S., Potechin, C., & Mancini, S. (2002). Optimising the Pinna - Brelstaff illusion. Perception, 31, 1275-1280.
• Helmholtz, H. von (1867/1962). Treatise on Physiological Optics, volume 3 (New York: Dover, 1962); English translation by J. P. C. Southall for the Optical Society of America (1925) from the 3rd German edition of Handbuch der physiologischen Optik (Hamburg: Voss, 1910; first published in 1867, Leipzig: Voss).
• Kitaoka, A., & Ashida, H. (2003). Phenomenal Characteristics of the Peripheral Drift Illusion. Vision, 15(4), 261-262.
• Mather, G. (2000). Integration biases in the Ouchi and other visual illusions. Perception, 29, 721-727.
• Morgan, M. (2002). Running rings around the brain. The Guardian Thursday, 24 January 2002.
• Nakayama, K., & Silverman, G. (1988). The aperture problem – II. Spatial integration of velocity information along contours. Vision Research, 28, 747-753.
• Ouchi, H. (1977). Japanese Optical and Geometrical Art. New York: Dover.
• Pinna, B. (1990). Il Dubbio sull’Apparire. Padua, Upsel Editore.
• Pinna, B. (2008). The illusion of art. In B. Pinna (Ed.), Art and Perception: Towards a Visual Science of Art. Leiden, The Netherlands: Brill.
• Pinna, B., & Brelstaff, G. J. (2000). A new visual illusion of relative motion. Vision Research, 40, 2091-2096.
• Pinna, B., & Dasara, M. (2005). The Windmill Illusion. Fifth Annual Meeting of the Vision Sciences Society, Journal of Vision, Sarasota.
• Pinna, B., & Dasara M. (2006). Figural organizations in the windmill illusion. The Fourth Asian Conference on Vision, Matsue, Japan
• Pinna, B., & Fantoni, C. (2004). Local and global motion by edge discontinuities. Fourth Annual Meeting of the Vision Sciences Society, Journal of Vision, Sarasota.
• Pinna, B., & Gregory, R.L. (2002). Shifts of Edges and Deformations of Patterns. Perception, 31, 1503-1508.
• Pinna, B., & Gregory, R.L. (2009). Continuities and discontinuities in motion perception. In Abram M., Minati G., & Pessa E. (Eds.) Proceedings of the fourth Conference on systems science. New
York, Scientific Word, Springer-Verlag, 765-775.
• Pinna, B., & Spillmann, L. (2002a). Apparent motion depending on luminance and hue variations. Second Annual Meeting of the Vision Sciences Society, Journal of Vision, Sarasota.
• Pinna, B., & Spillmann, L. (2002b). A new illusion of floating motion in depth. Perception, 31, 1501-1502.
• Pinna, B., & Spillmann, L. (2005). New illusions of sliding motion in depth. Perception, 34, 1441-1458.
• Purves, D., & Lotto, R.B. (2003). Why we see what we do: An empirical theory of vision. Sunderland, MA: Sinauer Associates.
• Shapiro, A. G., Charles, J. P., & Shear-Heyman, M. (2005). Visual illusions based on single-field contrast asynchronies. Journal of Vision, 5, 764-782.
• Spillmann L., Tulunay-Keesey U., & Olson J. (1993). Apparent floating motion in normal and stabilised vision. Investigative Ophthalmology & Visual Science 34 (4) Abstract 1611, 1031.
• Thiele A., Dobkins K.R., & Albright T.D. (2000). The neural basis of contrast detection at threshold: Neurophysiological evidence from macaque area MT. Neuron, 26, 715-724.
• Thompson P. G., (1982). Perceived rate of movement depends on contrast. Vision Research, 22, 377-380.
• Weiss, Y., & Fleet, D.J. (2002). Velocity likelihoods in biological and machine vision. In Rao, R. P. N., Olshausen, B. A., & Lewicki, M. S. (Eds). Probabilistic Models of the Brain: Perception
and Neural Function, MIT Press, 77-96.
• Weiss, Y., Simoncelli, E.P., & Adelson, E.H. (2002). Motion Illusions as Optimal Percepts. Nature Neuroscience, 5, 598-604.
• Zöllner, F. (1860). Über eine neue Art von Pseudoskopie und ihre Beziehungen zu den von Plateau und Oppel beschriebenen Bewegungsphänomenen. Annalen der Physik und Chemie, 110, 500-523.
Internal references
• Rodolfo Llinas (2008) Neuron. Scholarpedia, 3(8):1490.
• Jose-Manuel Alonso and Yao Chen (2009) Receptive field. Scholarpedia, 4(1):5393.
• John Dowling (2007) Retina. Scholarpedia, 2(12):3487.
Supported by: Fondazione Banco di Sardegna, Alexander von Humboldt Foundation and Fondo d’Ateneo (ex 60%). I thank the three anonymous Reviewers for their suggestions that greatly improved the paper.
I also thank Maria Tanca, Fabrizio and Francesco Deledda for computer assistance and John S. Werner for helpful discussions and comments on the manuscript.
Recommended reading
Gazzaniga, M.S. (Ed.) (2004). The Cognitive Neurosciences III. MIT Press, Cambridge, MA. ISBN13: 978-0-262-07254-0.
Palmer, S.E. (2004). Vision Science: Photons to Phenomenology. MIT Press, Cambridge, MA. ISBN13: 978-0-262-16183-1.
Pinna, B. (Ed.) (2006). Color, Line and Space: The Neuroscience of Spatio-Chromatic Vision. Leiden: Brill Academic Publisher (Netherlands). ISBN13: 978-90-04-15306-6.
Pinna, B. (Ed.) (2008). Art and Perception. Towards a Visual Science of Art, Part 1. Leiden: Brill Academic Publisher (Netherlands). ISBN13: 978-90-04-16629-5.
Pinna, B. (Ed.) (2008). Art and Perception. Towards a Visual Science of Art, Part 2. Leiden: Brill Academic Publisher (Netherlands). ISBN13: 978-90-04-16630-1.
Parametric identification in mechanical systems
UDC 621
V.S. Lovejkin, Doctor of Technical Sciences, Prof.,
Y.V. Chovnjuk, Candidate of Technical Sciences, Assoc. Prof., M.G. Dikterjuk, Candidate of Technical Sciences, Assoc. Prof., Kyiv National University of Construction and Architecture
Summary. A parametric identification approach for mechanical systems is offered in order to improve mechatronic models.
Key words: parametric identification, mechanical systems.
Research objectives
It is necessary to define the equivalent disturbance in order to consider the robust control of motion actuated by an electric motor. The explanation and interpretation of robustness and stiffness in motion control lead to the definition of disturbance. The general definition for a single-input single-output (SISO) linear system is discussed.
There are various proposals to estimate the disturbance d. This paper introduces a disturbance observer. Gopinath’s method is a systematic way to construct such an observer [1]. Once d is estimated,
the input of the mechanical system u(t) will be the sum of two parts:
u = u_ref + u_dis.   (1)
The first term on the right side is a driving input to excite the system. The second term is a compensation to suppress the equivalent disturbance, through which the system acquires robustness. To cancel the equivalent disturbance, the compensation input is made by using the estimated equivalent disturbance d̂. Since d̂ will be delayed by the lag poles in the disturbance observer, the compensation of the equivalent disturbance will also be delayed by the same amount. It is possible to make this delay as small as possible so as not to let the robust stability deteriorate. The compensation input u_dis will change the original system into the nominal system without any disturbance.
It is noted that the design of u_ref comes from the motion reference generator. Generally, the total controller will have a cascade structure: an outer loop to bring about the desired output and an inner loop formed by the disturbance observer.
The previous design method is applied to the motion system described by:
J·(dω/dt) = K_t·I_a^ref − T_l.   (2)
Here J is the inertia, K_t the torque coefficient of the electric motor, and T_l the load torque. This approach succeeds in designing robust motion controllers of mechanical systems (such as building machines) as well.
The disturbance is the load torque. The parameter variations are the change of inertia and the change of the torque constant of the motor. The output is the position detected by a position detector. The equivalent disturbance is:
d = T_l/J + (K_t/J − K_t0/J_0)·I_a^ref.   (3)
Suppose the first derivative of d is zero. An augmented state equation is:
d/dt [θ  ω  d]^T = [[0 1 0], [0 0 1], [0 0 0]]·[θ  ω  d]^T + [0  K_t0/J_0  0]^T·I_a^ref.   (4)
By Gopinath’s method, the following estimation process is obtained:
d̂ = k_1·θ + z_1.
z_1 should satisfy (6), where k_1 and k_2 are free parameters:
J_0·(dω/dt) = K_t0·I_a^ref − T_dis.   (11)
The second term on the right-hand side of (11) is the disturbance torque T_dis:
T_dis = T_l + ΔJ·(dω/dt) − ΔK_t·I_a.   (12)
Comparing (2), (3), and (12), the following equation holds:
d = T_dis/J_0,
and z_1 evolves according to
dz_1/dt = −k_2·z_1 + (k_1 − k_2²)·θ + (K_t0/J_0)·I_a^ref.   (6)
T_dis contains: 1) the mechanical load (= T_l); 2) the varied self-inertia torque (= ΔJ·(dω/dt)); 3) the torque ripple from the motor (= ΔK_t·I_a).
Equations (4) and (6) lead to (7):
d̂ = [k_1/(s² + k_2·s + k_1)]·d.   (7)
The two poles of the observer, viewed as a transfer function in the s-domain between input U(s) and output Y(s), are α and β, which are arbitrarily allocated in the complex plane. They satisfy (8):
α + β = −k_2,   α·β = k_1.   (8)
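As a small illustrative computation (my own, not from the paper), one can choose the pole locations first and read the free parameters off (8):

```python
# Observer gains from desired pole locations alpha, beta, using (8):
#   alpha + beta = -k2,   alpha * beta = k1.
# The numbers below are hypothetical, for illustration only.

def observer_gains(alpha, beta):
    k2 = -(alpha + beta)
    k1 = alpha * beta
    return k1, k2

k1, k2 = observer_gains(-10.0, -20.0)  # two fast, stable real poles
print(k1, k2)  # 200.0 30.0
```

Faster poles (larger k_1 and k_2) give a quicker disturbance estimate, but also a wider bandwidth in which sensor noise passes through the observer.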
It is worthwhile reconsidering (2). The parameters in (2) are the inertia and the torque coefficient. The inertia will change according to the mechanical configuration of the motion system. The torque coefficient will vary according to the rotor position of the electric motor, due to the irregular distribution of magnetic flux on the surface of the rotor:
J = J_0 + ΔJ,   (9)
K_t = K_t0 + ΔK_t.   (10)
By substituting (9) and (10) into (2), (11) holds.
The robust motion controller is designed to cancel the disturbance torque as quickly as possible.
The estimated disturbance torque is obtained from the position θ and the current reference. According to (1), the compensation input is as follows:
u_dis = −(J_0/K_t0)·d̂.
The schematic block diagram of robust motion controller has an integrator with high gain equivalently in the forward path. Therefore, the robust motion controller eliminates steady state error.
Equation (7) shows that the disturbance is estimated through a low-pass filter. Generally, there is such a low-pass filter in the observer structure. The poles of the observer determine the delay of the low-pass filter G_T(s), and G_T(s) has a certain effect on the control performance.
The disturbance observer in a motion system (a robust motion controller) may be transformed into an acceleration controller as well. Such a transformation is possible due to its ability to clarify the feedback effect of the disturbance. If there were no delay in the estimation process, the disturbance would be completely canceled out. In fact, since there is inevitably some time delay in the estimation process, the controlled system is not robust in the high-frequency range determined by 1 − G_T(s) = G_S(s). G_S(s) is called a sensitivity function, which shows the performance limit of robust control in the high-frequency range. In most of the low-frequency range, covered by G_T(s), the motion system is robust. The transformation of a robust motion controller into an acceleration controller admits another interpretation: it is possible to select the nominal inertia and the nominal torque coefficient as unity, in which case a current reference is also an acceleration reference.
The paper reaches the result that a robust motion controller makes a motion system (for example, a building machine) an acceleration control system. The result implies the versatility of the robust motion controller for both position and force control. If the position signal is fed back, the high-gain feedback in the robust controller makes the stiffness very high. On the contrary, pure force-error feedback makes the total stiffness zero, since there is no gain on the position.
In the above discussion, the disturbance estimated by (7) is used for the realization of a robust mechanical system. In actual applications, the estimated disturbance is effective not only for disturbance compensation but also for parameter identification in the mechanical system. As defined in (3), the equivalent disturbance d, which is estimated by the disturbance observer, includes the load torque T_l and the parameter-variation torque (K_t/J − K_t0/J_0)·I_a^ref. Here the load torque T_l consists of friction and external-force effects in the mechanical system as follows:
T_l = T_friction + T_friction(ω) + T_ext,   (16)
where T_friction is the Coulomb friction effect, T_friction(ω) the viscosity friction effect, and T_ext the external-force effect.
This equation means that the output of the disturbance observer is only the friction effect under constant-angular-velocity motion.
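One possible way to exploit this (a sketch of my own, not from the paper): record the observer output at several constant velocities and fit a Coulomb-plus-viscous model T(ω) = T_c·sgn(ω) + B·ω by least squares.

```python
# Identifying Coulomb (Tc) and viscous (B) friction coefficients from
# disturbance-observer readings taken at constant angular velocities.
# Sketch with synthetic data; not from the paper.

def fit_friction(samples):
    """samples: list of (omega, torque) pairs from constant-velocity runs.
    Solves the 2x2 least-squares normal equations for (Tc, B)."""
    sgn = lambda w: (w > 0) - (w < 0)
    a11 = sum(sgn(w) ** 2 for w, _ in samples)   # regressors: [sgn(w), w]
    a12 = sum(sgn(w) * w for w, _ in samples)
    a22 = sum(w * w for w, _ in samples)
    b1 = sum(sgn(w) * t for w, t in samples)
    b2 = sum(w * t for w, t in samples)
    det = a11 * a22 - a12 * a12
    tc = (b1 * a22 - b2 * a12) / det
    bv = (a11 * b2 - a12 * b1) / det
    return tc, bv

# Synthetic readings generated with Tc = 0.5, B = 0.02:
data = [(w, 0.5 * ((w > 0) - (w < 0)) + 0.02 * w)
        for w in (1.0, 2.0, 5.0, -1.0, -3.0)]
tc, bv = fit_friction(data)
print(round(tc, 6), round(bv, 6))  # 0.5 0.02
```

A Stribeck term would add a velocity-dependent exponential to the model, at the cost of a nonlinear fit.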
This feature makes it possible to identify the friction effect in the mechanical system. The friction effects are in many cases well identified by the Stribeck friction model [2]. The external-force effect is also identified by using the estimated disturbance. Here it is assumed that the friction effects are known beforehand from the above identification process. By implementing angularly accelerated motion, the system parameter K_t0/J_0 is adjusted in the observer design so that it is close to the actual value K_t/J. As a result, the disturbance observer estimates only the external-force effect as follows:
d̂ = [k_1/(s² + k_2·s + k_1)]·d = [k_1/(s² + k_2·s + k_1)]·(T_ext/J).   (17)
The identified external force is applicable to sensorless force feedback control in mechanical systems (for example, building machines) [3] and can be utilized for the realization of mechanical vibration control as well.
1. Gopinath B. On the control of linear multiple input-output systems // Bell System Tech. J. - 1971. - Vol. 50. - No. 3. - P. 1063-1081.
2. Southward S., Radcliffe C., MacCluer C. Robust nonlinear stick-slip friction compensation // ASME J. Dynamic Syst., Measurement, Contr. - 1991. - Vol. 113. - P. 639-645.
3. Murakami T., Ohnishi K. Torque sensorless control in multi-degree-of-freedom manipulator // IEEE Trans. Ind. Electron. - 1993. - Vol. 40. - No. 2.
Reviewer: O.P. Alekseev, Professor, Doctor of Technical Sciences, KhNADU.
The article was received by the editorial office on 20 June 2007.
Using Square Roots to Solve Word Problem Involving Areas of Squares
Given that the area of each square on the chessboard is 21 cm², find the length of the chessboard’s sides.
Video Transcript
Given that the area of each square on the chessboard is 21 centimetres squared, find the length of the chessboard’s sides.
So if the area of each square is 21 (centimetres squared, to be exact), then each side length will be the square root of 21, because the area of a square is just equal to the side length squared. So the
side length times the side length. So if we know the area is 21, then we set 𝑆 squared equal to 21. We square root both sides and find that the side length is equal to the square root of 21.
So if this side of the square is equal to the square root of 21, each of these squares are equal, so each one will have a side length that is the square root of 21. So in order to find the entire
side length of the big square, we’ll need to add them all together. And there are eight of them, so the square of 21 added together eight times or just eight multiplied to the square root of 21.
Therefore, the side length would be eight square root 21 centimetres.
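In symbols, the computation in the transcript is (the decimal value is my own approximation):

```latex
s^2 = 21 \;\Longrightarrow\; s = \sqrt{21}\ \text{cm},
\qquad
L = 8s = 8\sqrt{21}\ \text{cm} \approx 36.66\ \text{cm}
```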
Binary Search: An Introduction | Cratecode
Note: this page has been created with the use of AI. Please take caution, and note that the content of this page does not necessarily reflect the opinion of Cratecode.
Programming is filled with a plethora of algorithms, but one of the most efficient searching techniques is the binary search. It's like looking for a word in a dictionary, but for computers. So,
buckle up and let's dive into the world of binary search!
What is Binary Search?
Binary search is an algorithm that efficiently searches for a specific value within a sorted array or list by repeatedly dividing it in half. If the desired value is less than the middle element, it
searches in the left half, otherwise in the right half, until it finds the target value or exhausts the search space.
You may have used a similar process when searching for a name in a phone book (remember those?). Start at the middle, and if the name you're looking for is before the middle, you focus on the first
half, otherwise, you check the second half. You keep narrowing down the search space until you find the name or realize it's not there.
Binary Search Algorithm
Here's a simple outline of the binary search algorithm:
1. Set the lower and upper bounds of the search space (initially, the whole array).
2. Calculate the middle index of the search space.
3. Compare the middle element with the desired value.
4. If the middle element is the desired value, you've found it!
5. If the desired value is less than the middle element, update the upper bound to be one less than the middle index.
6. If the desired value is greater than the middle element, update the lower bound to be one more than the middle index.
7. Repeat steps 2-6 until you find the desired value or the lower bound is greater than the upper bound, meaning the value is not in the array.
Here's an example in Python:
def binary_search(arr, target):
    lower = 0
    upper = len(arr) - 1
    while lower <= upper:
        middle = (lower + upper) // 2
        mid_val = arr[middle]
        if mid_val == target:
            return middle
        elif mid_val < target:
            lower = middle + 1
        else:
            upper = middle - 1
    return -1  # Target value not found
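Assuming the input list is sorted in ascending order (binary search's precondition), the routine behaves like this; the function is repeated here so the snippet runs on its own:

```python
def binary_search(arr, target):
    # Classic iterative binary search over a sorted list.
    lower, upper = 0, len(arr) - 1
    while lower <= upper:
        middle = (lower + upper) // 2
        if arr[middle] == target:
            return middle
        elif arr[middle] < target:
            lower = middle + 1
        else:
            upper = middle - 1
    return -1  # not found

primes = [2, 3, 5, 7, 11, 13, 17]
print(binary_search(primes, 11))  # 4
print(binary_search(primes, 6))   # -1
```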
Use Cases
Binary search is a powerful algorithm that can be applied to various programming tasks, such as:
• Searching for a specific value in a large, sorted dataset.
• Finding the closest value to a given number in a sorted array.
• Determining the position of a new element to maintain a sorted order in a sorted array.
• Solving problems involving search spaces with monotonic properties, such as finding the square root of a number.
Binary search's efficiency comes from its O(log n) complexity, which means that for every doubling of the input size, it only takes one additional step to search for a value. This makes it an ideal
choice when dealing with large datasets or when performance is a priority.
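One of the use cases above, finding the position at which a new element keeps the array sorted, needs only a small twist on the same loop. This sketch mirrors the behaviour of Python's standard bisect_left:

```python
def insertion_point(arr, target):
    """Leftmost index at which target can be inserted to keep arr sorted."""
    lower, upper = 0, len(arr)
    while lower < upper:
        middle = (lower + upper) // 2
        if arr[middle] < target:
            lower = middle + 1
        else:
            upper = middle
    return lower

print(insertion_point([1, 3, 5, 7], 4))  # 2
print(insertion_point([1, 3, 5, 7], 8))  # 4 (append at the end)
```

Note that this variant never returns -1: every target has a valid insertion position, even when it is already present (in which case the leftmost matching index is returned).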
In Summary
Binary search is an efficient searching algorithm that saves time and resources by dividing and conquering sorted arrays or lists. Its wide range of use cases and performance benefits make it a
valuable asset in any programmer's toolkit. Now that you're equipped with the knowledge of binary search, you're ready to employ this powerful algorithm in your own projects! Happy searching!
Hey there! Want to learn more? Cratecode is an online learning platform that lets you forge your own path. Click here to check out a lesson: Rust - A Language You'll Love (psst, it's free!). | {"url":"https://cratecode.com/info/binary-search","timestamp":"2024-11-11T04:56:11Z","content_type":"text/html","content_length":"86055","record_id":"<urn:uuid:28f49564-a818-454d-b6a6-abc12d8d0c48>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00179.warc.gz"} |
Johan van Benthem
Philosophy 359, ADVANCED MODAL LOGIC, Spring 2008
Purpose of the Course
This course will present the technical background in modal logic to current work on logics of rational agency and intelligent interaction.
For a historical paper describing my idiosyncratic (though much more true than most) views on how logic developed in the 20th century, see this chapter in the Handbook of the Philosophy of Logic. For some lively reports from the research front, see this year's
We will also tie up with recent dissertations by students in Amsterdam and at Stanford:
Patrick Girard, Fenrong Liu, and Olivier Roy are a few names, but there are many more people whose work will come by, e.g., Audrey Yap, Tomohiro Hoshi, Eric Pacuit, and others.
The course will quickly enable you to understand a range of research taking place right now. See the Supporting Activities below.
Feel free to email questions and comments: lots of interesting things have been brought up in class by many of you -- and as
you must already have noticed, I have not been able to answer all of them, since many questions are still open in this area.
Preliminary Schedule
Week 1 Introduction: grand program and technical basics
For a survey paper of the Grand Program, see this invited lecture at the 13th Congress of Logic, Methodology and Philosophy of Science, held in Beijing, August 2008. Some Stanford philosophers have really very similar aims, cf. John Perry's Admit Seminar on April 4th.
For the technical basics in modal logic, bisimulation, complexity, connections with classical logics, etc., see this introductory chapter by
Blackburn & van Benthem in the Handbook of Modal Logic.
Week 2 Epistemic logic: statics
Languages, bisimulation, expressive power, axiomatic completeness.
Some basics are in this paper on a Farewell to Loneliness which appeared in Proceedings Logic Colloquium Muenster 2002.
Background paper on Logic and Information, with Maricarmen Martinez, to appear in the Handbook of the Philosophy of Information.
We worked through the technical basics of modal and epistemic logic that we will need,
emphasizing multi-agent aspects of epistemic logic beyond standard discussions of
the formalism, and then moving up to forms of knowledge for groups. Now we are
in a position to see how the dynamic aspects can be brought into logical focus.
Week 3 Epistemic logic and dynamics of public hard information
Logic of public announcement: dynamic logic, updates, PAL completeness: see paper for last week.
Dynamic epistemic logic DEL and completeness. Standard textbook, but we will explain things in class.
Special challenge: dealing with common knowledge, involves Kleene's Theorem and automata
(van Benthem, van Eijck & Kooi 2006) or modal mu-calculus (van Benthem & Ikegami 2008).
We have gone through the basics of public announcement logic PAL, with an emphasis on understanding
the basic methodology of 'recursion equations' for knowledge achieved after update, and how this preserves
bisimulation invariance and completeness. Following that, we have looked at a recent 'protocol version' TPAL
of PAL which describes constrained historical scenarios for learning and communication, mixing purely
epistemic information with irreducibly procedural information. This was presented at TARK 2007, but here
is the most recent draft. These logics also have philosophical applications, cf. this paper on the Fitch Paradox.
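To make the 'world elimination' picture behind PAL concrete, here is a small sketch of my own (not from the course materials): a two-world model in which the agent cannot tell whether p holds, until !p is truthfully announced.

```python
# Public announcement as model restriction: announcing a true formula
# eliminates the worlds where it fails (toy illustration, my own sketch).

def knows(worlds, rel, world, prop):
    """The agent knows prop at `world` iff prop holds in every accessible world."""
    return all(prop(v) for v in worlds if (world, v) in rel)

def announce(worlds, rel, prop):
    """Public announcement of prop: keep the prop-worlds and restrict
    the epistemic relation to them."""
    new_worlds = {w for w in worlds if prop(w)}
    new_rel = {(u, v) for (u, v) in rel if u in new_worlds and v in new_worlds}
    return new_worlds, new_rel

# Two worlds: p holds at w1 only; the agent cannot distinguish them.
worlds = {"w1", "w2"}
rel = {(u, v) for u in worlds for v in worlds}  # total indistinguishability
p = lambda w: w == "w1"

before = knows(worlds, rel, "w1", p)   # False: w2 is still considered possible
worlds2, rel2 = announce(worlds, rel, p)
after = knows(worlds2, rel2, "w1", p)  # True: only p-worlds remain
print(before, after)
```

The 'recursion equation' flavour shows up here too: knowledge after the announcement is computed by evaluating a relativized condition in the original model.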
Week 4 Dynamic-epistemic logic of partial observation
We surveyed the 'postcard' version of EL + PAL, then looked at connections with epistemology (Fitch
Paradox) which suggest extensions of PAL in turn [Hoshi and by now 6 co-authors], then returned to
'link cutting' versions of update, and eventually full product update using event models: examples
(email: cc versus bcc; master's thesis of Ji Ruan), number of worlds can grow, event models and
preconditions, language, logic. A few issues: (a) background in branching trees of events (we will
return to this in epistemic-temporal logic; of which DEL forms a well-chosen fragment), (b) special
case: describing games (e.g., van Ditmarsch' thesis on "Clue": model size grows from start to mid-
game, then shrinks toward end game, (c) 'protocols' can be dealt with to some extent by preconditions
(but delicate issue), (d) which properties of M and E are preserved in the product model MxE?, (e)
epistemology once more: see 7 Bridges paper. Give up the usual uniformity: describing successful
functioning in interaction with different agents should count as a hallmark of 'rationality'.
Week 5 Beliefs, conditional logic, and belief revision in dynamic logic
We discussed combined logics of knowledge, belief, and conditionals, for the next step of the description of
rational agents: their capacity for 'self-correction'. This adds plausibility orderings to epistemic ranges, and
we discussed some possible interaction axioms such as epistemic belief introspection. Then we gave a
complete logic for belief change under hard information, which hides some 'ugly' scenarios, such as
misleading true information leading to false beliefs. As a result, we raised the issue whether perhaps
a larger repertoire of epistemic-doxastic notions is needed, with the example of 'safe belief' in between
K and B. Next, we turned to belief revision, now done in dynamic logic style as an account of belief
change under soft information, viewed as transformations of the plausibility order. We gave some
complete logics, for radical and conservative revision. More details in the 2007 DLBR paper, and
the 2008 Stanford thesis of Patrick Girard. Contrast this with the postulational approach of 'AGM'.
Finally, we discussed a general format for revision using DEL-style technology, shifting the different
'policies' into a plausibility-based event model as input: Priority Update proposed by Baltag & Smets.
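As a toy illustration of what these order transformations amount to (my own sketch, not taken from the cited papers), radical upgrade with a proposition phi promotes all phi-worlds above all non-phi-worlds, keeping the old order within each zone:

```python
# Radical upgrade on a plausibility order, represented as a list of worlds
# from most to least plausible (toy sketch; world names are invented).

def radical_upgrade(order, phi):
    """All phi-worlds become more plausible than all non-phi-worlds;
    the relative order inside each group is preserved."""
    return [w for w in order if phi(w)] + [w for w in order if not phi(w)]

order = ["w1", "w2", "w3", "w4"]              # w1 most plausible
phi = lambda w: w in {"w2", "w4"}
print(radical_upgrade(order, phi))            # ['w2', 'w4', 'w1', 'w3']
```

Conservative revision would instead promote only the best phi-worlds and leave the rest of the order untouched.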
Evening talks by Tomohiro Hoshi on adding protocols as a form of 'procedural information' to PAL/DEL,
its technical theory, and philosophical applications: e.g., K phi now becomes different from K <!phi>T,
throwing new light on the problem of logical omniscience: Modal Distribution holds for the first notion,
but no longer for the second. Assaf Sharon discussed evidence and knowledge, and provided arguments
against omniscience, or even Hawthorne's weaker variants, showing how even upward monotonicity
of the evidence relation fails if you take Carnap-style probabilistic (or related more qualitative) scenarios.
Week 6 Preference statics and dynamics
Dissertations by Patrick Girard, Fenrong Liu, Olivier Roy.
For Better or For Worse: a survey paper in a forthcoming book edited by Gruene & Hanson on preference change, Springer Theory and Decision Library.
We discussed how preference can be represented between worlds, and then lifted to propositions, but also vice versa, from priority graphs to induced world order. Then we looked at dynamic actions
changing such orders, and the resulting complete logics of preference change. But eventually, it seems that preference is entangled with knowledge and belief inside models, so we considered how that
works, too. Finally, we briefly discussed extensive games as a setting where information update, belief revision, and preference change play at the same time, as a prelude to a dynamic analysis of
multi-agent scenarios which steps back from the usual hard-wired assumptions of 'standard rationality' for agents, allowing for alternatives.
Week 7 Games structure, solution procedures, and information flow
Check papers under Logic and Games at my research website. Our topics: games in dynamic-
epistemic logic, game solution as iterated public announcement (Rational Dynamics), dynamic
logics that analyze belief change in extensive games - bringing together our earlier approaches.
Cf. papers by Krzysztof Apt, Cedric Degremont, and Jonathan Zvesper; and talk by Sonja Smets.
Week 8 Temporal logics, protocols, and infinite behaviour over time
Papers with Eric Pacuit, Jelle Gerbrandy, Tomohiro Hoshi. This class will be taught by
Week 9 Further topics in current research
Guest presentations by Sonja Smets (games and dynamic rationality), Cedric Degremont
(dynamic doxastic logic and doxastic temporal logic): information below, as well as
former Amsterdam & Bloomington student Joshua Sack on probabilistic DEL.
You can audit the course, but if you need credit, get in touch by early May about a small individual project resulting in a paper.
Supporting Activities
We will have some guests after every block of topics in the course: epistemic dynamics, temporal structure, etc.
Format: Thursday evening sessions, 7:30 - at most 9 PM, with brief presentations aimed toward discussion.
Speakers lined up so far:
May 1
Assaf Sharon (Stanford)
The presentation will be based on the paper "Evidence and the Openness of Knowledge" by Assaf Sharon and Levi Spectre.
"This article is driven by a simple idea: in the analysis of knowledge, the logic of evidence must have a pivotal role.
A proper account of empirical knowledge, in other words, must march in step with the relation of evidential support.
Appealing as this idea may seem, even among contemporary epistemologists who address evidence in their theories,
little attention has been given to the actual workings of empirical evidence. Founding the theory of knowledge on
the basis of empirical evidence, we argue, has ramifications for epistemology that are as wide-ranging as they are
fundamental. Specifically, we argue that, since the relation of evidential support is not closed under known
entailment, empirical knowledge is also open.
Our argument proceeds in the following form. We inspect some of the most promising arguments in favor of
epistemic closure and argue that, in face of a proper understanding of empirical knowledge and its relation to
evidence, they fail. Reflecting on this failure and on the logic of evidence to which it is traced, we present a
positive argument against the validity of closure and specify its advantage over the standard argument for
epistemic openness, namely the argument from particular externalist theories of knowledge. In contrast to
common opinion, we claim, it is not externalist "belief-sensitivity" that is most congenial to epistemic
openness, but rather an evidentialist account of knowledge.
Without attempting at a full-fledged theory of evidence, we show that on the modest assumption that evidence
cannot support both a proposition and its negation, or, alternatively, that information that reduces the probability
of some proposition cannot constitute evidence for its truth, the relation of evidential support is not closed under
known entailment. We then turn to argue that given a minimal dependence of knowledge of empirical truths on
evidence, there is good reason to reject a number of intuitively appealing epistemic principles, including not only
the principle of epistemic closure, but also other, weaker principles. We present a number of significant benefits
of this position, namely, offering a unified solution to a range of central epistemological puzzles as well as
an account of their force and resilience to solution outside an evidential framework. Finally, we turn to
confront potential oppositions to our position.
Another way of stating the objective of this article is as setting a challenge for epistemic closure: if, as
we argue, the openness of evidence can be established, how can knowledge of empirical truths be closed?
Tomohiro Hoshi (Stanford)
I will talk about the system TPAL, which has been introduced in one of Johan's lectures. TPAL introduces a
new semantic structure of 'protocols' on PAL that constrains the permissible sequences of public announcements.
After presenting the framework of TPAL, I will point to some of the research directions that people (including myself)
have been pursuing recently. Two specific examples from my current dissertation research are: technical
extensions of TPAL, and epistemological applications, such as representation of explicit knowledge.
May 8 Audrey Yap (University of Victoria, formerly Stanford)
What I will talk about on Thursday is the addition of a past-looking operator to both DEL and TDEL.
In the context of my own work in adding a past operator to DEL, I will talk about some reasons why
we might want such an operator, and some of the expressive power it adds, such as the way in which
it allows for talk about learning. But when we look at the way in which DEL models need to be
modified in order to keep track of the history, we find structures that look very much like ETL
models. This suggests adding such operators to TDEL, whose models already have a temporal
structure. So I will also mention joint work with Tomohiro Hoshi extending his work on TDEL
with the addition of a past modal operator, as well as some further things we can do in adding the
Past operator to TDEL, for instance, the interaction between past operators and protocols.
[Paper has been circulated by email.]
May 15
No guest lecture!
May 22
Hans van Ditmarsch: University of Otago, New Zealand & IRIT, France
Quantifying over Announcements
Public announcement logic is an extension of multi-agent epistemic logic with dynamic operators
to model the informational consequences of announcements to the entire group of agents. Arbitrary
announcement logic is an extension of public announcement logic with a dynamic modal operator
that expresses what is true after arbitrary announcements. Intuitively, [] phi expresses that phi is
true after an arbitrary announcement psi. An example validity is <> (Kp v K ~p):
there is always a way to make the value of an atom p known.
I will also present various syntactic fragments and semantic notions involving knowledge and change,
such as the successful formulas as those for which [phi]phi is valid (after announcing phi, phi is true),
the knowable as those for which, for all agents, phi -> <> K phi is valid, and so on: positive, preserved, ...
Some variants of the language provide new opportunities. Instead of 'announcements', we can consider
'informative events', or even 'events'. Some results persist for such generalizations. Instead of quantifying
over announcements, we can quantify over announcements made by specific subgroups of agents only.
This interpretation provides a link from knowledge, via knowability, to ability. Given that announcements
may contain announcements, this interpretation also allows us to describe protocols, and specify
postconditions of, for example, security protocols between a sender and receiver.
Yet another 'version' (quoted to scare: it is technically quite different) interprets 'an informative event
has taken place' with *refinement* of the current information state, in the formal sense that is the dual
of simulation. (I.e., from the three conditions for bisimulation, only 'atoms' and 'back' are required.)
This version has promising theoretical results, and can be translated to bisimulation quantified logics.
May 29
Cedric Degremont: Belief Change in Temporal Doxastic Logic.
I will present a recent paper written with Johan van Benthem in which we compare two modal frameworks
for multi-agent belief revision: dynamic doxastic logics computing stepwise updates and temporal doxastic
logics describing global system evolutions, both based on plausibility pre-orders. We prove representation
theorems showing under which conditions a doxastic temporal model can be represented as the stepwise
evolution of a doxastic model under successive 'priority updates'. I will also discuss a protocol version of
one of Johan's dynamic logic of belief revision (which is a natural "belief"-based counterpart to TPAL).
I might discuss a few concrete applications as well.
Joshua Sack: Extending Probabilistic Dynamic Epistemic Logic
Abstract: Probabilistic epistemic logic (PEL) brings together qualitative representations of uncertainty
from epistemic logic and quantitative representations of uncertainty from probabilistic logic. Combining
these two, one can express sentences such as ``Ann believes the probability the coin landed heads is
either 1 or 0". In his paper "Probabilistic Dynamic Epistemic Logic", Kooi added dynamics to PEL,
allowing us to express qualitative and quantitative uncertainty of agents that would result from a
public announcement. This and further work on probabilistic dynamic epistemic logic (PDEL)
has assumed that every subset of the sample space is measurable.
This talk will propose a method for relaxing the restriction that every set is measurable in PDEL.
We will look at two potential sources of motivation for doing this in a dynamic setting. One is
that it may shed light on model transformations, that could be useful in proving completeness of
a system that adds a previous time operator to PDEL. Another is from an example given by
Halpern, Fagin, and Tuttle that was meant to motivate why an agent's sample space should
in some cases be different than the set of worlds the agent considers possible.
Sonja Smets: From Dynamic Belief Revision to Dynamic Rationality.
My talk (based on joint work with Alexandru Baltag and Jonathan Zvesper) is about using recent
developments in Logic to better understand and model ``rationality" in extensive games, by taking
into account the dynamics of belief. The main idea is that, in order to correctly factor in the evolution
of players' beliefs (about each other) throughout the game, we need a novel notion of ``dynamic
rationality". This is a context-dependent (time-dependent), knowledge-dependent, belief-based
and ``future-oriented" concept, that presupposes what one may call ``epistemic freedom of choice".
To formalize this notion, we use a belief-revision-friendly version of Dynamic Epistemic Logic,
obtained by extending Johan van Benthem's logic of conditional belief and public announcements
with an operator for ``arbitrary announcements" (quantifying over public announcements). Each
move (to a node v') in an extensive game with perfect information can be ``simulated" by a public
announcement (that node v' is reachable during the current play). The arbitrary announcement
operator can be used to capture properties that are ``stable" during the current play.
I apply the concept of dynamic rationality to propose a new solution to a famous debate (between
Aumann, on one hand, and Stalnaker and Reny, on the other), concerning the epistemic conditions for the
so-called backward-induction solution. ``Backward induction" is the oldest, simplest and perhaps
the most natural solution concept in Game Theory. But the reasoning underlying this solution
seems to give rise to a fundamental paradox (the so-called ``BI paradox"). I use the concepts
developed in this talk to address the paradox, and I argue that the correct epistemic condition
underlying the backward induction method is more general and weaker than Aumann's:
``common knowledge of stable belief in (dynamic) rationality".
There will also be a Stanford Workshop on Logic and Formal Epistemology May 31 - June 1st. | {"url":"https://staff.fnwi.uva.nl/j.vanbenthem/Phil359.2008.html","timestamp":"2024-11-05T03:31:54Z","content_type":"text/html","content_length":"54196","record_id":"<urn:uuid:686686dd-1d70-41a1-bd40-f13ee84e2185>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00863.warc.gz"} |
Bitwise Operators – Best Computer Institute, Rajahmundry
About Lesson
Bitwise Operators
These operators are used to perform bitwise operations on the operands. In other words, bitwise operators work on the binary representation of the value. Except for the NOT (~) operator, all the remaining
bitwise operators are binary operators. Bitwise operators only work with integer types, not with real (floating-point) values. Bitwise operators can work with numbers, variables and even conditions.
Symbol Operation
& AND
| OR
^ XOR
~ NOT/COMPLEMENT
>> RIGHT SHIFT
<< LEFT SHIFT
Bitwise AND (&)
Bitwise AND works similarly to Logical AND in its evaluation, but the difference is that Bitwise AND works on the binary representation of the operands.
Difference between Logical AND (&&) and Bitwise AND (&)
In the case of numbers or variables, Logical AND (&&) works on boolean values (any non-zero value is treated as true), while Bitwise AND works on the binary representation.
For example, 4 && 7 evaluates to 1 (both operands are non-zero), while 4 & 7 evaluates to 4 (100 & 111 = 100 in binary).
In the case of conditions, Logical AND(&&) works on the short-circuit AND while Bitwise AND works as regular AND evaluation.
a>b && b>c
a>b & b>c
&& is known as a short-circuit operator since it may or may not evaluate its 2nd argument based on the evaluation of its 1st argument.
Left Shift Operator
This is a binary operator that moves each bit in the binary representation of the given number to the left by the specified number of positions. The vacated positions on the right are filled with '0' after the
left shift operation.
//example code showing how the left shift operator works
#include <stdio.h>
int main()
{
    int var = 3;
    var = var << 1;        //shift the bits of 3 (011) one position left -> 6 (110)
    printf("%d", var);     //output 6
    return 0;
}
Left shifting is equivalent to multiplying the left operand by 2 raised to the power of the right operand.
In the above example, 3 << 1 means 3 × 2¹ = 6.
Right Shift Operator
This is a binary operator that moves each bit in the binary representation of the given number to the right by the specified number of positions. The vacated positions on the left are filled with '0' (for non-negative values).
//example code showing how the right shift operator works
#include <stdio.h>
int main()
{
    int var = 3;
    var = var >> 1;        //shift the bits of 3 (011) one position right -> 1 (001)
    printf("%d", var);     //OUTPUT 1
    return 0;
}
Right shifting is equivalent to dividing the left operand by 2 raised to the power of the right operand (integer division).
In the above example, 3 >> 1 means 3 / 2¹ = 1.
Numpy - Check If an Array contains a NaN value - Data Science Parichay
When working with Numpy arrays, it can happen that they contain one or more NaN (not a number) values. In this tutorial, we will look at how to check if an array contains a NaN value or not.
Steps to check if a Numpy array contains a NaN value
To check if an array contains a NaN value or not, use a combination of the numpy.isnan() function and the Python built-in any() function. The idea is to essentially check whether any value in the
array is NaN or not.
Use the following steps –
1. Apply the numpy.isnan() function to the entire array, this will result in a boolean array with True for the values that are NaN and False for the values that are not NaN.
2. Then, apply the any() function on the above boolean array to check if there are any True values in the above array.
The following is the syntax –
import numpy as np
# check if array contains a NaN value
np.isnan(ar).any()
Let’s now look at some examples of using the above syntax –
Example 1 – Check if a Numpy array has any NaN Values using numpy.isnan()
Let’s create a Numpy array containing a NaN value and use the above method to see if it gives us the correct result or not.
import numpy as np
# create an array
ar = np.array([1, 2, 3, np.nan, 5])
# result of np.isnan
print(np.isnan(ar))
# check if the array contains any nan values
print(np.isnan(ar).any())
Output:
[False False False True False]
True
In the above example, we print the result of the np.isnan() function, which you can see is a boolean array. We then use the any() function to check if there are any True values in the boolean array.
We get True as the output which indicates that there is at least one NaN value present in the array.
Example 2 – Numpy array without any NaN values
Let’s now create a Numpy array without any NaN values and apply the above method to see if we get the correct result or not.
import numpy as np
# create an array
ar = np.array([1, 2, 3, 4, 5])
# result of np.isnan
print(np.isnan(ar))
# check if the array contains any nan values
print(np.isnan(ar).any())
Output:
[False False False False False]
False
We get False as the output of the any() function. This means that are were no True values in the array resulting from np.isnan() which you can verify from the printed array.
If, on the other hand, you want to check whether all the values in a numpy array are NaN or not, use the Python built-in all() function instead of the any() function.
import numpy as np
# create an array
ar = np.array([np.nan, np.nan, np.nan])
# result of np.isnan
print(np.isnan(ar))
# check if the array contains only nan values
print(np.isnan(ar).all())
Output:
[ True True True]
True
We get True as the output.
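A small related trick not covered in the article: because np.isnan() returns a boolean mask, summing the mask counts how many NaN values the array contains:

```python
import numpy as np

ar = np.array([1, 2, np.nan, 4, np.nan])

# True entries count as 1 when summed, so this counts the NaNs
nan_count = int(np.isnan(ar).sum())
print(nan_count)  # 2
```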
For more on the numpy isnan() function, refer to its documentation.
[Sample Paper] If 6 is the critical point of the function, then find
Question 36 (ii) - CBSE Class 12 Sample Paper for 2023 Boards - Solutions of Sample Papers and Past Year Papers - for Class 12 Boards
Last updated at April 16, 2024 by Teachoo
If 6 is the critical point of the function, then find the value of the constant m. | {"url":"https://www.teachoo.com/19150/4115/Question-36-ii/category/CBSE-Class-12-Sample-Paper-for-2023-Boards/","timestamp":"2024-11-06T04:18:37Z","content_type":"text/html","content_length":"164434","record_id":"<urn:uuid:c9cd2989-5c3d-4cf3-b780-a1c2880c2ac8>","cc-path":"CC-MAIN-2024-46/segments/1730477027909.44/warc/CC-MAIN-20241106034659-20241106064659-00463.warc.gz"} |
How To Solve a Mirror Cube?
The Mirror Cube is an unofficial puzzle. It has pieces of different shapes which allows it to shape shift. When you scramble it, the puzzle takes a chaotic form and loses its cubic shape which makes
the solving process quite complicated.
To solve a Mirror cube one must know how to solve a regular 3x3 cube and should have a general understanding of how a cube works.
We’ll be needing these notations to solve a Mirror cube-
R- Move the right layer upwards
R’- Move the right layer downwards
L- Move the left layer downwards
L’- Move the left layer upwards
U – Move the top layer in the clockwise direction
U’- Move the top layer in the anti-clockwise direction
D- Move the bottom layer in the clockwise direction
D’- Move the bottom layer in the anti- clockwise direction
F- Move the front face in the clockwise direction
F’- Move the front face in the anti-clockwise direction
The first step to solve the Mirror cube is to make a cross on the bottom layer.
To do this, search for the largest edge pieces. Try to find the most suitable edge for each centre; this will take some patience as the Mirror cube is a shape shifter. You'll get better at it
once you've solved it 2-3 times. The more you practice finding edges for the cross, the better you'll get at it.
Corners of the first layer-
This step involves solving the corners of the first layer.
To solve the corners, one has to find the right corner for each slot and insert it. Again, this will take some time as the corners become quite hard to distinguish from one another during this stage.
Use this algorithm while keeping the cross at the bottom to insert a corner piece which is in front of the slot-
R U’ R’ U
Middle Layer-
To solve the middle layer, you should already know how to solve a regular 3x3 cube comfortably.
The cube must be held in such a way that the solved face should be at the bottom.
Look for the biggest edge and try to find the largest gap in the middle layer. This will help you to solve your first edge. Keep looking for the edges and insert them.
This step is like the one before it. You’ll have to practice solving the middle layer multiple times to get better at it.
These steps will complete the middle layer and the cube will look something like this.
Orienting the edges on the top layer-
This step is extremely easy! All you have to do is perform this algorithm repeatedly until the edges get oriented:
F R U R' U' F'
You might face some problems with the permutation of the edges in this step.
A simple algorithm to resolve this issue is the sune.
Do this algorithm while keeping one unsolved edge at the front and the other unsolved edge on the left-
R U R’ U R U2 R’
Corner permutation-
For this step we will use commutators.
The commutator will help you moving the corners of the top layer to your desired slot and the corner from that slot will get inserted into the other slot.
R’ D’ R U/U’/U2(DEPENDS ON THE SITUATION R’D R U’
Keep moving the corners till you get them at the right places.
Corner orientation-
Once you have inserted the corners at the correct positions , you can orient them.
To orient the corners use this algorithm while keeping the unsolved layer to your left-
R’ U’ R U
Keep doing this algorithm till the corner gets oriented.
To orient the next corner, use the L moves to bring the corner to the slot and orient it.
GAMS (General Algebraic Modeling System)
GAMS (General Algebraic Modeling System) is a high-level modeling system specifically designed for mathematical programming and optimization. It was first introduced in the early 1980s by Alexander
Meeraus at the World Bank as a tool for solving complex linear, nonlinear, and mixed-integer optimization problems. The language is designed to work with large-scale models, particularly in fields
such as economics, engineering, energy, agriculture, and management science. Its primary purpose is to help modelers formulate mathematical problems and then solve them using various optimization solvers.
types, including linear programming (LP), nonlinear programming (NLP), mixed-integer programming (MIP), dynamic optimization, and more. This makes it a flexible tool for researchers and professionals
working with complex mathematical models across diverse domains.
The structure of GAMS is relatively straightforward. A GAMS program consists of declarations of sets, parameters, variables, and equations, followed by model definition and execution commands to
solve the model. The language emphasizes ease of use, allowing the modeler to concentrate on the mathematical structure of the problem rather than algorithmic details.
Here is an example of a simple linear programming problem formulated in GAMS:
Set i /1*3/;
Parameter a(i) / 1 1, 2 2, 3 3 /;
* x is declared positive so that the maximization stays bounded
Positive Variable x(i);
Variable z;
Equation objective, constraint;
objective .. z =e= sum(i, a(i) * x(i));
constraint .. sum(i, x(i)) =l= 10;
Model simpleModel /all/;
Solve simpleModel using lp maximizing z;
In this example, a simple linear optimization model is defined. The objective equation maximizes the value of z, and the constraint ensures that the sum of variables x(i) is less than or equal to 10.
The model is then solved using linear programming (lp).
GAMS is particularly beneficial when working with:
• Large-scale models: It is ideal for handling optimization problems that involve thousands of variables and constraints.
• Scenario analysis: GAMS is frequently used to explore multiple scenarios and perform sensitivity analysis, often used in energy planning and economic modeling.
• Complex economic and engineering systems: Professionals in fields like energy, transportation, and agriculture use GAMS to model complex systems and optimize resource use and planning.
Many industries use GAMS for decision-making processes, such as power systems optimization, logistics, investment planning, and resource allocation. Given its extensive library of solvers and its
ability to handle multiple types of optimization problems, GAMS remains a popular tool in academic research and industry, offering both flexibility and scalability for optimization challenges. | {"url":"http://catencode.com/codes/GAMS","timestamp":"2024-11-05T22:31:18Z","content_type":"text/html","content_length":"28081","record_id":"<urn:uuid:4a14ab62-0dae-4d7a-a323-c32d8c37be2c>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00526.warc.gz"} |
Data fitting - (Programming for Mathematical Applications) - Vocab, Definition, Explanations | Fiveable
Data fitting
from class:
Programming for Mathematical Applications
Data fitting is the process of constructing a mathematical model that represents a set of data points, aiming to achieve the best approximation of the underlying relationship between variables. This
technique is essential for analyzing trends, making predictions, and understanding the dynamics in various fields such as engineering, economics, and the sciences. In particular, data fitting helps
in refining models by minimizing the differences between observed values and those predicted by the model.
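To make the definition concrete, here is a minimal sketch (an illustration with made-up data, not part of the original text) of fitting a straight line by least squares with NumPy, which picks the coefficients that minimize the squared differences between observed and predicted values:

```python
import numpy as np

# noisy observations of an (assumed) underlying line y = 2x + 1
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])

# fit a degree-1 polynomial (a line) by minimizing squared residuals
slope, intercept = np.polyfit(x, y, deg=1)
print(round(slope, 2), round(intercept, 2))  # close to the true values 2 and 1
```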
congrats on reading the definition of data fitting. now let's actually learn it.
5 Must Know Facts For Your Next Test
1. Data fitting is commonly used to analyze relationships in experimental data, allowing researchers to derive equations that describe those relationships.
2. Spline interpolation is a specific type of data fitting that uses piecewise polynomial functions to create a smooth curve that passes through all given data points.
3. Different types of spline functions, like cubic splines, can provide varying levels of flexibility and smoothness for modeling data effectively.
4. The choice of the fitting method can greatly affect the accuracy and interpretability of the model, making it crucial to select an appropriate approach based on data characteristics.
5. Overfitting can occur in data fitting when a model becomes too complex and captures noise instead of the underlying trend, leading to poor predictions on new data.
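As one concrete realization of spline fitting (an illustrative sketch using SciPy, not part of the original text), a cubic spline passes exactly through every data point while staying smooth between them:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# sample data points the spline must pass through
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([0.0, 1.0, 0.0, 1.0])

cs = CubicSpline(x, y)  # piecewise cubic polynomials joined smoothly at the knots

# the spline interpolates: it reproduces every data point exactly
print(bool(np.allclose(cs(x), y)))  # True
# and it can be evaluated anywhere in between, e.g. at x = 0.5
print(float(cs(0.5)))
```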
Review Questions
• How does spline interpolation enhance the process of data fitting compared to traditional linear methods?
□ Spline interpolation enhances data fitting by providing a more flexible approach than traditional linear methods. Unlike linear fitting that connects points with straight lines, spline
interpolation uses piecewise polynomials to create smooth curves that can accurately capture complex patterns in the data. This allows for better representation of local behavior while
maintaining overall continuity, making it particularly useful in applications where accuracy at specific intervals is crucial.
• Evaluate the impact of selecting different types of spline functions on the quality of data fitting in practical applications.
□ Selecting different types of spline functions can significantly impact the quality of data fitting in practical applications. For instance, cubic splines are popular due to their balance
between flexibility and smoothness, but they may not perform as well with very oscillatory datasets. Conversely, higher-degree splines might fit more closely to the data but can lead to
overfitting. Therefore, understanding the dataset and choosing an appropriate spline type is essential for achieving accurate predictions and reliable results.
• Propose a strategy for addressing overfitting in data fitting when using spline interpolation techniques.
□ To address overfitting in data fitting with spline interpolation techniques, one effective strategy is to apply regularization methods, such as penalizing excessive complexity in the spline
model. By introducing constraints on the spline coefficients or limiting the degree of splines used, we can simplify the model while still capturing important trends. Additionally, validating
the model with separate test datasets can help identify overfitting by ensuring that predictions remain accurate outside of the original training dataset.
Vector Fields
16.1 Vector Fields
The vectors in Figure 1 are air velocity vectors that indicate the wind speed and direction at points 10 m above the
surface elevation in the San Francisco Bay area.
We see at a glance from the largest arrows in part (a) that the greatest wind speeds at that time occurred as the winds entered the bay across the Golden Gate Bridge. Part (b)
shows the very different wind pattern 12 hours earlier.
Associated with every point in the air we can imagine a
wind velocity vector. This is an example of a velocity vector field.
Other examples of velocity vector fields are illustrated in Figure 2: ocean currents and flow past an airfoil.
Figure 2: Velocity vector fields
Another type of vector field, called a force field, associates a force vector with each point in a region. An example is the gravitational force field.
In general, a vector field is a function whose domain is a set of points in ℝ² (or ℝ³) and whose range is a set of vectors in V₂ (or V₃).
The best way to picture a vector field is to draw the arrow representing the vector F(x, y) starting at the point (x, y).
Of course, it’s impossible to do this for all points (x, y), but we can gain a reasonable impression of F by doing it for a few representative points in D as in Figure 3.
Figure 3: Vector field on ℝ²
Since F(x, y) is a two-dimensional vector, we can write it in terms of its component functions P and Q as follows:
F(x, y) = P(x, y) i + Q(x, y) j = ⟨P(x, y), Q(x, y)⟩
or, for short, F = P i + Q j
Notice that P and Q are scalar functions of two variables and are sometimes called scalar fields to distinguish them from vector fields.
A vector field F on ℝ³ is pictured in Figure 4.
We can express it in terms of its component functions P, Q, and R as
F(x, y, z) = P(x, y, z) i + Q(x, y, z) j + R(x, y, z) k
Figure 4: Vector field on ℝ³
As with the vector functions, we can define continuity of vector fields and show that F is continuous if and only if its component functions P, Q, and R are continuous.
We sometimes identify a point (x, y, z) with its position vector x = ⟨x, y, z⟩
and write F(x) instead of F(x, y, z).
Then F becomes a function that assigns a vector F(x) to a vector x.
Example 1
A vector field on ℝ² is defined by F(x, y) = –y i + x j.
Describe F by sketching some of the vectors F(x, y) as in Figure 3.
Figure 3: Vector field on ℝ²
Example 1 – Solution
Since F(1, 0) = j, we draw the vector j = ⟨0, 1⟩ starting at the point (1, 0) in Figure 5.
F(x, y) = –y i + x j
Continuing in this way, we calculate several other
representative values of F(x, y) in the table and draw the corresponding vectors to represent the vector field in Figure 5.
It appears from Figure 5 that each arrow is tangent to a circle with center the origin.
To confirm this, we take the dot product of the position vector x = x i + y j with the vector F(x) = F(x, y):
x · F(x) = (x i + y j) · (–y i + x j)
= –xy + yx
= 0
This shows that F(x, y) is perpendicular to the position vector ⟨x, y⟩ and is therefore tangent to a circle with center the origin and radius |x| = √(x² + y²).
Notice also that

|F(x, y)| = √((–y)² + x²) = √(x² + y²) = |x|

so the magnitude of the vector F(x, y) is equal to the radius of the circle.
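The claims in this solution can be checked numerically. The sketch below (an addition of this write-up, not part of the slides) evaluates F(x, y) = –y i + x j at a few points and verifies that F is perpendicular to the position vector and equal to it in length:

```python
import numpy as np

def F(x, y):
    """The vector field of Example 1: F(x, y) = -y i + x j."""
    return np.array([-y, x])

for (x, y) in [(1.0, 0.0), (2.0, 3.0), (-1.5, 0.5)]:
    pos = np.array([x, y])
    vec = F(x, y)
    # perpendicular to the position vector ...
    assert abs(np.dot(pos, vec)) < 1e-12
    # ... and equal in length to it
    assert abs(np.linalg.norm(vec) - np.linalg.norm(pos)) < 1e-12

print("F is tangent to circles centered at the origin")
```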
Example 3
Imagine a fluid flowing steadily along a pipe and let V(x, y, z) be the velocity vector at a point (x, y, z).
Then V assigns a vector to each point (x, y, z) in a certain domain E (the interior of the pipe) and so V is a vector field on ℝ³ called a velocity field.
A possible velocity field is illustrated in Figure 13.
Figure 13: Velocity field in fluid flow
The speed at any given point is indicated by the length of the arrow.
Velocity fields also occur in other areas of physics.
For instance, the vector field in Example 1 could be used as the velocity field describing the counterclockwise
rotation of a wheel.
Example 4
Newton’s Law of Gravitation states that the magnitude of the gravitational force between two objects with masses m and M is

|F| = mMG / r²

where r is the distance between the objects and G is the gravitational constant. (This is an example of an inverse square law.)
Let’s assume that the object with mass M is located at the origin in . (For instance, M could be the mass of the earth and the origin would be at its center.)
Let the position vector of the object with mass m be x = ⟨x, y, z⟩. Then r = |x|, so r² = |x|².
The gravitational force exerted on this second object acts toward the origin, and the unit vector in this direction is –x / |x|.
Therefore the gravitational force acting on the object at x = ⟨x, y, z⟩ is

F(x) = –(mMG / |x|³) x        (3)
[Physicists often use the notation r instead of x for the position vector, so you may see Formula 3 written in the form F = –(mMG/r^3)r.]
The function given by Equation 3 is an example of a vector field, called the gravitational field, because it associates a vector [the force F(x)] with every point x in space.
Formula 3 is a compact way of writing the gravitational field, but we can also write it in terms of its component functions by using the facts that x = x i + y j + z k and |x| = √(x² + y² + z²):

F(x, y, z) = (–mMGx / (x² + y² + z²)^(3/2)) i + (–mMGy / (x² + y² + z²)^(3/2)) j + (–mMGz / (x² + y² + z²)^(3/2)) k
The gravitational field F is pictured in Figure 14.
Figure 14: Gravitational force field
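Formula 3 is easy to sanity-check numerically. The following sketch (an addition of this write-up, with deliberately non-physical constants chosen only to exercise the algebra) confirms that the field points toward the origin with magnitude mMG/r²:

```python
import numpy as np

def gravitational_field(x, m, M, G):
    """F(x) = -(m*M*G / |x|^3) x  -- points from x toward the origin."""
    r = np.linalg.norm(x)
    return -(m * M * G / r**3) * np.asarray(x, dtype=float)

# illustrative (non-physical) values, chosen just to check the algebra
m, M, G = 2.0, 5.0, 1.0
x = np.array([3.0, 0.0, 4.0])          # |x| = 5
F = gravitational_field(x, m, M, G)

# magnitude obeys the inverse-square law: m*M*G / r^2
assert np.isclose(np.linalg.norm(F), m * M * G / 5**2)
# direction is opposite to x (toward the origin)
assert np.allclose(F / np.linalg.norm(F), -x / 5)
print(F)
```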
Example 5
Suppose an electric charge Q is located at the origin.
According to Coulomb’s Law, the electric force F(x) exerted by this charge on a charge q located at a point (x, y, z) with position vector x = ⟨x, y, z⟩ is

F(x) = (εqQ / |x|³) x        (4)

where ε is a constant (that depends on the units used).
For like charges, we have qQ > 0 and the force is repulsive; for unlike charges, we have qQ < 0 and the force is attractive.
Notice the similarity between Formulas 3 and 4. Both vector fields are examples of force fields.
Instead of considering the electric force F, physicists often consider the force per unit charge:

E(x) = F(x) / q = (εQ / |x|³) x

Then E is a vector field on ℝ³ called the electric field of Q.
Gradient Fields
If f is a scalar function of two variables, recall that its gradient ∇f (or grad f ) is defined by
∇f(x, y) = f_x(x, y) i + f_y(x, y) j

Therefore ∇f is really a vector field on ℝ² and is called a gradient vector field.

Likewise, if f is a scalar function of three variables, its gradient is a vector field on ℝ³ given by

∇f(x, y, z) = f_x(x, y, z) i + f_y(x, y, z) j + f_z(x, y, z) k
Example 6
Find the gradient vector field of f(x, y) = x^2y – y^3. Plot the gradient vector field together with a contour map of f. How are they related?
The gradient vector field is given by

∇f(x, y) = (∂f/∂x) i + (∂f/∂y) j = 2xy i + (x² – 3y²) j
Example 6 – Solution
Figure 15 shows a contour map of f with the gradient vector field.
Notice that the gradient vectors are perpendicular to the level curves.
Figure 15
Notice also that the gradient vectors are long where the level curves are close to each other and short where the curves are farther apart.
That’s because the length of the gradient vector is the value of the directional derivative of f and closely spaced level curves indicate a steep graph.
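As a quick numerical sanity check (an addition of this write-up, not in the slides), the analytic gradient ∇f = 2xy i + (x² – 3y²) j can be compared against centered finite differences:

```python
def f(x, y):
    return x**2 * y - y**3

def grad_f(x, y):
    """Analytic gradient of f(x, y) = x^2 y - y^3."""
    return (2 * x * y, x**2 - 3 * y**2)

def numerical_grad(f, x, y, h=1e-6):
    """Centered finite-difference approximation of the gradient."""
    fx = (f(x + h, y) - f(x - h, y)) / (2 * h)
    fy = (f(x, y + h) - f(x, y - h)) / (2 * h)
    return (fx, fy)

for (x, y) in [(1.0, 2.0), (-0.5, 0.3)]:
    ax, ay = grad_f(x, y)
    nx, ny = numerical_grad(f, x, y)
    assert abs(ax - nx) < 1e-4 and abs(ay - ny) < 1e-4

print("analytic gradient matches finite differences")
```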
A vector field F is called a conservative vector field if it is the gradient of some scalar function, that is, if there exists a function f such that F = ∇f.
In this situation f is called a potential function for F.
Not all vector fields are conservative, but such fields do arise frequently in physics.
For example, the gravitational field F in Example 4 is conservative because if we define | {"url":"https://9lib.co/document/zgw69712-vector-fields.html","timestamp":"2024-11-14T07:15:16Z","content_type":"text/html","content_length":"150831","record_id":"<urn:uuid:8273ccca-8717-4130-8f33-f7b225160e68>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00689.warc.gz"} |
A Lesser-known Advantage of Using L2 Regularization — Part II
A lesser-known advantage of L2 regularization that most data scientists aren't aware of.
A couple of days back, we discussed how L2 regularization helps us address multicollinearity.
Multicollinearity arises when two (or more) features are highly correlated OR two (or more) features can predict another feature:
The core idea we discussed in that post was that, say, we are using linear regression, and the dataset has multicollinearity.
• Without L2 regularization, obtaining a single value for the parameters (θ) that minimizes the RSS is impossible.
• L2 regularization helps us address this issue and obtain an optimum solution.
This is demonstrated below:
Note: If you want to learn about the probabilistic origin of L2 regularization, check out this article: The Probabilistic Origin of Regularization.
I mentioned that post here because, other than reducing overfitting and addressing multicollinearity, there's one more lesser-talked-about problem that L2 regularization neatly handles, which is related to matrix invertibility.
Today, I want to talk about that.
Let’s begin!
Consider the OLS solution we obtain from linear regression:
• X is the matrix of independent variables (Shape: n*m).
• y is the vector of targets (Shape: n*1).
• θ is the weight vector (Shape: m*1).
If we look closely, we see that the OLS solution will only exist if our matrix (XᵀX) is invertible.
But a problem arises when the number of features (m) exceeds the number of data points (n) — m>n, in which case, this matrix will not be invertible.
Of course, it is very unlikely to run into such a situation in typical ML datasets. However, I have seen it in many other fields that actively use ML, such as genomics.
Nonetheless, we never know when we may encounter such a situation, so it's good to be aware of it. Moreover, today's post will show you one more way to understand how L2 regularization eliminates multicollinearity.
For simplicity, consider we have the following input matrix X, which has more features than rows:
Here, provided the first two columns are linearly independent, it is ALWAYS possible to represent, say, the third column as a linear combination of the first two columns. This is depicted below:
The above system of linear equations has a solution for ‘a’ and ‘b’.
This shows that all datasets with the number of features exceeding the number of rows possess perfect multicollinearity.
This is because any column will ALWAYS be a linear combination of the other columns.
Speaking in a more mathematical sense, when (m>n), we always get a system of linear equations where the number of unknowns (a and b above) is equal to the number of equations we can generate.
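To make this concrete, here is a tiny pure-Python sketch (the specific numbers are illustrative, not from the post) that solves for 'a' and 'b' when a 2-row matrix has its third column written as a combination of the first two:

```python
# With 2 rows, column3 = a*column1 + b*column2 gives two equations in the
# two unknowns a and b. For columns [1, 4], [2, 5], [3, 6]:
#   1*a + 2*b = 3
#   4*a + 5*b = 6
# Solve by Cramer's rule.
det = 1 * 5 - 2 * 4           # -3, nonzero since the columns are independent
a = (3 * 5 - 2 * 6) / det     # -1.0
b = (1 * 6 - 3 * 4) / det     # 2.0

# Verify that the third column really is a*col1 + b*col2.
assert a * 1 + b * 2 == 3
assert a * 4 + b * 5 == 6
print(a, b)                   # -1.0 2.0
```

The system always has exactly as many equations as rows, so with two rows any third column is pinned down by two unknowns — exactly the perfect multicollinearity described above.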
However, had our dataset contained more rows than columns (n>m), it would not have been possible to solve that system of linear equations (unless the dataset itself possessed multicollinearity):
Now, given that any dataset in which the number of columns exceeds the number of rows has perfect multicollinearity, we can go back to what we discussed in the earlier post.
Without the L2 penalty, the residual plot will result in a valley. As a result, obtaining a single value for the parameters (θ) that minimizes the RSS will be impossible. This is depicted below:
With the L2 penalty, however, we get the following plot, which removes the valley we saw earlier and provides a global minima to the RSS error:
Now, obtaining a single value for the parameters (θ) that minimizes the RSS is possible.
In fact, there is one more way to prove that if X (irrespective of its dimensions) has perfect multicollinearity, then L2 regularization helps us eliminate it.
Consider the above 2*4 matrix again, and let’s apply the linear combination to the third and fourth columns:
Computing XᵀX on the final result, we get a noninvertible matrix:
This shows that if the dataset X (irrespective of the shape) has perfect multicollinearity, XᵀX will not be invertible (remember this as we’ll refer back to this shortly).
Moving on, let’s consider the loss function with the L2 penalty:
The objective is to find the parameters θ that minimize this.
Thus, we differentiate the above expression with respect to θ, and set it to zero:
Next, we solve for θ to get the following:
Notice something here.
The OLS solution DOES NOT invert XᵀX on its own now. Instead, there's an additional diagonal matrix λI, which gets added to XᵀX, and since XᵀX is positive semi-definite, the sum is guaranteed to be invertible for any λ > 0:
That was another proof of how regularization helps us eliminate multicollinearity.
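Both facts (that XᵀX is singular when m > n, and that XᵀX + λI is not) can be verified with a small pure-Python sketch; the matrix values and helper names below are illustrative:

```python
# Toy check: for a 2x3 X (more features than rows), X^T X is singular,
# but X^T X + lambda*I is invertible for lambda > 0.

def transpose(A):
    return [list(col) for col in zip(*A)]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def det3(M):
    # Cofactor expansion along the first row of a 3x3 matrix
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

X = [[1.0, 2.0, 3.0],      # 2 rows (data points), 3 columns (features)
     [4.0, 5.0, 6.0]]
XtX = matmul(transpose(X), X)
print(det3(XtX))           # 0.0 -> the plain OLS inverse does not exist

lam = 0.1
ridge = [[XtX[r][c] + (lam if r == c else 0.0) for c in range(3)]
         for r in range(3)]
print(det3(ridge) != 0)    # True -> the ridge inverse exists
```

The determinant of XᵀX comes out exactly zero because its rank is at most the number of rows of X, while adding λ to each diagonal entry lifts every eigenvalue by λ and restores invertibility.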
So remember that L2 regularization is not just a remedy to reduce overfitting. It is also useful to eliminate multicollinearity.
Hope you learned something new today :)
👉 Over to you: What are some other advantages of using L2 regularization?
Parsimonious module inference in large networks
Abstract
We investigate the detectability of modules in large networks when the number of modules is not known in advance. We employ the minimum description length principle which seeks to minimize the total
amount of information required to describe the network, and avoid overfitting. According to this criterion, we obtain general bounds on the detectability of any prescribed block structure, given the
number of nodes and edges in the sampled network. We also obtain that the maximum number of detectable blocks scales as √𝑁, where 𝑁 is the number of nodes in the network, for a fixed average
degree ⟨𝑘⟩. We also show that the simplicity of the minimum description length approach yields an efficient multilevel Monte Carlo inference algorithm with a complexity of 𝑂(𝜏𝑁log𝑁), if the number
of blocks is unknown, and 𝑂(𝜏𝑁) if it is known, where 𝜏 is the mixing time of the Markov chain. We illustrate the application of the method on a large network of actors and films with
over 10⁶ edges, and a dissortative, bipartite block structure.
Question #4444b
3 Answers
Kinetic energy is equal to:

$K = \frac{7}{10} m v^2$

Assuming there are no losses, the speed is constant at 8 m/s.

So, $K = \frac{7}{10} \cdot 7 \cdot 8^2 = 313.6$ Joules
If we also take into account the rotational kinetic energy due to the rolling motion, then we get:

$(E_K)_{Total} = (E_k)_{Translation} + (E_k)_{Rotation}$

$= \frac{1}{2} m v^2 + \frac{1}{2} I \omega^2$, where $I$ is the moment of inertia of the sphere, and $\omega$ is the angular velocity at which the sphere is rotating.

$= \left(\frac{1}{2} \times 7 \times 8^2\right) + \left(\frac{1}{2} \times \frac{2}{5} \times 7 \times R^2 \times \omega^2\right)$

$= \left(224 + \frac{7}{5} R^2 \omega^2\right)$ Joules

where $R$ is the radius of the solid sphere.

But since the linear and angular velocities may be related by the equation $v = R\omega$, we may substitute this into the expression to yield:

$= 224 + \frac{7}{5} R^2 \left(\frac{v}{R}\right)^2 = 224 + \frac{7 \times 8^2}{5} = 313.6 \textrm{ J}$
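The arithmetic in this answer can be reproduced in a few lines of Python (a sketch, assuming the given m = 7 kg and v = 8 m/s):

```python
# Total kinetic energy of a rolling solid sphere, m = 7 kg, v = 8 m/s.
# For a solid sphere, I = (2/5) m R^2 and omega = v / R, so the radius
# cancels and the rotational term is (1/5) m v^2.
m, v = 7.0, 8.0
translational = 0.5 * m * v**2               # 224.0 J
rotational = 0.5 * (2.0 / 5.0) * m * v**2    # 89.6 J
total = translational + rotational           # approximately 313.6 J
print(total)
```

Note that the sum equals the compact form (7/10) m v² used in the first answer, since 1/2 + 1/5 = 7/10.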
KE = $\frac{1}{2} m v^2$
The question phrasing implies frictionless motion, and NO dimensions of the mass are given, so shape is irrelevant (and also rotational motion).
Some equations posted omitted the "1/2" and are incorrect.
cs193p - Project #3 Assignment #3 Task #9 - cs193p assignment solutions et al
cs193p – Project #3 Assignment #3 Task #9
Please note, this blog entry is from a previous course. You might want to check out the current one.
Your graphing calculator must be able to graph discontinuous functions properly (i.e. it must only draw lines to or from points which, for a given value of M, the program being graphed evaluates
to a Double (i.e. not nil) that .isNormal or .isZero).
Part of this task we have already handled in the previous one, not drawing when the data source returns zero or “not normal” values. But e.g. for 1/x we still had a line from negative infinity to
positive infinity. To avoid this move to new points instead of drawing lines there, when we had “impossible” previous values:
override func drawRect(rect: CGRect) {
    if let y = dataSource?.y((point.x - origin.x) / scale) {
        if !y.isNormal && !y.isZero {
            firstValue = true   // impossible value: move instead of drawing next time
        } else if firstValue {
            path.moveToPoint(CGPoint(x: point.x, y: origin.y - y * scale))
            firstValue = false
        } else {
            path.addLineToPoint(CGPoint(x: point.x, y: origin.y - y * scale))
        }
    } else { firstValue = true }
}
The complete code for the task #9 is available on GitHub.
3 thoughts on “cs193p – Project #3 Assignment #3 Task #9”
1. This doesn’t seem to actually work. If you try and graph 1/x it still connects the line at the two extremes because the value returned from the y(x) method is always either zero or normal. It’s hard to see because the connection is being drawn right on top of the y axis, but if you try this function, it’ll be more obvious: 1 / (M - 1). I haven’t found a way to solve this yet.
1. My implementation to solve that:
Instead of just continuing and adding a line, I move to the next point without drawing the line.
var path = UIBezierPath()
var onlyMove = true
for var pixelX = (bounds.minX * contentScaleFactor); pixelX <= (bounds.maxX * contentScaleFactor); pixelX++ {
    let pointX = CGFloat(pixelX) / contentScaleFactor
    var currentXInGraph = (pointX - origin!.x) / scale
    let first = pixelX == (bounds.minX * contentScaleFactor)
    if let currentYInGraph = source.yForX(currentXInGraph) {
        if currentYInGraph.isNormal || currentYInGraph.isZero {
            let pointY = origin!.y - (currentYInGraph * scale)
            let next = CGPointMake(pointX, pointY)
            if first || onlyMove {
                path.moveToPoint(next)     // move without drawing a line
            } else {
                path.addLineToPoint(next)  // draw the connecting line
            }
            onlyMove = false
        } else {
            onlyMove = true                // undefined y: skip drawing
        }
    } else {
        onlyMove = true
    }
}
2. Thanks for posting all of this! As someone doing CS 193P self paced via iTunes U, it’s really helpful to have this material available for help when I’m stuck!
This solution works for certain values of ‘scale’. But after zooming in or out on the graph, ‘scale’ can end up as a longer decimal value. In the example of 1 / (M - 1), you can get into a situation where ‘currentXInGraph’ never hits the undefined value, like so:
x = .9674
x = .9934
x = 1.0192
In this scenario, the drawRect will end up connecting the point at x = .9934 to the point at x = 1.0192, since it never evaluated y at x = 1. Do you have any ideas on how to solve that? | {"url":"https://cs193p.m2m.at/cs193p-project-3-assignment-3-task-9-winter-2015/","timestamp":"2024-11-03T12:44:10Z","content_type":"text/html","content_length":"95524","record_id":"<urn:uuid:1ab11b0d-d419-4ace-a027-537f23e58964>","cc-path":"CC-MAIN-2024-46/segments/1730477027776.9/warc/CC-MAIN-20241103114942-20241103144942-00372.warc.gz"} |
All publications sorted by year
All publications sorted by year
1. E.D. Sontag. Notes on Mathematical Systems Biology. 2024. [WWW] Keyword(s): systems biology, mathematical biology.
2. A. C. B. de Oliveira, M. Siami, and E. D. Sontag. Regularising numerical extremals along singular arcs: a Lie-theoretic approach. In M.A. Belabbas, editor, Geometry and Topology in Control,
Proceedings of BIRS Workshop. American Institute of Mathematical Sciences Press, 2024. Note: To appear.[PDF] Keyword(s): optimal control, nonlinear control, Lie algebras, robotics.
│ │Numerical ``direct'' approaches to time-optimal control often fail to find solutions that are singular in the sense of the Pontryagin Maximum Principle. These approaches behave better│
│ │when searching for saturated (bang-bang) solutions. In previous work by one of the authors, singular solutions were theoretically shown to exist for the time-optimal problem for │
│ │two-link manipulators under hard torque constraints. The theoretical results gave explicit formulas, based on Lie theory, for singular segments of trajectories, but the global │
│ │structure of solutions remains unknown. In this work, we show how to effectively combine these theoretically found formulas with the use of general-purpose optimal control softwares. │
│ │By using the explicit formula given by theory in the intervals where the numerical solution enters a singular arcs, we not only obtain an algebraic expression for the control in that │
│ │interval, but we are also able to remove artifacts present in the numerical solution. In this way, the best features of numerical algorithms and theory complement each other and │
│ │provide a better picture of the global optimal structure. We showcase the technique on a 2 degrees of freedom robotic arm example, and also propose a way of extending the analyzed │
│ │method to robotic arms with higher degrees of freedom through partial feedback linearization, assuming the desired task can be mostly performed by a few of the degrees of freedom of │
│Abstract:│the robot and imposing some prespecified trajectory on the remaining joints. │
3. M.A. Al-Radhawi, D. Angeli, and E.D. Sontag. On structural contraction of biological interaction networks. 2024. Note: To be submitted. Preprint in: arXiv https://doi.org/10.48550/
arXiv.2307.13678.Keyword(s): contractions, contractive systems, matrix measures, logarithmic norms.
│ │In previous work, we have developed an approach to understanding the long-term dynamics of classes of chemical reaction networks, based on rate-dependent Lyapunov functions. In this │
│ │paper, we show that stronger notions of convergence can be established by proving contraction with respect to non-standard norms. This enables us to show that such networks entrain to│
│Abstract:│periodic inputs. We illustrate our theory with examples from signaling pathways and genetic circuits. │
4. Z. An, M.A. Al-Radhawi, W. Cho, and E.D. Sontag. Inferring causal connections through embedded physics-informed neural networks (ePINNs): An application to synthetic biology resource competition.
2024. Note: In preparation.
│ │Biological systems have been widely studied as complex dynamic systems that evolve with time in response to the internal resources abundance and external perturbations due to their │
│ │common features. Integration of systems and synthetic biology provides a consolidated framework that draws system-level connections among biology, mathematics, engineering, and │
│ │computer sciences. One major problem in current synthetic biology research is designing and controlling the synthetic circuits to perform reliable and robust behaviors as they utilize│
│ │common transcription and translational resources among the circuits and host cells. While cellular resources are often limited, this results in a competition for resources by │
│ │different genes and circuits, which affect the behaviors of synthetic genetic circuits. The manner competition impacts behavior depends on the “bottleneck” resource. With knowledge of│
│ │physics laws and underlying mechanisms, the dynamical behaviors of the synthetic circuits can be described by the first principle models, usually represented by a system of ordinary │
│ │differential equations (ODEs). In this work, we develop the novel embedded PINN (ePINN), which is composed of two nested loss-sharing neural networks to target and improve the unknown│
│ │dynamics prediction from quantitative time series data. We apply the ePINN approach to identify the mathematical structures of competition phenotypes. Firstly, we use the PINNs │
│ │approach to infer the model parameters and hidden dynamics from partially known data (including a lack of understanding of the reaction mechanisms or missing experimental data). │
│ │Secondly, we test how well the algorithms can distinguish and extract the unknown dynamics from noisy data. Thirdly, we study how the synthetic and competing circuits behave in │
│Abstract:│various cases when different particles become a limited resource. │
5. L. Cui, Z.P. Jiang, and E. D. Sontag. Small-disturbance input-to-state stability of perturbed gradient flows: Applications to LQR problem. Systems and Control Letters, 188:105804, 2024. [PDF]
[doi:https://doi.org/10.1016/j.sysconle.2024.105804] Keyword(s): gradient systems, direct optimization, input-to-state stability, ISS.
│ │This paper studies the effect of perturbations on the gradient flow of a general constrained nonlinear programming problem, where the perturbation may arise from inaccurate gradient │
│ │estimation in the setting of data-driven optimization. Under suitable conditions on the objective function, the perturbed gradient flow is shown to be small-disturbance input-to-state│
│ │stable (ISS), which implies that, in the presence of a small-enough perturbation, the trajectory of the perturbed gradient flow must eventually enter a small neighborhood of the │
│ │optimum. This work was motivated by the question of robustness of direct methods for the linear quadratic regulator problem, and specifically the analysis of the effect of │
│ │perturbations caused by gradient estimation or round-off errors in policy optimization. Interestingly, we show small-disturbance ISS for three of the most common optimization │
│Abstract:│algorithms: standard gradient flow, natural gradient flow, and Newton gradient flow. │
6. A. Duvall, M. Ali Al-Radhawi, Dhruv D. Jatkar, and E. D. Sontag. Interplay between contractivity and monotonicity for reaction networks. SIAM J Applied Dynamical Systems, 2024. Note: Submitted.
Also preprint in arXiv.Keyword(s): monotone systems, chemical reaction networks, contractive systems.
│ │This work studies relationships between monotonicity and contractivity, and applies the results to establish that many reaction networks are weakly contractive, and thus, under │
│ │appropriate compactness conditions, globally convergent to equilibria. Verification of these properties is achieved through a novel algorithm that can be used to generate cones for an│
│Abstract:│accompanying monotone system. The results given here allow a unified proof of global convergence for several classes of networks that had been previously studied in the literature. │
7. J.L Gevertz, J.M Greene, S. Prosperi, N. Comandante-Lou, and E.D. Sontag. Understanding therapeutic tolerance through a mathematical model of drug-induced resistance. 2024. Note: Under review by
npj Systems Biology and Applications. Preprint in biorxiv https://www.biorxiv.org/content/10.1101/2024.09.04.611211v1.[PDF] Keyword(s): cancer, therapy resistance, phenotypic plasticity,
mathematical models, optimal control.
│ │There is growing recognition that phenotypic plasticity enables cancer cells to adapt to various environmental conditions. An example of this adaptability is the persistence of an │
│ │initially sensitive population of cancer cells in the presence of therapeutic agents. Understanding the implications of this drug-induced resistance is essential for predicting │
│ │transient and long-term tumor tumor dynamics subject to treatment. This paper introduces a mathematical model of this phenomenon of drug-induced resistance which provides excellent │
│ │fits to time-resolved in vitro experimental data. From observational data of total numbers of cells, the model unravels the relative proportions of sensitive and resistance │
│ │subpopulations, and quantifies their dynamics as a function of drug dose. The predictions are then validated using data on drug doses which were not used when fitting parameters. The │
│Abstract:│model is then used, in conjunction with optimal control techniques, in order to discover dosing strategies that might lead to better outcomes as quantified by lower total cell volume.│
8. M. D. Kvalheim and E. D. Sontag. Why should autoencoders work?. Transactions on Machine Learning Research, 2024. Note: See also 2023 preprint in https://arxiv.org/abs/2310.02250.[WWW] [PDF]
Keyword(s): autoencoders, neural networks, differential topology, model reduction.
│ │Deep neural network autoencoders are routinely used computationally for model reduction. They allow recognizing the intrinsic dimension of data that lie in a k-dimensional subset K of│
│ │an input Euclidean space $\R^n$. The underlying idea is to obtain both an encoding layer that maps $\R^n$ into $\R^k$ (called the bottleneck layer or the space of latent variables) │
│ │and a decoding layer that maps $\R^k$ back into $\R^n$, in such a way that the input data from the set K is recovered when composing the two maps. This is achieved by adjusting │
│ │parameters (weights) in the network to minimize the discrepancy between the input and the reconstructed output. Since neural networks (with continuous activation functions) compute │
│ │continuous maps, the existence of a network that achieves perfect reconstruction would imply that K is homeomorphic to a k-dimensional subset of $\R^k$, so clearly there are │
│ │topological obstructions to finding such a network. On the other hand, in practice the technique is found to "work" well, which leads one to ask if there is a way to explain this │
│ │effectiveness. We show that, up to small errors, indeed the method is guaranteed to work. This is done by appealing to certain facts from differential geometry. A computational │
│Abstract:│example is also included to illustrate the ideas. │
9. Z. Liu, N. Ozay, and E. D. Sontag. Properties of immersions for systems with multiple limit sets with implications to learning Koopman embeddings. Automatica, 2024. Note: Under revision. Preprint
in https://arxiv.org/abs/2312.17045, 2023/2024.[PDF] Keyword(s): linear systems, nonlinear systems, observables, Koopman embedding, duality.
│ │Linear immersions (or Koopman eigenmappings) of a nonlinear system have wide applications in prediction and control. In this work, we study the non-existence of one-to-one linear │
│ │immersions for nonlinear systems with multiple omega-limit sets. While previous research has indicated the possibility of discontinuous one-to-one linear immersions for such systems, │
│ │it remained uncertain whether continuous one-to-one linear immersions are attainable. Under mild conditions, we prove that any continuous one-to-one immersion to a class of systems │
│ │including linear systems cannot distinguish different omega-limit sets, and thus cannot be one-to-one. Furthermore, we show that this property is also shared by approximate linear │
│Abstract:│immersions learned from data as sample size increases and sampling interval decreases. Multiple examples are studied to illustrate our results. │
10. J.P. Padmakumar, J. Sun 2, W. Cho 3, Y. Zhou, C. Krenz, Zhong Han W.Z, D. Densmore, E. D. Sontag, and C.A. Voigt. Partitioning of a 2-bit hash function across 66 communicating cells. Nature
Chemical Biology, 20, 2024. [PDF] Keyword(s): synthetic biology, distributed computation, Boolean functions.
│ │Powerful distributed computing can be achieved by communicating cells that individually perform simple operations. We have developed design software to divide a large genetic circuit │
│ │across cells as well as the genetic parts to implement the subcircuits in their genomes. These tools were demonstrated using a 2-bit version of the MD5 hashing algorithm, an early │
│ │predecessor to the cryptographic functions underlying cryptocurrency. One iteration requires 110 logic gates, which were partitioned across 66 strains of Escherichia coli, requiring │
│ │the introduction of a total of 1.1 Mb of recombinant DNA into their genomes. The strains are individually experimentally verified to integrate their assigned input signals, process │
│ │this information correctly, and propagate the result to the cell in the next layer. This work demonstrates the potential to obtain programmable control of multicellular biological │
│Abstract:│processes. │
11. M. Sadeghi, I. Kareva, G. Pogudin, and E.D. Sontag. Quantitative pharmacology methods for bispecific T cell engagers. 2024. Note: Submitted.Keyword(s): identifiability, model-driven antibody
design, ODE models, quantitative systems pharmacology, systems biology.
│ │Bispecific T Cell Engagers (BTC) constitute an exciting antibody design in immuno-oncology that acts to bypass antigen presentation and forms a direct link between cancer and immune │
│ │cells in the tumor microenvironment (TME). By design, BTCs are efficacious only when the drug is bound to both immune and cancer cell targets, and therefore approaches to maximize │
│ │drug-target trimer in the TME should maximize the drug's efficacy. In this study, we quantitatively investigate how the concentration of ternary complex and its distribution depend on│
│ │both the targets' specific properties and the design characteristics of the BTC, and specifically on the binding kinetics of the drug to its targets. A simplified mathematical model │
│ │of drug-target interactions is considered here, with insights from the "three-body" problem applied to the model. Parameter identifiability analysis performed on the model │
│ │demonstrates that steady-state data, which is often available at the early pre-clinical stages, is sufficient to estimate the binding affinity of the BTC molecule to both targets. The│
│ │model is used to analyze several existing antibodies that are either clinically approved or are under development, and to explore the common kinetic features. We conclude with a │
│ │discussion of the limitations of the BTCs, such as the increased likelihood of cytokine release syndrome, and an assessment for a full quantitative pharmacology model that accounts │
│Abstract:│for drug distribution into the peripheral compartment. │
12. E.D. Sontag. A concept of antifragility for dynamical systems. arXiv, 2024. [WWW] [PDF] Keyword(s): antifragility, nonlinear systems, monotone systems, cancer, systems biology.
│ │This paper defines antifragility for dynamical systems as convexity of a newly introduced "logarithmic rate" of dynamical systems. It shows how to compute this rate for positive │
│Abstract:│linear systems, and it interprets antifragility in terms of pulsed alternations of extreme strategies in comparison to average uniform strategies. │
13. S. Wang, M.A. Al-Radhawi, D.A. Lauffenburger, and E.D. Sontag. How many time-points of single-cell omics data are necessary for recovering biomolecular network dynamics?. npj Systems Biology and
Applications, 10, 2024. [PDF] Keyword(s): single-cell data, identifiability, network reconstruction, dynamical systems.
│ │Single-cell omics technologies can measure millions of cells for up to thousands of biomolecular features, which enables the data-driven study of highly complex biological networks. │
│ │However, these high-throughput experimental techniques often cannot track individual cells over time, thus complicating the understanding of dynamics such as the time trajectories of │
│ │cell states. These ``dynamical phenotypes'' are key to understanding biological phenomena such as differentiation fates. We show by mathematical analysis that, in spite of │
│ │high-dimensionality and lack of individual cell traces, three timepoints of single-cell omics data are theoretically necessary and sufficient in order to uniquely determine the │
│ │network interaction matrix and associated dynamics. Moreover, we show through numerical simulations that an interaction matrix can be accurately determined with three or more │
│ │timepoints even in the presence of sampling and measurement noise typical of single-cell omics. Our results can guide the design of single-cell omics time-course experiments, and │
│Abstract:│provide a tool for data-driven phase-space analysis. │
14. B. de Freitas Magalhes, G. Fan, E.D. Sontag, K. Josic, and M.R. Bennett. Pattern formation and bistability in a synthetic intercellular genetic toggle. ACS Synthetic Biology, 13:2844-2860, 2024.
[PDF] Keyword(s): synthetic biology, pattern formation, quorum sensing, systems biology, toggle switch.
│ │Differentiation within multicellular organisms is a complex process that helps to establish spatial patterning and tissue formation within the body. Often, the differentiation of │
│ │cells is governed by morphogens and intercellular signaling molecules that guide the fate of each cell, frequently using toggle-like regulatory components. Synthetic biologists have │
│ │long sought to recapitulate patterned differentiation with engineered cellular communities, and various methods for differentiating bacteria have been invented. Here, we couple a │
│ │synthetic corepressive toggle switch with intercellular signaling pathways to create a “quorum-sensing toggle”. We show that this circuit not only exhibits population-wide bistability│
│ │in a well-mixed liquid environment but also generates patterns of differentiation in colonies grown on agar containing an externally supplied morphogen. If coupled to other metabolic │
│Abstract:│processes, circuits such as the one described here would allow for the engineering of spatially patterned, differentiated bacteria for use in biomaterials and bioelectronics. │
15. A. C. B. de Oliveira, M. Siami, and E. D. Sontag. Edge selections in bilinear dynamic networks. IEEE Transactions on Automatic Control, 69(1):331-338, 2024. [PDF] [doi:10.1109/TAC.2023.3269323]
Keyword(s): bilinear systems, networks, robustness.
Abstract: We develop some basic principles for the design and robustness analysis of a continuous-time bilinear dynamical network, where an attacker can manipulate the strength of the interconnections/edges between some of the agents/nodes. We formulate the edge protection optimization problem of picking a limited number of attack-free edges and minimizing the impact of the attack over the bilinear dynamical network. In particular, the H2-norm of bilinear systems is known to capture robustness and performance properties analogous to its linear counterpart and provides valuable insights for identifying which edges are most sensitive to attacks. The exact optimization problem is combinatorial in the number of edges, and brute-force approaches show poor scalability. However, we show that the H2-norm as a cost function is supermodular and, therefore, allows for efficient greedy approximations of the optimal solution. We illustrate and compare the effectiveness of our theoretical findings via numerical simulations.
16. A.C.B de Olivera, M. Siami, and E.D. Sontag. Convergence analysis of overparametrized LQR formulations. Automatica, 2024. Note: Submitted. Preprint in arXiv 2408.15456. [PDF] Keyword(s): learning
theory, singularities in optimization, gradient systems, overparametrization, neural networks, gradient descent, input to state stability, feedback control, LQR.
Abstract: Motivated by the growing use of Artificial Intelligence (AI) tools in control design, this paper takes the first steps towards bridging the gap between results from Direct Gradient methods for the Linear Quadratic Regulator (LQR) and neural networks. More specifically, it looks into the case where one wants to find a Linear Feed-Forward Neural Network (LFFNN) feedback that minimizes an LQR cost. This paper starts by computing the gradient formulas for the parameters of each layer, which are used to derive a key conservation law of the system. This conservation law is then leveraged to prove boundedness and global convergence of solutions to critical points, and invariance of the set of stabilizing networks under the training dynamics. This is followed by an analysis of the case where the LFFNN has a single hidden layer. For this case, the paper proves that the training converges not only to critical points but to the optimal feedback control law for all but a set of measure-zero of the initializations. These theoretical results are followed by an extensive analysis of a simple version of the problem (the ``vector case''), proving the theoretical properties of accelerated convergence and robustness for this simpler example. Finally, the paper presents numerical evidence of faster convergence of the training of general LFFNNs when compared to traditional direct gradient methods, showing that the acceleration of the solution is observable even when the gradient is not explicitly computed but estimated from evaluations of the cost function.
17. M. Ali Al-Radhawi, K. Manoj, D. Jatkar, A. Duvall, D. Del Vecchio, and E.D. Sontag. Competition for binding targets results in paradoxical effects for simultaneous activator and repressor action.
In Proc. 63rd IEEE Conference on Decision and Control (CDC), 2024. Note: To appear. Preprint in arXiv. [PDF] Keyword(s): resource competition, epigenetics, systems biology, synthetic biology, gene
regulatory systems.
Abstract: In the context of epigenetic transformations in cancer metastasis, a puzzling effect was recently discovered, in which the elimination (knock-out) of an activating regulatory element leads to increased (rather than decreased) activity of the element being regulated. It has been postulated that this paradoxical behavior can be explained by activating and repressing transcription factors competing for binding to other possible targets. It is very difficult to prove this hypothesis in mammalian cells, due to the large number of potential players and the complexity of endogenous intracellular regulatory networks. Instead, this paper analyzes this issue through an analogous synthetic biology construct which aims to reproduce the paradoxical behavior using standard bacterial gene expression networks. The paper first reviews the motivating cancer biology work, and then describes a proposed synthetic construct. A mathematical model is formulated, and basic properties of uniqueness of steady states and convergence to equilibria are established, as well as an identification of parameter regimes which should lead to observing such paradoxical phenomena (more activator leads to less activity at steady state). A proof is also given to show that this is a steady-state property, and that for initial transients the phenomenon will not be observed. This work adds to the general line of work on resource competition in synthetic circuits.
18. D. Biswas, E.D Sontag, and N.J. Cowan. An exact active sensing strategy for a class of bio-inspired systems. In Proc. 23rd European Control Conference, 2024. Note: Submitted. Also preprint in
arXiv. Keyword(s): active sensing, systems biology, observability, nonlinear control, nonlinear systems.
Abstract: We consider a general class of translation-invariant systems with a specific category of output nonlinearities motivated by biological sensing. We show that no dynamic output feedback can stabilize this class of systems to an isolated equilibrium point. To overcome this fundamental limitation, we propose a simple control scheme that includes a low-amplitude periodic forcing function akin to so-called "active sensing" in biology, together with nonlinear output feedback. Our analysis shows that this approach leads to the emergence of an exponentially stable limit cycle. These findings offer a provably stable active sensing strategy and may thus help to rationalize the active sensing movements made by animals as they perform certain motor behaviors.
19. A. Duvall and E.D. Sontag. A remark on omega limit sets for non-expansive dynamics. In Proc. 63rd IEEE Conference on Decision and Control (CDC), 2024. Note: To appear. Preprint in arXiv. [PDF]
Keyword(s): contractive systems, contractions, non-expansive systems.
Abstract: In this paper, we study systems of time-invariant ordinary differential equations whose flows are non-expansive with respect to a norm, meaning that the distance between solutions may not increase. Since non-expansiveness (and contractivity) are norm-dependent notions, the topology of $\omega$-limit sets of solutions may depend on the norm. For example, and at least for systems defined by real-analytic vector fields, the only possible $\omega$-limit sets of systems that are non-expansive with respect to polyhedral norms (such as $\ell^p$ norms with $p = 1$ or $p = \infty$) are equilibria. In contrast, for systems that are non-expansive with respect to the Euclidean ($\ell^2$) norm, other limit sets may arise (such as multi-dimensional tori): for example, linear harmonic oscillators are non-expansive (and even isometric) flows, yet have periodic orbits as $\omega$-limit sets. This paper shows that the Euclidean linear case is what can be expected in general: for flows that are contractive with respect to any strictly convex norm (such as $\ell^p$ for any $p \neq 1, \infty$), if there is at least one bounded solution, then the $\omega$-limit set of every trajectory is also an $\omega$-limit set of a linear time-invariant system.
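The harmonic-oscillator example mentioned in this abstract is easy to verify numerically. The sketch below (illustrative only, not code from the paper) checks that the flow of $x' = y$, $y' = -x$ preserves Euclidean distances between solutions, while every nonzero solution is periodic, so its $\omega$-limit set is a circle rather than an equilibrium:

```python
import numpy as np

def rotate(t):
    """Exact flow map at time t of the harmonic oscillator x' = y, y' = -x."""
    return np.array([[np.cos(t),  np.sin(t)],
                     [-np.sin(t), np.cos(t)]])

x0 = np.array([1.0, 0.0])
y0 = np.array([0.3, -0.5])

# Euclidean distance between the two solutions, sampled over time
dists = [np.linalg.norm(rotate(t) @ (x0 - y0)) for t in np.linspace(0.0, 10.0, 50)]

# The flow is an isometry in the l2 norm: distances between solutions never change...
assert np.allclose(dists, dists[0])

# ...yet every nonzero solution is periodic with period 2*pi, so its
# omega-limit set is a circle, not an equilibrium.
assert np.allclose(rotate(2.0 * np.pi) @ x0, x0)
```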
20. A. Duvall and E. D. Sontag. Global exponential stability or contraction of an unforced system do not imply entrainment to periodic inputs. In Proc. 2024 Automatic Control Conference, 2024. Note:
Preprint in arXiv:2310.03241.
Abstract: It is often of interest to know which systems will approach a periodic trajectory when given a periodic input. Results are available for certain classes of systems, such as contracting systems, showing that they always entrain to periodic inputs. In contrast to this, we demonstrate that there exist systems which are globally exponentially stable yet do not entrain to a periodic input. This could be seen as surprising, as it is known that globally exponentially stable systems are in fact contracting with respect to some Riemannian metric. The paper also addresses the broader issue of entrainment when an input is added to a contractive system.
21. I. Incer, A. Pandey, E. Peterson, N. Nolan, K. E. Galloway, R. M. Murray, E. D. Sontag, and D. Del Vecchio. Guaranteeing system-level properties in genetic circuits subject to context effects. In
Proc. 2024 63rd IEEE Conference on Decision and Control (CDC), 2024. Note: To appear. [PDF]
Abstract: The identification of constraints on system parameters that will ensure that a system achieves desired requirements remains a challenge in synthetic biology, where components unintentionally affect one another by perturbing the cellular environment in which they operate. This paper shows how to solve this problem optimally for a class of input/output system-level specifications, and for unintended interactions due to resource sharing. Specifically, we show how to solve the problem based on the input/output properties of the subsystems and on the unintended interaction map. Our approach is based on the elimination of quantifiers in monotone properties of the system. We illustrate applications of this methodology to guaranteeing system-level performance of multiplexed and sequential biosensing and of bistable genetic circuits.
22. P. Yu and E.D. Sontag. A necessary condition for non-monotonic dose response, with an application to a kinetic proofreading model. In Proc. 2024 63rd IEEE Conference on Decision and Control (CDC), 2024. Note: To appear; there is an extended version in arXiv; journal paper in preparation. [PDF] Keyword(s): systems biology, IFFL, dose response.
Abstract: Steady state non-monotonic ("biphasic") dose responses are often observed in experimental biology, which raises the control theoretic question of identifying which possible mechanisms might underlie such behaviors. It is well known that the presence of an incoherent feedforward loop (IFFL) in a network may give rise to a non-monotonic response, and it has been informally conjectured that this condition is also necessary. However, this conjecture has been disproved with an example of a system in which input and output nodes are the same. In this paper, we show that the converse implication does hold when the input and output are distinct. Towards this aim, we give necessary and sufficient conditions for when minors of a symbolic matrix have mixed signs. Finally, we study in full generality when a model of immune T-cell activation could exhibit a steady state non-monotonic dose response.
23. A.C.B de Olivera, M. Siami, and E.D. Sontag. Remarks on the gradient training of linear neural network based feedback for the LQR Problem. In Proc. 2024 63rd IEEE Conference on Decision and
Control (CDC), 2024. Note: To appear. Preprint in arXiv. [PDF] Keyword(s): neural networks, overparametrization, gradient descent, input to state stability, gradient systems, feedback control,
Abstract: Motivated by the current interest in using Artificial Intelligence (AI) tools in control design, this paper takes the first steps towards bridging results from gradient methods for solving the LQR control problem and neural networks. More specifically, it looks into the case where one wants to find a Linear Feed-Forward Neural Network (LFFNN) that minimizes the Linear Quadratic Regulator (LQR) cost. This work develops gradient formulas that can be used to implement the training of LFFNNs to solve the LQR problem, and derives an important conservation law of the system. This conservation law is then leveraged to prove global convergence of solutions and invariance of the set of stabilizing networks under the training dynamics. These theoretical results are followed by an extensive analysis of the simplest version of the problem (the ``scalar case'') and by numerical evidence of faster convergence of the training of general LFFNNs when compared to traditional direct gradient methods. These results serve not only as an indication of the theoretical value of studying such a problem, but also of the practical value of LFFNNs as design tools for data-driven control applications.
1. M.A. Al-Radhawi, D. Del Vecchio, and E.D. Sontag. Identifying competition phenotypes in synthetic biochemical circuits. IEEE Control Systems Letters, 7:211-216, 2023. Note: (Online published in
2022; in print 2023.). [PDF] Keyword(s): Resource competition, model discrimination, synthetic biology, system identification.
Abstract: Synthetic gene circuits require cellular resources, which are often limited. This leads to competition for resources by different genes, which alters a synthetic genetic circuit's behavior. However, the manner in which competition impacts behavior depends on the identity of the "bottleneck" resource, which might be difficult to discern from input-output data. In this paper, we aim at classifying the mathematical structures of resource competition in biochemical circuits. We find that some competition structures can be distinguished by their response to different competitors or resource levels. Specifically, we show that some response curves are always linear, convex, or concave. Furthermore, high levels of certain resources protect the behavior from low competition, while others do not. We also show that competition phenotypes respond differently to various interventions. Such differences can be used to eliminate candidate competition mechanisms when constructing models based on given data. On the other hand, we show that different networks can display mathematically equivalent competition phenotypes.
2. S. Wang, E.D. Sontag, and D.A. Lauffenburger. What cannot be seen correctly in 2D visualizations of single-cell 'omics data?. Cell Systems, 14:723-731, 2023. [WWW] [PDF] Keyword(s):
visualization, single-cell data, tSNE, UMAP.
Abstract: Single-cell -omics datasets are high-dimensional and difficult to visualize. A common strategy for exploring such data is to create and analyze 2D projections. Such projections may be highly nonlinear, and implementation algorithms are designed with the goal of preserving aspects of the original high-dimensional shape of data such as neighborhood relationships or metrics. However, important aspects of high-dimensional geometry are known from mathematical theory to have no equivalent representation in 2D, or are subject to large distortions, and will therefore be misrepresented or even invisible in any possible 2D representation. We show that features such as quantitative distances, relative positioning, and qualitative neighborhoods of high-dimensional data points will always be misrepresented in 2D projections. Our results rely upon concepts from differential geometry, combinatorial geometry, and algebraic topology. As an illustrative example, we show that even a simple single-cell RNA sequencing dataset will always be distorted, no matter what 2D projection is employed. We also discuss how certain recently developed computational tools can help describe the high-dimensional geometric features that will be necessarily missing from any possible 2D projections.
3. Z. Liu, N. Ozay, and E. D. Sontag. On the non-existence of immersions for systems with multiple omega-limit sets. In 22nd IFAC World Congress, IFAC-PapersOnLine, volume 56, pages 60-64, 2023.
Note: This is a preliminary version of the journal paper Properties of immersions for systems with multiple limit sets with implications to learning Koopman embeddings. [PDF] [doi:https://doi.org/
10.1016/j.ifacol.2023.10.1408] Keyword(s): linear systems, nonlinear systems, observables, Koopman embedding, duality.
Abstract: Linear immersions (or Koopman eigenmappings) of a nonlinear system have wide applications in prediction and control. In this work, we study the existence of one-to-one linear immersions for nonlinear systems with multiple omega-limit sets. For this class of systems, existing work shows that a discontinuous one-to-one linear immersion may exist, but it is unclear if a continuous one-to-one linear immersion exists. Under mild conditions, we prove that systems with multiple omega-limit sets cannot admit a continuous one-to-one immersion to a class of systems including linear systems.
4. A.C.B de Olivera, M. Siami, and E.D. Sontag. Dynamics and perturbations of overparameterized linear neural networks. In Proc. 2023 62st IEEE Conference on Decision and Control (CDC), pages
7356-7361, 2023. Note: Extended version is On the ISS property of the gradient flow for single hidden-layer neural networks with linear activations, arXiv https://arxiv.org/abs/2305.09904. [PDF]
[doi:10.1109/CDC49753.2023.10383478] Keyword(s): neural networks, overparametrization, gradient descent, input to state stability, gradient systems.
Abstract: Recent research in neural networks and machine learning suggests that using many more parameters than strictly required by the initial complexity of a regression problem can result in more accurate or faster-converging models, contrary to classical statistical belief. This phenomenon, sometimes known as ``benign overfitting'', raises questions about what other ways overparameterization might affect the properties of a learning problem. In this work, we investigate the effects of overfitting on the robustness of gradient-descent training when subject to uncertainty in the gradient estimation. This uncertainty arises naturally if the gradient is estimated from noisy data or directly measured. Our object of study is a linear neural network with a single, arbitrarily wide, hidden layer and an arbitrary number of inputs and outputs. In this paper we solve the problem for the case where the input and output of our neural network are one-dimensional, deriving sufficient conditions for robustness of our system based on necessary and sufficient conditions for convergence in the undisturbed case. We then show that the general overparametrized formulation introduces a set of spurious equilibria which lie outside the set where the loss function is minimized, and discuss directions of future work that might extend our current results to more general formulations.
1. M.A. Al-Radhawi and E.D. Sontag. Analysis of a reduced model of epithelial-mesenchymal fate determination in cancer metastasis as a singularly-perturbed monotone system. In C.A. Beattie, P.
Benner, M. Embree, S. Gugercin, and S. Lefteriu, editors, Realization and model reduction of dynamical systems. Springer Nature, 2022. Note: (Previous version: 2020 preprint in
arXiv:1910.11311.). [PDF] Keyword(s): epithelial-mesenchymal transition, miRNA, singular perturbations, monotone systems, oncology, cancer, metastasis, chemical reaction networks, systems
Abstract: Metastasis can occur after malignant cells transition from the epithelial phenotype to the mesenchymal phenotype. This transformation allows cells to migrate via the circulatory system and subsequently settle in distant organs after undergoing the reverse transition. The core gene regulatory network controlling these transitions consists of a system made up of coupled SNAIL/miRNA-34 and ZEB1/miRNA-200 subsystems. In this work, we formulate a mathematical model and analyze its long-term behavior. We start by developing a detailed reaction network with 24 state variables. Assuming fast promoter and mRNA kinetics, we then show how to reduce our model to a monotone four-dimensional system. For the reduced system, monotone dynamical systems theory can be used to prove generic convergence to the set of equilibria for all bounded trajectories. The theory does not apply to the full model, which is not monotone, but we briefly discuss results for singularly-perturbed monotone systems that provide a tool to extend convergence results from reduced to full systems, under appropriate time separation assumptions.
2. M.A. Al-Radhawi, M. Sadeghi, and E.D. Sontag. Long-term regulation of prolonged epidemic outbreaks in large populations via adaptive control: a singular perturbation approach. IEEE Control
Systems Letters, 6:578-583, 2022. [PDF] Keyword(s): epidemiology, COVID-19, COVID, systems biology.
Abstract: In order to control highly-contagious and prolonged outbreaks, public health authorities intervene to institute social distancing, lock-down policies, and other Non-Pharmaceutical Interventions (NPIs). Given the high social, educational, psychological, and economic costs of NPIs, authorities tune them, alternately tightening and relaxing rules, so that, in effect, a relatively flat infection rate results. For example, during the summer of 2020 in parts of the United States, daily COVID-19 infection numbers dropped to a plateau. This paper approaches NPI tuning as a control-theoretic problem, starting from a simple dynamic model for social distancing based on the classical SIR epidemics model. Using a singular-perturbation approach, the plateau becomes a Quasi-Steady-State (QSS) of a reduced two-dimensional SIR model regulated by adaptive dynamic feedback. It is shown that the QSS can be assigned and that it is globally asymptotically stable. Interestingly, the dynamic model for social distancing can be interpreted as a nonlinear integral controller. Problems of data fitting and parameter identifiability are also studied for this model. This letter also discusses how this simple model allows for a meaningful study of the effect of population size, vaccinations, and the emergence of second waves.
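The plateau mechanism described in this abstract can be illustrated with a toy simulation: an SIR model in which the distancing level integrates the gap between current infections and a tolerated level, acting like a crude nonlinear integral controller. The sketch below uses Euler integration, and all parameter values and the feedback law are illustrative choices, not those of the paper:

```python
import numpy as np  # used only for the final checks

# Toy SIR model with an adaptive NPI: the distancing level d in [0, 1]
# tightens when infections exceed a tolerated level and relaxes otherwise,
# scaling down the transmission rate. Parameters are illustrative.
beta0, gamma = 0.5, 0.1    # baseline transmission and recovery rates
I_target = 0.05            # infection level the authorities tolerate
k = 20.0                   # speed of the distancing feedback
dt, T = 0.01, 300.0        # Euler step and horizon

S, I, R, d = 0.99, 0.01, 0.0, 0.0
history = []
for _ in range(int(T / dt)):
    beta = beta0 * (1.0 - d)
    dS, dI, dR = -beta * S * I, beta * S * I - gamma * I, gamma * I
    S, I, R = S + dS * dt, I + dI * dt, R + dR * dt
    d = min(1.0, max(0.0, d + k * (I - I_target) * dt))  # integral-like tuning
    history.append(I)

# Population is conserved, and the feedback caps the infected peak well below
# the uncontrolled SIR peak (about 0.48 of the population for these rates).
assert abs(S + I + R - 1.0) < 1e-9
assert I_target < max(history) < 0.2
```

Plotting `history` shows the infection curve flattening near the tolerated level instead of spiking, the plateau behavior that the paper analyzes rigorously via singular perturbations.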
3. M.A. Al-Radhawi, S. Tripathi, Y. Zhang, E.D. Sontag, and H. Levine. Epigenetic factor competition reshapes the EMT landscape. Proc Natl Acad Sci USA, 119:e2210844119, 2022. [WWW] [PDF] Keyword(s): gene networks, Epithelial-Mesenchymal Transition, EMT, epigenetics, systems biology, cancer.
Abstract: The emergence of and transitions between distinct phenotypes in isogenic cells can be attributed to the intricate interplay of epigenetic marks, external signals, and gene regulatory elements. These elements include chromatin remodelers, histone modifiers, transcription factors, and regulatory RNAs. Mathematical models known as Gene Regulatory Networks (GRNs) are an increasingly important tool to unravel the workings of such complex networks. In such models, epigenetic factors are usually proposed to act on the chromatin regions directly involved in the expression of relevant genes. However, it has been well-established that these factors operate globally and compete with each other for targets genome-wide. Therefore, a perturbation of the activity of a regulator can redistribute epigenetic marks across the genome and modulate the levels of competing regulators. In this paper, we propose a conceptual and mathematical modeling framework that incorporates both local and global competition effects between antagonistic epigenetic regulators in addition to local transcription factors, and show the counter-intuitive consequences of such interactions. We apply our approach to recent experimental findings on the Epithelial-Mesenchymal Transition (EMT). We show that it can explain the puzzling experimental data as well as provide new verifiable predictions.
4. D. Angeli, M.A. Al-Radhawi, and E.D. Sontag. A robust Lyapunov criterion for non-oscillatory behaviors in biological interaction networks. IEEE Transactions on Automatic Control, 67(7):3305-3320,
2022. [PDF] [doi:10.1109/TAC.2021.3096807] Keyword(s): oscillations, dynamical systems, enzymatic cycles, systems biology.
Abstract: This paper introduces a notion of non-oscillation, proposes a constructive method for its robust verification, and studies its application to biological interaction networks. The paper starts by revisiting Muldowney's result on non-existence of periodic solutions based on the study of the variational system of the second additive compound of the Jacobian of a nonlinear system. It then shows that exponential stability of the latter rules out limit cycles, quasi-periodic solutions, and broad classes of oscillatory behavior. The focus then turns to nonlinear equations arising in biological interaction networks with general kinetics; the paper shows that the dynamics of the variational system can be embedded in a linear differential inclusion. This leads to algorithms for constructing piecewise linear Lyapunov functions to certify global robust non-oscillatory behavior. Finally, the paper applies the new techniques to study several regulated enzymatic cycles where available methods are not able to provide any information about their qualitative global behavior.
5. M. Bin, J. Huang, A. Isidori, L. Marconi, M. Mischiati, and E. D. Sontag. Internal models in control, bioengineering, and neuroscience. Annual Review of Control, Robotics, and Autonomous Systems,
5:20.1-20.25, 2022. [PDF] Keyword(s): feeedback, internal model principle, nonlinear systems.
Abstract: Internal models are nowadays customarily used in different domains of science and engineering to describe how living organisms or artificial computational units embed their acquired knowledge about recurring events taking place in the surrounding environment. This article reviews the internal model principle in control theory, bioengineering, and neuroscience, illustrating the fundamental concepts and theoretical developments of the last few decades of research.
6. E.D. Sontag. Remarks on input to state stability of perturbed gradient flows, motivated by model-free feedback control learning. Systems and Control Letters, 161:105138, 2022. Note: Important:
there is an error in the paper. For the LQR application, the paper only shows iISS, not ISS. See the paper Small-disturbance input-to-state stability of perturbed gradient flows: Applications to
LQR problem for details.[PDF] Keyword(s): iss, input to state stability, data-driven control, gradient systems, steepest descent, model-free control.
Abstract: Recent work on data-driven control and reinforcement learning has renewed interest in a relatively old field in control theory: model-free optimal control approaches which work directly with a cost function and do not rely upon perfect knowledge of a system model. Instead, an "oracle" returns an estimate of the cost associated to, for example, a proposed linear feedback law to solve a linear-quadratic regulator problem. This estimate, and an estimate of the gradient of the cost, might be obtained by performing experiments on the physical system being controlled. This motivates in turn the analysis of steepest descent algorithms and their associated gradient differential equations. This paper studies the effect of errors in the estimation of the gradient, framed in the language of input to state stability, where the input represents a perturbation from the true gradient. Since one needs to study systems evolving on proper open subsets of Euclidean space, a self-contained review of input to state stability definitions and theorems for systems that evolve on such sets is included. The results are then applied to the study of noisy gradient systems, as well as the associated steepest descent algorithms.
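The input-to-state-stability viewpoint in this abstract can be illustrated with a minimal numerical sketch: steepest descent on a strongly convex quadratic cost where every gradient evaluation carries a bounded error, a toy stand-in for an oracle that estimates gradients from experiments. All values below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Steepest descent on the strongly convex cost f(x) = 0.5 * ||x||^2, where
# each gradient evaluation is corrupted by a bounded additive error.
step, err_bound, n_steps = 0.1, 0.05, 500
x = np.array([5.0, -3.0])
for _ in range(n_steps):
    noise = rng.uniform(-err_bound, err_bound, size=2)
    x = x - step * (x + noise)  # the true gradient of f is x

# ISS-style behavior: instead of converging exactly to the minimizer,
# the iterates settle in a ball around it whose radius scales with the
# gradient-error bound.
assert np.linalg.norm(x) < 10 * err_bound
```

Shrinking `err_bound` shrinks the residual ball, which is exactly the qualitative statement that an ISS estimate makes precise.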
7. J.M. Greene and E.D. Sontag. Minimizing the infected peak utilizing a single lockdown: a technical result regarding equal peaks. In Proc. 2022 Automatic Control Conference, pages 3640-3647, 2022. [PDF] Keyword(s): epidemiology, COVID-19, COVID, systems biology.
Abstract: Due to the usage of social distancing as a means to control the spread of the novel coronavirus disease COVID-19, there has been a large amount of research into the dynamics of epidemiological models with time-varying transmission rates. Such studies attempt to capture population responses to differing levels of social distancing, and are used for designing policies which both inhibit disease spread and allow for limited economic activity. One common criterion utilized for the recent pandemic is the peak of the infected population, a measure of the strain placed upon the health care system; protocols which reduce this peak are commonly said to "flatten the curve". In this work, we consider a very specialized distancing mandate, which consists of one period of distancing of fixed length, and address the question of optimal initiation time. We prove rigorously that this time is characterized by an equal peaks phenomenon: the optimal protocol will experience a rebound in the infected peak after distancing is relaxed, which is equal in size to the peak when distancing is commenced. In the case of a non-perfect lockdown (i.e. disease transmission is not completely suppressed), explicit formulas for the initiation time cannot be computed, but implicit relations are provided which can be pre-computed given the current state of the epidemic. Expected extensions to more general distancing policies are also hypothesized, which suggest designs for the optimal timing of non-overlapping lockdowns.
8. M. Sznaier, A. Olshevsky, and E.D. Sontag. The role of systems theory in control oriented learning. In Proc. 25th Int. Symp. Mathematical Theory of Networks and Systems (MTNS 2022), 2022. Note:
To appear.[PDF] Keyword(s): control oriented learning, neural networks, reinforcement learning, feedback control, machine learning.
Abstract: Systems theory can play an important role in unveiling fundamental limitations of learning algorithms and architectures when used to control a dynamical system, and in suggesting strategies for overcoming these limitations. As an example, a feedforward neural network cannot stabilize a double integrator using output feedback. Similarly, a recurrent NN with differentiable activation functions that stabilizes a non-strongly stabilizable system must be itself open loop unstable, a fact that has profound implications for training with noisy, finite data. A potential solution to this problem, motivated by results on stabilization with periodic control, is the use of neural nets with periodic resets, showing that indeed systems theoretic analysis is instrumental in developing architectures capable of controlling certain classes of unstable systems. This short conference paper also argues that when the goal is to learn control oriented models, the loss function should reflect closed loop, rather than open loop, model performance, a goal that can be accomplished by using gap-metric motivated loss functions.
9. A.C.B de Olivera, M. Siami, and E.D. Sontag. Sensor and actuator scheduling in bilinear dynamical networks. In Proc. 2022 61st IEEE Conference on Decision and Control (CDC), pages WeCT09.4, 2022.
Abstract: In this paper, we investigate the problem of finding a sparse sensor and actuator (S/A) schedule that minimizes the approximation error between the input-output behavior of a fully sensed/actuated bilinear system and the system with the scheduling. The quality of this approximation is measured by an H2-like metric, which is defined for a bilinear (time-varying) system with S/A scheduling based on the discrete Laplace transform of its Volterra kernels. First, we discuss the difficulties of designing S/A schedules for bilinear systems, which prevented us from finding a polynomial-time algorithm for solving the problem. We then propose a polynomial-time S/A scheduling heuristic that selects a fraction of sensors and actuators at each time step while maintaining a small approximation error, in this H2-based sense, between the input-output behavior of the fully sensed/actuated system and the one with S/A scheduling. Numerical experiments illustrate the good approximation quality of our proposed methods.
10. E.D. Sontag, D. Biswas, and N.J. Cowan. An observability result related to active sensing. Technical report, 2022. Note: ArXiv 2210.03848. [PDF] Keyword(s): nonlinear systems, observability,
active sensing.
Abstract: For a general class of translationally invariant systems with a specific category of nonlinearity in the output, this paper presents necessary and sufficient conditions for global observability. Critically, this class of systems cannot be stabilized to an isolated equilibrium point by dynamic output feedback. These analyses may help explain the active sensing movements made by animals when they perform certain motor behaviors, despite the fact that these active sensing movements appear to run counter to the primary motor goals. The findings presented here establish that active sensing underlies the maintenance of observability for such biological systems, which are inherently nonlinear due to the presence of the high-pass sensor dynamics.
1. M.A. Al-Radhawi, M. Margaliot, and E. D. Sontag. Maximizing average throughput in oscillatory biochemical synthesis systems: an optimal control approach. Royal Society Open Science, 8(9):210878,
2021. [PDF]
Abstract: A dynamical system entrains to a periodic input if its state converges globally to an attractor with the same period. In particular, for a constant input, the state converges to a unique equilibrium point for any initial condition. We consider the problem of maximizing a weighted average of the system's output along the periodic attractor. The gain of entrainment is the benefit achieved by using a non-constant periodic input relative to a constant input with the same time average. Such a problem amounts to optimal allocation of resources in a periodic manner. We formulate this problem as a periodic optimal control problem, which can be analyzed by means of the Pontryagin maximum principle or solved numerically via powerful software packages. We then apply our framework to a class of nonlinear occupancy models that appear frequently in biological synthesis systems and other applications. We show that, perhaps surprisingly, constant inputs are optimal for various architectures. This suggests that the presence of non-constant periodic signals, which frequently appear in biological occupancy systems, is a signature of an underlying time-varying objective functional being optimized.
2. T. Chen, M. A. Al-Radhawi, C.A. Voigt, and E.D. Sontag. A synthetic distributed genetic multi-bit counter. iScience, 24:103526, 2021. [PDF] Keyword(s): counters, synthetic biology,
transcriptional networks, gene networks, boolean circuits, boolean gates, systems biology.
Abstract: A design for genetically-encoded counters is proposed via repressor-based circuits. An N-bit counter reads sequences of input pulses and displays the total number of pulses, modulo $2^N$. The design is based on distributed computation, with specialized cell types allocated to specific tasks. This allows scalability and bypasses constraints on the maximal number of circuit genes per cell due to toxicity or failures due to resource limitations. The design starts with a single-bit counter. The N-bit counter is then obtained by interconnecting (using diffusible chemicals) a set of N single-bit counters and connector modules. An optimization framework is used to determine appropriate gate parameters and to compute bounds on admissible pulse widths and relaxation (inter-pulse) times, as well as to guide the construction of novel gates. This work can be viewed as a step toward obtaining circuits that are capable of finite-automaton computation, in analogy to digital central processing units.
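The logical core of the design above (not its genetic implementation) can be sketched as a ripple of single-bit counters, each toggling on an input pulse and emitting a carry to the next stage when it wraps; the class and method names below are illustrative only:

```python
# Illustrative sketch of an N-bit counter built from chained single-bit
# counters, mirroring the distributed mod-2^N counting idea. All names
# here are hypothetical, not taken from the paper.

class SingleBitCounter:
    def __init__(self):
        self.state = 0

    def pulse(self):
        """Toggle the bit; return True when a carry should propagate."""
        self.state ^= 1
        return self.state == 0  # wrapped 1 -> 0, so carry out

class NBitCounter:
    def __init__(self, n_bits):
        self.bits = [SingleBitCounter() for _ in range(n_bits)]

    def pulse(self):
        # A pulse enters the least-significant stage; carries ripple upward.
        for bit in self.bits:
            if not bit.pulse():
                break

    def value(self):
        return sum(bit.state << i for i, bit in enumerate(self.bits))

counter = NBitCounter(3)
for _ in range(11):          # 11 input pulses
    counter.pulse()
print(counter.value())       # 11 mod 2**3 = 3
```

In the paper's setting each stage is a separate cell type and the carry is a diffusible chemical rather than a function call.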
3. J. Hanson, M. Raginsky, and E.D. Sontag. Learning recurrent neural net models of nonlinear systems. Proc. of Machine Learning Research, 144:1-11, 2021. [PDF] Keyword(s): machine learning,
empirical risk minimization, recurrent neural networks, dynamical systems, continuous time, system identification, statistical learning theory, generalization bounds.
Abstract: This paper considers the following learning problem: given sample pairs of input and output signals generated by an unknown nonlinear system (which is not assumed to be causal or time-invariant), one wishes to find a continuous-time recurrent neural net, with activation function tanh, that approximately reproduces the underlying i/o behavior with high confidence. Leveraging earlier work concerned with matching derivatives up to a finite order of the input and output signals, the problem is reformulated in familiar system-theoretic language, and quantitative guarantees on the sup-norm risk of the learned model are derived, in terms of the number of neurons, the sample size, the number of derivatives being matched, and the regularity properties of the inputs, the outputs, and the unknown i/o map.
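For concreteness, a continuous-time recurrent net of the general kind discussed above can be simulated with forward Euler; the parameterization x' = tanh(Ax + Bu + b), y = Cx and all matrix values below are illustrative assumptions, not necessarily the exact model class analyzed in the paper:

```python
import numpy as np

# Minimal sketch of a continuous-time tanh RNN, integrated with forward
# Euler. Dimensions, weights, and the input signal are arbitrary choices.

rng = np.random.default_rng(0)
n, m, p = 8, 1, 1                        # state, input, output dimensions
A = rng.normal(scale=0.5, size=(n, n))   # recurrent weights
B = rng.normal(size=(n, m))              # input weights
b = rng.normal(scale=0.1, size=n)        # bias
C = rng.normal(size=(p, n))              # readout

def simulate(u, dt=0.01):
    """Integrate x' = tanh(A x + B u + b), y = C x over sampled input u (T x m)."""
    x = np.zeros(n)
    ys = []
    for u_t in u:
        x = x + dt * np.tanh(A @ x + B @ u_t + b)
        ys.append(C @ x)
    return np.array(ys)

t = np.arange(0, 5, 0.01)
u = np.sin(2 * np.pi * 0.5 * t)[:, None]   # a test input signal
y = simulate(u)
print(y.shape)                             # (500, 1)
```

The learning problem then amounts to fitting (A, B, b, C) so that simulated outputs match observed i/o pairs in sup norm.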
4. E. A. Hernandez-Vargas, G. Giordano, E.D. Sontag, J. G. Chase, H. Chang, and A. Astolfi. Second special section on systems and control research efforts against COVID-19 and future pandemics.
Annual Reviews in Control, 51:424-425, 2021. [WWW] [doi:https://doi.org/10.1016/j.arcontrol.2021.04.005] Keyword(s): COVID-19, epidemiology, epidemics.
5. E. A. Hernandez-Vargas, G. Giordano, E.D. Sontag, J. G. Chase, H. Chang, and A. Astolfi. Third special section on systems and control research efforts against COVID-19 and future pandemics.
Annual Reviews in Control, 52:446-447, 2021. [WWW] [doi:https://doi.org/10.1016/j.arcontrol.2021.10.015] Keyword(s): COVID-19, epidemiology, epidemics.
6. H. Hong, J. Kim, M.A. Al-Radhawi, E.D. Sontag, and J. K. Kim. Derivation of stationary distributions of biochemical reaction networks via structure transformation. Communications Biology, 4:620-,
2021. [PDF] Keyword(s): stationary distribution, chemical reaction networks, network translation, biochemical reaction networks, chemical master equation, stochastic, probabilistic, systems
Abstract: Long-term behaviors of biochemical reaction networks (BRNs) are described by steady states in deterministic models and stationary distributions in stochastic models. Unlike deterministic steady states, stationary distributions capturing inherent fluctuations of reactions are extremely difficult to derive analytically due to the curse of dimensionality. Here, we develop a method to derive analytic stationary distributions from deterministic steady states by transforming BRNs to have a special dynamic property, called complex balancing. Specifically, we merge nodes and edges of BRNs to match in- and out-flows of each node. This allows us to derive the stationary distributions of a large class of BRNs, including autophosphorylation networks of EGFR, PAK1, and Aurora B kinase and a genetic toggle switch. This reveals the unique properties of their stochastic dynamics such as robustness, sensitivity, and multimodality. Importantly, we provide a user-friendly computational package, CASTANET, that automatically derives symbolic expressions of the stationary distributions of BRNs to understand their long-term stochasticity.
7. K. Johnson, G. Howard, D. Morgan, E. Brenner, A. Gardner, R. Durrett, W. Mo, A. Al'Khafaji, E.D. Sontag, A. Jarrett, T. Yankeelov, and A. Brock. Integrating transcriptomics and bulk time course
data into a mathematical framework to describe and predict therapeutic resistance in cancer. Physical Biology, 18:016001, 2021. [PDF] Keyword(s): oncology, cancer, chemoresistance, resistance,
intratumor heterogeneity, population dynamics, DNA barcoding, evolution, systems biology.
Abstract: The development of resistance to chemotherapy is a major cause of treatment failure in cancer. Intratumoral heterogeneity and phenotypic plasticity play a significant role in therapeutic resistance. Individual cell measurements such as flow and mass cytometry and single-cell RNA sequencing (scRNA-seq) have been used to capture and analyze this cell variability. In parallel, longitudinal treatment-response data is routinely employed in order to calibrate mechanistic mathematical models of heterogeneous subpopulations of cancer cells, viewed as compartments with differential growth rates and drug sensitivities. This work combines both approaches: single-cell clonally-resolved transcriptome datasets (scRNA-seq, tagging individual cells with unique barcodes that are integrated into the genome and expressed as sgRNAs) and longitudinal treatment-response data, to fit a mechanistic mathematical model of drug-resistance dynamics for a MDA-MB-231 breast cancer cell line. The explicit inclusion of the transcriptomic information in the parameter estimation is critical for identification of the model parameters and enables accurate prediction of new treatment regimens.
8. M. Sadeghi, J.M. Greene, and E.D. Sontag. Universal features of epidemic models under social distancing guidelines. Annual Reviews in Control, 51:426-440, 2021. Note: Also in bioRxiv, 2020,
https://www.biorxiv.org/content/10.1101/2020.06.21.163931v2.[WWW] [PDF] [doi:https://doi.org/10.1016/j.arcontrol.2021.04.004] Keyword(s): epidemiology, COVID-19, COVID, systems biology.
Abstract: Different epidemiological models, from the classical SIR system to more sophisticated ones involving population compartments for socially distanced, quarantined, infection-aware, asymptomatic infected, and other individuals, share some remarkable dynamic characteristics when contact rates are subject to periodic or one-shot changes. In simple pulsed isolation policies, a linear relationship is found between the optimal start time and the duration for reduction of the infected peak. If a single-interval social distancing starts too early or too late, it will be ineffective with respect to decreasing the peak of infection. On the other hand, the nonlinearity of epidemic models leads to non-monotone behavior of the peak of the infected population under periodic relaxation policies. This observation led us to hypothesize that an additional single-interval social distancing at a proper time can significantly decrease the infected peak of periodic policies, and we verified this improvement.
9. E.D. Sontag. An explicit formula for minimizing the infected peak in an SIR epidemic model when using a fixed number of complete lockdowns. International Journal of Robust and Nonlinear Control,
Special Issue on Control-Theoretic Approaches for Systems in the Life Sciences, pp 1-24, 2021. [PDF] Keyword(s): epidemiology, COVID-19, COVID, systems biology, epidemics.
Abstract: Careful timing of NPIs (non-pharmaceutical interventions) such as social distancing may avoid high "second waves" of COVID-19 infections. This paper asks what the timing of a set of K complete lockdowns of prespecified lengths (such as two weeks) should be so as to minimize the peak of the infective compartment. Perhaps surprisingly, it is possible to give an explicit and easily computable rule for when each lockdown should commence. Simulations are used to show that the rule remains fairly accurate even if lockdowns are not perfect.
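The effect of lockdown timing on the infective peak can be seen in a small SIR simulation; the forward-Euler integration and all parameter values below are illustrative assumptions, and this is a numerical sketch rather than the paper's explicit optimal-timing rule:

```python
# Sketch: SIR dynamics with one complete lockdown (transmission set to 0
# on a chosen interval). Parameters beta, gamma, and the initial condition
# are arbitrary choices for illustration.

def sir_peak(beta=0.3, gamma=0.1, lockdown=None, dt=0.1, t_end=300.0):
    """Return the peak infective fraction; lockdown = (start, length) or None."""
    s, i = 0.99, 0.01
    peak, t = i, 0.0
    while t < t_end:
        b = beta
        if lockdown is not None and lockdown[0] <= t < lockdown[0] + lockdown[1]:
            b = 0.0                      # complete lockdown: no transmission
        ds = -b * s * i
        di = b * s * i - gamma * i
        s, i = s + dt * ds, i + dt * di
        peak = max(peak, i)
        t += dt
    return peak

print(sir_peak())                          # no intervention
print(sir_peak(lockdown=(0.0, 14.0)))      # lockdown starts immediately
print(sir_peak(lockdown=(40.0, 14.0)))     # lockdown starts later
```

Comparing the printed peaks for different start times illustrates the timing sensitivity the paper quantifies analytically.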
10. A.P. Tran, M.A. Al-Radhawi, E. Ernst, and E.D. Sontag. Optimization of heuristic logic synthesis by iteratively reducing circuit substructures using a database of optimal implementations. 2021.
Note: Submitted. Keyword(s): Heuristic logic minimizer, Boolean circuit reduction, optimal synthesis, logic optimization, synthetic biology.
Abstract: Minimal synthesis of Boolean functions is an NP-hard problem, and heuristic approaches typically give suboptimal circuits. However, in the emergent field of synthetic biology, genetic logic designs that use even a single additional Boolean gate can render a circuit unimplementable in a cell. This has led to a renewed interest in the field of optimal multilevel Boolean synthesis. For small numbers (1-4) of inputs, an exhaustive search is possible, but this is impractical for large circuits. In this work, we demonstrate that even though it is challenging to build a database of optimal implementations for anything larger than 4-input Boolean functions, a database of 4-input optimal implementations can be used to greatly reduce the number of logical gates required in larger heuristic logic synthesis implementations. The proposed algorithm combines the heuristic results with an optimal implementation database and yields average improvements of 5.16% for 5-input circuits and 4.54% for 6-input circuits on outputs provided by the logic synthesis tool ABC. In addition to the gains in the efficiency of the implemented circuits, this work also attests to the importance and practicality of the field of optimal synthesis, even if it cannot directly provide results for larger circuits. The focus of this work is on circuits made exclusively of 2-input NOR gates, but the presented results are readily applicable to 2-input NAND circuits as well as (2-input) AND/NOT circuits. In addition, the framework proposed here is likely to be adaptable to other types of circuits. An implementation of the described algorithm, HLM (Hybrid Logic Minimizer), is available at https://github.com/sontaglab/HLM/.
11. A.P. Tran, J.H. Meldon, and E.D. Sontag. Transient diffusion into a bi-layer membrane with mass transfer resistance: Exact solution and time lag analysis. Frontiers in Chemical Engineering, 2:25,
2021. [PDF] Keyword(s): Bi-layer membrane, transient diffusion, heat conduction, mass transfer resistance.
Abstract: Exact analytical and closed-form solutions to a problem involving transient diffusion in a bi-layer membrane with external transfer resistance are presented. In addition to the solutions of the transient response, the lead and lag times, which are often of importance in the characterization of membranes and arise from the analysis of the asymptotic behavior of the mass permeated through the membrane, are also provided. The solutions presented here are also compared to previously derived limiting cases of diffusion in a bi-layer with an impermeable wall and constant concentrations at the upstream and downstream boundaries. Analysis of the time lag shows that this membrane property is independent of the direction of flow. Finally, an outline is provided of how these solutions, which characterize the response to a step-function increase in concentration, can also be used to derive more complex input conditions. Adequately handling boundary-layer effects has a wide array of potential applications, such as the study of bi-layers undergoing heat convection, gas-film resistance, and absorption/desorption.
12. N. Trendel, P. Kruger, S. Gaglione, J. Nguyen, J. Pettmann, E.D. Sontag, and O. Dushek. Perfect adaptation of CD8+ T cell responses to constant antigen input over a wide range of affinity is
overcome by costimulation. Science Signaling, 14:eaay9363, 2021. [PDF] Keyword(s): immunology, cell signaling, T cells, systems biology.
Abstract: Maintaining and limiting T cell responses to constant antigen stimulation is critical to control pathogens and maintain self-tolerance, respectively. Antigen recognition by T cell receptors (TCRs) induces signalling that activates T cells to produce cytokines and also leads to the downregulation of surface TCRs. In other systems, receptor downregulation can induce perfect adaptation to constant stimulation by a mechanism known as state-dependent inactivation that requires complete downregulation of the receptor or the ligand. However, this is not the case for the TCR, and therefore, precisely how TCR downregulation maintains or limits T cell responses is controversial. Here, we observed that in vitro expanded primary human T cells exhibit perfect adaptation in cytokine production to constant antigen stimulation across a 100,000-fold variation in affinity with partial TCR downregulation. By directly fitting a mechanistic model to the data, we show that TCR downregulation produces imperfect adaptation, but when coupled to a switch produces perfect adaptation in cytokine production. A prediction of the model is that pMHC-induced TCR signalling continues after adaptation, and this is confirmed by showing that, while costimulation cannot prevent adaptation, CD28 and 4-1BB signalling reactivated adapted T cells to produce cytokines in a pMHC-dependent manner. We show that adaptation also applies to 1st-generation chimeric antigen receptor (CAR)-T cells but is partially avoided in 2nd-generation CARs. These findings highlight that even partial TCR downregulation can limit T cell responses by producing perfect adaptation, rendering T cells dependent on costimulation for sustained responses.
13. A.L. Williams, J.E. Fitzgerald, F. Ivich, E.D. Sontag, and M. Niedre. Comment on In vivo flow cytometry reveals a circadian rhythm of circulating tumor cells. npg Light: Science & Applications,
10:188, 2021. [PDF] Keyword(s): circulating tumor cells, liquid biopsy, cancer, oncology, multiple myeloma, systems biology.
Abstract: Correspondence regarding circulating tumor cell detection.
14. J. Miller, M.A. Al-Radhawi, and E.D. Sontag. Mediating ribosomal competition by splitting pools. In Proc. 2021 Automatic Control Conference, pages 1897-1902, 2021. [PDF] Keyword(s): systems
biology, synthetic biology, ribosomes, RFM, ribosome flow model.
Abstract: Conference version of the paper published in IEEE Control Systems Letters, 2020.
15. A.C.B de Olivera, M. Siami, and E.D. Sontag. Bilinear dynamical networks under malicious attack: an efficient edge protection method. In Proc. 2021 Automatic Control Conference, pages 1210-1216,
2021. [PDF] Keyword(s): Bilinear systems, adversarial attacks, robustness measures, supermodular optimization.
Abstract: In large-scale networks, agents and links are often vulnerable to attacks. This paper focuses on continuous-time bilinear networks, where additive disturbances model attacks or uncertainties on agents/states (node disturbances), and multiplicative disturbances model attacks or uncertainties on couplings between agents/states (link disturbances). It investigates a network robustness notion in terms of the underlying digraph of the network and the structure of exogenous uncertainties and attacks. Specifically, it defines a robustness measure using the $\mathcal{H}_2$-norm of the network and calculates it in terms of the reachability Gramian of the bilinear system. The main result is that, under certain conditions, the measure is supermodular over the set of all possible attacked links. The supermodular property facilitates efficiently solving the optimization problem. Examples illustrate how different structures can make the system more or less vulnerable to malicious attacks on links.
16. A.C.B de Olivera, M. Siami, and E.D. Sontag. Eminence in noisy bilinear networks. In Proc. 2021 60th IEEE Conference on Decision and Control (CDC), pages 4835-4840, 2021. [PDF] Keyword(s):
Bilinear systems, H2 norm, centrality, adversarial attacks, robustness measures.
Abstract: When measuring the importance of nodes in a network, the interconnections and dynamics are often supposed to be perfectly known. In this paper, we consider networks of agents with both uncertain couplings and dynamics. Network uncertainty is modeled by structured additive stochastic disturbances on each agent's update dynamics and coupling weights. We then study how these uncertainties change the network's centralities. Disturbances on the couplings between agents result in bilinear dynamics, and classical centrality indices from linear network theory need to be redefined. To do that, we first show that, similarly to its linear counterpart, the squared H2 norm of bilinear systems measures the trace of the steady-state error covariance matrix subject to stochastic disturbances. This makes the H2 norm a natural candidate for a performance metric of the system. We propose a centrality index for the agents based on the H2 norm, and show how it depends on the network topology and the noise structure. Finally, we simulate a few graphs to illustrate how uncertainties on different couplings affect the agents' centrality rankings compared to a linearized model of the same system.
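The linear counterpart mentioned above is standard: for a stable LTI system x' = Ax + Bw, y = Cx, the squared H2 norm equals trace(C W Cᵀ), where W is the reachability Gramian solving AW + WAᵀ + BBᵀ = 0. The following sketch computes this with plain NumPy (the example matrices are arbitrary; the paper's bilinear extension is not reproduced here):

```python
import numpy as np

# Sketch: H2 norm of a stable LTI system from its reachability Gramian.
# The Lyapunov equation A W + W A^T = -B B^T is solved by vectorization,
# using vec(A W) = (I (x) A) vec(W) and vec(W A^T) = (A (x) I) vec(W)
# with column-major vec.

def reachability_gramian(A, B):
    n = A.shape[0]
    lhs = np.kron(np.eye(n), A) + np.kron(A, np.eye(n))
    w = np.linalg.solve(lhs, -(B @ B.T).flatten(order="F"))
    return w.reshape((n, n), order="F")

def h2_norm(A, B, C):
    W = reachability_gramian(A, B)
    return float(np.sqrt(np.trace(C @ W @ C.T)))

# Example: this realization has transfer function G(s) = 1/(s+1),
# whose H2 norm is 1/sqrt(2).
A = np.array([[-1.0, 0.0], [0.0, -2.0]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 0.0]])
print(round(h2_norm(A, B, C), 4))   # 0.7071
```

The bilinear case replaces this Gramian with a generalized reachability Gramian, which is what makes the centrality index above computable.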
1. E.D. Sontag. Input-to-State Stability. In J. Baillieul and T. Samad, editors, Encyclopedia of Systems and Control, pages 1-9. Springer-Verlag, 2020. [PDF] Keyword(s): input to state stability,
integral input to state stability, iISS, ISS, input to output stability.
Abstract: The notion of input-to-state stability (ISS) qualitatively describes the stability of the mapping from initial states and inputs to internal states (and, more generally, outputs). This encyclopedia-style article gives a brief introduction to the definition of ISS and a discussion of equivalent characterizations. It is an update of the article in the 2015 edition, including additional citations to recent PDE work.
2. E.D. Sontag. Scale-invariance in biological sensing. In J. Baillieul and T. Samad, editors, Encyclopedia of Systems and Control, pages 1-4. Springer-Verlag, 2020. [PDF] [doi:10.1007/
978-1-4471-5102-9_100090-1] Keyword(s): adaptation, biological adaptation, perfect adaptation, fold-change detection.
Abstract: The phenomenon of fold-change detection, or scale-invariance, is exhibited by a variety of sensory systems, in both bacterial and eukaryotic signaling pathways. This encyclopedia-style article gives a brief introduction to the subject.
3. D.K. Agrawal, E.M. Dolan, N.E. Hernandez, K.M. Blacklock, S.D. Khare, and E.D. Sontag. Mathematical models of protease-based enzymatic biosensors. ACS Synthetic Biology, 9:198-208, 2020. [PDF]
Keyword(s): synthetic biology, protease-based circuits, enzymatic circuits, systems biology, Boolean circuits.
Abstract: An important goal of synthetic biology is to build biosensors and circuits with well-defined input-output relationships that operate at speeds found in natural biological systems. However, for molecular computation, most commonly used genetic circuit elements typically involve several steps from input detection to output signal production: transcription, translation, and post-translational modifications. These multiple steps together require up to several hours to respond to a single stimulus, and this limits the overall speed and complexity of genetic circuits. To address this gap, molecular frameworks that rely exclusively on post-translational steps to realize reaction networks that can process inputs at a time scale of seconds to minutes have been proposed. Here, we build mathematical models of fast biosensors capable of producing Boolean logic functionality. We employ protease-based chemical and light-induced switches, investigate their operation, and provide selection guidelines for their use as on-off switches. As a proof of concept, we implement a rapamycin-induced switch in vitro and demonstrate that its response qualitatively agrees with the predictions from our models. We then use these switches as elementary blocks, developing models for biosensors that can perform OR and XOR Boolean logic computation while using reaction conditions as tuning parameters. We use sensitivity analysis to determine the time-dependent sensitivity of the output to proteolytic and protein-protein binding reaction parameters. These fast protease-based biosensors can be used to implement complex molecular circuits with a capability of processing multiple inputs controllably and algorithmically. Our framework for evaluating and optimizing circuit performance can be applied to other molecular logic circuits.
4. M.A. Al-Radhawi, D. Angeli, and E.D. Sontag. A computational framework for a Lyapunov-enabled analysis of biochemical reaction networks. PLoS Computational Biology, 16(2): e1007681, 2020. [PDF] Keyword(s): MAPK cascades, Lyapunov functions, stability, chemical networks, chemical reaction networks, systems biology, RFM, ribosome flow model.
Abstract: This paper deals with the analysis of the dynamics of chemical reaction networks, developing a theoretical framework based only on graphical knowledge and applying regardless of the particular form of kinetics. This paper introduces a class of networks that are "structurally (mono) attractive", by which we mean that they are incapable of exhibiting multiple steady states, oscillation, or chaos by virtue of their reaction graphs. These networks are characterized by the existence of a universal energy-like function which we call a Robust Lyapunov Function (RLF). To find such functions, a finite set of rank-one linear systems is introduced, which form the extremals of a linear convex cone. The problem is then reduced to that of finding a common Lyapunov function for this set of extremals. Based on this characterization, a computational package, Lyapunov-Enabled Analysis of Reaction Networks (LEARN), is provided that constructs such functions or rules out their existence. An extensive study of biochemical networks demonstrates that LEARN offers a new unified framework. We study basic motifs, three-body binding, and transcriptional networks. We focus on cellular signalling networks including various post-translational modification cascades, phosphotransfer and phosphorelay networks, T-cell kinetic proofreading, ERK signaling, and the Ribosome Flow Model.
5. M.A. Al-Radhawi, A.P. Tran, E. Ernst, T. Chen, C.A. Voigt, and E.D. Sontag. Distributed implementation of Boolean functions by transcriptional synthetic circuits. ACS Synthetic Biology,
9:2172-2187, 2020. [PDF] [doi:10.1021/acssynbio.0c00228] Keyword(s): synthetic biology, transcriptional networks, gene networks, boolean circuits, boolean gates, systems biology.
Abstract: Starting in the early 2000s, sophisticated technologies have been developed for the rational construction of synthetic genetic networks that implement specified logical functionalities. Despite impressive progress, however, the scaling necessary in order to achieve greater computational power has been hampered by many constraints, including repressor toxicity and the lack of large sets of mutually-orthogonal repressors. As a consequence, a typical circuit contains no more than roughly seven repressor-based gates per cell. A possible way around this scalability problem is to distribute the computation among multiple cell types, which communicate among themselves using diffusible small molecules (DSMs) and each of which implements a small sub-circuit. Examples of DSMs are those employed by quorum-sensing systems in bacteria. This paper focuses on systematic ways to implement this distributed approach, in the context of the evaluation of arbitrary Boolean functions. The unique characteristics of genetic circuits and the properties of DSMs require the development of new Boolean synthesis methods, distinct from those classically used in electronic circuit design. In this work, we propose a fast algorithm to synthesize distributed realizations for any Boolean function, under constraints on the number of gates per cell and the number of orthogonal DSMs. The method is based on an exact synthesis algorithm to find the minimal circuit per cell, which in turn allows us to build an extensive database of Boolean functions up to a given number of inputs. For concreteness, we will specifically focus on circuits of up to 4 inputs, which might represent, for example, two chemical inducers and two light inputs at different frequencies. Our method shows that, with a constraint of no more than seven gates per cell, the use of a single DSM increases the total number of realizable circuits by at least 7.58-fold compared to centralized computation. Moreover, when allowing two DSMs, one can realize 99.995% of all possible 4-input Boolean functions, still with at most 7 gates per cell. The methodology introduced here can be readily adapted to complement recent genetic circuit design automation software.
6. T. Chen, M.A. Al-Radhawi, and E.D. Sontag. A mathematical model exhibiting the effect of DNA methylation on the stability boundary in cell-fate networks. Epigenetics, 15:1-22, 2020. Note: PMID:
32842865. [PDF] [doi:10.1080/15592294.2020.1805686] Keyword(s): methylation, differentiation, epigenetics, pluripotent cells, gene regulatory networks, bistability, bistability, systems biology.
Abstract: Cell-fate networks are traditionally studied within the framework of gene regulatory networks. This paradigm considers only interactions of genes through expressed transcription factors and does not incorporate chromatin modification processes. This paper introduces a mathematical model that seamlessly combines gene regulatory networks and DNA methylation, with the goal of quantitatively characterizing the contribution of epigenetic regulation to gene silencing. The "Basin of Attraction percentage" is introduced as a metric to quantify gene silencing abilities. As a case study, a computational and theoretical analysis is carried out for a model of the pluripotent stem cell circuit as well as a simplified self-activating gene model. The results confirm that the methodology quantitatively captures the key role that methylation plays in enhancing the stability of the silenced gene state.
7. J.L. Gevertz, J.M. Greene, C Hixahuary Sanchez Tapia, and E D Sontag. A novel COVID-19 epidemiological model with explicit susceptible and asymptomatic isolation compartments reveals unexpected
consequences of timing social distancing. Journal of Theoretical Biology, 510:110539, 2020. [WWW] [PDF] Keyword(s): epidemiology, COVID-19, COVID, systems biology.
Abstract: Motivated by the current COVID-19 epidemic, this work introduces an epidemiological model in which separate compartments are used for susceptible and asymptomatic "socially distant" populations. Distancing directives are represented by rates of flow into these compartments, as well as by a reduction in contacts that lessens disease transmission. The dynamical behavior of this system is analyzed under various rate control strategies, and the sensitivity of the basic reproduction number to various parameters is studied. One of the striking features of this model is the existence of a critical implementation delay in issuing separation mandates: while a delay of about four weeks does not have an appreciable effect, issuing mandates after this critical time results in a far greater incidence of infection. In other words, there is a nontrivial but tight "window of opportunity" for commencing social distancing. Different relaxation strategies are also simulated, with surprising results. Periodic relaxation policies suggest a schedule which may significantly inhibit peak infective load, but this schedule is very sensitive to parameter values and the schedule's frequency. Further, we considered the impact of steadily reducing social distancing measures over time; we find that a too-sudden reopening of society may negate the progress achieved under initial distancing guidelines if not carefully designed.
8. J. M. Greene, C. Sanchez-Tapia, and E.D. Sontag. Mathematical details on a cancer resistance model. Frontiers in Bioengineering and Biotechnology, 8:501: 1-27, 2020. [PDF] [doi:10.3389/
fbioe.2020.00501] Keyword(s): resistance, chemotherapy, phenotype, optimal control, singular controls, cancer, oncology, systems biology.
Abstract: One of the most important factors limiting the success of chemotherapy in cancer treatment is the phenomenon of drug resistance. We have recently introduced a framework for quantifying the effects of induced and non-induced resistance to cancer chemotherapy. In this work, we expound on the details relating to an optimal control problem outlined in our previous paper (Greene et al., 2018). The control structure is precisely characterized as a concatenation of bang-bang and path-constrained arcs via the Pontryagin Maximum Principle and differential Lie-algebraic techniques. A structural identifiability analysis is also presented, demonstrating that patient-specific parameters may be measured and thus utilized in the design of optimal therapies prior to the commencement of therapy. For completeness, a detailed analysis of existence results is also included.
9. E. A. Hernandez-Vargas, G. Giordano, E.D. Sontag, J. G. Chase, H. Chang, and A. Astolfi. First special section on systems and control research efforts against COVID-19 and future pandemics.
Annual Reviews in Control, 50:343-344, 2020. [WWW] [doi:https://doi.org/10.1016/j.arcontrol.2020.10.007] Keyword(s): COVID-19, epidemiology, epidemics.
11. J. Miller, M.A. Al-Radhawi, and E.D. Sontag. Mediating ribosomal competition by splitting pools. IEEE Control Systems Letters, 5:1555-1560, 2020. [PDF] Keyword(s): systems biology, synthetic
biology, ribosomes, RFM, ribosome flow model.
Abstract: Synthetic biology constructs often rely upon the introduction of "circuit" genes into host cells, in order to express novel proteins and thus endow the host with a desired behavior. The expression of these new genes "consumes" existing resources in the cell, such as ATP, RNA polymerase, amino acids, and ribosomes. Ribosomal competition among strands of mRNA may be described by a system of nonlinear ODEs called the Ribosomal Flow Model (RFM). The competition for resources between host and circuit genes can be ameliorated by splitting the ribosome pool through the use of orthogonal ribosomes, where the circuit genes are exclusively translated by mutated ribosomes. In this work, the RFM system is extended to include orthogonal ribosome competition. This Orthogonal Ribosomal Flow Model (ORFM) is proven to be stable through the use of robust Lyapunov functions. The optimization problem of maximizing the weighted protein translation rate by adjusting the allocation of ribosomal species is formulated and implemented. Note: published November 2020, even though the journal reprint says "Nov 2021".
12. E.D. Sontag. Bell-shaped dose response for a system with no IFFLs. bioRxiv, 2020. [PDF] Keyword(s): IFFL, feedforward loops, nonlinear systems, immunology.
Abstract: It is well known that the presence of an incoherent feedforward loop (IFFL) in a network may give rise to a steady-state non-monotonic dose response. This note shows that the converse implication does not hold. It gives an example of a three-dimensional system that has no IFFLs, yet its dose response is bell-shaped. It also studies under what conditions the result is true for two-dimensional systems, in the process recovering, in far more generality, a result given in the T-cell activation literature.
13. A.P. Tran, M.A. Al-Radhawi, I. Kareva, J. Wu, D.J. Waxman, and E.D. Sontag. Delicate balances in cancer chemotherapy: Modeling immune recruitment and emergence of systemic drug resistance.
Frontiers in Immunology, 11:1376-, 2020. [PDF] [doi:10.3389/fimmu.2020.01376] Keyword(s): metronomic chemotherapy, cyclophosphamide, mathematical modeling, immune recruitment, cancer, resistance,
oncology, immunology, systems biology.
Abstract: Metronomic chemotherapy can drastically enhance immunogenic tumor cell death. However, the responsible mechanisms are still incompletely understood. Here, we develop a mathematical model to elucidate the underlying complex interactions between tumor growth, immune system activation, and therapy-mediated immunogenic cell death. Our model is conceptually simple, yet it provides a surprisingly excellent fit to empirical data obtained from a GL261 mouse glioma model treated with cyclophosphamide on a metronomic schedule. The model includes terms representing immune recruitment as well as the emergence of drug resistance during prolonged metronomic treatments. Strikingly, a fixed set of parameters, not adjusted for individuals nor for drug schedule, excellently recapitulates experimental data across various drug regimens, including treatments administered at intervals ranging from 6 to 12 days. Additionally, the model predicts peak immune activation times, rediscovering experimental data that had not been used in parameter fitting or in model construction. The validated model was then used to make predictions about expected tumor-immune dynamics for novel drug administration schedules. Notably, the validated model suggests that immunostimulatory and immunosuppressive intermediates are responsible for the observed phenomena of resistance and immune cell recruitment, and thus for variation of responses with respect to different schedules of drug administration.
14. A.L. Williams, J.E. Fitzgerald, F. Ivich, E.D. Sontag, and M. Niedre. Short-term circulating tumor cell dynamics in mouse xenograft models and implications for liquid biopsy. Frontiers in
Oncology, 10:2447-, 2020. [PDF] [doi:10.3389/fonc.2020.601085] Keyword(s): circulating tumor cells, liquid biopsy, cancer, oncology, multiple myeloma, systems biology.
Abstract: Circulating tumor cells (CTCs) are widely studied using liquid biopsy methods that analyze single, fractionally-small peripheral blood (PB) samples. However, little is known about fluctuations in CTC numbers that occur over short timescales in vivo, and how these may affect accurate enumeration from blood samples. Diffuse in vivo flow cytometry (DiFC), developed by the Niedre lab, allows continuous, non-invasive counting of rare, green fluorescent protein expressing CTCs in large, deeply-seated blood vessels in mice. Here, DiFC is used to study short-term changes in CTC numbers in multiple myeloma and Lewis lung carcinoma xenograft models. Data sets of 35 to 50 minutes are analyzed, with intervals corresponding to approximately 1, 5, 10 and 20% of the PB volume, as well as changes over 24-hour periods. For rare CTCs, the use of short DiFC intervals (corresponding to small PB samples) frequently resulted in no detections. For more abundant CTCs, CTC numbers frequently varied by an order of magnitude or more over the time-scales considered. This variability far exceeded that expected by Poisson statistics, and instead was consistent with rapidly changing mean numbers of CTCs in the PB. Because of these natural temporal changes, accurately enumerating CTCs from fractionally small blood samples is inherently problematic. The problem is likely to be compounded for multicellular CTC clusters or specific CTC subtypes. However, it is also shown that enumeration can be improved by averaging multiple samples, analysis of larger volumes, or development of new methods for enumeration of CTCs directly in vivo.
1. D.K. Agrawal, R. Marshall, V. Noireaux, and E.D. Sontag. In vitro implementation of robust gene regulation in a synthetic biomolecular integral controller. Nature Communications, 10:1-12, 2019. [
PDF] Keyword(s): tracking, synthetic biology, integral feedback, TX/TL, systems biology, dynamical systems, adaptation, internal model principle, identifiability.
Abstract: Cells respond to biochemical and physical internal as well as external signals. These signals can be broadly classified into two categories: (a) "actionable" or "reference" inputs that should elicit appropriate biological or physical responses such as gene expression or motility, and (b) "disturbances" or "perturbations" that should be ignored or actively filtered out. These disturbances might be exogenous, such as binding of nonspecific ligands, or endogenous, such as variations in enzyme concentrations or gene copy numbers. In this context, the term robustness describes the capability to produce appropriate responses to reference inputs while at the same time being insensitive to disturbances. These two objectives often conflict with each other and require delicate design trade-offs. Indeed, natural biological systems use complicated and still poorly understood control strategies in order to finely balance the goals of responsiveness and robustness. A better understanding of such natural strategies remains an important scientific goal in itself and will play a role in the construction of synthetic circuits for therapeutic and biosensing applications. A prototype problem in robustly responding to inputs is that of "robust tracking", defined by the requirement that some designated internal quantity (for example, the level of expression of a reporter protein) should faithfully follow an input signal while being insensitive to an appropriate class of perturbations. Control theory predicts that a certain type of motif, called integral feedback, will help achieve this goal, and this motif is, in fact, a necessary feature of any system that exhibits robust tracking. Indeed, integral feedback has always been a key component of electrical and mechanical control systems, at least since the 18th century when James Watt employed the centrifugal governor to regulate steam engines. Motivated by this knowledge, biological engineers have proposed various designs for biomolecular integral feedback control mechanisms. However, practical and quantitatively predictable implementations have proved challenging, in part due to the difficulty in obtaining accurate models of transcription, translation, and resource competition in living cells, and the stochasticity inherent in cellular reactions. These challenges prevent first-principles rational design and parameter optimization. In this work, we exploit the versatility of an Escherichia coli cell-free transcription-translation (TXTL) system to accurately design, model and then build a synthetic biomolecular integral controller that precisely controls the expression of a target gene. To our knowledge, this is the first design of a functioning gene network that achieves the goal of making gene expression track an externally imposed reference level, achieves this goal even in the presence of disturbances, and whose performance quantitatively agrees with mathematical predictions.
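The robust-tracking property that integral feedback provides can be illustrated with a minimal, non-biological sketch: a first-order plant under integral control converges to the reference level regardless of a constant disturbance. This is not the paper's TXTL biomolecular circuit; the plant model, gain, and disturbance values below are hypothetical.

```python
# Minimal sketch of robust tracking via integral feedback (illustrative;
# plant, gain, and disturbance are hypothetical, not the paper's circuit).

def track(reference, disturbance, ki=0.5, steps=2000, dt=0.01):
    """First-order plant x' = -x + u + d, with integral control
    u = ki * integral of (reference - x). Returns the final output x."""
    x, z = 0.0, 0.0            # plant state and integrator state
    for _ in range(steps):
        e = reference - x      # tracking error
        z += e * dt            # integral action accumulates the error
        u = ki * z             # control input
        x += (-x + u + disturbance) * dt
    return x

# The output settles at the reference whether or not a constant
# disturbance is present, as the internal model principle predicts.
x_nominal = track(reference=1.0, disturbance=0.0)
x_disturbed = track(reference=1.0, disturbance=0.4)
```

Removing the integrator (e.g., pure proportional control `u = kp * e`) would leave a steady-state offset that grows with the disturbance, which is exactly the failure mode integral feedback eliminates.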
2. M. A. Al-Radhawi, D. Del Vecchio, and E. D. Sontag. Multi-modality in gene regulatory networks with slow gene binding. PLoS Computational Biology, 15:e1006784, 2019. [PDF] Keyword(s):
multistability, gene networks, Markov Chains, Master Equation, cancer heterogeneity, phenotypic variation, nonlinear systems, stochastic systems, epigenetics, chemical master equations, systems biology.
Abstract: In biological processes such as embryonic development, hematopoietic cell differentiation, and the arising of tumor heterogeneity and consequent resistance to therapy, mechanisms of gene activation and deactivation may play a role in the emergence of phenotypically heterogeneous yet genetically identical (clonal) cellular populations. Mathematically, the variability in phenotypes in the absence of genetic variation can be modeled through the existence of multiple metastable attractors in nonlinear systems subject to stochastic switching, each one of them associated to an alternative epigenetic state. An important theoretical and practical question is that of estimating the number and location of these states, as well as their relative probabilities of occurrence. This paper focuses on a rigorous analytic characterization of multiple modes under slow promoter kinetics, which is a feature of epigenetic regulation. It characterizes the stationary distributions of Chemical Master Equations for gene regulatory networks as a mixture of Poisson distributions. As illustrations, the theory is used to tease out the role of cooperative binding in stochastic models in comparison to deterministic models, and applications are given to various model systems, such as toggle switches in isolation or in communicating populations and a trans-differentiation network.
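The paper's central object, a stationary distribution that is a mixture of Poisson distributions, is easy to visualize numerically: with two well-separated means (one per epigenetic state) the mixture is bimodal. The weights and means below are hypothetical, chosen purely for illustration.

```python
# Sketch: a two-component Poisson mixture is bimodal when the means are
# well separated, illustrating multimodality under slow promoter kinetics.
# Mixture weight and means are hypothetical.
from math import exp, factorial

def poisson_pmf(k, lam):
    """Probability mass of a Poisson(lam) variable at integer k."""
    return exp(-lam) * lam**k / factorial(k)

def mixture_pmf(k, w=0.5, lam_low=2.0, lam_high=20.0):
    """Two promoter states, each contributing one Poisson mode."""
    return w * poisson_pmf(k, lam_low) + (1 - w) * poisson_pmf(k, lam_high)

pmf = [mixture_pmf(k) for k in range(40)]
# Local maxima near k = 2 and k = 20 correspond to the two epigenetic
# states; the valley between them reflects the slow switching regime.
```

Making the means closer together (e.g., 2 and 4) merges the two modes into one, which is why well-separated activation levels are needed for distinct phenotypes to be visible in the copy-number distribution.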
3. J.M. Greene, J.L. Gevertz, and E. D. Sontag. A mathematical approach to distinguish spontaneous from induced evolution of drug resistance during cancer treatment. JCO Clinical Cancer Informatics,
DOI: 10.1200/CCI.18.00087:1-20, 2019. [PDF] Keyword(s): cancer heterogeneity, phenotypic variation, nonlinear systems, epigenetics, oncology, cancer, systems biology.
Abstract: Resistance to chemotherapy is a major impediment to the successful treatment of cancer. Classically, resistance has been thought to arise primarily through random genetic mutations, after which mutated cells expand via Darwinian selection. However, recent experimental evidence suggests that the progression to resistance need not occur randomly, but instead may be induced by the therapeutic agent itself. This process of resistance induction can be a result of genetic changes, or can occur through epigenetic alterations that cause otherwise drug-sensitive cancer cells to undergo "phenotype switching". This relatively novel notion of resistance further complicates the already challenging task of designing treatment protocols that minimize the risk of evolving resistance. In an effort to better understand treatment resistance, we have developed a mathematical modeling framework that incorporates both random and drug-induced resistance. Our model demonstrates that the ability (or lack thereof) of a drug to induce resistance can result in qualitatively different responses to the same drug dose and delivery schedule. The importance of induced resistance in treatment response led us to ask if, in our model, one can determine the resistance induction rate of a drug for a given treatment protocol. Not only could we prove that the induction parameter in our model is theoretically identifiable, we have also proposed a possible in vitro experiment which could practically be used to determine a treatment's propensity to induce resistance.
4. M. Margaliot and E.D. Sontag. Revisiting totally positive differential systems: A tutorial and new results. Automatica, 101:1-14, 2019. [PDF] Keyword(s): tridiagonal systems, cooperative systems,
monotone systems.
Abstract: A matrix is totally nonnegative (resp., totally positive) if all its minors are nonnegative (resp., positive). This paper draws connections between B. Schwarz's 1970 work on TN and TP matrices and Smillie's 1984 and Smith's 1991 work on stability of nonlinear tridiagonal cooperative systems, simplifying proofs in the latter paper and suggesting new research questions.
5. E.V. Nikolaev, A. Zloza, and E.D. Sontag. Immunobiochemical reconstruction of influenza lung infection - melanoma skin cancer interactions. Frontiers in Immunology, 10:Article 4, 2019. [PDF]
Keyword(s): oncology, cancer, infections, immunology, checkpoint inhibition, systems biology.
Abstract: Recent experimental results from the Zloza lab combined a mouse model of influenza A virus (IAV) infection (A/H1N1/PR8) and a highly aggressive model of infection-unrelated cancer, B16-F10 skin melanoma. That work showed that acute influenza infection of the lung promotes distal melanoma growth in the dermis of the flank and leads to decreased host survival. Here, we proceed to ground the experimental observations in a mechanistic immunobiochemical model that incorporates the T cell receptor signaling pathway, various transcription factors, and a gene regulatory network (GRN). A core component of our model is a biochemical motif, which we call a Triple Incoherent Feed-Forward Loop (TIFFL), and which reflects known interactions between IRF4, Blimp-1, and Bcl-6. The different activity levels of the TIFFL components, as a function of the cognate antigen levels and the given inflammation context, manifest themselves in phenotypically distinct outcomes. Specifically, both the TIFFL reconstruction and quantitative estimates obtained from the model allowed us to formulate a hypothesis that it is the loss of the fundamental TIFFL-induced adaptation of the expression of PD-1 receptors on anti-melanoma CD8+ T cells that constitutes the essence of the previously unrecognized immunologic factor that promotes the experimentally observed distal tumor growth in the presence of acute non-oncogenic infection. We therefore hope that this work can further highlight the importance of adaptive mechanisms by which immune functions contribute to the balance between self and non-self immune tolerance, adaptive resistance, and the strength of TCR-induced activation, thus contributing to the understanding of a broader complexity of fundamental interactions between pathogens and tumors.
6. M. Sadeghi, M.A. Al-Radhawi, M. Margaliot, and E.D. Sontag. No switching policy is optimal for a positive linear system with a bottleneck entrance. IEEE Control Systems Letters, 3:889-894, 2019.
Note: (Also in Proc. 2019 IEEE Conf. Decision and Control.). [PDF] Keyword(s): entrainment, switched systems, RFM, ribosome flow model, traffic systems, nonlinear systems, nonlinear control.
Abstract: We consider a nonlinear SISO system that is a cascade of a scalar "bottleneck entrance" with a stable positive linear system. In response to any periodic inflow, all solutions converge to a unique periodic solution with the same period. We study the problem of maximizing the averaged throughput via controlled switching. We compare two strategies: 1) switching between a high and a low value, and 2) using a constant inflow equal to the prescribed mean value. We show that no possible switching policy can outperform a constant inflow rate, though it can approach it asymptotically. We describe several potential applications of this problem in traffic systems, ribosome flow models, and scheduling at security checks.
7. S. Wang, J.-R. Lin, E.D. Sontag, and P.K. Sorger. Inferring reaction network structure from single-cell, multiplex data, using toric systems theory. PLoS Computational Biology, 15:e1007311, 2019.
[WWW] [PDF] Keyword(s): chemical reaction networks, stoichiometry, complex balancing, toric varieties, systems biology.
Abstract: The goal of many single-cell studies on eukaryotic cells is to gain insight into the biochemical reactions that control cell fate and state. This paper introduces the concept of effective stoichiometric space (ESS) to guide the reconstruction of biochemical networks from multiplexed, fixed time-point, single-cell data. In contrast to methods based solely on statistical models of data, the ESS method leverages the power of the geometric theory of toric varieties to begin unraveling the structure of chemical reaction networks (CRN). This application of toric theory enables a data-driven mapping of covariance relationships in single cell measurements into stoichiometric information, one in which each cell subpopulation has its associated ESS interpreted in terms of CRN theory. In the development of ESS we reframe certain aspects of the theory of CRN to better match data analysis. As an application of our approach we process cytometry- and image-based single-cell datasets and identify differences in cells treated with kinase inhibitors. Our approach is directly applicable to data acquired using readily accessible experimental methods such as Fluorescence Activated Cell Sorting (FACS) and multiplex immunofluorescence.
8. D. K. Agrawal, R. Marshall, M.A. Al-Radhawi, V. Noireaux, and E. D. Sontag. Some remarks on robust gene regulation in a biomolecular integral controller. In Proc. 2019 IEEE Conf. Decision and Control, pages 2820-2825, 2019. [PDF] Keyword(s): adaptation, biological adaptation, perfect adaptation, tracking, synthetic biology, integral feedback, TX/TL, dynamical systems, internal model principle, systems biology.
Abstract: Integral feedback can help achieve robust tracking independently of external disturbances. Motivated by this knowledge, biological engineers have proposed various designs of biomolecular integral feedback controllers to regulate biological processes. In this paper, we theoretically analyze the operation of a particular synthetic biomolecular integral controller, which we have recently proposed and implemented experimentally. Using a combination of methods, ranging from linearized analysis to sum-of-squares (SOS) Lyapunov functions, we demonstrate that, when the controller is operated in closed-loop, it is capable of providing integral corrections to the concentration of an output species in such a manner that the output tracks a reference signal linearly over a large dynamic range. We investigate the output dependency on the reaction parameters through sensitivity analysis, and quantify performance using control theory metrics to characterize response properties, thus providing clear selection guidelines for practical applications. We then demonstrate the stable operation of the closed-loop control system by constructing quartic Lyapunov functions using SOS optimization techniques, and establish global stability for a unique equilibrium. Our analysis suggests that by incorporating effective molecular sequestration, a biomolecular closed-loop integral controller that is capable of robustly regulating gene expression is feasible.
9. S. Bruno, M.A. Al-Radhawi, E.D. Sontag, and D. Del Vecchio. Stochastic analysis of genetic feedback controllers to reprogram a pluripotency gene regulatory network. In Proc. 2019 Automatic
Control Conference, pages 5089-5096, 2019. [PDF] Keyword(s): multistability, biochemical networks, systems biology, stochastic systems, cell differentiation, multistationarity, chemical master equations.
Abstract: Cellular reprogramming is traditionally accomplished through an open loop control approach, wherein key transcription factors are injected in cells to steer a gene regulatory network toward a pluripotent state. Recently, a closed loop feedback control strategy was proposed in order to achieve more accurate control. Previous analyses of the controller were based on deterministic models, ignoring the substantial stochasticity in these networks. Here we analyze the Chemical Master Equation for reaction models with and without the feedback controller. We computationally and analytically investigate the performance of the controller in biologically relevant parameter regimes where stochastic effects dictate system dynamics. Our results indicate that the feedback control approach still ensures reprogramming even when analyzed using a stochastic model.
10. T. Chen, M. A. Al-Radhawi, and E. D. Sontag. A mathematical model exhibiting the effect of DNA methylation on the stability boundary in cell-fate networks. Technical report, Cold Spring Harbor
Laboratory, 2019. Note: BioRxiv preprint 10.1101/2019.12.19.883280. Keyword(s): Cell-fate networks, gene regulatory networks, DNA methylation, epigenetic regulation, pluripotent stem cell
Abstract: Cell-fate networks are traditionally studied within the framework of gene regulatory networks. This paradigm considers only interactions of genes through expressed transcription factors and does not incorporate chromatin modification processes. This paper introduces a mathematical model that seamlessly combines gene regulatory networks and DNA methylation, with the goal of quantitatively characterizing the contribution of epigenetic regulation to gene silencing. The "Basin of Attraction percentage" is introduced as a metric to quantify gene silencing abilities. As a case study, a computational and theoretical analysis is carried out for a model of the pluripotent stem cell circuit as well as a simplified self-activating gene model. The results confirm that the methodology quantitatively captures the key role that methylation plays in enhancing the stability of the silenced gene state.
11. J.L. Gevertz, J.M. Greene, and E.D. Sontag. Validation of a mathematical model of cancer incorporating spontaneous and induced evolution to drug resistance. Technical report, Cold Spring Harbor
Laboratory, 2019. Note: BioRxiv preprint 10.1101/2019.12.27.889444. Keyword(s): cancer heterogeneity, phenotypic variation, nonlinear systems, epigenetics, optimal control theory, oncology,
Abstract: This paper continues the study of a model which was introduced in earlier work by the authors to study spontaneous and induced evolution to drug resistance under chemotherapy. The model is fit to existing experimental data, and is then validated on additional data that had not been used when fitting. In addition, an optimal control problem is studied numerically.
12. M. Margaliot and E.D. Sontag. Compact attractors of an antithetic integral feedback system have a simple structure. Technical report, bioRxiv 2019/868000v1, 2019. [PDF] Keyword(s):
Poincare-Bendixson, k-cooperative dynamical systems, sign-regular matrices, synthetic biology, antithetic feedback.
Abstract: Since its introduction by Briat, Gupta and Khammash, the antithetic feedback controller design has attracted considerable attention in both theoretical and experimental systems biology. The case in which the plant is a two-dimensional linear system (making the closed-loop system a nonlinear four-dimensional system) has been analyzed in much detail. This system has a unique equilibrium but, depending on parameters, it may exhibit periodic orbits. This note shows that, for any parameter choices, every bounded trajectory satisfies a Poincaré-Bendixson property: the dynamics in the omega-limit set of any precompact solution is conjugate to the dynamics in a compact invariant subset of a two-dimensional Lipschitz dynamical system, thus precluding chaotic and other strange attractors.
13. A. P. Tran, M. A. Al-Radhawi, I. Kareva, J. Wu, D. J. Waxman, and E. D. Sontag. Delicate balances in cancer chemotherapy: modeling immune recruitment and emergence of systemic drug resistance.
Technical report, Cold Spring Harbor Laboratory, 2019. Note: BioRxiv 2019.12.12.874891. Keyword(s): chemotherapy, immunology, immune system, oncology, cancer, metronomic.
Abstract: Metronomic chemotherapy can drastically enhance immunogenic tumor cell death. However, the responsible mechanisms are still incompletely understood. Here, we develop a mathematical model to elucidate the underlying complex interactions between tumor growth, immune system activation, and therapy-mediated immunogenic cell death. Our model is conceptually simple, yet it provides a surprisingly excellent fit to empirical data obtained from a GL261 mouse glioma model treated with cyclophosphamide on a metronomic schedule. The model includes terms representing immune recruitment as well as the emergence of drug resistance during prolonged metronomic treatments. Strikingly, a fixed set of parameters, not adjusted for individuals nor for drug schedule, excellently recapitulates experimental data across various drug regimens, including treatments administered at intervals ranging from 6 to 12 days. Additionally, the model predicts peak immune activation times, rediscovering experimental data that had not been used in parameter fitting or in model construction. The validated model was then used to make predictions about expected tumor-immune dynamics for novel drug administration schedules. Notably, the validated model suggests that immunostimulatory and immunosuppressive intermediates are responsible for the observed phenomena of resistance and immune cell recruitment, and thus for variation of responses with respect to different schedules of drug administration.
1. E.D. Sontag. Examples of computation of exact moment dynamics for chemical reaction networks. In R. Tempo, S. Yurkovich, and P. Misra, editors, Emerging Applications of Control and Systems Theory
, volume 473 of Lecture Notes in Control and Inform. Sci., pages 295-312. Springer-Verlag, Berlin, 2018. [PDF] Keyword(s): chemical master equations, stochastic systems, moments, chemical
reaction networks, incoherent feedforward loop, feedforward, IFFL, systems biology.
Abstract: The study of stochastic biomolecular networks is a key part of systems biology, as such networks play a central role in engineered synthetic biology constructs as well as in naturally occurring cells. This expository paper reviews in a unified way a pair of recent approaches to the finite computation of statistics for chemical reaction networks.
2. D. Del Vecchio, Y. Qian, R.M Murray, and E.D. Sontag. Future systems and control research in synthetic biology. Annual Reviews in Control, 45:5-17, 2018. [PDF] Keyword(s): synthetic biology,
systems biology.
Abstract: This paper is a review of systems and control problems in synthetic biology, focusing on past accomplishments and open problems. It is partially a report on the workshop "The Compositionality Problem in Synthetic Biology: New Directions for Control Theory" held on June 26-27, 2017 at MIT, and organized by D. Del Vecchio, R. M. Murray, and E. D. Sontag.
3. E.V. Nikolaev, S.J. Rahi, and E.D. Sontag. Chaos in simple periodically-forced biological models. Biophysical Journal, 114:1232-1240, 2018. [PDF] Keyword(s): chaos, entrainment, systems biology,
periodic inputs, subharmonic responses, biochemical systems, forced oscillations.
Abstract: What complicated dynamics can arise in the simplest biochemical systems, in response to a periodic input? This paper discusses two models that commonly appear as components of larger sensing and signal transduction pathways in systems biology: a simple two-species negative feedback loop, and a prototype nonlinear integral feedback. These systems have globally attracting steady states when unforced, yet, when subject to a periodic excitation, subharmonic responses and strange attractors can arise via period-doubling cascades. These behaviors are similar to those exhibited by classical forced nonlinear oscillators such as those described by van der Pol or Duffing equations. The lack of entrainment to external oscillations, in even the simplest biochemical networks, represents a level of additional complexity in molecular biology.
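Period-doubling cascades of the kind mentioned in this abstract are classically illustrated by the logistic map, a generic textbook example rather than one of the paper's biochemical models: as the parameter r increases, a stable fixed point gives way to a period-2 orbit, then period-4, and eventually chaos.

```python
# Generic period-doubling illustration via the logistic map x -> r*x*(1-x)
# (a classical example, not the paper's forced biochemical systems).

def orbit(r, n=1000, burn=1000, x0=0.5):
    """Iterate the logistic map, discard a transient of `burn` steps,
    then collect the distinct values (rounded) visited by the orbit."""
    x = x0
    for _ in range(burn):
        x = r * x * (1 - x)
    pts = set()
    for _ in range(n):
        x = r * x * (1 - x)
        pts.add(round(x, 6))
    return pts

fixed = orbit(2.8)       # settles on a single stable fixed point
two_cycle = orbit(3.2)   # after the first doubling: a period-2 orbit
chaotic = orbit(3.9)     # chaotic regime: the orbit never repeats
```

Forced continuous-time systems like those in the paper exhibit the analogous cascade in their Poincaré (stroboscopic) map, which is why discrete maps are a useful mental model for subharmonic responses.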
4. T.H. Segall-Shapiro, E. D. Sontag, and C. A. Voigt. Engineered promoters enable constant gene expression at any copy number in bacteria. Nature Biotechnology, 36:352-358, 2018. [PDF] Keyword(s):
synthetic biology, systems biology, genetic circuits, gene copy number, incoherent feedforward loop, feedforward, IFFL.
Abstract: This paper deals with the design of promoters that maintain constant levels of expression, whether they are carried at single copy in the genome or on high-copy plasmids. The design is based on an incoherent feedforward loop (iFFL) with perfectly non-cooperative repression. The circuits are implemented in E. coli using Transcription Activator-Like Effectors (TALEs). The resulting stabilized promoters generate near-identical expression across different genome locations and plasmid backbones (pSC101, p15a, ColE1, pUC), and also provide robustness to strain mutations and growth media. Further, their strength is tunable and can be used to maintain constant ratios between proteins.
5. Y. Zarai, M. Margaliot, E.D. Sontag, and T. Tuller. Controllability analysis and control synthesis for the ribosome flow model. IEEE/ACM Transactions on Computational Biology and Bioinformatics,
15:1351-1364, 2018. [PDF] Keyword(s): systems biology, ribosomes, controllability, RFM, ribosome flow model.
Abstract: The ribosomal density along the coding region of the mRNA molecule affects various fundamental intracellular phenomena, including protein production rates, organismal fitness, ribosomal drop-off, and co-translational protein folding. Thus, regulating translation in order to obtain a desired ribosomal profile along the mRNA molecule is an important biological problem. This paper studies this problem formulated in the context of the ribosome flow model (RFM), in which one views the transition rates between sites as controls.
6. M.A. Al-Radhawi, N.S. Kumar, E.D. Sontag, and D. Del Vecchio. Stochastic multistationarity in a model of the hematopoietic stem cell differentiation network. In Proc. 2018 IEEE Conf. Decision and
Control, pages 1886-1892, 2018. [PDF] Keyword(s): multistability, biochemical networks, systems biology, stochastic systems, cell differentiation, multistationarity, chemical master equations.
Abstract: In the mathematical modeling of cell differentiation, it is common to think of internal states of cells (quantified by activation levels of certain genes) as determining different cell types. We study here the "PU.1/GATA-1 circuit" that controls the development of mature blood cells from hematopoietic stem cells (HSCs). We introduce a rigorous chemical reaction network model of the PU.1/GATA-1 circuit, which incorporates current biological knowledge, and find that the resulting ODE model of these biomolecular reactions is incapable of exhibiting multistability, contradicting the fact that differentiation networks have, by definition, alternative stable steady states. When considering instead the stochastic version of this chemical network, we analytically construct the stationary distribution, and are able to show that this distribution is indeed capable of admitting a multiplicity of modes. Finally, we study how a judicious choice of system parameters serves to bias the probabilities towards different stationary states. We remark that certain changes in system parameters can be physically implemented by a biological feedback mechanism; tuning this feedback gives extra degrees of freedom that allow one to assign higher likelihood to some cell types over others.
7. F. Blanchini, H. El-Samad, G. Giordano, and E. D. Sontag. Control-theoretic methods for biological networks. In Proc. 2018 IEEE Conf. Decision and Control, pages 466-483, 2018. [PDF] Keyword(s):
systems biology, dynamic response phenotypes, multistability, oscillations, feedback, nonlinear systems, incoherent feedforward loop, feedforward, IFFL.
Abstract: This is a tutorial paper on control-theoretic methods for the analysis of biological systems.
8. J.M. Greene, C. Sanchez-Tapia, and E.D. Sontag. Control structures of drug resistance in cancer chemotherapy. In Proc. 2018 IEEE Conf. Decision and Control, pages 5195-5200, 2018. [PDF]
Abstract: The primary factor limiting the success of chemotherapy in cancer treatment is the phenomenon of drug resistance. This work extends the work reported in "A mathematical approach to distinguish spontaneous from induced evolution of drug resistance during cancer treatment" by introducing a time-optimal control problem that is analyzed utilizing differential-geometric techniques: we seek a treatment protocol which maximizes the time of treatment until a critical tumor size is reached. The general optimal control structure is determined as a combination of both bang-bang and path-constrained arcs. Numerical results are presented which demonstrate decreasing treatment efficacy as a function of the ability of the drug to induce resistance. Thus, drug-induced resistance may dramatically affect the outcome of chemotherapy, implying that factors besides cytotoxicity should be considered when designing treatment regimens.
9. J. Huang, A. Isidori, L. Marconi, M. Mischiati, E. D. Sontag, and W. M. Wonham. Internal models in control, biology and neuroscience. In Proc. 2018 IEEE Conf. Decision and Control, pages
5370-5390, 2018. [PDF] Keyword(s): feedback, internal model principle, nonlinear systems, incoherent feedforward loop, feedforward, IFFL.
Abstract: This tutorial paper deals with the Internal Model Principle (IMP) from different perspectives. The goal is to start from the principle as introduced and commonly used in control theory and then enlarge the vision to other fields where "internal models" play a role. The biology and neuroscience fields are specifically targeted in the paper. The paper ends by presenting an "abstract" theory of the IMP applicable to a large class of systems.
10. M. Margaliot and E.D. Sontag. Analysis of nonlinear tridiagonal cooperative systems using totally positive linear differential systems. In Proc. 2018 IEEE Conf. Decision and Control, pages
3104-3109, 2018. [PDF] Keyword(s): tridiagonal systems, cooperative systems, monotone systems.
Abstract: This is a conference version of "Revisiting totally positive differential systems: A tutorial and new results".
11. J.M. Greene, C. Sanchez-Tapia, and E.D. Sontag. Mathematical details on a cancer resistance model. Technical report, bioRxiv 2018/475533, 2018. [PDF] Keyword(s): identifiability, drug resistance,
chemotherapy, optimal control theory, singular controls, oncology, cancer.
Abstract: The primary factor limiting the success of chemotherapy in cancer treatment is the phenomenon of drug resistance. We have recently introduced a framework for quantifying the effects of induced and non-induced resistance to cancer chemotherapy. In this work, the control structure is precisely characterized as a concatenation of bang-bang and path-constrained arcs via the Pontryagin Maximum Principle and differential Lie techniques. A structural identifiability analysis is also presented, demonstrating that patient-specific parameters may be measured and thus utilized in the design of optimal therapies prior to the commencement of therapy.
12. M. Sadeghi, M.A. Al-Radhawi, M. Margaliot, and E.D. Sontag. On the periodic gain of the Ribosome Flow Model. Technical report, bioRxiv 2018/507988, 2018. [PDF] Keyword(s): systems biology,
biochemical networks, ribosomes, RFM, ribosome flow model.
Abstract: We consider a compartmental model for ribosome flow during RNA translation, the Ribosome Flow Model (RFM). This model includes a set of positive transition rates that control the flow from every site to the consecutive site. It has been shown that when these rates are time-varying and jointly T-periodic, the protein production rate converges to a unique T-periodic pattern. Here, we study a problem that can be roughly stated as: can periodic rates yield a higher average production rate than constant rates? We rigorously formulate this question and show, via simulations and rigorous analysis in one simple case, that the answer is no.
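The RFM referenced in this abstract is a chain of n sites with occupancies x_i in (0,1), flow lam_i * x_i * (1 - x_{i+1}) between consecutive sites, initiation rate lam_0, and termination rate lam_n. As a rough illustration (the rates, step size, and horizon are my own choices, not from the paper), a forward-Euler simulation with constant rates converges to a steady-state profile:

```python
# Minimal forward-Euler simulation of the Ribosome Flow Model (RFM):
# n sites with occupancies x[i] in (0,1) and transition rates lam[0..n];
#   dx_1/dt = lam_0 (1 - x_1) - lam_1 x_1 (1 - x_2), ...,
#   dx_n/dt = lam_{n-1} x_{n-1} (1 - x_n) - lam_n x_n.
# Parameters below are illustrative, not taken from the paper.

def rfm_steady_state(lam, n, dt=0.01, steps=200_000):
    x = [0.0] * n
    for _ in range(steps):
        inflow = lam[0] * (1.0 - x[0])          # initiation into site 1
        dx = []
        for i in range(n):
            if i < n - 1:
                outflow = lam[i + 1] * x[i] * (1.0 - x[i + 1])
            else:
                outflow = lam[n] * x[i]         # termination from the last site
            dx.append(inflow - outflow)
            inflow = outflow
        x = [xi + dt * dxi for xi, dxi in zip(x, dx)]
    return x, lam[n] * x[-1]                    # occupancy profile, production rate

profile, rate = rfm_steady_state([1.0, 0.8, 0.6, 0.8, 1.0], n=4)
print(profile, rate)
```

At steady state the flow through every site is the same, which is a quick sanity check on the simulation; the convergence to a unique steady state for constant rates is the known baseline against which the periodic-rate question above is posed.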
1. M. Margaliot, E.D. Sontag, and T. Tuller. Checkable conditions for contraction after small transients in time and amplitude. In N. Petit, editor, Feedback Stabilization of Controlled Dynamical
Systems - In Honor of Laurent Praly, volume 473 of Lecture Notes in Control and Inform. Sci., pages 279-305. Springer-Verlag, Berlin, 2017. [PDF] Keyword(s): contractions, contractive systems,
Abstract: This is an expository paper, which compares in detail various alternative weak contraction ideas for nonlinear system stability.
2. S. Barish, M.F. Ochs, E.D. Sontag, and J.L. Gevertz. Evaluating optimal therapy robustness by virtual expansion of a sample population, with a case study in cancer immunotherapy. Proc Natl Acad
Sci USA, 114:E6277-E6286, 2017. [WWW] [PDF] [doi:10.1073/pnas.1703355114] Keyword(s): cancer, oncolytic therapy, immunotherapy, optimal therapy, identifiability, systems biology.
Abstract: This paper proposes a technique that combines experimental data, mathematical modeling, and statistical analyses for identifying optimal treatment protocols that are robust with respect to individual variability. Experimental data from a small sample population is amplified using bootstrapping to obtain a large number of virtual populations that statistically match the expected heterogeneity. Alternative therapies chosen from among a set of clinically-realizable protocols are then compared and scored according to coverage. As proof of concept, the method is used to evaluate a treatment with oncolytic viruses and dendritic cell vaccines in a mouse model of melanoma. The analysis shows that while every scheduling variant of an experimentally-utilized treatment protocol is fragile (non-robust), there is an alternative region of dosing space (lower oncolytic virus dose, higher dendritic cell dose) for which a robust optimal protocol exists.
3. J. K. Kim and E.D. Sontag. Reduction of multiscale stochastic biochemical reaction networks using exact moment derivation. PLoS Computational Biology, 13(6):e1005571, 2017. [PDF] Keyword(s):
systems biology, biochemical networks, stochastic systems, chemical master equation, chemical reaction networks, moments, molecular networks, complex-balanced networks.
Abstract: Biochemical reaction networks in cells frequently consist of reactions with disparate timescales. Stochastic simulations of such multiscale BRNs are prohibitively slow due to the high computational cost incurred in the simulations of fast reactions. One way to resolve this problem is to replace fast species by their stationary conditional expectation values conditioned on slow species. While various approximation schemes for this quasi-steady-state approximation have been developed, they often lead to considerable errors. This paper considers two classes of multiscale BRNs which can be reduced through an exact QSS rather than approximations. Specifically, we assume that fast species constitute either a feedforward network or a complex balanced network. Exact reductions for various examples are derived, and the computational advantages of this approach are illustrated through simulations.
4. M. Lang and E.D. Sontag. Zeros of nonlinear systems with input invariances. Automatica, 81:46-55, 2017. [PDF] Keyword(s): scale invariance, fold change detection, nonlinear systems, realization
theory, internal model principle.
Abstract: This paper introduces two generalizations of systems invariant with respect to continuous sets of input transformations, that is, systems whose output dynamics remain invariant when applying a transformation to the input and simultaneously adjusting the initial conditions. These generalizations concern systems invariant with respect to time-dependent input transformations with exponentially increasing or decreasing "strength", and systems invariant with respect to transformations of the "nonlinear derivatives" of the input. Interestingly, these two generalizations of invariant systems encompass linear time-invariant (LTI) systems with real transfer function zeros of arbitrary multiplicity. Furthermore, the zero-dynamics of systems possessing our generalized invariances show properties analogous to those of LTI systems with transfer function zeros, generalizing concepts like pole-zero cancellation, the rejection of ramps by Hurwitz LTI systems with a zero at the origin of multiplicity two, and (to a certain extent) the superposition principle with respect to inputs zeroing the output.
5. F. Menolascina, R. Rusconi, V.I. Fernandez, S.P. Smriga, Z. Aminzare, E. D. Sontag, and R. Stocker. Logarithmic sensing in Bacillus subtilis aerotaxis. Nature Systems Biology and Applications,
3:16036-, 2017. [PDF] Keyword(s): adaptation, biological adaptation, perfect adaptation, Aerotaxis, chemotaxis, scale invariance, FCD, fold-change detection, B. subtilis, systems biology.
Abstract: Aerotaxis, the directed migration along oxygen gradients, allows many microorganisms to locate favorable oxygen concentrations. Despite oxygen's fundamental role for life, even key aspects of aerotaxis remain poorly understood. In Bacillus subtilis, for example, there is conflicting evidence of whether migration occurs to the maximal oxygen concentration available or to an optimal intermediate one, and how aerotaxis can be maintained over a broad range of conditions. Using precisely controlled oxygen gradients in a microfluidic device, spanning the full spectrum of conditions from quasi-anoxic to oxic (60nM-1mM), we resolved B. subtilis' "oxygen preference conundrum" by demonstrating consistent migration towards maximum oxygen concentrations. Surprisingly, the strength of aerotaxis was largely unchanged over three decades in oxygen concentration (131nM-196µM). We discovered that in this range B. subtilis responds to the logarithm of the oxygen concentration gradient, a log-sensing strategy that affords organisms high sensitivity over a wide range of conditions.
6. V. H. Nagaraj, J. M. Greene, A. M. Sengupta, and E.D. Sontag. Translation inhibition and resource balance in the TX-TL cell-free gene expression system. Synthetic Biology, 2:ysx005, 2017. [PDF]
Keyword(s): tx/tl, cell-free systems, in vitro synthetic biology, synthetic biology, systems biology.
Abstract: Utilizing the synthetic transcription-translation (TX-TL) system, this paper studies the impact of nucleotide triphosphates (NTPs) and magnesium (Mg2+) on gene expression, in the context of the counterintuitive phenomenon of suppression of gene expression at high NTP concentration. Measuring translation rates for different Mg2+ and NTP concentrations, we observe a complex resource dependence. We demonstrate that translation is the rate-limiting process that is directly inhibited by high NTP concentrations. Additional Mg2+ can partially reverse this inhibition. In several experiments, we observe two maxima of the translation rate viewed as a function of both Mg2+ and NTP concentration, which can be explained in terms of an NTP-independent effect on the ribosome complex and an NTP-Mg2+ titration effect. The non-trivial compensatory effects of abundance of different vital resources signal the presence of complex regulatory mechanisms to achieve optimal gene expression.
7. S. J. Rahi, J. Larsch, K. Pecani, N. Mansouri, A. Y. Katsov, K. Tsaneva-Atanasova, E. D. Sontag, and F. R. Cross. Oscillatory stimuli differentiate adapting circuit topologies. Nature Methods,
14:1010-1016, 2017. [PDF] Keyword(s): biochemical networks, periodic behaviors, monotone systems, entrainment, oscillations, incoherent feedforward loop, feedforward, IFFL, systems biology.
Abstract: Elucidating the structure of biological intracellular networks from experimental data remains a major challenge. This paper studies two types of "response signatures" to identify specific circuit motifs, from the observed response to periodic inputs. In particular, the objective is to distinguish negative feedback loops (NFLs) from incoherent feedforward loops (IFFLs), which are two types of circuits capable of producing exact adaptation. The theory of monotone systems with inputs is used to show that "period skipping" (non-harmonic responses) is ruled out in IFFLs, and a notion called "refractory period stabilization" is also analyzed. The approach is then applied to identify a circuit dominating cell cycle timing in yeast, and to uncover a calcium-mediated NFL circuit in C. elegans olfactory sensory neurons.
8. A. Rendall and E. D. Sontag. Multiple steady states and the form of response functions to antigen in a model for the initiation of T cell activation. Royal Society Open Science, 4:170821-, 2017.
[PDF] Keyword(s): kinetic proofreading, T cells, immunology, systems biology.
Abstract: This paper analyzes a model for the initial stage of T cell activation. The state variables in the model are the concentrations of phosphorylation states of the T cell receptor complex and the phosphatase SHP-1 in the cell. It is shown that these quantities cannot approach zero, and that there is more than one positive steady state for certain values of the parameters; in addition, damped oscillations are possible. It is also shown that the chemical concentration which represents the degree of activation of the cell, namely the maximally phosphorylated form of the T cell receptor complex, is in general a non-monotone function of the activating signal. In particular, there are cases where there is a value of the dissociation constant of the ligand from the receptor which produces an optimal activation of the T cell. In this way the results of certain simulations in the literature have been confirmed rigorously and new features are discovered.
9. A. Silva, M. Silva, P. Sudalagunta, A. Distler, T. Jacobson, A. Collins, T. Nguyen, J. Song, D.T. Chen, Lu Chen, C. Cubitt, R. Baz, L. Perez, D. Rebatchouk, W. Dalton, J.M. Greene, R. Gatenby, R.
Gillies, E.D. Sontag, M. Meads, and K. Shain. An ex vivo platform for the prediction of clinical response in multiple myeloma. Cancer Research, doi:10.1158/0008-5472.CAN-17-0502, 2017. [PDF]
Keyword(s): cancer, multiple myeloma, personalized therapy.
Abstract: This paper describes a novel approach for characterization of chemosensitivity and prediction of clinical response in multiple myeloma. It relies upon a patient-specific computational model of clinical response, parameterized by a high-throughput ex vivo assay that quantifies sensitivity of primary MM cells to 31 agents or combinations, in a reconstruction of the tumor microenvironment. The mathematical model, which inherently accounts for intra-tumoral heterogeneity of drug sensitivity, combined with drug- and regimen-specific pharmacokinetics, produces patient-specific predictions of clinical response 5 days post-biopsy.
10. E.D. Sontag. A dynamical model of immune responses to antigen presentation predicts different regions of tumor or pathogen elimination. Cell Systems, 4:231-241, 2017. [PDF] Keyword(s): scale
invariance, fold change detection, T cells, incoherent feedforward loops, immunology, cancer, internal model principle, incoherent feedforward loop, feedforward, IFFL, systems biology.
Abstract: Since the early 1990s, many authors have independently suggested that self/nonself recognition by the immune system might be modulated by the rates of change of antigen challenges. This paper introduces an extremely simple and purely conceptual mathematical model that allows dynamic discrimination of immune challenges. The main component of the model is a motif which is ubiquitous in systems biology, the incoherent feedforward loop, which endows the system with the capability to estimate exponential growth exponents, a prediction which is consistent with experimental work showing that exponentially increasing antigen stimulation is a determinant of immune reactivity. Combined with a bistable system and a simple feedback repression mechanism, an interesting phenomenon emerges as a tumor growth rate increases: elimination, tolerance (tumor growth), again elimination, and finally a second zone of tolerance (tumor escape). This prediction from our model is analogous to the "two-zone tumor tolerance" phenomenon experimentally validated since the mid 1970s. Moreover, we provide a plausible biological instantiation of our circuit using combinations of regulatory and effector T cells.
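The growth-exponent estimation mentioned in this abstract can be illustrated with a minimal two-ODE incoherent feedforward loop. The specific equations and parameters below are my own illustrative choices (a common textbook IFFL form), not necessarily the model in the paper: x tracks the input, and the output compares the input to its tracker, so for an exponentially growing input u(t) = exp(lam*t) the ratio u/x converges to lam + delta and the output reads off the growth rate:

```python
import math

# Minimal incoherent feedforward loop acting as a growth-exponent estimator:
#   x' = u - delta*x   (slow internal copy of the input)
#   y' = u/x - y       (output compares the input to its internal copy)
# For u(t) = exp(lam*t), u/x -> lam + delta, so y converges to lam + delta.
# This two-ODE form and its parameters are illustrative, not from the paper.

def iffl_output(lam, delta=1.0, t_end=20.0, dt=1e-3):
    x, y = 1.0, 0.0
    for k in range(int(t_end / dt)):
        u = math.exp(lam * k * dt)
        # forward-Euler step; old x is used in both updates
        x, y = x + dt * (u - delta * x), y + dt * (u / x - y)
    return y

print(iffl_output(0.5))  # -> approximately 1.5 (= lam + delta)
```

Because the readout depends only on the exponent of the input, not its magnitude, a downstream threshold on y discriminates fast-growing challenges from slow ones, which is the dynamic discrimination idea in the abstract.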
11. E.D. Sontag. Dynamic compensation, parameter identifiability, and equivariances. PLoS Computational Biology, 13:e1005447, 2017. Note: (Preprint was in bioRxiv https://doi.org/10.1101/095828,
2016). [WWW] [PDF] Keyword(s): fcd, fold-change detection, scale invariance, dynamic compensation, identifiability, observability, systems biology.
Abstract: A recent paper by Karin et al. introduced a mathematical notion called dynamical compensation (DC) of biological circuits. DC was shown to play an important role in glucose homeostasis as well as other key physiological regulatory mechanisms. Karin et al. went on to provide a sufficient condition to test whether a given system has the DC property. Here, we show how DC is a reformulation of a well-known concept in systems biology, statistics, and control theory: that of parameter structural non-identifiability. Viewing DC as a parameter identification problem enables one to take advantage of powerful theoretical and computational tools to test a system for DC. We obtain as a special case the sufficient criterion discussed by Karin et al. We also draw connections to system equivalence and to the fold-change detection property.
12. Y. Vodovotz, A. Xia, E. Read, J. Bassaganya-Riera, D.A. Hafler, E.D. Sontag, J. Wang, J.S. Tsang, J.D. Day, S. Kleinstein, A.J. Butte, M.C. Altman, R. Hammond, C. Benoist, and S.C. Sealfon.
Solving Immunology?. Trends in Immunology, 38:116-127, 2017. [PDF] Keyword(s): Immunology.
Abstract: Emergent responses of the immune system result from the integration of molecular and cellular networks over time and across multiple organs. High-content and high-throughput analysis technologies, concomitantly with data-driven and mechanistic modeling, hold promise for the systematic interrogation of these complex pathways. However, connecting genetic variation and molecular mechanisms to individual phenotypes and health outcomes has proven elusive. Gaps remain in data, and disagreements persist about the value of mechanistic modeling for immunology. This paper presents perspectives that emerged from the National Institute of Allergy and Infectious Disease (NIAID) workshop "Complex Systems Science, Modeling and Immunity" and subsequent discussions regarding the potential synergy of high-throughput data acquisition, data-driven modeling, and mechanistic modeling to define new mechanisms of immunological disease and to accelerate the translation of these insights into therapies.
13. L. Yang, E.M. Dolan, S.K. Tan, T. Lin, E.D. Sontag, and S.D. Khare. Computation-guided design of a stimulus-responsive multi-enzyme supramolecular assembly. ChemBioChem, 18:2000-2006, 2017. [PDF] Keyword(s): systems biology.
Abstract: This paper reports on the construction of a phosphorylation- and optically-responsive supramolecular complex of metabolic pathway enzymes for the biodegradation of an environmental pollutant. Fusing of enzymes led to an increase in pathway efficiency, and illustrates the possibility of spatio-temporal control over formation and functioning of a wide variety of synthetic biotransformations.
1. Z. Aminzare and E.D. Sontag. Some remarks on spatial uniformity of solutions of reaction-diffusion PDEs. Nonlinear Analysis, 147:125-144, 2016. [PDF] Keyword(s): contractions, contractive
systems, matrix measures, logarithmic norms, synchronization, consensus, reaction-diffusion PDEs, partial differential equations.
Abstract: This paper presents a condition which guarantees spatial uniformity for the asymptotic behavior of the solutions of a reaction-diffusion partial differential equation (PDE) with Neumann boundary conditions in one dimension, using the Jacobian matrix of the reaction term and the first Dirichlet eigenvalue of the Laplacian operator on the given spatial domain. The estimates are based on logarithmic norms in non-Hilbert spaces, which allow, in particular for a class of examples of interest in biology, tighter estimates than other previously proposed methods.
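The logarithmic norms (matrix measures) underlying such estimates are easy to compute explicitly for the common choices of norm. As a sketch (the 2x2 example matrix is my own, purely illustrative), the measures induced by the l1, l2, and l-infinity norms are:

```python
import math

# Logarithmic norms (matrix measures) of a 2x2 matrix A in three norms:
#   mu_inf(A) = max_i ( a_ii + sum_{j != i} |a_ij| )   (row-based)
#   mu_1(A)   = max_j ( a_jj + sum_{i != j} |a_ij| )   (column-based)
#   mu_2(A)   = largest eigenvalue of (A + A^T)/2
# A uniformly negative measure of the Jacobian certifies contraction in the
# corresponding norm. The example matrix is illustrative, not from the paper.

def mu_inf(A):
    return max(A[i][i] + sum(abs(A[i][j]) for j in range(2) if j != i) for i in range(2))

def mu_1(A):
    return max(A[j][j] + sum(abs(A[i][j]) for i in range(2) if i != j) for j in range(2))

def mu_2(A):
    a, d = A[0][0], A[1][1]
    b = 0.5 * (A[0][1] + A[1][0])
    # largest eigenvalue of the symmetric part [[a, b], [b, d]]
    return 0.5 * ((a + d) + math.sqrt((a - d) ** 2 + 4.0 * b * b))

A = [[-2.0, 1.0], [0.5, -3.0]]
print(mu_1(A), mu_2(A), mu_inf(A))  # all negative: contractive in each norm
```

The point of working in non-Hilbert spaces, as the abstract notes, is that for some Jacobians mu_1 or mu_inf (or a weighted variant) is negative even when mu_2 is not, yielding tighter uniformity estimates.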
2. J.A. Ascensao, P. Datta, B. Hancioglu, E.D. Sontag, M.L. Gennaro, and O.A. Igoshin. Non-monotonic response dynamics of glyoxylate shunt genes in Mycobacterium tuberculosis. PLoS Computational
Biology, 12:e1004741, 2016. [PDF] Keyword(s): cell signaling, monotone systems, systems biology.
Abstract: Understanding how dynamical responses of biological networks are constrained by underlying network topology is one of the fundamental goals of systems biology. Here we employ monotone systems theory to formulate a theorem stating necessary conditions for non-monotonic time-response of a biochemical network to a monotonic stimulus. We apply this theorem to analyze the non-monotonic dynamics of the sigmaB-regulated glyoxylate shunt gene expression in Mycobacterium tuberculosis cells exposed to hypoxia. We first demonstrate that the known network structure is inconsistent with the observed dynamics. To resolve this inconsistency we employ the formulated theorem, modeling, simulations, and optimization, along with follow-up dynamic experimental measurements. We show a requirement for post-translational modulation of sigmaB activity in order to reconcile the network dynamics with its topology. The results of this analysis make testable experimental predictions and demonstrate wider applicability of the developed methodology to a wide class of biological systems.
3. M. Margaliot, E.D. Sontag, and T. Tuller. Contraction after small transients. Automatica, 67:178-184, 2016. [PDF] Keyword(s): entrainment, nonlinear systems, stability, contractions, contractive
systems, systems biology.
Abstract: Contraction theory is a powerful tool for proving asymptotic properties of nonlinear dynamical systems, including convergence to an attractor and entrainment to a periodic excitation. We introduce three new forms of generalized contraction (GC) that are motivated by allowing contraction to take place after small transients in time and/or amplitude. These forms of GC are useful for several reasons. First, allowing small transients does not destroy the asymptotic properties provided by standard contraction. Second, in some cases, as we change the parameters in a contractive system it becomes a GC just before it loses contractivity. In this respect, GC is the analogue of marginal stability in Lyapunov stability theory. We provide checkable sufficient conditions for GC, and demonstrate their usefulness using several models from systems biology that are not contractive, with respect to any norm, yet are GC.
4. E.V. Nikolaev and E.D. Sontag. Quorum-sensing synchronization of synthetic toggle switches: A design based on monotone dynamical systems theory. PLoS Computational Biology, 12:e1004881, 2016. [PDF] Keyword(s): quorum sensing, toggle switches, monotone systems, systems biology.
Abstract: Synthetic constructs in biotechnology, bio-computing, and proposed gene therapy interventions are often based on plasmids or transfected circuits which implement some form of on-off (toggle or flip-flop) switch. For example, the expression of a protein used for therapeutic purposes might be triggered by the recognition of a specific combination of inducers (e.g., antigens), and memory of this event should be maintained across a cell population until a specific stimulus commands a coordinated shut-off. The robustness of such a design is hampered by molecular (intrinsic) or environmental (extrinsic) noise, which may lead to spontaneous changes of state in a subset of the population and is reflected in the bimodality of protein expression, as measured for example using flow cytometry. In this context, a majority-vote correction circuit, which brings deviant cells back into the required state, is highly desirable. To address this concrete challenge, we have developed a new theoretical design for quorum-sensing (QS) synthetic toggles. QS provides a way for cells to broadcast their states to the population as a whole so as to facilitate consensus. Our design is endowed with strong theoretical guarantees, based on monotone dynamical systems theory, of global stability and no oscillations, which leads to robust consensus states.
5. A. Raveh, M. Margaliot, E.D. Sontag, and T. Tuller. A model for competition for ribosomes in the cell. Proc. Royal Society Interface, 13:2015.1062, 2016. [PDF] Keyword(s): resource competition,
ribosomes, entrainment, nonlinear systems, stability, contractions, contractive systems, systems biology, RFM, ribosome flow model.
Abstract: We develop and analyze a general model for large-scale simultaneous mRNA translation and competition for ribosomes. Such models are especially important when dealing with highly expressed genes, as these consume more resources. For our model, we prove that the compound system always converges to a steady state and that it always entrains or phase-locks to periodically time-varying transition rates in any of the mRNA molecules. We use this model to explore the interactions between the various mRNA molecules and ribosomes at steady state. We show that increasing the length of an mRNA molecule decreases the production rate of all the mRNAs. Increasing any of the codon translation rates in a specific mRNA molecule yields a local effect, an increase in the translation rate of this mRNA, and also a global effect: the translation rates in the other mRNA molecules all increase or all decrease. These results suggest that the effect of codon decoding rates of endogenous and heterologous mRNAs on protein production might be more complicated than previously thought.
6. M. Lang and E.D. Sontag. Scale-invariant systems realize nonlinear differential operators. In 2016 American Control Conference (ACC), pages 6676 - 6682, 2016. [PDF] Keyword(s): scale invariance,
fold change detection, nonlinear systems, realization theory, internal model principle.
Abstract: In this article, we show that scale-invariant systems, as well as systems invariant with respect to other input transformations, can realize nonlinear differential operators: when excited by inputs obeying functional forms characteristic for a given class of invariant systems, the systems' outputs converge to constant values directly quantifying the speed of the input.
7. F. Menolascina, R. Stocker, and E.D. Sontag. In-vivo identification and control of aerotaxis in Bacillus subtilis. In Proc. IEEE Conf. Decision and Control, Dec. 2016, pages 764-769, 2016. [PDF]
Keyword(s): identification, systems biology, aerotaxis, B. subtilis.
Abstract: Combining in-vivo experiments with system identification methods, we determine a simple model of aerotaxis in B. subtilis, and we subsequently employ this model in order to compute the sequence of oxygen gradients needed in order to achieve set-point regulation with respect to a signal tracking the center of mass of the bacterial population. We then successfully validate both the model and the control scheme, by showing that in-vivo positioning control can be achieved via the application of the precomputed inputs in an open-loop configuration.
8. E.D. Sontag. Some remarks on a model for immune signal detection and feedback. In Proc. IEEE Conf. Decision and Control, Dec. 2016, pages 2475-2480, 2016. [PDF] Keyword(s): scale invariance, fold
change detection, T cells, incoherent feedforward loops, immunology, cancer.
Abstract: This is a conference paper related to the journal paper "A dynamical model of immune responses to antigen presentation predicts different regions of tumor or pathogen elimination". The conference paper includes several theorems for a simplified model which were not included in the journal paper.
9. Q. Tyles, T. Kang, E.D. Sontag, and L. Bleris. Exploring the impact of resource limitations on gene network reconstruction. In Proc. IEEE Conf. Decision and Control, Dec. 2016, pages 3350-3355,
2016. [PDF] Keyword(s): Biological systems, Genetic regulatory systems, Systems biology.
Abstract: Applying Modular Response Analysis to a synthetic gene circuit, which was introduced in a recent paper by the authors, leads to the inference of a nontrivial "ghost" regulation edge which was not explicitly engineered into the network and which is, in fact, not immediately apparent from experimental measurements. One may thus hypothesize that this ghost regulatory effect is due to competition for resources. A mathematical model is proposed, and analyzed in closed form, that lends validation to this hypothesis.
10. Y. Zarai, M. Margaliot, E.D. Sontag, and T. Tuller. Controlling the ribosomal density profile in mRNA translation. In Proc. IEEE Conf. Decision and Control, Dec. 2016, pages 4184-4189, 2016.
Keyword(s): ribosomes, translation, RFM, ribosome flow model.
11. E.D. Sontag. A remark on incoherent feedforward circuits as change detectors and feedback controllers. Technical report, arXiv:1602.00162, 2016. [PDF] Keyword(s): scale invariance, fold change
detection, T cells, incoherent feedforward loops, immunology, incoherent feedforward loop, feedforward, IFFL.
Abstract: This note analyzes incoherent feedforward loops in signal processing and control. It studies the response properties of IFFL's to exponentially growing inputs, both for a standard version of the IFFL and for a variation in which the output variable has a positive self-feedback term. It also considers a negative feedback configuration, using such a device as a controller. It uncovers a somewhat surprising phenomenon in which stabilization is only possible in disconnected regions of parameter space, as the controlled system's growth rate is varied.
12. E.D. Sontag. Examples of computation of exact moment dynamics for chemical reaction networks. Technical report, arXiv:1612.02393, 2016. [PDF] Keyword(s): systems biology, biochemical networks,
stochastic systems, chemical master equation, chemical reaction networks, moments, molecular networks, complex-balanced networks.
Abstract: We review in a unified way results for two types of stochastic chemical reaction systems for which moments can be effectively computed: feedforward networks and complex-balanced networks.
13. E.D. Sontag. Two-zone tumor tolerance can arise from a simple immunological feedforward motif that estimates tumor growth rates. Technical report, bioRxiv https://doi.org/10.1101/095455, 2016. [PDF] Keyword(s): scale invariance, fold change detection, T cells, incoherent feedforward loops, immunology, cancer.
Abstract: Preprint version of "A dynamical model of immune responses to antigen presentation predicts different regions of tumor or pathogen elimination", appeared in Cell Systems 2017. However, the journal version does not include Section 9 on degradation-based IFFL's from this preprint.
1. E.D. Sontag. Input-to-State Stability. In J. Baillieul and T. Samad, editors, Encyclopedia of Systems and Control. Springer-Verlag, 2015. [PDF] Keyword(s): input to state stability, integral
input to state stability, iISS, ISS, input to output stability.
Abstract: The notion of input to state stability (ISS) qualitatively describes stability of the mapping from initial states and inputs to internal states (and more generally outputs). This entry focuses on the definition of ISS and a discussion of equivalent characterizations.
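For reference, the ISS estimate that this entry defines can be stated in its standard form (with $\beta$ a class-$\mathcal{KL}$ function and $\gamma$ a class-$\mathcal{K}$ function), holding along all solutions of $\dot x = f(x,u)$:

```latex
|x(t)| \;\le\; \beta\big(|x(0)|,\,t\big) \;+\; \gamma\Big(\sup_{0\le s\le t}|u(s)|\Big),
\qquad \forall\, t\ge 0 .
```

Informally: the effect of the initial state decays over time, and bounded inputs produce bounded states, with zero input recovering global asymptotic stability.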
2. P. Bastiaens, M. R. Birtwistle, N. Bluthgen, F. J. Bruggeman, K.-H. Cho, C. Cosentino, A. de la Fuente, J. B. Hoek, A. Kiyatkin, S. Klamt, W. Kolch, S. Legewie, P. Mendes, T. Naka, T. Santra,
E.D. Sontag, H. V. Westerhoff, and B. N. Kholodenko. Silence on the relevant literature and errors in implementation. Nature Biotech, 33:336-339, 2015. [PDF] Keyword(s): modular response
analysis, systems biology, biochemical networks, reverse engineering, gene and protein networks, protein networks, gene networks, systems identification.
Abstract: This letter discusses a paper in the same journal which reported a method for reconstructing network topologies. Here we show that the method is a variant of a previously published method, modular response analysis. We also demonstrate that the implementation of the algorithm in that paper using statistical similarity measures as a proxy for global network responses to perturbations is erroneous and its performance is overestimated.
3. T. Kang, R. Moore, Y. Li, E.D. Sontag, and L. Bleris. Discriminating direct and indirect connectivities in biological networks. Proc Natl Acad Sci USA, 112:12893-12898, 2015. [PDF] Keyword(s):
modular response analysis, stochastic systems, reverse engineering, gene networks, synthetic biology, feedforward, systems biology.
Abstract: Reverse engineering of biological pathways involves an iterative process between experiments, data processing, and theoretical analysis. In this work, we engineer synthetic circuits, subject them to perturbations, and then infer network connections using a combination of nonparametric single-cell data resampling and modular response analysis. Intriguingly, we discover that recovered weights of specific network edges undergo divergent shifts under differential perturbations, and that the particular behavior is markedly different between different topologies. Investigating topological changes under differential perturbations may address the longstanding problem of discriminating direct and indirect connectivities in biological networks.
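The modular response analysis (MRA) step referenced in several of these entries can be sketched numerically. The 3-node Jacobian and perturbation strengths below are invented for illustration (they are not the paper's experimental network): given the matrix R of global steady-state responses to perturbations that each act on a single node, MRA recovers the local (direct) interaction coefficients by normalizing the rows of R⁻¹ so that the diagonal equals −1.

```python
import numpy as np

# Hypothetical local Jacobian A of a 3-node network (illustrative values only).
A = np.array([[-1.0, 0.0, 0.5],
              [ 2.0, -1.0, 0.0],
              [ 0.0, 1.5, -1.0]])
# Each perturbation acts on exactly one node, with unknown strengths b_i.
B = np.diag([0.7, 1.3, 0.9])

# Global steady-state response matrix to the three perturbations: R = -A^{-1} B.
R = -np.linalg.inv(A) @ B

# MRA reconstruction: normalize the rows of R^{-1} so the diagonal is -1.
Rinv = np.linalg.inv(R)
r = -Rinv / np.diag(Rinv)[:, None]

# The recovered local response matrix equals -A_ij / A_ii, independently of
# the (unknown) perturbation strengths in B.
expected = -A / np.diag(A)[:, None]
print(np.allclose(r, expected))  # True
```

The point of the normalization is that the perturbation strengths b_i cancel, so only the relative direct interactions are inferred, which is exactly what makes the method usable when perturbation magnitudes are unknown.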
4. M. Skataric, E.V. Nikolaev, and E.D. Sontag. A fundamental limitation to fold-change detection by biological systems with multiple time scales. IET Systems Biology, 9:1-15, 2015. [PDF] Keyword(s): adaptation, biological adaptation, perfect adaptation, singular perturbations, scale invariance, systems biology, transient behavior, symmetries, fcd, fold-change detection, incoherent feedforward loop, feedforward, IFFL.
Abstract: The phenomenon of fold-change detection, or scale invariance, is exhibited by a variety of sensory systems, in both bacterial and eukaryotic signaling pathways. It has often been remarked in the systems biology literature that certain systems whose output variables respond at a faster time scale than internal components give rise to an approximate scale-invariant behavior, allowing approximate fold-change detection in stimuli. This paper establishes a fundamental limitation of such a mechanism, showing that there is a minimal fold-change detection error that cannot be overcome, no matter how large the separation of time scales is. To illustrate this theoretically predicted limitation, we discuss two common biomolecular network motifs, an incoherent feedforward loop and a feedback system, as well as a published model of the chemotaxis signaling pathway of Dictyostelium discoideum.
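The fold-change detection property at issue here can be seen in a minimal simulation. The toy IFFL model and all parameter values below are our own illustration, not the paper's: dx/dt = u − x, ε·dy/dt = u/x − y. Because x scales linearly with u, scaling the whole input history leaves u/x, and hence the output y, unchanged.

```python
import numpy as np

def iffl_response(u0, u1, eps=0.05, dt=1e-3, T=5.0):
    """Euler simulation of the IFFL  dx/dt = u - x,  eps*dy/dt = u/x - y
    for a step input u0 -> u1 at t = 0, starting pre-adapted (x = u0, y = 1)."""
    x, y = float(u0), 1.0
    ys = np.empty(int(T / dt))
    for i in range(len(ys)):
        x += dt * (u1 - x)
        y += dt * (u1 / x - y) / eps
        ys[i] = y
    return ys

# Scaling the input (step 1 -> 2 vs. step 3 -> 6, same fold change) gives the
# same transient output response, since only u/x drives y.
y_a = iffl_response(1.0, 2.0)
y_b = iffl_response(3.0, 6.0)
print(np.max(np.abs(y_a - y_b)))   # tiny (floating-point level)
```

This toy model exhibits exact FCD; the paper's result concerns mechanisms that achieve only approximate FCD through time-scale separation, for which a residual error persists for any finite ε.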
5. E.D. Sontag and A. Singh. Exact moment dynamics for feedforward nonlinear chemical reaction networks. IEEE Life Sciences Letters, 1:26-29, 2015. [PDF] Keyword(s): systems biology, biochemical
networks, stochastic systems, chemical master equation, chemical reaction networks.
Abstract: Chemical systems are inherently stochastic, as reactions depend on random (thermal) motion. This motivates the study of stochastic models, and specifically the Chemical Master Equation (CME), a discrete-space continuous-time Markov process that describes stochastic chemical kinetics. Exact studies using the CME are difficult, and several moment closure tools related to "mass fluctuation kinetics" and "fluctuation-dissipation" formulas can be used to obtain approximations of moments. This paper, in contrast, introduces a class of nonlinear chemical reaction networks for which exact computation is possible, by means of finite-dimensional linear differential equations. This class allows second and higher order reactions, but only under special assumptions on structure and/or conservation laws.
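A minimal worked example of exactly computable moment dynamics (our own toy model, not one from the paper): for the birth-death network ∅ → X at rate k and X → ∅ at rate g·n, the first moment obeys dE[n]/dt = k − g·E[n] exactly, with no closure needed, so the stationary mean is k/g. The sketch below checks this against the stationary distribution of a truncated CME generator.

```python
import numpy as np

k, g, N = 4.0, 1.0, 60   # production rate, degradation rate, truncation level

# CME generator for the birth-death chain on states {0, ..., N}.
Q = np.zeros((N + 1, N + 1))
for n in range(N + 1):
    if n < N:
        Q[n, n + 1] = k          # birth: n -> n+1
    if n > 0:
        Q[n, n - 1] = g * n      # death: n -> n-1
    Q[n, n] = -Q[n].sum()

# Stationary distribution: solve pi^T Q = 0 subject to sum(pi) = 1.
Asys = np.vstack([Q.T, np.ones(N + 1)])
b = np.zeros(N + 2)
b[-1] = 1.0
pi = np.linalg.lstsq(Asys, b, rcond=None)[0]

# Stationary first moment matches the moment ODE's fixed point k/g.
mean = pi @ np.arange(N + 1)
print(mean)   # ~ 4.0
```

The stationary distribution here is (truncated) Poisson with parameter k/g, so the truncation at N = 60 introduces only a negligible error; for networks with bimolecular reactions this closed first-moment equation no longer holds in general, which is what motivates the special classes studied in the paper.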
6. M. Marcondes de Freitas and E.D. Sontag. A small-gain theorem for random dynamical systems with inputs and outputs. SIAM J. Control and Optimization, 53:2657-2695, 2015. [PDF] Keyword(s): random
dynamical systems, monotone systems, small-gain theorem, stochastic systems.
Abstract: A formalism for the study of random dynamical systems with inputs and outputs (RDSIO) is introduced. An axiomatic framework and basic properties of RDSIO are developed, and a theorem is shown that guarantees the stability of interconnected systems.
7. A. O. Hamadeh, E.D. Sontag, and D. Del Vecchio. A contraction approach to output tracking via high-gain feedback. In Proc. IEEE Conf. Decision and Control, Dec. 2015, pages 7689-7694, 2015. [PDF]
Abstract: This paper adopts a contraction approach to the analysis of the tracking properties of dynamical systems under high gain feedback when subject to inputs with bounded derivatives. It is shown that if the tracking error dynamics are contracting, then the system is input to output stable with respect to the input signal derivatives and the output tracking error. As an application, it is shown that the negative feedback connection of plants composed of two strictly positive real LTI subsystems in cascade can follow external inputs with tracking errors that can be made arbitrarily small by applying a sufficiently large feedback gain. We utilize this result to design a biomolecular feedback for a synthetic genetic sensor to make it robust to variations in the availability of a cellular resource required for protein production.
8. E.D. Sontag. Incoherent feedforward motifs as immune change detectors. Technical report, bioRxiv http://dx.doi.org/10.1101/035600, December 2015. [PDF] Keyword(s): scale invariance, fcd, fold
change detection, T cells, incoherent feedforward loops, immunology, incoherent feedforward loop, feedforward, IFFL.
Abstract: We speculate that incoherent feedforward loops may be phenomenologically involved in self/nonself discrimination in immune-infection and immune-tumor interactions, acting as "change detectors". In turn, this may result in logarithmic sensing (Weber phenomenon) and even scale invariance (fold-change detection).
1. Z. Aminzare, Y. Shafi, M. Arcak, and E.D. Sontag. Guaranteeing spatial uniformity in reaction-diffusion systems using weighted $L_2$-norm contractions. In V. Kulkarni, G.-B. Stan, and K. Raman,
editors, A Systems Theoretic Approach to Systems and Synthetic Biology I: Models and System Characterizations, pages 73-101. Springer-Verlag, 2014. [PDF] Keyword(s): contractions, contractive
systems, Turing instabilities, diffusion, partial differential equations, synchronization.
Abstract: This paper gives conditions that guarantee spatial uniformity of the solutions of reaction-diffusion partial differential equations, stated in terms of the Jacobian matrix and Neumann eigenvalues of elliptic operators on the given spatial domain, and similar conditions for diffusively-coupled networks of ordinary differential equations. Also derived are numerical tests making use of linear matrix inequalities that are useful in certifying these conditions.
2. Z. Aminzare and E.D. Sontag. Synchronization of diffusively-connected nonlinear systems: results based on contractions with respect to general norms. IEEE Transactions on Network Science and
Engineering, 1(2):91-106, 2014. [PDF] Keyword(s): matrix measures, logarithmic norms, synchronization, consensus, contractions, contractive systems.
Abstract: Contraction theory provides an elegant way to analyze the behavior of certain nonlinear dynamical systems. In this paper, we discuss the application of contraction to synchronization of diffusively interconnected components described by nonlinear differential equations. We provide estimates of convergence of the difference in states between components, in the cases of line, complete, and star graphs, and Cartesian products of such graphs. We base our approach on contraction theory, using matrix measures derived from norms that are not induced by inner products. Such norms are the most appropriate in many applications, but proofs cannot rely upon Lyapunov-like linear matrix inequalities, and different techniques, such as the use of the Perron-Frobenius Theorem in the cases of L1 or L-infinity norms, must be introduced.
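The matrix measures (logarithmic norms) that recur throughout these entries have simple closed forms for the common vector norms: μ₁(A) = maxⱼ (aⱼⱼ + Σᵢ≠ⱼ |aᵢⱼ|), μ₂(A) = λmax((A + Aᵀ)/2), and μ∞(A) = maxᵢ (aᵢᵢ + Σⱼ≠ᵢ |aᵢⱼ|). A small sketch (the test matrix is arbitrary):

```python
import numpy as np

# Closed forms for the matrix measure mu(A) = lim_{h->0+} (||I + hA|| - 1)/h
# induced by the L1, L2, and L-infinity vector norms.
def mu_1(A):
    return max(A[j, j] + np.sum(np.abs(A[:, j])) - np.abs(A[j, j])
               for j in range(A.shape[0]))

def mu_2(A):
    return np.max(np.linalg.eigvalsh((A + A.T) / 2))

def mu_inf(A):
    return max(A[i, i] + np.sum(np.abs(A[i])) - np.abs(A[i, i])
               for i in range(A.shape[0]))

A = np.array([[-2.0, 1.0],
              [ 0.5, -2.0]])
print(mu_1(A), mu_2(A), mu_inf(A))   # all three are negative here
```

A negative matrix measure of the Jacobian, uniformly over the state space, is exactly the contraction condition used in this literature; note that a matrix can have negative measure in one norm but not another, which is why the non-Hilbert (L1, L-infinity) cases matter.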
3. D. Angeli, G.A. Enciso, and E.D. Sontag. A small-gain result for orthant-monotone systems under mixed feedback. Systems and Control Letters, 68:9-19, 2014. [PDF] Keyword(s): small-gain theorem,
monotone systems.
Abstract: This paper introduces a small-gain result for interconnected orthant-monotone systems for which no matching condition is required between the partial orders in input and output spaces. Previous results assumed that the partial orders adopted would be induced by positivity cones in input and output spaces and that such positivity cones should fulfill a compatibility rule: namely either be coincident or be opposite. Those two configurations correspond to positive feedback or negative feedback cases. We relax those results by allowing arbitrary orthant orders.
4. M. Margaliot, E.D. Sontag, and T. Tuller. Entrainment to periodic initiation and transition rates in a computational model for gene translation. PLoS ONE, 9(5):e96039, 2014. [WWW] [PDF] [doi:10.1371/journal.pone.0096039] Keyword(s): ribosomes, entrainment, nonlinear systems, stability, contractions, contractive systems, systems biology, RFM, ribosome flow model.
Abstract: A recent biological study has demonstrated that the gene expression pattern entrains to a periodically varying abundance of tRNA molecules. This motivates developing mathematical tools for analyzing entrainment of translation elongation to intra-cellular signals such as tRNA levels and other factors affecting translation. We consider a recent deterministic mathematical model for translation called the Ribosome Flow Model (RFM). We analyze this model under the assumption that the elongation rate of the tRNA genes and/or the initiation rate are periodic functions with a common period T. We show that the protein synthesis pattern indeed converges to a unique periodic trajectory with period T. The analysis is based on introducing a novel property of dynamical systems, called contraction after a short transient (CAST), that may be of independent interest. We provide a sufficient condition for CAST and use it to prove that the RFM is CAST, and that this implies entrainment. Our results support the conjecture that periodic oscillations in tRNA levels and other factors related to the translation process can induce periodic oscillations in protein levels, and suggest a new approach for engineering genes to obtain a desired, periodic, synthesis rate.
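The entrainment statement can be illustrated numerically with a small RFM. The site count (n = 3), the rates, and the periodic initiation rate below are invented for illustration, and forward-Euler integration stands in for the paper's analysis: with a T-periodic initiation rate, the production rate converges to a T-periodic trajectory.

```python
import numpy as np

def simulate(x0, T_total, dt=1e-3, period=2.0):
    """RFM with 3 sites: lam[0] is the (periodic) initiation rate, lam[1..2]
    elongation rates, lam[3] the exit rate; returns the production rate."""
    x = np.array(x0, float)
    out = np.empty(int(T_total / dt))
    for k in range(len(out)):
        t = k * dt
        lam = np.array([1.0 + 0.5 * np.sin(2 * np.pi * t / period),
                        2.0, 2.0, 1.5])
        dx = np.empty(3)
        dx[0] = lam[0] * (1 - x[0]) - lam[1] * x[0] * (1 - x[1])
        dx[1] = lam[1] * x[0] * (1 - x[1]) - lam[2] * x[1] * (1 - x[2])
        dx[2] = lam[2] * x[1] * (1 - x[2]) - lam[3] * x[2]
        x += dt * dx
        out[k] = lam[3] * x[2]   # protein production rate
    return out

r = simulate([0.1, 0.1, 0.1], T_total=40.0)
# After the transient, consecutive periods of the output coincide:
steps = int(2.0 / 1e-3)
print(np.max(np.abs(r[-steps:] - r[-2 * steps:-steps])))   # small
```

Repeating the simulation from a different initial occupancy converges to the same periodic production pattern, which is the entrainment property the paper proves.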
5. S. Prabakaran, J. Gunawardena, and E.D. Sontag. Paradoxical results in perturbation-based signaling network reconstruction. Biophysical Journal, 106:2720-2728, 2014. [PDF] Keyword(s):
stoichiometry, MAPK cascades, systems biology, biochemical networks, gene and protein networks, reverse engineering, systems identification, retroactivity.
Abstract: This paper describes a potential pitfall of perturbation-based approaches to network inference. It is shown experimentally, and then explained mathematically, how even in the simplest signaling systems, perturbation methods may lead to paradoxical conclusions: for any given pair of two components X and Y, and depending upon the specific intervention on Y, either an activation or a repression of X could be inferred. The experiments are performed in an in vitro minimal system, thus isolating the effect and showing that it cannot be explained by feedbacks due to unknown intermediates; this system utilizes proteins from a pathway in mammalian (and other eukaryotic) cells that plays a central role in proliferation, gene expression, differentiation, mitosis, cell survival, and apoptosis, and is a perturbation target of contemporary therapies for various types of cancers. The results show that the simplistic view of intracellular signaling networks being made up of activation and repression links is seriously misleading, and call for a fundamental rethinking of signaling network analysis and inference methods.
6. T.H. Segall-Shapiro, A.J. Meyer, A.D. Ellington, E.D. Sontag, and C.A. Voigt. A `resource allocator' for transcription based on a highly fragmented T7 RNA polymerase. Molecular Systems Biology,
10:742-, 2014. [WWW] [PDF] Keyword(s): systems biology, synthetic biology, gene expression.
Abstract: A transcriptional system is built based on a 'resource allocator' that sets a core RNAP concentration, which is then shared by multiple sigma fragments, which provide specificity. Adjusting the concentration of the core sets the maximum transcriptional capacity available to a synthetic system.
7. E.D. Sontag. A technique for determining the signs of sensitivities of steady states in chemical reaction networks. IET Systems Biology, 8:251-267, 2014. Note: Code is here: https://github.com/sontaglab/CRNSeSi. [PDF] Keyword(s): sensitivity, retroactivity, biomolecular networks, systems biology, stoichiometry, biochemical networks.
Abstract: This paper studies the direction of change of steady states in response to parameter perturbations in chemical reaction networks, and, in particular, to changes in conserved quantities. Theoretical considerations lead to the formulation of a computational procedure that provides a set of possible signs of such sensitivities. The procedure is purely algebraic and combinatorial, only using information on stoichiometry, and is independent of the values of kinetic constants. Two examples of important intracellular signal transduction models are worked out as an illustration. In these examples, the set of signs found is minimal, but there is no general guarantee that the set found will always be minimal in other examples. The paper also briefly discusses the relationship of the sign problem to the question of uniqueness of steady states in stoichiometry classes.
8. Z. Aminzare and E.D. Sontag. Contraction methods for nonlinear systems: A brief introduction and some open problems. In Proc. IEEE Conf. Decision and Control, Los Angeles, Dec. 2014, pages
3835-3847, 2014. [PDF] Keyword(s): contractions, contractive systems, stability, reaction-diffusion PDE's, synchronization.
Abstract: Contraction theory provides an elegant way to analyze the behaviors of certain nonlinear dynamical systems. Under hypotheses that are sometimes easy to check, systems can be shown to have the incremental stability property that trajectories converge to each other. The present paper provides a self-contained introduction to some of the basic concepts and results in contraction theory, discusses applications to synchronization and to reaction-diffusion partial differential equations, and poses several open questions.
9. Z. Aminzare and E.D. Sontag. Remarks on diffusive-link synchronization using non-Hilbert logarithmic norms. In Proc. IEEE Conf. Decision and Control, Los Angeles, Dec. 2014, pages 6086-6091,
2014. Keyword(s): contractions, contractive systems, stability, reaction-diffusion PDE's, synchronization.
Abstract: In this paper, we sketch recent results for synchronization in a network of identical ODE models which are diffusively interconnected. In particular, we provide estimates of convergence of the difference in states between components, in the cases of line, complete, and star graphs, and Cartesian products of such graphs.
10. M. Skataric, E.V. Nikolaev, and E.D. Sontag. Scale-invariance in singularly perturbed systems. In Proc. IEEE Conf. Decision and Control, Los Angeles, Dec. 2014, pages 3035-3040, 2014. [PDF]
Keyword(s): adaptation, biological adaptation, perfect adaptation, singular perturbations, scale invariance, systems biology, transient behavior, symmetries, fcd, fold-change detection,
incoherent feedforward loop, feedforward, IFFL.
Abstract: This conference paper (a) summarizes material from "A fundamental limitation to fold-change detection by biological systems with multiple time scales" (IET Systems Biology 2014) and presents additional remarks regarding (b) expansion techniques to compute FCD error and (c) stochastic adaptation and FCD.
11. M. Skataric and E.D. Sontag. Remarks on model-based estimation of nonhomogeneous Poisson processes and applications to biological systems. In Proc. European Control Conference, Strasbourg,
France, June 2014, pages 2052-2057, 2014. [PDF] Keyword(s): systems biology, random dynamical systems.
Abstract: This paper studies model-based methods for estimating the rate of a nonhomogeneous Poisson process that describes events arising from modeling biological phenomena in which discrete events are measured. We describe an approach based on observers and Kalman filters as well as preliminary simulation results, and compare these to other (non-model-based) methods in the literature. The problem is motivated by the question of identification of internal states from neural spikes and bacterial tumbling behavior.
12. E.D. Sontag. Quantifying the effect of interconnections on the steady states of biomolecular networks. In Proc. IEEE Conf. Decision and Control, Los Angeles, Dec. 2014, pages 5419-5424, 2014.
13. E.D. Sontag, M. Margaliot, and T. Tuller. On three generalizations of contraction. In Proc. IEEE Conf. Decision and Control, Los Angeles, Dec. 2014, pages 1539-1544, 2014. Keyword(s):
contractions, contractive systems, stability.
Abstract: We introduce three forms of generalized contraction (GC). Roughly speaking, these are motivated by allowing contraction to take place after small transients in time and/or amplitude. Indeed, contraction is usually used to prove asymptotic properties, like convergence to an attractor or entrainment to a periodic excitation, and allowing initial transients does not affect this asymptotic behavior. We provide sufficient conditions for GC, and demonstrate their usefulness using examples of systems that are not contractive, with respect to any norm, yet are GC.
14. J. Barton and E.D. Sontag. Remarks on the energy costs of insulators in enzymatic cascades. Technical report, http://arxiv.org/abs/1412.8065, December 2014. [PDF] Keyword(s): retroactivity,
systems biology, biochemical networks, futile cycles, singular perturbations, modularity.
Abstract: The connection between optimal biological function and energy use, measured for example by the rate of metabolite consumption, is a current topic of interest in the systems biology literature which has been explored in several different contexts. In [J. P. Barton and E. D. Sontag, Biophys. J. 104, 6 (2013)], we related the metabolic cost of enzymatic futile cycles with their capacity to act as insulators which facilitate modular interconnections in biochemical networks. There we analyzed a simple model system in which a signal molecule regulates the transcription of one or more target proteins by interacting with their promoters. In this note, we consider the case of a protein with an active and an inactive form, and whose activation is controlled by the signal molecule. As in the original case, higher rates of energy consumption are required for better insulator performance.
1. D. Angeli and E.D. Sontag. Behavior of responses of monotone and sign-definite systems. In K. Hüper and Jochen Trumpf, editors, Mathematical System Theory - Festschrift in Honor of Uwe Helmke on the Occasion of his Sixtieth Birthday, pages 51-64. CreateSpace, 2013. [PDF] Keyword(s): monotone systems, reverse engineering, systems biology.
Abstract: This paper studies systems with sign-definite interactions between variables, providing a sufficient condition to characterize the possible transitions between intervals of increasing and decreasing behavior. It also provides a discussion illustrating how our approach can help identify interactions in models, using information from time series of observations.
2. M. Marcondes de Freitas and E.D. Sontag. Random dynamical systems with inputs. In C. Pötzsche and P. Kloeden, editors, Nonautonomous Dynamical Systems in the Life Sciences, Lecture Notes in Mathematics vol. 2102, pages 41-87. Springer-Verlag, 2013. [PDF] Keyword(s): random dynamical systems, monotone systems.
Abstract: This work introduces a notion of random dynamical systems with inputs, providing several basic definitions and results on equilibria and convergence. It also presents a "converging input to converging state" result, a concept that plays a key role in the analysis of stability of feedback interconnections, for monotone systems.
3. Z. Aminzare and E.D. Sontag. Logarithmic Lipschitz norms and diffusion-induced instability. Nonlinear Analysis: Theory, Methods & Applications, 83:31-49, 2013. [PDF] Keyword(s): contractions,
contractive systems, matrix measures, logarithmic norms, Turing instabilities, diffusion, partial differential equations, synchronization.
Abstract: This paper proves that ordinary differential equation systems that are contractive with respect to $L^p$ norms remain so when diffusion is added. Thus, diffusive instabilities, in the sense of the Turing phenomenon, cannot arise for such systems, and in fact any two solutions converge exponentially to each other. The key tools are semi-inner products and logarithmic Lipschitz constants in Banach spaces. An example from biochemistry is discussed, which shows the necessity of considering non-Hilbert spaces. An analogous result for graph-defined interconnections of systems defined by ordinary differential equations is given as well.
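The basic contraction estimate underlying results of this kind can be stated compactly (standard form; $\mu_p$ denotes the logarithmic norm induced by the $L^p$ norm): if $\mu_p(J_f(x)) \le -c < 0$ for all $x$ in a convex invariant set, then any two solutions $x(t)$, $z(t)$ of $\dot x = f(x)$ satisfy

```latex
\|x(t)-z(t)\|_p \;\le\; e^{-ct}\,\|x(0)-z(0)\|_p .
```

The paper's contribution is that this exponential incremental estimate survives the addition of diffusion terms, ruling out Turing-type instabilities for such systems.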
4. J. Barton and E.D. Sontag. The energy costs of insulators in biochemical networks. Biophysical Journal, 104:1380-1390, 2013. [PDF] Keyword(s): biochemical networks, futile cycles, enzymatic
cycles, cell signaling, retroactivity, modularity, systems biology.
Abstract: Complex networks of biochemical reactions, such as intracellular protein signaling pathways and genetic networks, are often conceptualized in terms of "modules," semi-independent collections of components that perform a well-defined function and which may be incorporated in multiple pathways. However, due to sequestration of molecular messengers during interactions and other effects, collectively referred to as retroactivity, real biochemical systems do not exhibit perfect modularity. Biochemical signaling pathways can be insulated from impedance and competition effects, which inhibit modularity, through enzymatic "futile cycles" which consume energy, typically in the form of ATP. We hypothesize that better insulation necessarily requires higher energy consumption. We test this hypothesis through a combined theoretical and computational analysis of a simplified physical model of covalent cycles, using two innovative measures of insulation, as well as a new way to characterize optimal insulation through the balancing of these two measures in a Pareto sense. Our results indicate that indeed better insulation requires more energy. While insulation may facilitate evolution by enabling a modular "plug and play" interconnection architecture, allowing for the creation of new behaviors by adding targets to existing pathways, our work suggests that this potential benefit must be balanced against the metabolic costs of insulation necessarily incurred in not affecting the behavior of existing processes.
5. A.O. Hamadeh, B.P. Ingalls, and E.D. Sontag. Transient dynamic phenotypes as criteria for model discrimination: fold-change detection in Rhodobacter sphaeroides chemotaxis. Proc. Royal Society
Interface, 10:20120935, 2013. [PDF] Keyword(s): adaptation, biological adaptation, perfect adaptation, scale invariance, systems biology, transient behavior, symmetries, fcd, fold-change
detection, chemotaxis.
Abstract: The chemotaxis pathway of the bacterium Rhodobacter sphaeroides has many similarities to that of Escherichia coli. It exhibits robust adaptation and has several homologues of the latter's chemotaxis proteins. Recent theoretical results have correctly predicted that, in response to a scaling of its ligand input signal, Escherichia coli exhibits the same output behavior, a property known as fold-change detection (FCD). In light of recent experimental results suggesting that R. sphaeroides may also show FCD, we present theoretical assumptions on the R. sphaeroides chemosensory dynamics that can be shown to yield FCD behavior. Furthermore, it is shown that these assumptions make FCD a property of this system that is robust to structural and parametric variations in the chemotaxis pathway, in agreement with experimental results. We construct and examine models of the full chemotaxis pathway that satisfy these assumptions and reproduce experimental time-series data from earlier studies. We then propose experiments in which models satisfying our theoretical assumptions predict robust FCD behavior where earlier models do not. In this way, we illustrate how transient dynamic phenotypes such as FCD can be used for the purposes of discriminating between models that reproduce the same experimental time-series data.
6. T. Kang, J.T. White, Z. Xie, Y. Benenson, E.D. Sontag, and L. Bleris. Reverse engineering validation using a benchmark synthetic gene circuit in human cells. ACS Synthetic Biology, 2:255-262,
2013. [PDF] Keyword(s): reverse engineering, systems biology, synthetic biology.
Abstract: This work introduces an experimental platform customized for the development and verification of reverse engineering and pathway characterization algorithms in mammalian cells. Specifically, we stably integrate a synthetic gene network in human kidney cells and use it as a benchmark for validating reverse engineering methodologies. The network, which is orthogonal to endogenous cellular signaling, contains a small set of regulatory interactions that can be used to quantify the reconstruction performance. By performing successive perturbations to each modular component of the network and comparing protein and RNA measurements, we study the conditions under which we can reliably reconstruct the causal relationships of the integrated synthetic network.
7. L. Liu, G. Duclos, B. Sun, J. Lee, A. Wu, Y. Kam, E.D. Sontag, H.A. Stone, J.C. Sturm, R.A. Gatenby, and R.H. Austin. Minimization of thermodynamic costs in cancer cell invasion. Proc Natl Acad
Sci USA, 110:1686-1691, 2013. [PDF] Keyword(s): chemotaxis, cancer, metastasis.
Abstract: This paper shows that metastatic breast cancer cells cooperatively invade a 3D collagen matrix while following a glucose gradient. The front cell leadership is dynamic, and invading cells act in a cooperative manner by exchanging leaders in the invading front.
8. G. Russo, M. di Bernardo, and E.D. Sontag. A contraction approach to the hierarchical analysis and design of networked systems. IEEE Transactions Autom. Control, 58:1328-1331, 2013. [PDF] Keyword(s): contractions, contractive systems, matrix measures, logarithmic norms, synchronization, systems biology.
Abstract: This paper studies networks of components, and shows that a contraction property on the interconnection matrix, coupled with contractivity of the individual component subsystems, suffices to ensure contractivity of the overall system.
9. V. Shimoga, J.T. White, Y. Li, E.D. Sontag, and L. Bleris. Synthetic mammalian transgene negative autoregulation. Molecular Systems Biology, 9:670-, 2013. [PDF] Keyword(s): systems biology,
synthetic biology, gene expression.
Abstract: Using synthetic circuits stably integrated in human kidney cells, we study the effect of negative feedback regulation on cell-wide (extrinsic) and gene-specific (intrinsic) sources of uncertainty. We develop a theoretical approach to extract the two noise components from experiments and show that negative feedback reduces extrinsic noise while marginally increasing intrinsic noise, resulting in a significant total noise reduction. We compare the results to simple negative regulation, where a constitutively transcribed transcription factor represses a reporter protein. We observe that this control architecture also reduces the extrinsic noise but results in substantially higher intrinsic fluctuations. We conclude that negative feedback is the most efficient way to mitigate the effects of extrinsic fluctuations through a single regulatory link.
10. A. White, B. Lees, H.-L. Kao, G. Cipriani, E. Munarriz, A. Paaby, K. Erickson, S. Guzman, K. Rattanakorn, E.D. Sontag, D. Geiger, K. Gunsalus, and F. Piano. DevStaR: A novel algorithm for
quantitative phenotyping of C. elegans development. IEEE Transactions on Medical Imaging, 32:1791-1803, 2013. [PDF]
11. A. O. Hamadeh, E.D. Sontag, and B.P. Ingalls. Response time re-scaling and Weber's law in adapting biological systems. In Proc. American Control Conference, pages 4564-4569, 2013. [PDF] Keyword
(s): adaptation, biological adaptation, perfect adaptation, scale invariance, systems biology, transient behavior, symmetries, fcd, fold-change detection, chemotaxis.
Abstract: Recent experimental work has shown that the transient E. coli chemotactic response is unchanged by a scaling of its ligand input signal (fold change detection, or FCD), in agreement with earlier mathematical predictions. However, this prediction was based on certain particular assumptions on the structure of the chemotaxis pathway. In this work, we begin by showing that behavior similar to FCD can be obtained under weaker conditions on the system structure. Namely, we show that under relaxed conditions, a scaling of the chemotaxis system's inputs leads to a time scaling of the output response. We propose that this may be a contributing factor to the robustness of the experimentally observed FCD. We further show that FCD is a special case of this time-scaling behavior for which the time-scaling factor is unity. We then proceed to extend the conditions for output time scaling to more general adapting systems, and demonstrate this time-scaling behavior on a published model of the chemotaxis pathway of the bacterium Rhodobacter sphaeroides. This work therefore provides examples of how robust biological behavior can arise from simple yet realistic conditions on the underlying system structure.
12. Y. Shafi, Z. Aminzare, M. Arcak, and E.D. Sontag. Spatial uniformity in diffusively-coupled systems using weighted L2 norm contractions. In Proc. American Control Conference, pages 5639-5644,
2013. [PDF] Keyword(s): contractions, contractive systems, matrix measures, logarithmic norms, Turing instabilities, diffusion, partial differential equations, synchronization.
Abstract: We present conditions that guarantee spatial uniformity in diffusively-coupled systems. Diffusive coupling is a ubiquitous form of local interaction, arising in diverse areas including multiagent coordination and pattern formation in biochemical networks. The conditions we derive make use of the Jacobian matrix and Neumann eigenvalues of elliptic operators, and generalize and unify existing theory about asymptotic convergence of trajectories of reaction-diffusion partial differential equations as well as compartmental ordinary differential equations. We present numerical tests making use of linear matrix inequalities that may be used to certify these conditions. We discuss an example pertaining to electromechanical oscillators. The paper's main contributions are unified verifiable relaxed conditions that guarantee synchrony.
13. M. Marcondes de Freitas and E.D. Sontag. A class of random control systems: Monotonicity and the convergent-input convergent-state property. In Proc. American Control Conference, pages 4564-4569,
2013. [PDF] Keyword(s): random dynamical systems, monotone systems.
14. Z. Aminzare and E. D. Sontag. Remarks on a population-level model of chemotaxis: advection-diffusion approximation and simulations. Technical report, arxiv:1302.2605, 2013. [PDF]
Abstract: This note works out an advection-diffusion approximation to the density of a population of E. coli bacteria undergoing chemotaxis in a one-dimensional space. Simulations show the high quality of predictions under a shallow-gradient regime.
15. E.D. Sontag. A remark about polynomials with specified local minima and no other critical points. Technical report, arxiv 1302.0759, 2013. [PDF]
Abstract: The following observation must surely be "well-known", but it seems worth giving a simple and quite explicit proof. Take any finite subset X of Rn, n>1. Then, there is a polynomial function P:Rn -> R which has local minima on the set X, and has no other critical points. Applied to the negative gradient flow of P, this implies that there is a polynomial vector field with asymptotically stable equilibria on X and no other equilibria. Some trajectories of this vector field are not pre-compact; a complementary observation says that, again for arbitrary X, one can find a vector field with asymptotically stable equilibria on X, no other equilibria except saddles, and all omega-limit sets consisting of singletons.
1. M. Miller, M. Hafner, E.D. Sontag, N. Davidsohn, S. Subramanian, P. E. M. Purnick, D. Lauffenburger, and R. Weiss. Modular design of artificial tissue homeostasis: robust control through
synthetic cellular heterogeneity. PLoS Computational Biology, 8:e1002579-, 2012. [PDF] Keyword(s): systems biology, homeostasis, stem cells, synthetic biology.
Abstract: Synthetic biology efforts have largely focused on small engineered gene networks, yet understanding how to integrate multiple synthetic modules and interface them with endogenous pathways remains a challenge. Here we present the design, system integration, and analysis of several large-scale synthetic gene circuits for artificial tissue homeostasis. Diabetes therapy represents a possible application for engineered homeostasis, where genetically programmed stem cells maintain a steady population of beta-cells despite continuous turnover. We develop a new iterative process that incorporates modular design principles with hierarchical performance optimization targeted for environments with uncertainty and incomplete information. We employ theoretical analysis and computational simulations of multicellular reaction/diffusion models to design and understand system behavior, and find that certain features often associated with robustness (e.g., multicellular synchronization and noise attenuation) are actually detrimental for tissue homeostasis. We overcome these problems by engineering a new class of genetic modules for 'synthetic cellular heterogeneity' that function to generate beneficial population diversity. We design two such modules (an asynchronous genetic oscillator and a signaling throttle mechanism), demonstrate their capacity for enhancing robust control, and provide guidance for experimental implementation with various computational techniques. We found that designing modules for synthetic heterogeneity can be complex, and in general requires a framework for non-linear and multifactorial analysis. Consequently, we adapt a 'phenotypic sensitivity analysis' method to determine how functional module behaviors combine to achieve optimal system performance. We ultimately combine this analysis with Bayesian network inference to extract critical, causal relationships between a module's biochemical rate-constants, its high-level functional behavior in isolation, and its impact on overall system performance once integrated.
2. A. Rufino Ferreira, M. Arcak, and E.D. Sontag. Stability certification of large scale stochastic systems using dissipativity of subsystems. Automatica, 48:2956-2964, 2012. [PDF] Keyword(s):
stochastic systems, passivity, noise-to-state stability, ISS, input to state stability.
Abstract: This paper deals with the stability of interconnections of nonlinear stochastic systems, using concepts of passivity and noise-to-state stability.
3. M. Skataric and E.D. Sontag. A characterization of scale invariant responses in enzymatic networks. PLoS Computational Biology, 8:e1002748, 2012. [PDF] Keyword(s): adaptation, biological
adaptation, perfect adaptation, scale invariance, systems biology, transient behavior, symmetries, fcd, fold-change detection.
Abstract: This paper studies a remarkable feature recently discovered in many adapting systems: scale invariance, meaning that the initial, transient behavior stays approximately the same when the background signal level is scaled. Not every adapting system is scale-invariant: we investigate under which conditions a broadly used model of biochemical enzymatic networks will show scale-invariant behavior. For all 3-node enzymatic networks, we performed a wide computational study to find candidates for scale invariance among 16,038 possible topologies. This effort led us to discover a new necessary and sufficient mechanism that explains the behavior of all 3-node enzyme networks that have this property, which we call ``uniform linearizations with fast output''. We also apply our theoretical results to a concrete biological example of order six, a model of the response of the chemotaxis signaling pathway of Dictyostelium discoideum to changes in chemoeffector cyclic adenosine monophosphate (cAMP).
4. K. Wood, S. Nishida, E.D. Sontag, and P. Cluzel. Mechanism-independent method for predicting response to multiple drug exposure in bacteria. Proc Natl Acad Sci USA, 109:12254-12259, 2012. [PDF]
Keyword(s): systems biology, drug interactions.
Abstract: Drugs are commonly used in combinations larger than two for treating bacterial infections. It is generally impossible to infer directly from the effects of individual drugs the net effect of a multi-drug combination. This paper describes an empirically derived, mechanism-independent method for predicting the microbial growth response to combinations of more than two drugs, experimentally tested on both gram-negative (Escherichia coli) and gram-positive (Staphylococcus aureus) bacteria. The method shows that for a wide range of drugs, the bacterial responses to drug pairs are sufficient to infer the effects of larger drug combinations, and provides a simple formula for the prediction.
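The abstract does not quote the prediction formula itself. As an illustration only (the multiplicative rule and all numbers below are assumptions, not taken from the paper), a pairwise-to-triple prediction of this general flavor could be sketched as:

```python
def predict_triple(g, pair):
    """Hypothetical multiplicative rule: predict relative growth under a
    three-drug combination from single-drug and pairwise responses.
    g: dict mapping drug -> relative growth rate under that drug alone;
    pair: dict mapping frozenset({a, b}) -> relative growth under the pair."""
    a, b, c = g  # dicts preserve insertion order in Python 3.7+
    num = pair[frozenset({a, b})] * pair[frozenset({a, c})] * pair[frozenset({b, c})]
    den = g[a] * g[b] * g[c]
    return num / den

# Toy numbers, purely illustrative:
g = {"A": 0.8, "B": 0.6, "C": 0.9}
pair = {frozenset({"A", "B"}): 0.50,
        frozenset({"A", "C"}): 0.70,
        frozenset({"B", "C"}): 0.55}
print(predict_triple(g, pair))  # predicted relative growth under A+B+C
```

The point of such a rule is the one the abstract makes: only single-drug and pairwise measurements enter the prediction, so no mechanistic model of the drugs' interactions is needed.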
5. D. Angeli and E.D. Sontag. Remarks on the invalidation of biological models using monotone systems theory. In Proc. IEEE Conf. Decision and Control, Maui, Dec. 2012, 2012. Note: Paper TuC09.3.
Abstract: This paper presents techniques for finding out what types of solutions are compatible with a given sign pattern of interactions between state/input variables once the input behaviour is also known. By ``type'' of solutions we essentially refer to the sequence of upwards or downwards segments that variables can exhibit (essentially, sign-patterns of the variables' derivatives) once input profiles are also specified. A concrete experimental example of how such techniques can invalidate models is also provided.
6. A.O. Hamadeh, B.P. Ingalls, and E.D. Sontag. Fold-Change Detection As a Chemotaxis Model Discrimination Tool. In Proc. IEEE Conf. Decision and Control, Maui, Dec. 2012, 2012. Note: Paper
WeC09.2.Keyword(s): adaptation, biological adaptation, perfect adaptation, scale invariance, systems biology, transient behavior, symmetries, fcd, fold-change detection, chemotaxis.
7. A. Rufino Ferreira, M. Arcak, and E.D. Sontag. A decomposition-based approach to stability analysis of large-scale stochastic systems. In Proceedings of the 2012 American Control Conference, Montreal, June 2012, 2012. Note: Paper FrC10.4. Keyword(s): stochastic systems, passivity, noise-to-state stability.
Abstract: Conference version of ``Stability certification of large scale stochastic systems using dissipativity of subsystems''.
8. M. Skataric and E.D. Sontag. Exploring the scale invariance property in enzymatic networks. In Proc. IEEE Conf. Decision and Control, Maui, Dec. 2012, 2012. Note: Paper WeC09.2.Keyword(s):
adaptation, biological adaptation, perfect adaptation, scale invariance, systems biology, transient behavior, symmetries, fcd, fold-change detection, enzymatic networks.
Abstract: This is a conference version of ``A characterization of scale invariant responses in enzymatic networks''.
9. J. Barton and E.D. Sontag. The energy costs of biological insulators. Technical report, http://arxiv.org/abs/1210.3809, October 2012. Keyword(s): retroactivity, systems biology, biochemical
networks, futile cycles, singular perturbations, modularity.
Abstract: Biochemical signaling pathways can be insulated from impedance and competition effects through enzymatic "futile cycles" which consume energy, typically in the form of ATP. We hypothesize that better insulation necessarily requires higher energy consumption, and provide evidence, through the computational analysis of a simplified physical model, to support this hypothesis.
10. M. Marcondes de Freitas and E.D. Sontag. Remarks on random dynamical systems with inputs and outputs and a small-gain theorem for monotone RDS. Technical report, http://arxiv.org/abs/1207.1690,
July 2012. Keyword(s): random dynamical systems, monotone systems.
1. G. Craciun, C. Pantea, and E.D. Sontag. Graph-theoretic analysis of multistability and monotonicity for biochemical reaction networks. In H. Koeppl, G. Setti, M. di Bernardo, and D. Densmore,
editors, Design and Analysis of Biomolecular Circuits, pages 63-72. Springer-Verlag, 2011. [PDF] Keyword(s): biochemical networks, monotone systems.
Abstract: This is a short expository article describing how the species-reaction graph (SR graph) can be used to analyze both multistability and monotonicity of biochemical networks.
2. E.D. Sontag. Input to State Stability. In W. S. Levine, editor, The Control Systems Handbook: Control System Advanced Methods, Second Edition., pages 45.1-45.21 (1034-1054). CRC Press, Boca
Raton, 2011. [PDF] Keyword(s): input to state stability, integral input to state stability, iISS, ISS, input to output stability.
Abstract: An encyclopedia-type article on foundations of ISS.
3. E.D. Sontag. Modularity, retroactivity, and structural identification. In H. Koeppl, G. Setti, M. di Bernardo, and D. Densmore, editors, Design and Analysis of Biomolecular Circuits, pages
183-202. Springer-Verlag, 2011. [PDF] Keyword(s): modularity, retroactivity, identification.
Abstract: Many reverse-engineering techniques in systems biology rely upon data on steady-state (or dynamic) perturbations --obtained from siRNA, gene knock-down or overexpression, kinase and phosphatase inhibitors, or other interventions-- in order to understand the interactions between different ``modules'' in a network. This paper first reviews one such popular technique, introduced by the author and collaborators, and focuses on why conclusions drawn from its use may be misleading due to ``retroactivity'' (impedance or load) effects. A theoretical result characterizing stoichiometry-induced steady-state retroactivity effects is given for a class of biochemical networks.
4. E.D. Sontag. Stability and feedback stabilization. In Robert Meyers, editor, Mathematics of Complexity and Dynamical Systems, pages 1639-1652. Springer-Verlag, Berlin, 2011. [PDF] Keyword(s):
stability, nonlinear control, feedback stabilization.
Abstract: The problem of stabilization of equilibria is one of the central issues in control. In addition to its intrinsic interest, it represents a first step towards the solution of more complicated problems, such as the stabilization of periodic orbits or general invariant sets, or the attainment of other control objectives, such as tracking, disturbance rejection, or output feedback, all of which may be interpreted as requiring the stabilization of some quantity (typically, some sort of ``error'' signal). A very special case, when there are no inputs, is that of stability. This short and informal article provides an introduction to the subject.
5. A.R. Teel, T.T. Georgiou, L. Praly, and E.D. Sontag. Input-Output Stability. In W. S. Levine, editor, The Control Systems Handbook: Control System Advanced Methods, Second Edition., pages
44.1-44.23 (1011-1033). CRC Press, Boca Raton, 2011. [PDF]
Abstract: An encyclopedia-type article on foundations of input/output stability.
6. R. Albert, B. DasGupta, R. Hegde, G.S. Sivanathan, A. Gitter, G. Gürsoy, P. Paul, and E.D. Sontag. A new computationally efficient measure of topological redundancy of biological and social
networks. Physical Review E, 84:036117, 2011. [PDF]
Abstract: In this paper, we introduce a topological redundancy measure for labeled directed networks that is formal, computationally efficient, and applicable to a variety of directed networks such as cellular signaling, metabolic, and social interaction networks. We demonstrate the computational efficiency of our measure by computing its value and statistical significance on a number of biological and social networks with up to several thousands of nodes and edges. Our results suggest a number of interesting observations: (1) social networks are more redundant than their biological counterparts, (2) transcriptional networks are less redundant than signaling networks, (3) the topological redundancy of the C. elegans metabolic network is largely due to its inclusion of currency metabolites, and (4) the redundancy of signaling networks is highly (negatively) correlated with the monotonicity of their dynamics.
7. D. Angeli, P. de Leenheer, and E.D. Sontag. Persistence results for chemical reaction networks with time-dependent kinetics and no global conservation laws. SIAM Journal on Applied Mathematics,
71:128-146, 2011. [PDF] Keyword(s): biochemical networks, fluxes, Petri nets, persistence, biochemical networks with inputs.
Abstract: New checkable criteria for persistence of chemical reaction networks are proposed, which extend and complement existing ones. The new results allow the consideration of reaction rates which are time-varying, thus incorporating the effects of external signals, and also relax the assumption of existence of global conservation laws, thus allowing for inflows (production) and outflows (degradation). For time-invariant networks, parameter-dependent conditions for persistence of certain classes of networks are provided. As an illustration, two networks arising in the systems biology literature are analyzed, namely a hypoxia and an apoptosis network.
8. L. Bleris, Z. Xie, D. Glass, A. Adadey, E.D. Sontag, and Y. Benenson. Synthetic incoherent feed-forward circuits show adaptation to the amount of their genetic template. Molecular Systems Biology
, 7:519-, 2011. [PDF] Keyword(s): adaptation, feedforward loops, systems biology, synthetic biology, incoherent feedforward loop, feedforward, IFFL.
Abstract: Natural and synthetic biological networks must function reliably in the face of fluctuating stoichiometry of their molecular components. These fluctuations are caused in part by changes in relative expression efficiency and the DNA template amount of the network-coding genes. Gene product levels could potentially be decoupled from these changes via built-in adaptation mechanisms, thereby boosting network reliability. Here we show that a mechanism based on an incoherent feed-forward motif enables adaptive gene expression in mammalian cells. We modeled, synthesized, and tested transcriptional and post-transcriptional incoherent loops and found that in all cases the gene product adapts to changes in DNA template abundance. We also observed that the post-transcriptional form results in superior adaptation behavior, higher absolute expression levels, and lower intrinsic fluctuations. Our results support a previously-hypothesized endogenous role in gene dosage compensation for such motifs and suggest that their incorporation in synthetic networks will improve their robustness and reliability.
9. S.N. Dashkovskiy, D.V. Efimov, and E.D. Sontag. Ustoichivost' ot vhoda k sostoyaniu i smezhnie svoystva sistem (In Russian, Input to state stability and allied system properties). Avtomatika i
Telemekhanika (Automation and Remote Control), 72(8):1579-1614, 2011. [PDF]
10. A.C. Jiang, A. C. Ventura, E. D. Sontag, S. D. Merajver, A. J. Ninfa, and D. Del Vecchio. Load-induced modulation of signal transduction networks. Science Signaling, 4, issue 194:ra67, 2011. [PDF
] Keyword(s): systems biology, biochemical networks, synthetic biology, futile cycles, singular perturbations, modularity.
Abstract: Biological signal transduction networks are commonly viewed as circuits that pass along information --in the process amplifying signals, enhancing sensitivity, or performing other signal-processing functions-- to transcriptional and other components. Here, we report on a "reverse-causality" phenomenon, which we call load-induced modulation. Through a combination of analytical and experimental tools, we discovered that signaling was modulated, in a surprising way, by downstream targets that receive the signal and, in doing so, apply what in physics is called a load. Specifically, we found that non-intuitive changes in response dynamics occurred for a covalent modification cycle when load was present. Loading altered the response time of a system, depending on whether the activity of one of the enzymes was maximal and the other was operating at its minimal rate, or whether both enzymes were operating at submaximal rates. These two conditions, which we call "limit regime" and "intermediate regime," were associated with increased or decreased response times, respectively. The bandwidth, the range of frequency in which the system can process information, decreased in the presence of load, suggesting that downstream targets participate in establishing a balance between noise-filtering capabilities and the system's ability to process high-frequency stimulation. Nodes in a signaling network are thus not independent relay devices, but rather are modulated by their downstream targets.
11. O. Shoval, U. Alon, and E.D. Sontag. Symmetry invariance for adapting biological systems. SIAM Journal on Applied Dynamical Systems, 10:857-886, 2011. Note: (See here for a small typo: http://
www.sontaglab.org/FTPDIR/shoval.alon.sontag.erratum.pdf). [PDF] Keyword(s): identifiability, adaptation, biological adaptation, perfect adaptation, adaptation, feedforward loops, integral
feedback, scale invariance, systems biology, transient behavior, symmetries, fcd, fold-change detection, incoherent feedforward loop, feedforward, IFFL.
Abstract: Often, the ultimate goal of regulation is to maintain a narrow range of concentration levels of vital quantities (homeostasis, adaptation) while at the same time appropriately reacting to changes in the environment (signal detection or sensitivity). Much theoretical, modeling, and analysis effort has been devoted to the understanding of these questions, traditionally in the context of steady-state responses to constant or step-changing stimuli. In this paper, we present a new theorem that provides a necessary and sufficient characterization of invariance of transient responses to symmetries in inputs. A particular example of this property, scale invariance (a.k.a. "fold change detection"), appears to be exhibited by biological sensory systems ranging from bacterial chemotaxis pathways to signal transduction mechanisms in eukaryotes. The new characterization amounts to the solvability of an associated partial differential equation. It is framed in terms of a notion which considerably extends equivariant actions of compact Lie groups. For several simple system motifs that are recurrent in biology, the solvability criterion may be checked explicitly.
12. D. Angeli and E.D. Sontag. A small-gain result for orthant-monotone systems in feedback: the non sign-definite case. In Proc. IEEE Conf. Decision and Control, Orlando, Dec. 2011, pages WeC09.1,
2011. Keyword(s): small-gain theorem, monotone systems.
Abstract: This note introduces a small-gain result for interconnected MIMO orthant-monotone systems for which no matching condition is required between the partial orders in input and output spaces of the considered subsystems. Previous results assumed that the partial orders adopted would be induced by positivity cones in input and output spaces, and that such positivity cones should fulfill a compatibility rule: namely, either be coincident or be opposite. Those two configurations corresponded to the positive-feedback and negative-feedback cases. We relax those results by allowing arbitrary orthant orders.
13. O. Shoval, U. Alon, and E.D. Sontag. Input symmetry invariance, and applications to biological systems. In Proc. IEEE Conf. Decision and Control, Orlando, Dec. 2011, pages TuA02.5, 2011. Keyword
(s): adaptation, biological adaptation, perfect adaptation, adaptation, feedforward loops, integral feedback, scale invariance, systems biology, transient behavior, symmetries, fcd, fold-change
detection, jump Markov processes.
Abstract: This paper studies invariance with respect to symmetries in sensory fields, a particular case of which, scale invariance, has recently been found in certain eukaryotic as well as bacterial cell signaling systems. We describe a necessary and sufficient characterization of symmetry invariance in terms of equivariant transformations, show how this characterization helps find all possible symmetries in standard models of biological adaptation, and discuss symmetry-invariant searches.
14. E.D. Sontag. Remarks on invariance of population distributions for systems with equivariant internal dynamics. Technical report, arxiv.1108.3245, August 2011. [PDF] Keyword(s): adaptation,
biological adaptation, perfect adaptation, scale invariance, systems biology, transient behavior, symmetries, fcd, fold-change detection, jump Markov processes.
1. R. Albert, B. Dasgupta, and E.D. Sontag. Inference of signal transduction networks from double causal evidence. In David Fenyö, editor, Computational Biology, Methods in Molecular Biology vol. 673
, pages 239-251. Springer, 2010. [PDF] Keyword(s): systems biology, biochemical networks, algorithms, signal transduction networks, graph algorithms.
Abstract: We present a novel computational method, and related software, to synthesize signal transduction networks from single and double causal evidence.
2. B. Dasgupta, P. Vera-Licona, and E.D. Sontag. Reverse engineering of molecular networks from a common combinatorial approach. In M. Elloumi and A.Y. Zomaya, editors, Algorithms in computational
molecular biology: Techniques, Approaches and Applications, pages 941-954. Wiley, Hoboken, 2010. [PDF] Keyword(s): reverse engineering, systems biology.
3. E.D. Sontag. Contractive systems with inputs. In Jan Willems, Shinji Hara, Yoshito Ohta, and Hisaya Fujioka, editors, Perspectives in Mathematical System Theory, Control, and Signal Processing,
pages 217-228. Springer-verlag, 2010. [PDF] Keyword(s): contractions, contractive systems, consensus, synchronization.
Abstract: Contraction theory provides an elegant way of analyzing the behaviors of systems subject to external inputs. Under sometimes easy-to-check hypotheses, systems can be shown to have the incremental stability property that all trajectories converge to a unique solution. This property is especially interesting when forcing functions are periodic (globally attracting limit cycles result), as well as in the context of establishing synchronization results. The present paper provides a self-contained introduction to some basic results, with a focus on contractions with respect to non-Euclidean metrics.
4. D. Angeli, P. de Leenheer, and E.D. Sontag. Graph-theoretic characterizations of monotonicity of chemical networks in reaction coordinates. J. Mathematical Biology, 61:581-616, 2010. [PDF]
Keyword(s): MAPK cascades, biochemical networks, fluxes, monotone systems, reaction coordinates, Petri nets, persistence, futile cycles.
Abstract: This paper derives new results for certain classes of chemical reaction networks, linking structural to dynamical properties. In particular, it investigates their monotonicity and convergence without making assumptions on the form of the kinetics (e.g., mass-action) of the dynamical equations involved, relying only on stoichiometric constraints. The key idea is to find an alternative representation under which the resulting system is monotone. As a simple example, the paper shows that a phosphorylation/dephosphorylation process, which is involved in many signaling cascades, has a global stability property.
5. G. Russo, M. di Bernardo, and E.D. Sontag. Global entrainment of transcriptional systems to periodic inputs. PLoS Computational Biology, 6:e1000739, 2010. [PDF] Keyword(s): contractive systems,
contractions, systems biology, biochemical networks, gene and protein networks.
Abstract: This paper addresses the problem of giving conditions for transcriptional systems to be globally entrained to external periodic inputs. By using contraction theory, a powerful tool from dynamical systems theory, it is shown that certain systems driven by external periodic signals have the property that all solutions converge to fixed limit cycles. General results are proved, and the properties are verified in the specific case of some models of transcriptional systems.
6. L. Scardovi, M. Arcak, and E.D. Sontag. Synchronization of interconnected systems with applications to biochemical networks: an input-output approach. IEEE Transactions Autom. Control,
55:1367-1379, 2010. [PDF]
Abstract: This paper provides synchronization conditions for networks of nonlinear systems, where each component of the network itself consists of subsystems represented as operators in the extended L2 space. The synchronization conditions are provided by combining the input-output properties of the subsystems with information about the structure of the network. The paper also explores results for state-space models as well as biochemical applications. The work is motivated by cellular networks where signaling occurs both internally, through interactions of species, and externally, through intercellular signaling.
7. O. Shoval, L. Goentoro, Y. Hart, A. Mayo, E.D. Sontag, and U. Alon. Fold change detection and scalar symmetry of sensory input fields. Proc Natl Acad Sci USA, 107:15995-16000, 2010. [PDF] Keyword
(s): identifiability, adaptation, biological adaptation, perfect adaptation, adaptation, feedforward loops, integral feedback, scale invariance, systems biology, transient behavior, symmetries,
fcd, fold-change detection, incoherent feedforward loop, feedforward, IFFL.
Abstract: Certain cellular sensory systems display fold-change detection (FCD): a response whose entire shape, including amplitude and duration, depends only on fold-changes in input, and not on absolute changes. Thus, a step change in input from, say, level 1 to 2 gives precisely the same dynamical output as a step from level 2 to 4, since the steps have the same fold-change. We ask what the benefit of FCD is, and show that FCD is necessary and sufficient for sensory search to be independent of multiplying the input field by a scalar. Thus the FCD search pattern depends only on the spatial profile of the input, and not on its amplitude. Such scalar symmetry occurs in a wide range of sensory inputs, such as source strength multiplying diffusing/convecting chemical fields sensed in chemotaxis, ambient light multiplying the contrast field in vision, and protein concentrations multiplying the output in cellular signaling systems. Furthermore, we demonstrate that FCD entails two features found across sensory systems, exact adaptation and Weber's law, but that these two features are not sufficient for FCD. Finally, we present a wide class of mechanisms that have FCD, including certain non-linear feedback and feedforward loops. We find that bacterial chemotaxis displays feedback within the present class, and hence is expected to show FCD. This can explain experiments in which chemotaxis searches are insensitive to attractant source levels. This study thus suggests a connection between properties of biological sensory systems and scalar symmetry stemming from physical properties of their input fields.
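The FCD property described in this abstract can be checked numerically on a minimal incoherent feedforward loop. The model below is an illustrative sketch (a standard FCD-capable IFFL, not code from the paper): dx/dt = u - x and dy/dt = u/x - y, so scaling the input u by a constant scales x identically and leaves the output y unchanged.

```python
# Minimal fold-change detection (FCD) demo on an incoherent feedforward
# loop (an illustrative model, not the paper's code):
#   dx/dt = u - x        (x tracks the input level)
#   dy/dt = u/x - y      (y responds only to the ratio u/x)
# Scaling u -> p*u scales x by p and leaves u/x unchanged, so the output
# depends only on fold-changes in the input.

def simulate(u0, u1, dt=1e-3, steps=10000):
    """Response of y to a step input u0 -> u1, starting at steady state."""
    x, y = u0, 1.0                    # steady state for constant input u0
    ys = []
    for _ in range(steps):
        dx = u1 - x
        dy = u1 / x - y
        x += dt * dx
        y += dt * dy
        ys.append(y)
    return ys

resp_1_to_2 = simulate(1.0, 2.0)      # fold-change 2
resp_2_to_4 = simulate(2.0, 4.0)      # same fold-change 2
err = max(abs(a - b) for a, b in zip(resp_1_to_2, resp_2_to_4))
print(err)                            # essentially zero: identical outputs
```

The same trajectories also exhibit exact adaptation, as the abstract notes FCD implies: y returns to its pre-step value after the transient.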
8. E.D. Sontag. Remarks on Feedforward Circuits, Adaptation, and Pulse Memory. IET Systems Biology, 4:39-51, 2010. [PDF] Keyword(s): adaptation, feedforward loops, integral feedback, systems
biology, transient behavior, incoherent feedforward loop, feedforward, IFFL.
Abstract: This note studies feedforward circuits as models for perfect adaptation to step signals in biological systems. A global convergence theorem is proved in a general framework, which includes examples from the literature as particular cases. A notable aspect of these circuits is that they do not adapt to pulse signals, because they display a memory phenomenon. Estimates are given of the magnitude of this effect.
9. E.D. Sontag. Rudolf E. Kalman and his students. Control Systems Magazine, 30:87-103, 2010. [PDF]
Abstract: An edited set of articles about Rudolf Kalman's legacy through his Ph.D. students.
10. E.D. Sontag and D. Zeilberger. A symbolic computation approach to a problem involving multivariate Poisson distributions. Advances in Applied Mathematics, 44:359-377, 2010. Note: There are a few
typos in the published version. Please see this file for corrections: https://drive.google.com/file/d/0BzWFHczJF2INUlEtVkFJOUJiUFU/view. [PDF] Keyword(s): probability theory, stochastic systems,
systems biology, biochemical networks, chemical master equation.
Abstract: Multivariate Poisson random variables subject to linear integer constraints arise in several application areas, such as queuing and biomolecular networks. This note shows how to compute conditional statistics in this context, by employing WZ Theory and associated algorithms. A symbolic computation package has been developed and is made freely available. A discussion of motivating biomolecular problems is also provided.
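The conditional statistics this abstract refers to can be sanity-checked by brute force in the simplest two-variable case. The check below is a Monte Carlo illustration (my construction, not the paper's symbolic WZ-based method), using the classical fact that independent Poisson variables conditioned on their sum are binomial.

```python
# Monte Carlo sanity check for conditional statistics of Poisson
# variables under a linear constraint (illustrative; the paper computes
# such statistics symbolically). If X1 ~ Poisson(l1) and X2 ~ Poisson(l2)
# are independent, then conditionally on X1 + X2 = n, X1 is
# Binomial(n, l1/(l1+l2)).
import math
import random

def poisson(lam, rng):
    """Knuth's product-of-uniforms sampler; fine for small lam."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

rng = random.Random(0)
l1, l2, n = 2.0, 3.0, 10
samples = []
while len(samples) < 2000:
    x1, x2 = poisson(l1, rng), poisson(l2, rng)
    if x1 + x2 == n:                 # condition on the linear constraint
        samples.append(x1)

mc_mean = sum(samples) / len(samples)
exact = n * l1 / (l1 + l2)           # binomial mean: 10 * 2/5 = 4.0
print(mc_mean, exact)
```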
11. L. Wang, P. de Leenheer, and E.D. Sontag. Conditions for global stability of monotone tridiagonal systems with negative feedback. Systems and Control Letters, 59:130-138, 2010. [PDF] Keyword(s): systems biology, monotone systems, tridiagonal systems, global stability.
Abstract: This paper studies monotone tridiagonal systems with negative feedback. These systems possess the Poincaré-Bendixson property, which implies that, if orbits are bounded, if there is a unique steady state and this unique equilibrium is asymptotically stable, and if one can rule out periodic orbits, then the steady state is globally asymptotically stable. Different approaches are discussed to rule out periodic orbits. One is based on direct linearization, while the other uses the theory of second additive compound matrices. Among the examples that illustrate our main theoretical results is the classical Goldbeter model of circadian rhythms.
12. G. Russo, M. di Bernardo, and E.D. Sontag. Stability of networked systems: a multi-scale approach using contraction. In Proc. IEEE Conf. Decision and Control, Atlanta, Dec. 2010, pages FrB14.3,
2010. Keyword(s): contractive systems, contractions, systems biology, biochemical networks, synchronization.
Abstract: Preliminary conference version of "A contraction approach to the hierarchical analysis and design of networked systems".
13. E.D. Sontag. Remarks on structural identification, modularity, and retroactivity. In Proc. IEEE Conf. Decision and Control, Atlanta, Dec. 2010, pages ThA23.1, 2010. [PDF] Keyword(s): modularity,
retroactivity, identification.
Abstract: Summarized conference version of "Modularity, retroactivity, and structural identification".
14. A. White, P.G. Cipriani, H.-L. Kao, B. Lees, D. Geiger, E.D. Sontag, K. Gunsalus, and F. Piano. Rapid and accurate developmental stage recognition of C. elegans from high-throughput image data.
In 2010 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 3089-3096, 2010. [PDF]
Abstract: This paper presents a hierarchical principle for object recognition and its application to automatically classify developmental stages of C. elegans animals from a population of mixed stages. The system is in current use in a functioning C. elegans laboratory and has processed over two hundred thousand images for lab users.
1. D. Angeli and E.D. Sontag. Graphs and the Dynamics of Biochemical Networks. In B.P. Ingalls and P. Iglesias, editors, Control Theory in Systems Biology, pages 125-142. MIT Press, 2009.
Abstract: This is an expository paper about graph-theoretical properties of biochemical networks, discussing two approaches, one based on bipartite graphs and Petri net concepts, and another based on decompositions into order-preserving subsystems. Other papers on this website contain basically the same material.
2. D. Del Vecchio and E.D. Sontag. Synthetic Biology: A Systems Engineering Perspective. In B.P. Ingalls and P. Iglesias, editors, Control Theory in Systems Biology, pages 101-123. MIT Press, 2009.
Abstract: This is an expository paper about certain aspects of Synthetic Biology, including a discussion of the issue of modularity (load effects from downstream components).
3. D. Angeli, M.W. Hirsch, and E.D. Sontag. Attractors in coherent systems of differential equations. J. of Differential Equations, 246:3058-3076, 2009. [PDF] Keyword(s): monotone systems, positive
feedback systems.
Abstract: Attractors of cooperative dynamical systems are particularly simple; for example, a nontrivial periodic orbit cannot be an attractor. This paper provides characterizations of attractors for the wider class of systems defined by the property that all directed feedback loops are positive. Several new results for cooperative systems are obtained in the process.
4. D. Angeli, P. de Leenheer, and E.D. Sontag. Chemical networks with inflows and outflows: A positive linear differential inclusions approach. Biotechnology Progress, 25:632-642, 2009. [PDF]
Keyword(s): biochemical networks, fluxes, differential inclusions, positive systems, Petri nets, persistence, switched systems.
Abstract: Certain mass-action kinetics models of biochemical reaction networks, although described by nonlinear differential equations, may be partially viewed as state-dependent linear time-varying systems, which in turn may be modeled by convex compact valued positive linear differential inclusions. A result is provided on asymptotic stability of such inclusions, and applied to biochemical reaction networks with inflows and outflows. Included is also a characterization of exponential stability of general homogeneous switched systems.
5. M. Chaves, A. M. Sengupta, and E.D. Sontag. Geometry and topology of parameter space: investigating measures of robustness in regulatory networks. J. of Mathematical Biology, 59:315-358, 2009. [PDF] Keyword(s): identifiability, robust, robustness, geometry.
Abstract: The concept of robustness of regulatory networks has been closely related to the nature of the interactions among genes, and the capability of pattern maintenance or reproducibility. Defining this robustness property is a challenging task, but mathematical models have often associated it to the volume of the space of admissible parameters. Not only the volume of the space but also its topology and geometry contain information on essential aspects of the network, including feasible pathways, switching between two parallel pathways or distinct/disconnected active regions of parameters. A method is presented here to characterize the space of admissible parameters, by writing it as a semi-algebraic set, and then theoretically analyzing its topology and geometry, as well as volume. This method provides a more objective and complete measure of the robustness of a developmental module. As a detailed case study, the segment polarity gene network is analyzed.
6. A. Dayarian, M. Chaves, E.D. Sontag, and A. M. Sengupta. Shape, Size and Robustness: Feasible Regions in the Parameter Space of Biochemical Networks. PLoS Computational Biology, 5:e10000256,
2009. [PDF] Keyword(s): identifiability, robust, robustness, geometry.
Abstract: The concept of robustness of regulatory networks has received much attention in the last decade. One measure of robustness has been associated with the volume of the feasible region, namely, the region in the parameter space in which the system is functional. In recent work, we emphasized that topology and geometry matter, as well as volume. In this paper, and using the segment polarity gene network to illustrate our approach, we show that random walks in parameter space and how they exit the feasible region provide a rich perspective on the different modes of failure of a model. In particular, for the segment polarity network, we found that, between two alternative ways of activating Wingless, one is more robust. Our method provides a more complete measure of robustness to parameter variation. As a general modeling strategy, our approach is an interesting alternative to Boolean representation of biochemical networks.
7. D. Del Vecchio and E.D. Sontag. Engineering Principles in Bio-Molecular Systems: From Retroactivity to Modularity. European Journal of Control, 15:389-397, 2009. Note: Preliminary version
appeared as paper MoB2.2 in Proceedings of the European Control Conference 2009, August 23-26, 2009, Budapest. [PDF] Keyword(s): systems biology, biochemical networks, synthetic biology, futile
cycles, singular perturbations, modularity.
8. T. Riley, X. Yu, E.D. Sontag, and A. Levine. The P53HMM algorithm: using novel profile Hidden Markov Models to detect p53-responsive genes. BMC Bioinformatics, 10:111, 2009. [PDF] [doi:10.1186/1471-2105-10-111] Keyword(s): Hidden Markov Models, p53, transcription factors.
Abstract: A novel computational method (called p53HMM) is presented that utilizes Profile Hidden Markov Models (PHMMs) to estimate the relative binding affinities of putative p53 response elements (REs), both p53 single-sites and cluster-sites. These models incorporate a novel "Correlated Baum Welch" training algorithm that provides increased predictive power by exploiting the redundancy of information found in the repeated, palindromic p53-binding motif. The predictive accuracy of these new models is compared against other predictive models, including position specific score matrices (PSSMs, or weight matrices). Finally, we provide experimental evidence that verifies a predicted p53-target site that regulates the CHMP4C gene. The P53HMM algorithm is available on-line from http://tools.csb.ias.edu.
9. E.D. Sontag, Y. Wang, and A. Megretski. Input classes for identification of bilinear systems. IEEE Transactions Autom. Control, 54:195-207, 2009. Note: Also arXiv math.OC/0610633, 20 Oct 2006,
and short version in ACC'07.[PDF] Keyword(s): realization theory, observability, identifiability, bilinear systems.
Abstract: This paper asks what classes of input signals are sufficient in order to completely identify the input/output behavior of generic bilinear systems. The main results are that step inputs are not sufficient, nor are single pulses, but the family of all pulses (of a fixed amplitude but varying widths) does suffice for identification.
10. A.M. Weinstein and E.D. Sontag. Modeling proximal tubule cell homeostasis: Tracking changes in luminal flow. Bulletin of Mathematical Biology, 71:1285-1322, 2009. [PDF]
Abstract: During normal kidney function, there are routinely wide swings in proximal tubule fluid flow and proportional changes in Na+ reabsorption across tubule epithelial cells. This "glomerulotubular balance" occurs in the absence of any substantial change in cell volume, and is thus a challenge to coordinate luminal membrane solute entry with peritubular membrane solute exit. In this work, linear optimal control theory is applied to generate a configuration of regulated transporters that could achieve this result. A previously developed model of rat proximal tubule epithelium is linearized about a physiologic reference condition; the approximate linear system is recast as a dynamical system; and a Riccati equation is solved to yield optimal linear feedback that stabilizes Na+ flux, cell volume, and cell pH. This optimal feedback control is largely consigned to three physiologic variables: cell volume, cell electrical potential, and lateral intercellular hydrostatic pressure. Transport modulation by cell volume stabilizes cell volume; transport modulation by electrical potential or interspace pressure acts to stabilize Na+ flux and cell pH. This feedback control is utilized in a tracking problem, in which reabsorptive Na+ flux varies over a factor of two. The resulting control parameters consist of two terms, an autonomous term and a feedback term, and both terms include transporters on both luminal and peritubular cell membranes. Overall, the increase in Na+ flux is achieved with upregulation of luminal Na+/H+ exchange and Na+-glucose cotransport, with increased peritubular Na+-3HCO_3- and K+-Cl- cotransport, and with increased Na+,K+-ATPase activity. The configuration of activated transporters emerges as a testable hypothesis of the molecular basis for glomerulotubular balance. It is suggested that the autonomous control component at each cell membrane could represent the cytoskeletal effects of luminal flow.
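The control-theoretic pipeline this abstract describes (linearize, solve a Riccati equation, read off optimal linear feedback) can be illustrated in the scalar case. The sketch below is generic one-dimensional LQR, not the kidney model itself; all names and parameter values are mine.

```python
# Scalar LQR sketch (illustrative only; the paper applies the same
# machinery to a high-dimensional epithelial transport model).
# For dx/dt = a*x + b*u with cost J = integral of (q*x^2 + r*u^2) dt,
# the algebraic Riccati equation  2*a*p - (b^2/r)*p^2 + q = 0
# has a positive root p, and the optimal feedback is u = -(b/r)*p*x.
import math

a, b, q, r = 1.0, 1.0, 1.0, 1.0           # unstable open loop (a > 0)
# positive root of (b^2/r)*p^2 - 2*a*p - q = 0
disc = (2 * a) ** 2 + 4 * (b * b / r) * q
p = (2 * a + math.sqrt(disc)) / (2 * b * b / r)
k = (b / r) * p                            # optimal gain, u = -k*x
closed_loop = a - b * k                    # closed-loop pole
print(p, k, closed_loop)                   # pole is negative: stabilized
```

Here p = 1 + sqrt(2), so even though the open loop is unstable, the optimal feedback places the closed-loop pole at a - b*k = -sqrt(2).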
11. D. Angeli, P. de Leenheer, and E.D. Sontag. On persistence of chemical reaction networks with time-dependent kinetics and no global conservation laws. In Proc. IEEE Conf. Decision and Control, Shanghai, Dec. 2009, pages 4559-4564, 2009. [PDF] Keyword(s): biochemical networks, fluxes, Petri nets, persistence, biochemical networks with inputs.
Abstract: This is a very summarized version of the first part of the paper "Persistence results for chemical reaction networks with time-dependent kinetics and no global conservation laws".
12. L. Scardovi, M. Arcak, and E.D. Sontag. Synchronization of interconnected systems with an input-output approach. Part I: Main results. In Proc. IEEE Conf. Decision and Control, Shanghai, Dec. 2009, pages 609-614, 2009. Note: First part of conference version of journal paper. Keyword(s): passive systems, secant condition, biochemical networks, systems biology.
Abstract: See abstract and link to pdf in entry for journal paper.
13. L. Scardovi, M. Arcak, and E.D. Sontag. Synchronization of interconnected systems with an input-output approach. Part II: State-Space result and application to biochemical networks. In Proc. IEEE Conf. Decision and Control, Shanghai, Dec. 2009, pages 615-620, 2009. Note: Second part of conference version of journal paper. Keyword(s): passive systems, secant condition, biochemical networks, systems biology.
Abstract: See abstract and link to pdf in entry for journal paper.
14. E.D. Sontag. An observation regarding systems which converge to steady states for all constant inputs, yet become chaotic with periodic inputs. Technical report, arxiv 0906.2166, 2009. [PDF]
1. M. Arcak and E.D. Sontag. Passivity-based Stability of Interconnection Structures. In V. Blondel, S. Boyd, and H. Kimura, editors, Recent Advances in Learning and Control, volume 371, pages 195-204. Springer-Verlag, NY, 2008. [PDF] [doi:10.1007/978-1-84800-155-8_14] Keyword(s): passive systems, secant condition, biochemical networks.
Abstract: In this expository paper, we provide a streamlined version of the key lemma on stability of interconnections due to Vidyasagar and Moylan and Hill, and then show how its hypotheses may be verified for network structures of great interest in biology.
2. R. Albert, B. Dasgupta, R. Dondi, and E.D. Sontag. Inferring (biological) signal transduction networks via transitive reductions of directed graphs. Algorithmica, 51:129-159, 2008. [PDF] [doi:10.1007/s00453-007-9055-0] Keyword(s): systems biology, biochemical networks, algorithms, signal transduction networks, graph algorithms.
Abstract: The transitive reduction problem is that of inferring a sparsest possible biological signal transduction network consistent with a set of experimental observations, with a goal to minimize false positive inferences even if risking false negatives. This paper provides computational complexity results as well as approximation algorithms with guaranteed performance.
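For intuition, the classical transitive reduction underlying this problem can be computed directly on a small DAG. The code below is a naive illustration (the paper's setting, with indirect experimental observations and approximation guarantees, is more general), and the helper names are mine.

```python
# Classical transitive reduction of a small DAG (a toy illustration of
# the operation underlying the inference problem). An edge u -> v is
# redundant if v is still reachable from u through some longer path.

def transitive_reduction(edges):
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)

    def reachable(src, dst, skip_edge):
        """DFS from src to dst, ignoring the single edge `skip_edge`."""
        stack, seen = [src], set()
        while stack:
            node = stack.pop()
            if node == dst:
                return True
            if node in seen:
                continue
            seen.add(node)
            for w in adj.get(node, ()):
                if (node, w) != skip_edge:
                    stack.append(w)
        return False

    # keep exactly the edges with no alternative path (unique for a DAG)
    return {(u, v) for u, v in edges if not reachable(u, v, (u, v))}

# a -> b -> c, plus the redundant shortcut a -> c
edges = [("a", "b"), ("b", "c"), ("a", "c")]
print(sorted(transitive_reduction(edges)))  # [('a', 'b'), ('b', 'c')]
```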
3. D. Angeli and E.D. Sontag. Oscillations in I/O monotone systems. IEEE Transactions on Circuits and Systems, Special Issue on Systems Biology, 55:166-176, 2008. Note: Preprint version in arXiv
q-bio.QM/0701018, 14 Jan 2007. [PDF] Keyword(s): monotone systems, hopf bifurcations, circadian rhythms, tridiagonal systems, nonlinear dynamics, systems biology, biochemical networks,
oscillations, periodic behavior, delay-differential systems.
Abstract: In this note, we show how certain properties of Goldbeter's 1995 model for circadian oscillations can be proved mathematically, using techniques from the recently developed theory of monotone systems with inputs and outputs. The theory establishes global asymptotic stability, and in particular no oscillations, if the rate of transcription is somewhat smaller than that assumed by Goldbeter, based on the application of a tight small gain condition. This stability persists even under arbitrary delays in the feedback loop. On the other hand, when the condition is violated, a Poincaré-Bendixson result allows one to conclude the existence of oscillations, for sufficiently high delays.
4. D. Angeli and E.D. Sontag. Translation-invariant monotone systems, and a global convergence result for enzymatic futile cycles. Nonlinear Analysis Series B: Real World Applications, 9:128-140,
2008. [PDF] [doi:10.1016/j.nonrwa.2006.09.006] Keyword(s): systems biology, biochemical networks, nonlinear stability, dynamical systems, monotone systems.
Abstract: Strongly monotone systems of ordinary differential equations which have a certain translation-invariance property are shown to have the property that all projected solutions converge to a unique equilibrium. This result may be seen as a dual of a well-known theorem of Mierczynski for systems that satisfy a conservation law. As an application, it is shown that enzymatic futile cycles have a global convergence property.
5. M. Arcak and E.D. Sontag. A passivity-based stability criterion for a class of interconnected systems and applications to biochemical reaction networks. Mathematical Biosciences and Engineering,
5:1-19, 2008. Note: Also, preprint: arxiv0705.3188v1 [q-bio], May 2007. [PDF] Keyword(s): MAPK cascades, systems biology, biochemical networks, cyclic feedback systems, secant condition,
nonlinear stability, dynamical systems.
Abstract: This paper presents a stability test for a class of interconnected nonlinear systems motivated by biochemical reaction networks. One of the main results determines global asymptotic stability of the network from the diagonal stability of a "dissipativity matrix" which incorporates information about the passivity properties of the subsystems, the interconnection structure of the network, and the signs of the interconnection terms. This stability test encompasses the "secant criterion" for cyclic networks presented in our previous paper, and extends it to a general interconnection structure represented by a graph. A second main result allows one to accommodate state products. This extension makes the new stability criterion applicable to a broader class of models, even in the case of cyclic systems. The new stability test is illustrated on a mitogen activated protein kinase (MAPK) cascade model, and on a branched interconnection structure motivated by metabolic networks. Finally, another result addresses the robustness of stability in the presence of diffusion terms in a compartmental system made out of identical systems.
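The secant criterion mentioned here can be checked numerically in the simplest cyclic case: n first-order subsystems dx_i/dt = -x_i + g_i x_{i-1}, with the loop closed by one inhibitory link. The toy check below (my construction, not the paper's general graph-based test) verifies that stability holds exactly when the gain product is below sec(pi/n)^n.

```python
# Secant criterion demo for a cyclic negative feedback loop of n
# first-order subsystems (illustrative; the paper treats general
# interconnection graphs). The loop is stable iff
#   g_1 * ... * g_n < sec(pi/n)^n.
import math
import cmath

def cyclic_loop_stable(gains):
    """Stability via the eigenvalues of the closed cycle.
    The characteristic equation is (s+1)^n + g_1*...*g_n = 0."""
    n = len(gains)
    g = math.prod(gains)
    # roots: s = -1 + g^(1/n) * exp(i*pi*(2k+1)/n), k = 0..n-1
    roots = [-1 + (g ** (1 / n)) * cmath.exp(1j * math.pi * (2 * k + 1) / n)
             for k in range(n)]
    return all(r.real < 0 for r in roots)

n = 4
secant_bound = (1 / math.cos(math.pi / n)) ** n    # sec(pi/4)^4 = 4
below = [(0.99 * secant_bound) ** (1 / n)] * n     # product just under 4
above = [(1.01 * secant_bound) ** (1 / n)] * n     # product just over 4
print(cyclic_loop_stable(below), cyclic_loop_stable(above))  # True False
```

The rightmost root has real part -1 + (g_1...g_n)^(1/n) cos(pi/n), which is negative precisely when the secant bound holds.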
6. D. Del Vecchio, A.J. Ninfa, and E.D. Sontag. Modular Cell Biology: Retroactivity and Insulation. Molecular Systems Biology, 4:161, 2008. [PDF] Keyword(s): retroactivity, systems biology,
biochemical networks, synthetic biology, futile cycles, singular perturbations, modularity.
Abstract: Modularity plays a fundamental role in the prediction of the behavior of a system from the behavior of its components, guaranteeing that the properties of individual components do not change upon interconnection. Just as electrical, hydraulic, and other physical systems often do not display modularity, nor do many biochemical systems, and specifically, genetic networks. Here, we study the effect of interconnections on the input/output dynamic characteristics of transcriptional components, focusing on a property, which we call "retroactivity," that plays a role analogous to non-zero output impedance in electrical systems. In transcriptional networks, retroactivity is large when the amount of transcription factor is comparable to, or smaller than, the amount of promoter binding sites, or when the affinity of such binding sites is high. In order to attenuate the effect of retroactivity, we propose a feedback mechanism inspired by the design of amplifiers in electronics. We introduce, in particular, a mechanism based on a phosphorylation/dephosphorylation cycle. This mechanism enjoys a remarkable insulation property, due to the fast time scales of the phosphorylation and dephosphorylation reactions. Such a mechanism, when viewed as a signal transduction system, has thus an inherent capacity to provide insulation and hence to increase the modularity of the system in which it is placed.
7. G.A. Enciso and E.D. Sontag. Monotone bifurcation graphs. Journal of Biological Dynamics, 2:121-139, 2008. [PDF]
Abstract: This paper generalizes the approach to bistability based on the existence of characteristics for open-loop monotone systems to the case when characteristics do not exist. A set-valued version is provided instead.
8. M.R. Jovanovic, M. Arcak, and E.D. Sontag. A passivity-based approach to stability of spatially distributed systems with a cyclic interconnection structure. IEEE Transactions on Circuits and
Systems, Special Issue on Systems Biology, 55:75-86, 2008. Note: Preprint: also arXiv math.OC/0701622, 22 January 2007.[PDF] Keyword(s): MAPK cascades, systems biology, biochemical networks,
nonlinear stability, nonlinear dynamics, diffusion, secant condition, cyclic feedback systems.
Abstract: A class of distributed systems with a cyclic interconnection structure is considered. These systems arise in several biochemical applications and they can undergo diffusion-driven instability which leads to a formation of spatially heterogeneous patterns. In this paper, a class of cyclic systems in which addition of diffusion does not have a destabilizing effect is identified. For these systems, global stability results hold if the "secant" criterion is satisfied. In the linear case, it is shown that the secant condition is necessary and sufficient for the existence of a decoupled quadratic Lyapunov function, which extends a recent diagonal stability result to partial differential equations. For reaction-diffusion equations with nondecreasing coupling nonlinearities, global asymptotic stability of the origin is established. All of the derived results remain true for both linear and nonlinear positive diffusion terms. Similar results are shown for compartmental systems.
9. S. Kachalo, R. Zhang, E.D. Sontag, R. Albert, and B. Dasgupta. NET-SYNTHESIS: A software for synthesis, inference and simplification of signal transduction networks. Bioinformatics, 24:293 - 295,
2008. [PDF] Keyword(s): systems biology, biochemical networks, algorithms, signal transduction networks, graph algorithms.
Abstract: This paper presents a software tool for inference and simplification of signal transduction networks. The method relies on the representation of observed indirect causal relationships as network paths, using techniques from combinatorial optimization to find the sparsest graph consistent with all experimental observations. We illustrate the biological usability of our software by applying it to a previously published signal transduction network and by using it to synthesize and simplify a novel network corresponding to activation-induced cell death in large granular lymphocyte leukemia.
10. A. Maayan, R. Iyengar, and E.D. Sontag. Intracellular Regulatory Networks are close to Monotone Systems. IET Systems Biology, 2:103-112, 2008. [PDF]
Abstract: We find that three intracellular regulatory networks contain far more positive "sign-consistent" feedback and feed-forward loops than negative loops. Negative inconsistent loops can be more easily removed from real regulatory network topologies compared to removing negative loops from shuffled networks. The abundance of positive feed-forward loops and feedback loops in real networks emerges from the presence of hubs that are enriched with either negative or positive links, and from the non-uniform connectivity distribution. Boolean dynamics applied to the signaling network further support the stability of its topology. These observations suggest that the "close-to-monotone" structure of intracellular regulatory networks may contribute to the dynamical stability observed in cellular behavior.
11. T. Riley, E.D. Sontag, P. Chen, and A. Levine. The transcriptional regulation of human p53-regulated genes. Nature Reviews Molecular Cell Biology, 9:402-412, 2008. [PDF] Keyword(s): Hidden Markov
Models, p53, transcription.
Abstract: The p53 protein regulates the transcription of many different genes in response to a wide variety of stress signals. Following DNA damage, p53 regulates key processes, including DNA repair, cell-cycle arrest, senescence and apoptosis, in order to suppress cancer. This Analysis article provides an overview of the current knowledge of p53-regulated genes in these pathways and others, and the mechanisms of their regulation. In addition, we present the most comprehensive list so far of human p53-regulated genes and their experimentally validated, functional binding sites that confer p53 regulation.
12. E.D. Sontag. Network reconstruction based on steady-state data. Essays in Biochemistry, 45:161-176, 2008. [PDF] Keyword(s): modular response analysis, systems biology, biochemical networks,
reverse engineering, gene and protein networks, protein networks, gene networks, systems identification, MAPK cascades.
Abstract: The "reverse engineering problem" in systems biology is that of unraveling the web of interactions among the components of protein and gene regulatory networks, so as to map out the direct or local interactions among components. These direct interactions capture the topology of the functional network. An intrinsic difficulty in capturing these direct interactions, at least in intact cells, is that any perturbation to a particular gene or signaling component may rapidly propagate throughout the network, thus causing global changes which cannot be easily distinguished from direct effects. Thus, a major goal in reverse engineering is to use these observed global responses - such as steady-state changes in concentrations of active proteins, mRNA levels, or transcription rates - in order to infer the local interactions between individual nodes. One approach to solving this global-to-local problem is the "Modular Response Analysis" (MRA) method proposed in work of the author with Kholodenko et al. (PNAS, 2002) and further elaborated in other papers. The basic method deals only with steady-state data. However, recently, quasi-steady state MRA has been used by Santos et al. (Nature Cell Biology, 2007) for quantifying positive and negative feedback effects in the Raf/Mek/Erk MAPK network in rat adrenal pheochromocytoma (PC-12) cells. This paper presents an overview of the MRA technique, as well as a generalization of the algorithm to that quasi-steady state case.
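The global-to-local step of MRA can be sketched on a toy linear network. The code below is my illustration under the standard MRA assumption that each perturbation p_j acts only on node j: the measured global response matrix is then R = -A^{-1}, and the local interaction structure A is recovered from R^{-1} after normalizing each row to have a -1 diagonal. The helper names and the example Jacobian are mine.

```python
# Toy Modular Response Analysis (MRA) computation in pure Python
# (illustrative sketch, not the published algorithm in full generality).

def inverse(M):
    """Gauss-Jordan inverse of a small square matrix of floats."""
    n = len(M)
    aug = [row[:] + [float(i == j) for j in range(n)]
           for i, row in enumerate(M)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(aug[r][col]))
        aug[col], aug[piv] = aug[piv], aug[col]
        p = aug[col][col]
        aug[col] = [v / p for v in aug[col]]
        for r in range(n):
            if r != col:
                f = aug[r][col]
                aug[r] = [a - f * b for a, b in zip(aug[r], aug[col])]
    return [row[n:] for row in aug]

# "true" local Jacobian of a 3-node network: edges 2 -> 1 and 1 -> 3
A = [[-1.0, 0.8, 0.0],
     [0.0, -1.0, 0.0],
     [0.5, 0.0, -1.0]]

R = [[-v for v in row] for row in inverse(A)]   # global responses, -A^{-1}
B = inverse(R)                                   # equals -A here
# MRA convention: rescale rows so each diagonal entry is -1
local = [[v / -B[i][i] for v in row] for i, row in enumerate(B)]
print(local)                                     # recovers A (up to rounding)
```

The point of the normalization is that, with unknown perturbation strengths, only the row-rescaled version of R^{-1} is identifiable; here the diagonal of A is already -1, so the recovery is exact.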
13. E.D. Sontag, A. Veliz-Cuba, R. Laubenbacher, and A.S. Jarrah. The effect of negative feedback loops on the dynamics of Boolean networks. Biophysical Journal, 95:518-526, 2008. [PDF] Keyword(s):
monotone systems, positive feedback systems, Boolean networks, limit cycles.
Abstract: Feedback loops play an important role in determining the dynamics of biological networks. In order to study the role of negative feedback loops, this paper introduces the notion of "distance to positive feedback (PF-distance)" which in essence captures the number of "independent" negative feedback loops in the network, a property inherent in the network topology. Through a computational study using Boolean networks it is shown that PF-distance has a strong influence on network dynamics and correlates very well with the number and length of limit cycles in the phase space of the network. To be precise, it is shown that, as the number of independent negative feedback loops increases, the number (length) of limit cycles tends to decrease (increase). These conclusions are consistent with the fact that certain natural biological networks exhibit generally regular behavior and have fewer negative feedback loops than randomized networks with the same numbers of nodes and connectivity.
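The correlation between negative feedback and limit-cycle structure can already be seen in 3-node Boolean rings. The enumeration below is a toy version of the kind of computation the paper performs at scale (function and variable names are mine): the all-activating ring has more, shorter limit cycles, while adding one inhibitory edge yields fewer, longer ones.

```python
# Enumerating limit cycles of small synchronous Boolean networks
# (a toy illustration of the paper's computational study).
from itertools import product

def limit_cycles(update, n):
    """All limit cycles of the synchronous dynamics on {0,1}^n."""
    cycles = set()
    for state in product((0, 1), repeat=n):
        seen = []
        while state not in seen:            # iterate until a repeat
            seen.append(state)
            state = update(state)
        cyc = seen[seen.index(state):]      # the cycle this orbit reaches
        i = cyc.index(min(cyc))             # canonical rotation
        cycles.add(tuple(cyc[i:] + cyc[:i]))
    return cycles

pos_ring = lambda s: (s[2], s[0], s[1])     # x1<-x3, x2<-x1, x3<-x2
neg_ring = lambda s: (1 - s[2], s[0], s[1]) # same ring, one inhibitory edge

pos = limit_cycles(pos_ring, 3)
neg = limit_cycles(neg_ring, 3)
print(len(pos), max(len(c) for c in pos))   # 4 cycles, longest 3
print(len(neg), max(len(c) for c in neg))   # 2 cycles, longest 6
```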
14. L. Wang and E.D. Sontag. On the number of steady states in a multiple futile cycle. Journal of Mathematical Biology, 57:29-52, 2008. [PDF] Keyword(s): singular perturbations, futile cycles, MAPK
cascades, systems biology, biochemical networks, multistability.
Abstract: This note studies the number of positive steady states in biomolecular reactions consisting of activation/deactivation futile cycles, such as those arising from phosphorylations and dephosphorylations at each level of a MAPK cascade. It is shown that: (1) for some parameter ranges, there are at least n+1 (if n is even) or n (if n is odd) steady states; (2) there never are more than 2n-1 steady states (so, for n=2, there are no more than 3 steady states); (3) for parameters near the standard Michaelis-Menten quasi-steady state conditions, there are at most n+1 steady states; and (4) for parameters far from the standard Michaelis-Menten quasi-steady state conditions, there is at most one steady state.
15. L. Wang and E.D. Sontag. Singularly perturbed monotone systems and an application to double phosphorylation cycles. J. Nonlinear Science, 18:527-550, 2008. [PDF] Keyword(s): singular
perturbations, futile cycles, MAPK cascades, systems biology, biochemical networks, nonlinear stability, nonlinear dynamics, multistability, monotone systems.
Abstract: The theory of monotone dynamical systems has been found very useful in the modeling of some gene, protein, and signaling networks. In monotone systems, every net feedback loop is positive. On the other hand, negative feedback loops are important features of many systems, since they are required for adaptation and precision. This paper shows that, provided that these negative loops act at a comparatively fast time scale, the main dynamical property of (strongly) monotone systems, convergence to steady states, is still valid. An application is worked out to a double-phosphorylation "futile cycle" motif which plays a central role in eukaryotic cell signaling. The work is heavily based on Fenichel-Jones geometric singular perturbation theory.
16. B. Andrews, E.D. Sontag, and P. Iglesias. An approximate internal model principle: Applications to nonlinear models of biological systems. In Proc. 17th IFAC World Congress, Seoul, pages Paper
FrB25.3, 6 pages, 2008. [PDF] Keyword(s): biological adaptation, internal model principle.
Abstract: The proper function of many biological systems requires that external perturbations be detected, allowing the system to adapt to these environmental changes. It is now well established that this dual detection and adaptation requires that the system have an internal model in the feedback loop. In this paper we relax the requirement that the response of the system adapt perfectly, but instead allow regulation to within a neighborhood of zero. We show, in a nonlinear setting, that systems with the ability to detect input signals and approximately adapt require an approximate model of the input. We illustrate our results by analyzing a well-studied biological system. These results generalize previous work which treats the perfectly adapting case.
17. D. Del Vecchio, A.J. Ninfa, and E.D. Sontag. A Systems Theory with Retroactivity: Application to Transcriptional Modules. In Proceedings of the 2008 American Control Conference, Seattle, June
2008, pages Paper WeC04.1, 2008. [PDF] Keyword(s): retroactivity, systems biology, biochemical networks, synthetic biology, futile cycles, singular perturbations, modularity.
18. L. Wang, P. de Leenheer, and E.D. Sontag. Global stability for monotone tridiagonal systems with negative feedback. In Proc. IEEE Conf. Decision and Control, Cancun, Dec. 2008, pages 4091-4096,
2008. Keyword(s): systems biology, monotone systems, tridiagonal systems, global stability.
Abstract: Conference version of the paper ``Conditions for global stability of monotone tridiagonal systems with negative feedback''.
1. R. Albert, B. DasGupta, R. Dondi, S. Kachalo, E.D. Sontag, A. Zelikovsky, and K. Westbrooks. A novel method for signal transduction network inference from indirect experimental evidence. In R.
Giancarlo and S. Hannenhalli, editors, 7th Workshop on Algorithms in Bioinformatics (WABI), volume 14, pages 407-419. Springer-Verlag, Berlin, 2007. Note: Conference version of journal paper with
same title. Keyword(s): systems biology, biochemical networks, algorithms, signal transduction networks, graph algorithms.
2. D. Angeli, P. De Leenheer, and E.D. Sontag. A Petri net approach to persistence analysis in chemical reaction networks. In I. Queinnec, S. Tarbouriech, G. Garcia, and S-I. Niculescu, editors,
Biology and Control Theory: Current Challenges (Lecture Notes in Control and Information Sciences Volume 357), pages 181-216. Springer-Verlag, Berlin, 2007. Note: See abstract for A Petri net
approach to the study of persistence in chemical reaction networks. [PDF]
3. E.D. Sontag. Input to state stability: Basic concepts and results. In P. Nistri and G. Stefani, editors, Nonlinear and Optimal Control Theory, pages 163-220. Springer-Verlag, Berlin, 2007. [PDF]
Keyword(s): input to state stability, stability, nonlinear systems, detectability, nonlinear regulation.
Abstract: This expository presentation, prepared for a summer course, addresses the precise formulation of questions of robustness with respect to disturbances, using the paradigm of input to state stability. It provides an intuitive and informal presentation of the main concepts.
4. E.D. Sontag. Monotone and near-monotone systems. In I. Queinnec, S. Tarbouriech, G. Garcia, and S-I. Niculescu, editors, Biology and Control Theory: Current Challenges (Lecture Notes in Control
and Information Sciences Volume 357), pages 79-122. Springer-Verlag, Berlin, 2007. Note: Conference version of ``Monotone and near-monotone biochemical networks,'' basically the same
paper. Keyword(s): systems biology, biochemical networks, monotone systems, Ising spin models, nonlinear stability, dynamical systems, consistent graphs, gene networks.
Abstract: See abstract and pdf for ``Monotone and near-monotone biochemical networks''.
5. E.D. Sontag. Stability and Feedback Stabilization. In Robert Meyers, editor, Encyclopedia of Complexity and Systems Science. Springer-Verlag, Berlin, 2007. Keyword(s): stability, nonlinear
control, feedback stabilization.
Abstract: The problem of stabilization of equilibria is one of the central issues in control. In addition to its intrinsic interest, it represents a first step towards the solution of more complicated problems, such as the stabilization of periodic orbits or general invariant sets, or the attainment of other control objectives, such as tracking, disturbance rejection, or output feedback, all of which may be interpreted as requiring the stabilization of some quantity (typically, some sort of ``error'' signal). A very special case, when there are no inputs, is that of stability. This short and informal article provides an introduction to the subject.
6. E.D. Sontag and Y. Wang. Uniformly Universal Inputs. In Alessandro Astolfi, editor, Analysis and Design of Nonlinear Control Systems, volume 224, pages 9-24. Springer-Verlag, London, 2007. [PDF]
Keyword(s): observability, identification, real-analytic functions.
Abstract: A result is presented showing the existence of inputs universal for observability, uniformly with respect to the class of all continuous-time analytic systems. This represents an ultimate generalization of a 1977 theorem, for bilinear systems, due to Alberto Isidori and Osvaldo Grasselli.
7. R. Albert, B. DasGupta, R. Dondi, S. Kachalo, E.D. Sontag, A. Zelikovsky, and K. Westbrooks. A novel method for signal transduction network inference from indirect experimental evidence. Journal
of Computational Biology, 14:927-949, 2007. [PDF] Keyword(s): systems biology, biochemical networks, algorithms, signal transduction networks, graph algorithms.
Abstract: This paper introduces a new method of combined synthesis and inference of biological signal transduction networks. The main idea lies in representing observed causal relationships as network paths, and using techniques from combinatorial optimization to find the sparsest graph consistent with all experimental observations. The paper formalizes the approach, studies its computational complexity, proves new results for exact and approximate solutions of the computationally hard transitive reduction substep of the approach, validates the biological applicability by applying it to a previously published signal transduction network by Li et al., and shows that the algorithm for the transitive reduction substep performs well on graphs with a structure similar to those observed in transcriptional regulatory and signal transduction networks.
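The transitive reduction substep mentioned in this abstract has a simple statement for directed acyclic graphs: drop every edge that is implied by a longer path. The sketch below is not the authors' algorithm, just a minimal illustration of the idea on an invented three-node DAG:

```python
# Minimal transitive-reduction sketch for a DAG given as an adjacency dict.
# An edge (u, v) is redundant if v is still reachable from u without it.

def _reachable(adj, src, dst, skip_edge):
    # Depth-first search that ignores the single edge `skip_edge`.
    stack, seen = [src], set()
    while stack:
        node = stack.pop()
        if node == dst:
            return True
        if node in seen:
            continue
        seen.add(node)
        for nxt in adj.get(node, ()):
            if (node, nxt) != skip_edge:
                stack.append(nxt)
    return False

def transitive_reduction(adj):
    reduced = {u: set(vs) for u, vs in adj.items()}
    for u in adj:
        for v in list(adj[u]):
            if _reachable(reduced, u, v, (u, v)):
                reduced[u].discard(v)
    return reduced

dag = {"a": {"b", "c"}, "b": {"c"}, "c": set()}
print(transitive_reduction(dag))  # edge a->c is redundant
```

For DAGs the transitive reduction is unique, so the order in which redundant edges are removed does not affect the result; the paper's contribution concerns much harder variants of this step.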
8. D. Angeli, P. de Leenheer, and E.D. Sontag. A Petri net approach to the study of persistence in chemical reaction networks. Mathematical Biosciences, 210:598-618, 2007. Note: Please look at the
paper ``A Petri net approach to persistence analysis in chemical reaction networks'' for additional results, not included in the journal paper due to lack of space. See also the preprint: arXiv
q-bio.MN/068019v2, 10 Aug 2006. [PDF] Keyword(s): Petri nets, systems biology, biochemical networks, nonlinear stability, dynamical systems, futile cycles.
Abstract: Persistency is the property, for differential equations in R^n, that solutions starting in the positive orthant do not approach the boundary. For chemical reactions and population models, this translates into the non-extinction property: provided that every species is present at the start of the reaction, no species will tend to be eliminated in the course of the reaction. This paper provides checkable conditions for persistence of chemical species in reaction networks, using concepts and tools from Petri net theory, and verifies these conditions on various systems which arise in the modeling of cell signaling pathways.
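One of the standard Petri-net notions used in this line of work is the siphon: a set of species S such that every reaction producing a member of S also consumes a member of S. A hedged sketch of that predicate, on a toy enzymatic network (an illustration, not a model taken from the paper):

```python
# A set of species S is a siphon if every reaction that produces a member
# of S also consumes a member of S. `reactions` is a list of
# (consumed, produced) pairs of species sets.

def is_siphon(species_set, reactions):
    for consumed, produced in reactions:
        if produced & species_set and not (consumed & species_set):
            return False
    return True

# Toy example: E + S <-> ES -> E + P, written as irreversible steps.
reactions = [
    ({"E", "S"}, {"ES"}),
    ({"ES"}, {"E", "S"}),
    ({"ES"}, {"E", "P"}),
]
print(is_siphon({"E", "ES"}, reactions))  # True: the enzyme forms a siphon
print(is_siphon({"S"}, reactions))        # False: S is produced without consuming S
```

Intuitively, once all species in a siphon are depleted they can never reappear, which is why siphons are the natural Petri-net object for persistence questions.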
9. P. Berman, B. Dasgupta, and E.D. Sontag. Algorithmic issues in reverse engineering of protein and gene networks via the modular response analysis method. Annals of the NY Academy of Sciences,
1115:132-141, 2007. [PDF] Keyword(s): systems biology, biochemical networks, gene and protein networks, reverse engineering, systems identification, graph algorithms.
Abstract: This paper studies a computational problem motivated by the modular response analysis method for reverse engineering of protein and gene networks. This set-cover problem is hard to solve exactly for large networks, but efficient approximation algorithms are given and their complexity is analyzed.
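The set-cover problem mentioned in this abstract can be illustrated with the classical greedy heuristic (a ln(n)-approximation; not necessarily the algorithm analyzed in the paper). The universe and subsets below are invented for illustration:

```python
# Classical greedy set cover: repeatedly pick the subset that covers the
# most still-uncovered elements of the universe.

def greedy_set_cover(universe, subsets):
    uncovered = set(universe)
    cover = []
    while uncovered:
        best = max(subsets, key=lambda s: len(s & uncovered))
        if not (best & uncovered):
            raise ValueError("universe cannot be covered")
        cover.append(best)
        uncovered -= best
    return cover

universe = {1, 2, 3, 4, 5}
subsets = [{1, 2, 3}, {2, 4}, {3, 4}, {4, 5}]
print(greedy_set_cover(universe, subsets))  # picks {1, 2, 3} then {4, 5}
```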
10. P. Berman, B. Dasgupta, and E.D. Sontag. Randomized approximation algorithms for set multicover problems with applications to reverse engineering of protein and gene networks. Discrete Applied
Mathematics Special Series on Computational Molecular Biology, 155:733-749, 2007. [PDF] Keyword(s): systems biology, biochemical networks, gene and protein networks, systems identification,
reverse engineering.
Abstract: This paper investigates computational complexity aspects of a combinatorial problem that arises in the reverse engineering of protein and gene networks, showing relations to an appropriate set multicover problem with large "coverage" factor, and providing a non-trivial analysis of a simple randomized polynomial-time approximation algorithm for the problem.
11. B. DasGupta, G.A. Enciso, E.D. Sontag, and Y. Zhang. Algorithmic and complexity aspects of decompositions of biological networks into monotone subsystems. BioSystems, 90:161-178, 2007. [PDF]
Keyword(s): monotone systems, systems biology, biochemical networks.
Abstract: A useful approach to the mathematical analysis of large-scale biological networks is based upon their decompositions into monotone dynamical systems. This paper deals with two computational problems associated to finding decompositions which are optimal in an appropriate sense. In graph-theoretic language, the problems can be recast in terms of maximal sign-consistent subgraphs. The theoretical results include polynomial-time approximation algorithms as well as constant-ratio inapproximability results. One of the algorithms, which has a worst-case guarantee of 87.9% from optimality, is based on the semidefinite programming relaxation approach of Goemans-Williamson. The algorithm was implemented and tested on a Drosophila segmentation network and an Epidermal Growth Factor Receptor pathway model.
12. T. Gedeon and E.D. Sontag. Oscillations in multi-stable monotone systems with slowly varying feedback. J. of Differential Equations, 239:273-295, 2007. [PDF] Keyword(s): systems biology,
biochemical networks, nonlinear stability, dynamical systems, monotone systems.
Abstract: This paper gives a theorem showing that a slow feedback adaptation, acting entirely analogously to the role of negative feedback for ordinary relaxation oscillations, leads to periodic orbits for bistable monotone systems. The proof is based upon a combination of i/o monotone systems theory and Conley Index theory.
13. W. Maass, P. Joshi, and E.D. Sontag. Computational aspects of feedback in neural circuits. PLoS Computational Biology, 3:e165 1-20, 2007. [PDF] Keyword(s): machine learning, neural networks,
feedback linearization, computation by cortical microcircuits, fading memory.
Abstract: It had previously been shown that generic cortical microcircuit models can perform complex real-time computations on continuous input streams, provided that these computations can be carried out with a rapidly fading memory. We investigate in this article the computational capability of such circuits in the more realistic case where not only readout neurons, but in addition a few neurons within the circuit have been trained for specific tasks. This is essentially equivalent to the case where the output of trained readout neurons is fed back into the circuit. We show that this new model overcomes the limitation of a rapidly fading memory. In fact, we prove that in the idealized case without noise it can carry out any conceivable digital or analog computation on time-varying inputs. But even with noise the resulting computational model can perform a large class of biologically relevant real-time computations that require a non-fading memory.
14. E.D. Sontag. Monotone and near-monotone biochemical networks. Systems and Synthetic Biology, 1:59-87, 2007. [PDF] [doi:10.1007/s11693-007-9005-9] Keyword(s): systems biology, biochemical
networks, monotone systems, Ising spin models, nonlinear stability, dynamical systems, consistent graphs, gene networks.
Abstract: This paper provides an expository introduction to monotone and near-monotone biochemical network structures. Monotone systems respond in a predictable fashion to perturbations, and have very robust dynamical characteristics. This makes them reliable components of more complex networks, and suggests that natural biological systems may have evolved to be, if not monotone, at least close to monotone. In addition, interconnections of monotone systems may be fruitfully analyzed using tools from control theory.
15. P. de Leenheer, D. Angeli, and E.D. Sontag. Monotone chemical reaction networks. J. Math Chemistry, 41:295-314, 2007. [PDF] [doi:10.1007/s10910-006-9075-z] Keyword(s): systems biology,
biochemical networks, nonlinear stability, dynamical systems, monotone systems.
Abstract: We analyze certain chemical reaction networks and show that every solution converges to some steady state. The reaction kinetics are assumed to be monotone but otherwise arbitrary. When diffusion effects are taken into account, the conclusions remain unchanged. The main tools used in our analysis come from the theory of monotone dynamical systems. We review some of the features of this theory and provide a self-contained proof of a particular attractivity result which is used in proving our main result.
16. D. Angeli, P. de Leenheer, and E.D. Sontag. Petri nets tools for the analysis of persistence in chemical networks. In Proc. 7th IFAC Symposium on Nonlinear Control Systems (NOLCOS 2007),
Pretoria, South Africa, 22-24 August, 2007, 2007. Keyword(s): Petri nets, systems biology, biochemical networks, nonlinear stability, dynamical systems, futile cycles.
17. M. Arcak and E.D. Sontag. A passivity-based stability criterion for a class of interconnected systems and applications to biochemical reaction networks. In Proc. IEEE Conf. Decision and Control,
New Orleans, Dec. 2007, pages 4477-4482, 2007. Note: Conference version of journal paper with same title. Keyword(s): systems biology, biochemical networks, cyclic feedback systems, secant
condition, nonlinear stability, dynamical systems.
18. D. Del Vecchio and E.D. Sontag. Dynamics and control of synthetic bio-molecular networks. In Proceedings American Control Conf., New York, July 2007, pages 1577-1588, 2007. Keyword(s): systems
biology, biochemical networks, synthetic biology.
Abstract: This tutorial paper presents an introduction to systems and synthetic molecular biology. It provides an introduction to basic biological concepts, and describes some of the techniques as well as challenges in the analysis and design of biomolecular networks.
19. M.R. Jovanovic, M. Arcak, and E.D. Sontag. Remarks on the stability of spatially distributed systems with a cyclic interconnection structure. In Proceedings American Control Conf., New York, July
2007, pages 2696-2701, 2007. Keyword(s): systems biology, biochemical networks, cyclic feedback systems, spatially distributed systems, secant condition.
Abstract: For distributed systems with a cyclic interconnection structure, a global stability result is shown to hold if the secant criterion is satisfied.
20. E.D. Sontag, Y. Wang, and A. Megretski. Remarks on Input Classes for Identification of Bilinear Systems. In Proceedings American Control Conf., New York, July 2007, pages 4345-4350, 2007. Keyword
(s): realization theory, observability, identifiability, bilinear systems.
21. L. Wang and E.D. Sontag. Further results on singularly perturbed monotone systems, with an application to double phosphorylation cycles. In Proc. IEEE Conf. Decision and Control, New Orleans,
Dec. 2007, pages 627-632, 2007. Note: Conference version of Singularly perturbed monotone systems and an application to double phosphorylation cycles. Keyword(s): singular perturbations, futile
cycles, MAPK cascades, systems biology, biochemical networks, nonlinear stability, nonlinear dynamics, multistability, monotone systems.
1. B. Dasgupta, P. Berman, and E.D. Sontag. Computational complexities of combinatorial problems with applications to reverse engineering of biological networks. In D. Liu and F-Y. Wan, editors,
Advances in Computational Intelligence: Theory & Applications, pages 303-316. World Scientific, Hackensack, 2006. Keyword(s): systems biology, biochemical networks, gene and protein networks,
reverse engineering, systems identification, theory of computing and complexity.
2. B. Dasgupta, G.A. Enciso, E.D. Sontag, and Y. Zhang. Algorithmic and complexity results for decompositions of biological networks into monotone subsystems. In C. Àlvarez and M. Serna, editors,
Lecture Notes in Computer Science: Experimental Algorithms: 5th International Workshop, WEA 2006, pages 253-264. Springer-Verlag, 2006. Note: (Cala Galdana, Menorca, Spain, May 24-27, 2006).
Keyword(s): systems biology, biochemical networks, monotone systems, theory of computing and complexity.
3. W. Maass, P. Joshi, and E.D. Sontag. Principles of real-time computing with feedback applied to cortical microcircuit models. In Advances in Neural Information Processing Systems 18. MIT Press,
Cambridge, 2006. [PDF] Keyword(s): neural networks.
Abstract: The network topology of neurons in the brain exhibits an abundance of feedback connections, but the computational function of these feedback connections is largely unknown. We present a computational theory that characterizes the gain in computational power achieved through feedback in dynamical systems with fading memory. It implies that many such systems acquire through feedback universal computational capabilities for analog computing with a non-fading memory. In particular, we show that feedback enables such systems to process time-varying input streams in diverse ways according to rules that are implemented through internal states of the dynamical system. In contrast to previous attractor-based computational models for neural networks, these flexible internal states are high-dimensional attractors of the circuit dynamics that still allow the circuit state to absorb new information from online input streams. In this way one arrives at novel models for working memory, integration of evidence, and reward expectation in cortical circuits. We show that they are applicable to circuits of conductance-based Hodgkin-Huxley (HH) neurons with high levels of noise that reflect experimental data on in vivo conditions.
4. M. Arcak and E.D. Sontag. Diagonal stability of a class of cyclic systems and its connection with the secant criterion. Automatica, 42:1531-1537, 2006. [PDF] Keyword(s): passive systems, systems
biology, biochemical networks, cyclic feedback systems, secant condition, nonlinear stability, dynamical systems.
Abstract: This paper considers a class of systems with a cyclic structure that arises, among other examples, in dynamic models for certain biochemical reactions. We first show that a criterion for local stability, derived earlier in the literature, is in fact a necessary and sufficient condition for diagonal stability of the corresponding class of matrices. We then revisit a recent generalization of this criterion to output strictly passive systems, and recover the same stability condition using our diagonal stability result as a tool for constructing a Lyapunov function. Using this procedure for Lyapunov construction we exhibit classes of cyclic systems with sector nonlinearities and characterize their global stability properties.
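The classical secant criterion referred to here has a concrete numerical form: for a cyclic negative-feedback loop of n stable first-order subsystems with gains g_1, ..., g_n, stability is guaranteed when the product of the gains is below sec(pi/n)^n. A minimal numeric check (the gain values below are made up for illustration):

```python
import math

# Secant criterion for a cyclic negative-feedback loop of n stable
# first-order stages: g_1 * ... * g_n < sec(pi/n)**n.
def secant_condition(gains):
    n = len(gains)
    product = math.prod(gains)
    threshold = (1.0 / math.cos(math.pi / n)) ** n
    return product < threshold

print(secant_condition([1.0, 1.0, 1.5]))  # True: 1.5 < sec(pi/3)^3 = 8
print(secant_condition([3.0, 3.0, 3.0]))  # False: 27 > 8
```

Note that for n = 3 the threshold sec(pi/3)^3 = 8 is much less restrictive than the small-gain bound of 1, which is the point of the criterion.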
5. M. Chaves and E.D. Sontag. Exact computation of amplification for a class of nonlinear systems arising from cellular signaling pathways. Automatica, 42:1987-1992, 2006. [PDF] Keyword(s): MAPK
cascades, systems biology, biochemical networks, nonlinear stability, dynamical systems.
Abstract: A commonly employed measure of the signal amplification properties of an input/output system is its induced L2 norm, sometimes also known as H-infinity gain. In general, however, it is extremely difficult to compute the numerical value for this norm, or even to check that it is finite, unless the system being studied is linear. This paper describes a class of systems for which it is possible to reduce this computation to that of finding the norm of an associated linear system. In contrast to linearization approaches, a precise value, not an estimate, is obtained for the full nonlinear model. The class of systems that we study arose from the modeling of certain biological intracellular signaling cascades, but the results should be of wider applicability.
6. M. Chaves, E.D. Sontag, and R. Albert. Methods of robustness analysis for Boolean models of gene control networks. IET Systems Biology, 153:154-167, 2006. [PDF] Keyword(s): systems biology,
biochemical networks, boolean systems, identifiability, robust, robustness, geometry, Boolean, segment polarity network, gene and protein networks, hybrid systems.
Abstract: As a discrete approach to genetic regulatory networks, Boolean models provide an essential qualitative description of the structure of interactions among genes and proteins. Boolean models generally assume only two possible states (expressed or not expressed) for each gene or protein in the network as well as a high level of synchronization among the various regulatory processes. In this paper, we discuss and compare two possible methods of adapting qualitative models to incorporate the continuous-time character of regulatory networks. The first method consists of introducing asynchronous updates in the Boolean model. In the second method, we adopt the approach introduced by L. Glass to obtain a set of piecewise linear differential equations which continuously describe the states of each gene or protein in the network. We apply both methods to a particular example: a Boolean model of the segment polarity gene network of Drosophila melanogaster. We analyze the dynamics of the model, and provide a theoretical characterization of the model's gene pattern prediction as a function of the timescales of the various processes.
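The asynchronous-update scheme compared in this paper can be sketched as follows: instead of updating all nodes simultaneously, one node (chosen here at random) is updated per step. The three-node rule set is a made-up toy, not the segment-polarity model from the paper:

```python
import random

# Toy Boolean network: a is repressed by c; b is activated by a;
# c is activated by b.
rules = {
    "a": lambda s: not s["c"],
    "b": lambda s: s["a"],
    "c": lambda s: s["b"],
}

def async_step(state, rng):
    # Update a single randomly chosen node; all others keep their state.
    node = rng.choice(sorted(state))
    new_state = dict(state)
    new_state[node] = rules[node](state)
    return new_state

rng = random.Random(0)
state = {"a": True, "b": False, "c": False}
for _ in range(20):
    state = async_step(state, rng)
print(state)
```

Unlike the synchronous scheme, the trajectory here depends on the (random) update order, which is precisely the modeling subtlety the paper addresses.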
7. B. DasGupta, J.P. Hespanha, J. Riehl, and E.D. Sontag. Honey-pot constrained searching with local sensory information. Nonlinear Analysis, 65:1773-1793, 2006. [PDF] Keyword(s): search problems,
algorithms, computational complexity.
Abstract: This paper investigates the problem of searching for a hidden target in a bounded region of the plane by an autonomous robot which is only able to use limited local sensory information. It proposes an aggregation-based approach to solve this problem, in which the continuous search space is partitioned into a finite collection of regions on which we define a discrete search problem and a solution to the original problem is obtained through a refinement procedure that lifts the discrete path into a continuous one. The resulting solution is in general not optimal but one can construct bounds to gauge the cost penalty incurred. The discrete version is formalized and an optimization problem is stated as a `reward-collecting' bounded-length path problem. NP-completeness and efficient approximation algorithms for various cases of this problem are discussed.
8. G.A. Enciso, H.L. Smith, and E.D. Sontag. Non-monotone systems decomposable into monotone systems with negative feedback. J. of Differential Equations, 224:205-227, 2006. [PDF] Keyword(s):
nonlinear stability, dynamical systems, monotone systems.
Abstract: Motivated by the theory of monotone i/o systems, this paper shows that certain finite and infinite dimensional semi-dynamical systems with negative feedback can be decomposed into a monotone open loop system with inputs and a decreasing output function. The original system is reconstituted by plugging the output into the input. By embedding the system into a larger symmetric monotone system, this paper obtains finer information on the asymptotic behavior of solutions, including existence of positively invariant sets and global convergence. An important new result is the extension of the "small gain theorem" of monotone i/o theory to reaction-diffusion partial differential equations: adding diffusion preserves the global attraction of the ODE equilibrium.
9. G.A. Enciso and E.D. Sontag. Global attractivity, I/O monotone small-gain theorems, and biological delay systems. Discrete Contin. Dyn. Syst., 14(3):549-578, 2006. [PDF] Keyword(s): systems
biology, biochemical networks, nonlinear stability, dynamical systems, monotone systems, delay-differential systems.
Abstract: This paper further develops a method, originally introduced in a paper by Angeli and Sontag, for proving global attractivity of steady states in certain classes of dynamical systems. In this approach, one views the given system as a negative feedback loop of a monotone controlled system. An auxiliary discrete system, whose global attractivity implies that of the original system, plays a key role in the theory, which is presented in a general Banach space setting. Applications are given to delay systems, as well as to systems with multiple inputs and outputs, and the question of expressing a given system in the required negative feedback form is addressed.
10. M. Malisoff, M. Krichman, and E.D. Sontag. Global stabilization for systems evolving on manifolds. Journal of Dynamical and Control Systems, 12:161-184, 2006. [PDF] Keyword(s): nonlinear
stability, nonlinear control, feedback stabilization.
Abstract: This paper shows that any globally asymptotically controllable system on any smooth manifold can be globally stabilized by a state feedback. Since discontinuous feedbacks are allowed, solutions are understood in the ``sample and hold'' sense introduced by Clarke-Ledyaev-Sontag-Subbotin (CLSS). This work generalizes the CLSS Theorem, which is the special case of our result for systems on Euclidean space. We apply our result to the input-to-state stabilization of systems on manifolds relative to actuator errors, under small observation noise.
11. E.P. Ryan and E.D. Sontag. Well-defined steady-state response does not imply CICS. Systems and Control Letters, 55:707-710, 2006. [PDF] [doi:10.1016/j.sysconle.2006.02.001] Keyword(s): nonlinear
stability, dynamical systems.
Abstract: Systems for which each constant input gives rise to a unique globally attracting equilibrium are considered. A counterexample is provided to show that inputs which are only asymptotically constant may not result in states converging to equilibria (failure of the converging-input converging-state, or ``CICS'', property).
12. E.D. Sontag. Passivity gains and the ``secant condition'' for stability. Systems Control Lett., 55(3):177-183, 2006. [PDF] Keyword(s): cyclic feedback systems, systems biology, biochemical
networks, nonlinear stability, dynamical systems, passive systems, secant condition, biochemical networks.
Abstract: A generalization of the classical secant condition for the stability of cascades of scalar linear systems is provided for passive systems. The key is the introduction of a quantity that combines gain and phase information for each system in the cascade. For linear one-dimensional systems, the known result is recovered exactly.
13. E.D. Sontag and Y. Wang. A cooperative system which does not satisfy the limit set dichotomy. J. of Differential Equations, 224:373-384, 2006. [PDF] Keyword(s): dynamical systems, monotone systems.
Abstract: The fundamental property of strongly monotone systems, and strongly cooperative systems in particular, is the limit set dichotomy due to Hirsch: if x < y, then either Omega(x) < Omega(y), or Omega(x) = Omega(y) and both sets consist of equilibria. We provide here a counterexample showing that this property need not hold for (non-strongly) cooperative systems.
14. P. de Leenheer, D. Angeli, and E.D. Sontag. Crowding effects promote coexistence in the chemostat. Journal of Mathematical Analysis and Applications, 319:48-60, 2006. [PDF] Keyword(s): systems
biology, biochemical networks, nonlinear stability, dynamical systems, monotone systems.
Abstract: We provide an almost-global stability result for a particular chemostat model, in which crowding effects are taken into consideration. The model can be rewritten as a negative feedback interconnection of two monotone i/o systems with well-defined characteristics, which allows the use of a small-gain theorem for feedback interconnections of monotone systems. This leads to a sufficient condition for almost-global stability, and we show that coexistence occurs in this model if the crowding effects are large enough.
15. P. de Leenheer, S.A. Levin, E.D. Sontag, and C.A. Klausmeier. Global stability in a chemostat with multiple nutrients. J. Mathematical Biology, 52:419-438, 2006. [PDF] Keyword(s): systems
biology, biochemical networks, nonlinear stability, dynamical systems, monotone systems.
Abstract: We study a single species in a chemostat, limited by two nutrients, and separate nutrient uptake from growth. For a broad class of uptake and growth functions it is proved that a nontrivial equilibrium may exist. Moreover, if it exists it is unique and globally stable, generalizing a previous result by Legovic and Cruzado.
16. N.A.W. van Riel and E.D. Sontag. Parameter estimation in models combining signal transduction and metabolic pathways: The dependent input approach. IET Systems Biology, 153:263-274, 2006. [PDF]
Keyword(s): systems biology, biochemical networks, parameter identification.
Abstract: Biological complexity and limited quantitative measurements impose severe challenges to standard engineering methodologies for systems identification. This paper presents an approach, justified by the theory of universal inputs for distinguishability, based on replacing unmodeled dynamics by fictitious `dependent inputs'. The approach is particularly useful in validation experiments, because it allows one to fit model parameters to experimental data generated by a reference (wild-type) organism and then testing this model on data generated by a variation (mutant), so long as the mutations only affect the unmodeled dynamics that produce the dependent inputs. As a case study, this paper addresses the pathways that control the nitrogen uptake fluxes in baker's yeast Saccharomyces cerevisiae enabling it to optimally respond to changes in nitrogen availability. Well-defined perturbation experiments were performed on cells growing in steady-state. Time-series data of extracellular and intracellular metabolites were obtained, as well as mRNA levels. A nonlinear model was proposed, and shown to be structurally identifiable given input/output data. The identified model correctly predicted the responses of different yeast strains and different perturbations.
17. B. Andrews, P. Iglesias, and E.D. Sontag. Signal detection and approximate adaptation implies an approximate internal model. In Proc. IEEE Conf. Decision and Control, San Diego, Dec. 2006, pages
2364-2369, 2006. IEEE. [PDF] Keyword(s): biological adaptation, internal model principle.
Abstract: This conference paper presented a version of an approximate internal model principle for linear systems. A subsequent paper at the IFAC 2008 conference improved on this result by extending it to a class of nonlinear systems.
18. D. Angeli and E.D. Sontag. A note on monotone systems with positive translation invariance. In Control and Automation, 2006. MED '06. 14th Mediterranean Conference on, 28-30 June 2006, pages 1-6,
2006. IEEE. Note: Available from ieeexplore.ieee.org. [PDF] [doi:10.1109/MED.2006.328782] Keyword(s): systems biology, biochemical networks, nonlinear stability, dynamical systems,
monotone systems.
Abstract: Strongly monotone systems of ordinary differential equations which have a certain translation-invariance property are shown to have the property that all projected solutions converge to a unique equilibrium. This result may be seen as a dual of a well-known theorem of Mierczynski for systems that satisfy a conservation law. As an application, it is shown that enzymatic futile cycles have a global convergence property.
19. D. Angeli, P. de Leenheer, and E.D. Sontag. On the structural monotonicity of chemical reaction networks. In Proc. IEEE Conf. Decision and Control, San Diego, Dec. 2006, pages 7-12, 2006. IEEE. [
PDF] Keyword(s): monotone systems, systems biology, biochemical networks, nonlinear stability, dynamical systems.
Abstract: This paper derives new results for certain classes of chemical reaction networks, linking structural to dynamical properties. In particular, it investigates their monotonicity and convergence without making assumptions on the structure (e.g., mass-action kinetics) of the dynamical equations involved, and relying only on stoichiometric constraints. The key idea is to find a suitable set of coordinates under which the resulting system is cooperative. As a simple example, the paper shows that a phosphorylation/dephosphorylation process, which is involved in many signaling cascades, has a global stability property.
20. M. Arcak and E.D. Sontag. Connections between diagonal stability and the secant condition for cyclic systems. In Proc. American Control Conference, Minneapolis, June 2006, pages 1493-1498, 2006.
Keyword(s): systems biology, biochemical networks, cyclic feedback systems, secant condition, nonlinear stability, dynamical systems.
21. M. Chaves, E.D. Sontag, and R. Albert. Structure and timescale analysis in genetic regulatory networks. In Proc. IEEE Conf. Decision and Control, San Diego, Dec. 2006, pages 2358-2363, 2006.
IEEE. [PDF] Keyword(s): genetic regulatory networks, Boolean systems, hybrid systems.
Abstract: This work is concerned with the study of the robustness and fragility of gene regulation networks to variability in the timescales of the distinct biological processes involved. It explores and compares two methods: introducing asynchronous updates in a Boolean model, or integrating the Boolean rules in a continuous, piecewise linear model. As an example, the segment polarity network of the fruit fly is analyzed. A theoretical characterization is given of the model's ability to predict the correct development of the segmented embryo, in terms of the specific timescales of the various regulation interactions.
22. L. Wang and E.D. Sontag. A remark on singular perturbations of strongly monotone systems. In Proc. IEEE Conf. Decision and Control, San Diego, Dec. 2006, pages 989-994, 2006. IEEE. [PDF] Keyword
(s): systems biology, biochemical networks, nonlinear stability, dynamical systems, singular perturbations, monotone systems.
Abstract: This paper deals with global convergence to equilibria, and in particular Hirsch's generic convergence theorem for strongly monotone systems, for singular perturbations of monotone systems.
23. L. Wang and E.D. Sontag. Almost global convergence in singular perturbations of strongly monotone systems. In C. Commault and N. Marchand, editors, Positive Systems, pages 415-422, 2006.
Springer-Verlag, Berlin/Heidelberg. Note: (Lecture Notes in Control and Information Sciences Volume 341, Proceedings of the second Multidisciplinary International Symposium on Positive Systems:
Theory and Applications (POSTA 06) Grenoble, France). [PDF] [doi:10.1007/3-540-34774-7] Keyword(s): systems biology, biochemical networks, nonlinear stability, dynamical systems, singular
perturbations, monotone systems.
Abstract: This paper deals with global convergence to equilibria, and in particular Hirsch's generic convergence theorem for strongly monotone systems, for singular perturbations of monotone systems.
1. M. Andrec, B.N. Kholodenko, R.M. Levy, and E.D. Sontag. Inference of signaling and gene regulatory networks by steady-state perturbation experiments: structure and accuracy. J. Theoret. Biol.,
232(3):427-441, 2005. Note: Supplementary materials are here: http://sontaglab.org/FTPDIR/andrec-kholodenko-levy-sontag-JTB04-supplementary.pdf. [PDF] Keyword(s): systems biology, biochemical networks, gene and protein networks, protein networks, gene networks, systems identification, reverse engineering, modular response analysis.
Abstract: One of the fundamental problems of cell biology is the understanding of complex regulatory networks. Such networks are ubiquitous in cells, and knowledge of their properties is essential for the understanding of cellular behavior. This paper studies the effect of experimental uncertainty on the accuracy of the inferred structure of the networks determined using the method in "Untangling the wires: a novel strategy to trace functional interactions in signaling and gene networks".
2. M. Chaves, R. Albert, and E.D. Sontag. Robustness and fragility of Boolean models for genetic regulatory networks. J. Theoret. Biol., 235(3):431-449, 2005. [PDF] Keyword(s): systems biology,
biochemical networks, Boolean systems, gene and protein networks.
Abstract: Interactions between genes and gene products give rise to complex circuits that enable cells to process information and respond to external signals. Theoretical studies often describe these interactions using continuous, stochastic, or logical approaches. Here we propose a framework for gene regulatory networks that combines the intuitive appeal of a qualitative description of gene states with a high flexibility in incorporating stochasticity in the duration of cellular processes. We apply our methods to the regulatory network of the segment polarity genes, thus gaining novel insights into the development of gene expression patterns. For example, we show that very short synthesis and decay times can perturb the wild type pattern. On the other hand, separation of timescales between pre- and post-translational processes and a minimal prepattern ensure convergence to the wild type expression pattern regardless of fluctuations.
3. G.A. Enciso and E.D. Sontag. Monotone systems under positive feedback: multistability and a reduction theorem. Systems Control Lett., 54(2):159-168, 2005. [PDF] Keyword(s): multistability,
systems biology, biochemical networks, nonlinear stability, dynamical systems, monotone systems.
Abstract: For feedback loops involving single input, single output monotone systems with well-defined I/O characteristics, a previous paper provided an approach to determining the location and stability of steady states. A result on global convergence for multistable systems followed as a consequence of the technique. The present paper extends the approach to multiple inputs and outputs. A key idea is the introduction of a reduced system which preserves local stability properties. New results characterizing strong monotonicity of feedback loops involving cascades are also presented.
4. J.P. Hespanha, D. Liberzon, D. Angeli, and E.D. Sontag. Nonlinear norm-observability notions and stability of switched systems. IEEE Trans. Automat. Control, 50(2):154-168, 2005. [PDF] Keyword(s): observability, input to state stability, invariance principle.
Abstract: This paper proposes several definitions of observability for nonlinear systems and explores relationships among them. These observability properties involve the existence of a bound on the norm of the state in terms of the norms of the output and the input on some time interval. A Lyapunov-like sufficient condition for observability is also obtained. As an application, we prove several variants of LaSalle's stability theorem for switched nonlinear systems. These results are demonstrated to be useful for control design in the presence of switching as well as for developing stability results of Popov type for switched feedback systems.
5. J. L. Mancilla-Aguilar, R. García, E.D. Sontag, and Y. Wang. On the representation of switched systems with inputs by perturbed control systems. Nonlinear Anal., 60(6):1111-1150, 2005. [PDF]
Abstract: This paper provides representations of switched systems described by controlled differential inclusions, in terms of perturbed control systems. The control systems have dynamics given by differential equations, and their inputs consist of the original controls together with disturbances that evolve in compact sets; their sets of maximal trajectories contain, as a dense subset, the set of maximal trajectories of the original system. Several applications to control theory, dealing with properties of stability with respect to inputs and of detectability, are derived as a consequence of the representation theorem.
6. J. L. Mancilla-Aguilar, R. García, E.D. Sontag, and Y. Wang. Uniform stability properties of switched systems with switchings governed by digraphs. Nonlinear Anal., 63(3):472-490, 2005. [PDF]
Abstract: This paper develops characterizations of various uniform stability properties of switched systems described by differential inclusions, and whose switchings are governed by a digraph. These characterizations are given in terms of stability properties of the system with restricted switchings and also in terms of Lyapunov functions.
7. E.D. Sontag. Molecular systems biology and control. Eur. J. Control, 11(4-5):396-435, 2005. [PDF] Keyword(s): cell biology, molecular biology, systems biology, biochemical networks, nonlinear stability, dynamical systems, monotone systems, cellular signaling.
Abstract: This paper, prepared for a tutorial at the 2005 IEEE Conference on Decision and Control, presents an introduction to molecular systems biology and some associated problems in control theory. It provides an introduction to basic biological concepts, describes several questions in dynamics and control that arise in the field, and argues that new theoretical problems arise naturally in this context. A final section focuses on the combined use of graph-theoretic, qualitative knowledge about monotone building-blocks and steady-state step responses for components.
8. P. de Leenheer, D. Angeli, and E.D. Sontag. On predator-prey systems and small-gain theorems. Math. Biosci. Eng., 2(1):25-42, 2005. [PDF] Keyword(s): systems biology, biochemical networks,
nonlinear stability, dynamical systems, monotone systems.
Abstract: This paper deals with an almost global attractivity result for Lotka-Volterra systems with predator-prey interactions. These systems can be written as (negative) feedback systems. The subsystems of the feedback loop are monotone control systems, possessing particular input-output properties. We use a small-gain theorem, adapted to a context of systems with multiple equilibrium points, to obtain the desired almost global attractivity result. It provides sufficient conditions to rule out oscillatory or more complicated behavior which is often observed in predator-prey systems.
9. G.A. Enciso and E.D. Sontag. A remark on multistability for monotone systems II. In Proc. IEEE Conf. Decision and Control, Seville, Dec. 2005, IEEE Publications, pages 2957-2962, 2005. Keyword
(s): multistability, systems biology, biochemical networks, nonlinear stability, dynamical systems, monotone systems.
10. E.D. Sontag. A notion of passivity gain and a generalization of the `secant condition' for stability. In Proc. IEEE Conf. Decision and Control, Seville, Dec. 2005, IEEE Publications, pages
5645-5649, 2005. Keyword(s): nonlinear stability, dynamical systems.
11. E.D. Sontag and M. Chaves. Computation of amplification for systems arising from cellular signaling pathways. In Proc. 16th IFAC World Congress, Prague, July 2005, 2005. Keyword(s): systems
biology, biochemical networks, dynamical systems.
1. D. Angeli and E.D. Sontag. Interconnections of monotone systems with steady-state characteristics. In Optimal control, stabilization and nonsmooth analysis, volume 301 of Lecture Notes in Control
and Inform. Sci., pages 135-154. Springer, Berlin, 2004. [PDF] Keyword(s): systems biology, biochemical networks, nonlinear stability, dynamical systems, monotone systems.
Abstract: One of the key ideas in control theory is that of viewing a complex dynamical system as an interconnection of simpler subsystems, thus deriving conclusions regarding the complete system from properties of its building blocks. Following this paradigm, and motivated by questions in molecular biology modeling, the authors have recently developed an approach based on components which are monotone systems with respect to partial orders in state and signal spaces. This paper presents a brief exposition of recent results, with an emphasis on small-gain theorems for negative feedback, and the emergence of multistability and associated hysteresis effects under positive feedback.
2. M. Malisoff and E.D. Sontag. Asymptotic controllability and input-to-state stabilization: the effect of actuator errors. In Optimal control, stabilization and nonsmooth analysis, volume 301 of
Lecture Notes in Control and Inform. Sci., pages 155-171. Springer, Berlin, 2004. [PDF] Keyword(s): input to state stability, control-Lyapunov functions, nonlinear control, feedback
stabilization, ISS.
Abstract: We discuss several issues related to the stabilizability of nonlinear systems. First, for continuously stabilizable systems, we review constructions of feedbacks that render the system input-to-state stable with respect to actuator errors. Then, we discuss a recent paper which provides a new feedback design that makes globally asymptotically controllable systems input-to-state stable to actuator errors and small observation noise. We illustrate our constructions using the nonholonomic integrator, and discuss a related feedback design for systems with disturbances.
3. D. Angeli, J. E. Ferrell, and E.D. Sontag. Detection of multistability, bifurcations, and hysteresis in a large class of biological positive-feedback systems. Proc Natl Acad Sci USA, 101
(7):1822-1827, 2004. Note: A revision of Suppl. Fig. 7(b) is here: http://sontaglab.org/FTPDIR/nullclines-f-g-REV.jpg; and typos can be found here: http://sontaglab.org/FTPDIR/
angeli-ferrell-sontag-pnas04-errata.txt. [WWW] [PDF] [doi:10.1073/pnas.0308265100] Keyword(s): MAPK cascades, multistability, systems biology, biochemical networks, nonlinear stability, dynamical
systems, monotone systems.
Abstract: Multistability is an important recurring theme in cell signaling, of particular relevance to biological systems that switch between discrete states, generate oscillatory responses, or "remember" transitory stimuli. Standard mathematical methods allow the detection of bistability in some very simple feedback systems (systems with one or two proteins or genes that either activate each other or inhibit each other), but realistic depictions of signal transduction networks are invariably much more complex than this. Here we show that for a class of feedback systems of arbitrary order, the stability properties of the system can be deduced mathematically from how the system behaves when feedback is blocked. Provided that this "open loop," feedback-blocked system is monotone and possesses a sigmoidal characteristic, the system is guaranteed to be bistable for some range of feedback strengths. We present a simple graphical method for deducing the stability behavior and bifurcation diagrams for such systems, and illustrate the method with two examples taken from recent experimental studies of bistable systems: a two-variable Cdc2/Wee1 system and a more complicated five-variable MAPK cascade.
4. D. Angeli, B.P. Ingalls, E.D. Sontag, and Y. Wang. Separation principles for input-output and integral-input-to-state stability. SIAM J. Control Optim., 43(1):256-276, 2004. [PDF] [doi:http://
dx.doi.org/10.1137/S0363012902419047] Keyword(s): input to state stability, integral input to state stability, iISS, ISS, input to output stability.
Abstract: We present new characterizations of input-output-to-state stability. This is a notion of detectability formulated in the ISS framework. Equivalent properties are presented in terms of asymptotic estimates of the state trajectories based on the magnitudes of the external input and output signals. These results provide a set of "separation principles" for input-output-to-state stability: characterizations of the property in terms of weaker stability notions. When applied to the closely related notion of integral ISS, these characterizations yield analogous results.
5. D. Angeli, B.P. Ingalls, E.D. Sontag, and Y. Wang. Uniform global asymptotic stability of differential inclusions. J. Dynam. Control Systems, 10(3):391-412, 2004. [PDF] [doi:http://dx.doi.org/
10.1023/B:JODS.0000034437.54937.7f] Keyword(s): differential inclusions.
Abstract: The stability of differential inclusions defined by locally Lipschitz compact valued maps is addressed. It is shown that if such a differential inclusion is globally asymptotically stable, then in fact it is uniformly globally asymptotically stable (with respect to initial states in compacts). This statement is trivial for differential equations, but here we provide the extension to compact (not necessarily convex) valued differential inclusions. The main result is presented in a context which is useful for control-theoretic applications: a differential inclusion with two outputs is considered, and the result applies to the property of global error detectability.
6. D. Angeli and E.D. Sontag. Multi-stability in monotone input/output systems. Systems Control Lett., 51(3-4):185-202, 2004. [PDF] Keyword(s): multistability, systems biology, biochemical networks,
nonlinear stability, dynamical systems, monotone systems.
Abstract: This paper studies the emergence of multistability and hysteresis in those systems that arise, under positive feedback, from monotone systems with well-defined steady-state responses. Such feedback configurations appear routinely in several fields of application, and especially in biology. The results are stated in terms of directly checkable conditions which do not involve explicit knowledge of the basins of attraction of each equilibrium.
7. D. Angeli, P. de Leenheer, and E.D. Sontag. A small-gain theorem for almost global convergence of monotone systems. Systems Control Lett., 52(5):407-414, 2004. [PDF] Keyword(s): systems biology,
biochemical networks, nonlinear stability, dynamical systems, monotone systems.
Abstract: A small-gain theorem is presented for almost global stability of monotone control systems which are open-loop almost globally stable, when constant inputs are applied. The theorem assumes "negative feedback" interconnections. This typically destroys the monotonicity of the original flow and potentially destabilizes the resulting closed-loop system.
8. M. Chaves, R.J. Dinerstein, and E.D. Sontag. Optimal length and signal amplification in weakly activated signal transduction cascades. J. Physical Chemistry, 108:15311-15320, 2004. [PDF] Keyword
(s): systems biology, biochemical networks, dynamical systems.
Abstract: Weakly activated signaling cascades can be modeled as linear systems. The input-to-output transfer function and the internal gain of a linear system provide natural measures for the propagation of the input signal down the cascade and for the characterization of the final outcome. The most efficient design of a cascade for generating sharp signals is obtained by choosing all the off rates equal, and a "universal" finite optimal length.
9. M. Chaves, E.D. Sontag, and R. J. Dinerstein. Steady-states of receptor-ligand dynamics: A theoretical framework. J. Theoret. Biol., 227(3):413-428, 2004. [PDF] Keyword(s): zero-deficiency
networks, systems biology, biochemical networks, receptor-ligand models, dynamical systems.
Abstract: This paper studies aspects of the dynamics of a conventional mechanism of ligand-receptor interactions, with a focus on the stability and location of steady-states. A theoretical framework is developed, and, as an application, a minimal parametrization is provided for models for two- or multi-state receptor interaction with ligand. In addition, an "affinity quotient" is introduced, which allows an elegant classification of ligands into agonists, neutral agonists, and inverse agonists.
10. G.A. Enciso and E.D. Sontag. On the stability of a model of testosterone dynamics. J. Math. Biol., 49(6):627-634, 2004. [PDF] Keyword(s): systems biology, biochemical networks, nonlinear
stability, dynamical systems, monotone systems, delay-differential systems.
Abstract: We prove the global asymptotic stability of a well-known delayed negative-feedback model of testosterone dynamics, which has been proposed as a model of oscillatory behavior. We establish stability (and hence the impossibility of oscillations) even in the presence of delays of arbitrary length.
11. P. Kuusela, D. Ocone, and E.D. Sontag. Learning Complexity Dimensions for a Continuous-Time Control System. SIAM J. Control Optim., 43(3):872-898, 2004. [PDF] [doi:http://dx.doi.org/10.1137/
S0363012901384302] Keyword(s): machine learning, theory of computing and complexity, VC dimension, neural networks.
Abstract: This paper takes a computational learning theory approach to a problem of linear systems identification. It is assumed that input signals have only a finite number k of frequency components, and systems to be identified have dimension no greater than n. The main result establishes that the sample complexity needed for identification scales polynomially with n and logarithmically with k.
12. M. Malisoff, L. Rifford, and E.D. Sontag. Global Asymptotic Controllability Implies Input-to-State Stabilization. SIAM J. Control Optim., 42(6):2221-2238, 2004. [PDF] [doi:http://dx.doi.org/
10.1137/S0363012903422333] Keyword(s): input to state stability, control-Lyapunov functions, nonlinear control, feedback stabilization.
Abstract: The main problem addressed in this paper is the design of feedbacks for globally asymptotically controllable (GAC) control affine systems that render the closed loop systems input to state stable with respect to actuator errors. Extensions for fully nonlinear GAC systems with actuator errors are also discussed. Our controllers have the property that they tolerate small observation noise as well.
13. E.D. Sontag. Some new directions in control theory inspired by systems biology. IET Systems Biology, 1:9-18, 2004. [PDF] Keyword(s): systems biology, biochemical networks, nonlinear stability,
dynamical systems, monotone systems, cellular signaling.
Abstract: This paper, addressed primarily to engineers and mathematicians with an interest in control theory, argues that entirely new theoretical problems arise naturally when addressing questions in the field of systems biology. Examples from the author's recent work are used to illustrate this point.
14. E.D. Sontag, A. Kiyatkin, and B.N. Kholodenko. Inferring dynamic architecture of cellular networks using time series of gene expression, protein and metabolite data. Bioinformatics, 20
(12):1877-1886, 2004. Note: Supplementary materials are here: http://sontaglab.org/FTPDIR/sontag-kiyatkin-kholodenko-informatics04-supplement.pdf. [PDF] [doi:http://dx.doi.org/10.1093/
bioinformatics/bth173] Keyword(s): modular response analysis, systems biology, biochemical networks, reverse engineering, gene and protein networks, protein networks, gene networks, systems identification.
Abstract: High-throughput technologies have facilitated the acquisition of large genomics and proteomics data sets. However, these data provide snapshots of cellular behavior, rather than help us reveal causal relations. Here, we propose how these technologies can be utilized to infer the topology and strengths of connections among genes, proteins, and metabolites by monitoring time-dependent responses of cellular networks to experimental interventions. We show that all connections leading to a given network node, e.g., to a particular gene, can be deduced from responses to perturbations none of which directly influences that node, e.g., using strains with knock-outs to other genes. To infer all interactions from stationary data, each node should be perturbed separately or in combination with other nodes. Monitoring time series provides richer information and does not require perturbations to all nodes.
15. D. Angeli and E.D. Sontag. An analysis of a circadian model using the small-gain approach to monotone systems. In Proc. IEEE Conf. Decision and Control, Paradise Island, Bahamas, Dec. 2004, IEEE
Publications, pages 575-578, 2004. [PDF] Keyword(s): circadian rhythms, tridiagonal systems, nonlinear dynamics, systems biology, biochemical networks, oscillations, periodic behavior, monotone
systems, delay-differential systems.
Abstract: We show how certain properties of Goldbeter's original 1995 model for circadian oscillations can be proved mathematically. We establish global asymptotic stability, and in particular no oscillations, if the rate of transcription is somewhat smaller than that assumed by Goldbeter, but, on the other hand, this stability persists even under arbitrary delays in the feedback loop. We are mainly interested in illustrating certain mathematical techniques, including the use of theorems concerning tridiagonal cooperative systems and the recently developed theory of monotone systems with inputs and outputs.
16. D. Angeli, P. de Leenheer, and E.D. Sontag. A tutorial on monotone systems, with an application to chemical reaction networks. In Proc. 16th Int. Symp. Mathematical Theory of Networks and Systems
(MTNS 2004), CD-ROM, WP9.1, Katholieke Universiteit Leuven, 2004. [PDF] Keyword(s): systems biology, biochemical networks, nonlinear stability, dynamical systems, monotone systems.
Abstract: Monotone systems are dynamical systems for which the flow preserves a partial order. Some applications will be briefly reviewed in this paper. Much of the appeal of the class of monotone systems stems from the fact that, roughly, most solutions converge to the set of equilibria. However, this usually requires a stronger monotonicity property which is not always satisfied or easy to check in applications. Following work of J.F. Jiang, we show that monotonicity is enough to conclude global attractivity if there is a unique equilibrium and if the state space satisfies a particular condition. The proof given here is self-contained and does not require the use of any of the results from the theory of monotone systems. We illustrate it on a class of chemical reaction networks with monotone, but otherwise arbitrary, reaction kinetics.
17. D. Angeli, P. de Leenheer, and E.D. Sontag. Remarks on monotonicity and convergence in chemical reaction networks. In Proc. IEEE Conf. Decision and Control, Paradise Island, Bahamas, Dec. 2004,
IEEE Publications, pages 243-248, 2004. Keyword(s): systems biology, biochemical networks, nonlinear stability, dynamical systems, monotone systems.
18. M. Chaves, E.D. Sontag, and R.J. Dinerstein. Gains and optimal design in signaling pathways. In Proc. IEEE Conf. Decision and Control, Paradise Island, Bahamas, Dec. 2004, IEEE Publications,
pages 596-601, 2004. Keyword(s): systems biology, biochemical networks, dynamical systems.
19. B. DasGupta, J.P. Hespanha, and E.D. Sontag. Aggregation-based approaches to honey-pot searching with local sensory information. In Proceedings American Control Conf., Boston, June 2004, 2004.
Note: (CD-ROM WeM17.4, IEEE Publications, Piscataway). [PDF]
Abstract: We investigate the problem of searching for a hidden target in a bounded region by an autonomous agent that is only able to use limited local sensory information. We propose an aggregation-based approach to solve this problem, in which the continuous search space is partitioned into a finite collection of regions on which we define a discrete search problem. A solution to the original problem is then obtained through a refinement procedure that lifts the discrete path into a continuous one. The resulting solution is in general not optimal, but one can construct bounds to gauge the cost penalty incurred.
20. B. DasGupta, J.P. Hespanha, and E.D. Sontag. Computational complexities of honey-pot searching with local sensory information. In Proceedings American Control Conf., Boston, June 2004, CD-ROM,
ThA06.1, IEEE Publications, Piscataway, 2004. [PDF]
Abstract: In this paper we investigate the problem of searching for a hidden target in a bounded region of the plane, by an autonomous robot which is only able to use limited local sensory information. We formalize a discrete version of the problem as a "reward-collecting" path problem and provide efficient approximation algorithms for various cases.
21. G.A. Enciso and E.D. Sontag. A remark on multistability for monotone systems. In Proc. IEEE Conf. Decision and Control, Paradise Island, Bahamas, Dec. 2004, IEEE Publications, pages 249-254,
2004. Keyword(s): multistability, systems biology, biochemical networks, nonlinear stability, dynamical systems, monotone systems.
22. J.L. Mancilla-Aguilar, R. García, E.D. Sontag, and Y. Wang. Representation of switched systems by perturbed control systems. In Proc. IEEE Conf. Decision and Control, Paradise Island, Bahamas,
Dec. 2004, IEEE Publications, pages 3259-3264, 2004.
1. P. de Leenheer, D. Angeli, and E.D. Sontag. A feedback perspective for chemostat models with crowding effects. In Positive systems (Rome, 2003), volume 294 of Lecture Notes in Control and Inform.
Sci., pages 167-174. Springer, Berlin, 2003. Keyword(s): systems biology, biochemical networks, nonlinear stability, dynamical systems, monotone systems.
2. P. de Leenheer, D. Angeli, and E.D. Sontag. Small-gain theorems for predator-prey systems. In Positive systems (Rome, 2003), volume 294 of Lecture Notes in Control and Inform. Sci., pages
191-198. Springer, Berlin, 2003. Keyword(s): systems biology, biochemical networks, nonlinear stability, dynamical systems, monotone systems.
3. D. Angeli and E.D. Sontag. Monotone control systems. IEEE Trans. Automat. Control, 48(10):1684-1698, 2003. Note: Errata are here: http://sontaglab.org/FTPDIR/
angeli-sontag-monotone-TAC03-typos.txt. [PDF] Keyword(s): MAPK cascades, systems biology, biochemical networks, nonlinear stability, dynamical systems, monotone systems.
Abstract: Monotone systems constitute one of the most important classes of dynamical systems used in mathematical biology modeling. The objective of this paper is to extend the notion of monotonicity to systems with inputs and outputs, a necessary first step in trying to understand interconnections, especially including feedback loops, built up out of monotone components. Basic definitions and theorems are provided, as well as an application to the study of a model of one of the cell's most important subsystems.
4. D. Angeli, E.D. Sontag, and Y. Wang. Input-to-state stability with respect to inputs and their derivatives. Internat. J. Robust Nonlinear Control, 13(11):1035-1056, 2003. [PDF] Keyword(s): input to state stability, ISS.
Abstract: A new notion of input-to-state stability involving infinity norms of input derivatives up to a finite order k is introduced and characterized. An example shows that this notion of stability is indeed weaker than the usual ISS. Applications to the study of global asymptotic stability of cascaded nonlinear systems are discussed.
5. M. Chyba, N. E. Leonard, and E.D. Sontag. Singular trajectories in multi-input time-optimal problems: Application to controlled mechanical systems. Journal of Dynamical and Control Systems, 9
(1):103-129, 2003. [PDF] [doi:http://dx.doi.org/10.1023/A:1022159318457] Keyword(s): optimal control.
Abstract: This paper addresses the time-optimal control problem for a class of control systems which includes controlled mechanical systems with possible dissipation terms. The Lie algebras associated with such mechanical systems enjoy certain special properties. These properties are explored and are used in conjunction with the Pontryagin maximum principle to determine the structure of singular extremals and, in particular, time-optimal trajectories. The theory is illustrated with an application to a time-optimal problem for a class of underwater vehicles.
6. B.P. Ingalls, E.D. Sontag, and Y. Wang. An infinite-time relaxation theorem for differential inclusions. Proc. Amer. Math. Soc., 131(2):487-499, 2003. [PDF]
Abstract: The fundamental relaxation result for Lipschitz differential inclusions is the Filippov-Wazewski Relaxation Theorem, which provides approximations of trajectories of a relaxed inclusion on finite intervals. A complementary result is presented, which provides approximations on infinite intervals, but does not guarantee that the approximation and the reference trajectory satisfy the same initial condition.
7. L. Moreau and E.D. Sontag. Balancing at the border of instability. Phys. Rev. E (3), 68(2):020901, 4, 2003. [PDF] Keyword(s): bifurcations, adaptive control.
Abstract: Some biological systems operate at the critical point between stability and instability, and this requires a fine-tuning of parameters. We bring together two examples from the literature that illustrate this: neural integration in the nervous system and hair cell oscillations in the auditory system. In both examples the question arises as to how the required fine-tuning may be achieved and maintained in a robust and reliable way. We study this question using tools from nonlinear and adaptive control theory. We illustrate our approach on a simple model which captures some of the essential features of neural integration. As a result, we propose a large class of feedback adaptation rules that may be responsible for the experimentally observed robustness of neural integration. We mention extensions of our approach to the case of hair cell oscillations in the ear.
8. L. Moreau, E.D. Sontag, and M. Arcak. Feedback tuning of bifurcations. Systems Control Lett., 50(3):229-239, 2003. [PDF] Keyword(s): bifurcations, adaptive control.
Abstract: This paper studies a feedback regulation problem that arises in at least two different biological applications. The feedback regulation problem under consideration may be interpreted as an adaptive control problem for tuning bifurcation parameters, and it has not been studied in the control literature. The goal of the paper is to formulate this problem and to present some preliminary results.
9. J. R. Pomerening, E.D. Sontag, and J. E. Ferrell. Building a cell cycle oscillator: hysteresis and bistability in the activation of Cdc2. Nature Cell Biology, 5(4):346-351, 2003. Note:
Supplementary materials 2-4 are here: http://sontaglab.org/FTPDIR/pomerening-sontag-ferrell-additional.pdf. [WWW] [PDF] [doi:10.1038/ncb954] Keyword(s): systems biology, biochemical networks,
oscillations, nonlinear stability, dynamical systems, monotone systems.
Abstract: In the early embryonic cell cycle, Cdc2-cyclin B functions like an autonomous oscillator, at whose core is a negative feedback loop: cyclins accumulate and produce active mitotic Cdc2-cyclin B; Cdc2 activates the anaphase-promoting complex (APC); the APC then promotes cyclin degradation and resets Cdc2 to its inactive, interphase state. Cdc2 regulation also involves positive feedback, with active Cdc2-cyclin B stimulating its activator Cdc25 and inactivating its inhibitors Wee1 and Myt1. Under the correct circumstances, these positive feedback loops could function as a bistable trigger for mitosis, and oscillators with bistable triggers may be particularly relevant to biological applications such as cell cycle regulation. This paper examined whether Cdc2 activation is bistable, confirming that the response of Cdc2 to non-degradable cyclin B is temporally abrupt and switchlike, as would be expected if Cdc2 activation were bistable. It is also shown that Cdc2 activation exhibits hysteresis, a property of bistable systems with particular relevance to biochemical oscillators. These findings help establish the basic systems-level logic of the mitotic oscillator.
10. E.D. Sontag. A remark on the converging-input converging-state property. IEEE Trans. Automat. Control, 48(2):313-314, 2003. [PDF]
Abstract: Suppose that an equilibrium is asymptotically stable when external inputs vanish. Then, every bounded trajectory which corresponds to a control which approaches zero and which lies in the domain of attraction of the unforced system must also converge to the equilibrium. This "well-known" but hard-to-cite fact is proved and slightly generalized here.
11. E.D. Sontag. Adaptation and regulation with signal detection implies internal model. Systems Control Lett., 50(2):119-126, 2003. [PDF] Keyword(s): biological adaptation, internal model principle.
Abstract: This note provides a simple result showing, under suitable technical assumptions, that if a system S adapts to a class of external signals U, then S must necessarily contain a subsystem which is capable of generating all the signals in U. It is not assumed that regulation is robust, nor is there a prior requirement for the system to be partitioned into separate plant and controller components. Instead, a "signal detection" capability is imposed. These weaker assumptions make the result more readily applicable to cellular phenomena such as the adaptation of the E. coli chemotactic tumbling rate to constant concentrations.
12. E.D. Sontag and M. Krichman. An example of a GAS system which can be destabilized by an integrable perturbation. IEEE Trans. Automat. Control, 48(6):1046-1049, 2003. [PDF] Keyword(s): nonlinear
Abstract: A construction is given of a globally asymptotically stable time-invariant system which can be destabilized by some integrable perturbation. Besides its intrinsic interest, this serves to provide counterexamples to an open question regarding Lyapunov functions.
13. D. Angeli and E.D. Sontag. A note on multistability and monotone I/O systems. In Proc. IEEE Conf. Decision and Control, Maui, Dec. 2003, IEEE Publications, 2003, pages 67-72, 2003. Keyword(s):
systems biology, biochemical networks, nonlinear stability, dynamical systems, monotone systems.
14. M. Malisoff, L. Rifford, and E.D. Sontag. Remarks on input to state stabilization. In Proc. IEEE Conf. Decision and Control, Maui, Dec. 2003, IEEE Publications, 2003, pages 1053-1058, 2003. [PDF]
Keyword(s): nonlinear control, feedback stabilization.
15. L. Moreau, E.D. Sontag, and M. Arcak. How feedback can tune a bifurcation parameter towards its unknown critical bifurcation value. In Proc. IEEE Conf. Decision and Control, Maui, Dec. 2003, IEEE
Publications, 2003, pages 2401-2406, 2003.
1. M. Arcak, D. Angeli, and E.D. Sontag. A unifying integral ISS framework for stability of nonlinear cascades. SIAM J. Control Optim., 40(6):1888-1904, 2002. [PDF] [doi:http://dx.doi.org/10.1137/
S0363012901387987] Keyword(s): input to state stability, integral input to state stability, iISS, ISS.
Abstract: We analyze nonlinear cascades in which the driven subsystem is integral ISS, and characterize the admissible integral ISS gains for stability. This characterization makes use of the convergence speed of the driving subsystem, and allows a larger class of gain functions when the convergence is faster. We show that our integral ISS gain characterization unifies different approaches in the literature which restrict the nonlinear growth of the driven subsystem and the convergence speed of the driving subsystem.
2. M. Chaves and E.D. Sontag. State-Estimators for chemical reaction networks of Feinberg-Horn-Jackson zero deficiency type. European J. Control, 8:343-359, 2002. [PDF] Keyword(s): observability,
zero-deficiency networks, systems biology, biochemical networks, observers, nonlinear stability, dynamical systems.
Abstract: This paper provides a necessary and sufficient condition for detectability, and an explicit construction of observers when this condition is satisfied, for chemical reaction networks of the Feinberg-Horn-Jackson zero deficiency type.
3. B.N. Kholodenko, A. Kiyatkin, F.J. Bruggeman, E.D. Sontag, H.V. Westerhoff, and J. Hoek. Untangling the wires: a novel strategy to trace functional interactions in signaling and gene networks.
Proceedings of the National Academy of Sciences USA, 99:12841-12846, 2002. [PDF] Keyword(s): modular response analysis, MAPK cascades, systems biology, biochemical networks, reverse engineering,
gene and protein networks, protein networks, gene networks, systems identification.
Abstract: Emerging technologies have enabled the acquisition of large genomics and proteomics data sets. This paper proposes a novel quantitative method for determining functional interactions in cellular signaling and gene networks. It can be used to explore cell systems at a mechanistic level, or applied within a modular framework, which dramatically decreases the number of variables to be assayed. The topology and strength of network connections are retrieved from experimentally measured network responses to successive perturbations of all modules. In addition, the method can reveal functional interactions even when the components of the system are not all known, in which case some connections retrieved by the analysis will not be direct but correspond to the interaction routes through unidentified elements. The method is tested and illustrated using computer-generated responses of a modeled MAPK cascade and gene network.
4. M. Krichman and E.D. Sontag. Characterizations of detectability notions in terms of discontinuous dissipation functions. Internat. J. Control, 75(12):882-900, 2002. [PDF] Keyword(s): input to state stability, detectability, input to output stability.
Abstract: We consider a new Lyapunov-type characterization of detectability for nonlinear systems without controls, in terms of lower-semicontinuous (not necessarily smooth, or even continuous) dissipation functions, and prove its equivalence to the GASMO (global asymptotic stability modulo outputs) and UOSS (uniform output-to-state stability) properties studied in previous work. The result is then extended to provide a construction of a discontinuous dissipation function characterization of the IOSS (input-output-to-state stability) property for systems with controls. This paper complements a recent result on smooth Lyapunov characterizations of IOSS. The utility of non-smooth Lyapunov characterizations is illustrated by application to a well-known transistor network example.
5. D. Liberzon, A. S. Morse, and E.D. Sontag. Output-input stability and minimum-phase nonlinear systems. IEEE Trans. Automat. Control, 47(3):422-436, 2002. [PDF] Keyword(s): input to state
stability, detectability, minimum-phase systems, ISS, nonlinear control, minimum phase, adaptive control.
Abstract: This paper introduces and studies a new definition of the minimum-phase property for general smooth nonlinear control systems. The definition does not rely on a particular choice of coordinates in which the system takes a normal form or on the computation of zero dynamics. In the spirit of the "input-to-state stability" philosophy, it requires the state and the input of the system to be bounded by a suitable function of the output and derivatives of the output, modulo a decaying term depending on initial conditions. The class of minimum-phase systems thus defined includes all affine systems in global normal form whose internal dynamics are input-to-state stable and also all left-invertible linear systems whose transmission zeros have negative real parts. As an application, we explain how the new concept enables one to develop a natural extension to nonlinear systems of a basic result from linear adaptive control.
6. D. Liberzon, E.D. Sontag, and Y. Wang. Universal construction of feedback laws achieving ISS and integral-ISS disturbance attenuation. Systems Control Lett., 46(2):111-127, 2002. Note: Errata
here: http://sontaglab.org/FTPDIR/iiss-clf-errata.pdf. [PDF] Keyword(s): input to state stability, integral input to state stability, ISS, iISS, nonlinear control, feedback stabilization.
Abstract: We study nonlinear systems with both control and disturbance inputs. The main problem addressed in the paper is design of state feedback control laws that render the closed-loop system integral-input-to-state stable (iISS) with respect to the disturbances. We introduce an appropriate concept of control Lyapunov function (iISS-CLF), whose existence leads to an explicit construction of such a control law. The same method applies to the problem of input-to-state stabilization. Converse results and techniques for generating iISS-CLFs are also discussed.
7. E.D. Sontag. Asymptotic amplitudes and Cauchy gains: A small-gain principle and an application to inhibitory biological feedback. Systems Control Lett., 47(2):167-179, 2002. [PDF] Keyword(s):
MAPK cascades, cyclic feedback systems, small-gain.
Abstract: The notions of asymptotic amplitude for signals, and Cauchy gain for input/output systems, and an associated small-gain principle, are introduced. These concepts allow the consideration of systems with multiple, and possibly feedback-dependent, steady states. A Lyapunov-like characterization allows the computation of gains for state-space systems, and the formulation of sufficient conditions ensuring the lack of oscillations and chaotic behaviors in a wide variety of cascades and feedback loops. An application in biology (MAPK signaling) is worked out in detail.
8. E.D. Sontag. Correction to: ``Structure and stability of certain chemical networks and applications to the kinetic proofreading model of T-cell receptor signal transduction'' [IEEE Trans.
Automat. Control 46 (2001), no. 7, 1028--1047; MR1842137 (2002e:92006)]. IEEE Trans. Automat. Control, 47(4):705, 2002. [PDF] Keyword(s): zero-deficiency networks, systems biology, biochemical
networks, nonlinear stability, dynamical systems.
Abstract: Errata for "Structure and stability of certain chemical networks and applications to the kinetic proofreading model of T-cell receptor signal transduction".
9. E.D. Sontag. For differential equations with r parameters, 2r+1 experiments are enough for identification. J. Nonlinear Sci., 12(6):553-583, 2002. [PDF] Keyword(s): identifiability,
observability, systems biology, biochemical networks, parameter identification, real-analytic functions.
Abstract: Given a set of differential equations whose description involves unknown parameters, such as reaction constants in chemical kinetics, and supposing that one may at any time measure the values of some of the variables and possibly apply external inputs to help excite the system, how many experiments are sufficient in order to obtain all the information that is potentially available about the parameters? This paper shows that the best possible answer (assuming exact measurements and real analyticity) is 2r+1 experiments, where r is the number of parameters.
10. E.D. Sontag and B.P. Ingalls. A small-gain theorem with applications to input/output systems, incremental stability, detectability, and interconnections. J. Franklin Inst., 339(2):211-229, 2002.
[PDF] Keyword(s): input to state stability, ISS, Small-Gain Theorem, small gain.
Abstract: A general ISS-type small-gain result is presented. It specializes to a small-gain theorem for ISS operators, and it also recovers the classical statement for ISS systems in state-space form. In addition, we highlight applications to incrementally stable systems, detectable systems, and to interconnections of stable systems.
11. D. Angeli and E.D. Sontag. A remark on monotone control systems. In Proc. IEEE Conf. Decision and Control, Las Vegas, Dec. 2002, IEEE Publications, pages 1876-1881, 2002.
12. J.P. Hespanha, D. Liberzon, and E.D. Sontag. Nonlinear observability and an invariance principle for switched systems. In Proc. IEEE Conf. Decision and Control, Las Vegas, Dec. 2002, IEEE
Publications, pages 4300-4305, 2002. [PDF] Keyword(s): observability.
13. B.P. Ingalls, E.D. Sontag, and Y. Wang. A relaxation theorem for differential inclusions with applications to stability properties. In D. Gilliam and J. Rosenthal, editors, Mathematical Theory of
Networks and Systems, Electronic Proceedings of MTNS-2002 Symposium held at the University of Notre Dame, August 2002, 2002. Note: (12 pages). [PDF]
Abstract: The fundamental Filippov-Wazewski Relaxation Theorem states that the solution set of an initial value problem for a locally Lipschitz inclusion is dense in the solution set of the same initial value problem for the corresponding relaxation inclusion on compact intervals. In a recent paper of ours, a complementary result was provided for inclusions with finite dimensional state spaces which says that the approximation can be carried out over non-compact or infinite intervals provided one does not insist on the same initial values. This note extends the infinite-time relaxation theorem to inclusions whose state spaces are Banach spaces. To illustrate the motivations for studying such approximation results, we briefly discuss a quick application of the result to output stability and uniform output stability properties.
14. B.P. Ingalls, E.D. Sontag, and Y. Wang. Measurement to error stability: a notion of partial detectability for nonlinear systems. In Proc. IEEE Conf. Decision and Control, Las Vegas, Dec. 2002,
IEEE Publications, pages 3946-3951, 2002. [PDF] Keyword(s): input to state stability.
Abstract: For systems whose output is to be kept small (thought of as an error output), the notion of input to output stability (IOS) arises. Alternatively, when considering a system whose output is meant to provide information about the state (i.e. a measurement output), one arrives at the detectability notion of output to state stability (OSS). Combining these concepts, one may consider a system with two types of outputs, an error and a measurement. This leads naturally to a notion of partial detectability which we call measurement to error stability (MES). This property characterizes systems in which the error signal is detectable through the measurement signal. This paper provides a partial Lyapunov characterization of the MES property. A closely related property of stability in three measures (SIT) is introduced, which characterizes systems for which the error decays whenever it dominates the measurement. The SIT property is shown to imply MES, and the two are shown to be equivalent under an additional boundedness assumption. A nonsmooth Lyapunov characterization of the SIT property is provided, which yields the partial characterization of MES. The analysis is carried out on systems described by differential inclusions, implicitly incorporating a disturbance input with compact value-set.
15. E.D. Sontag. Asymptotic amplitudes, Cauchy gains, an associated small-gain principle, and an application to inhibitory biological feedback. In Proc. IEEE Conf. Decision and Control, Las Vegas,
Dec. 2002, IEEE Publications, pages 4318-4323, 2002. Keyword(s): cyclic feedback systems, small-gain.
1. A. C. Antoulas, E. D. Sontag, and Y. Yamamoto. Controllability and Observability, pages 264-281. John Wiley & Sons, Inc., 2001. [WWW] [PDF] [doi:10.1002/047134608X.W1006] Keyword(s):
reachability, controllability, observability, Lie algebra accessibility.
2. E.D. Sontag. The ISS philosophy as a unifying framework for stability-like behavior. In Nonlinear control in the year 2000, Vol. 2 (Paris), volume 259 of Lecture Notes in Control and Inform. Sci.
, pages 443-467. Springer, London, 2001. [PDF] Keyword(s): input to state stability, integral input to state stability, iISS, ISS, input to output stability.
Abstract: (This is an expository paper prepared for a plenary talk given at the Second Nonlinear Control Network Workshop, Paris, June 9, 2000.) The input to state stability (ISS) paradigm is motivated as a generalization of classical linear systems concepts under coordinate changes. A summary is provided of the main theoretical results concerning ISS and related notions of input/output stability and detectability. A bibliography is also included, listing extensions, applications, and other current work.
3. B. DasGupta and E.D. Sontag. A polynomial-time algorithm for checking equivalence under certain semiring congruences motivated by the state-space isomorphism problem for hybrid systems. Theor.
Comput. Sci., 262(1-2):161-189, 2001. [PDF] [doi:http://dx.doi.org/10.1016/S0304-3975(00)00188-2] Keyword(s): hybrid systems, computational complexity.
Abstract: The area of hybrid systems concerns issues of modeling, computation, and control for systems which combine discrete and continuous components. The subclass of piecewise linear (PL) systems provides one systematic approach to discrete-time hybrid systems, naturally blending switching mechanisms with classical linear components. PL systems model arbitrary interconnections of finite automata and linear systems. Tools from automata theory, logic, and related areas of computer science and finite mathematics are used in the study of PL systems, in conjunction with linear algebra techniques, all in the context of a "PL algebra" formalism. PL systems are of interest as controllers as well as identification models. Basic questions for any class of systems are those of equivalence, and, in particular, whether state spaces are equivalent under a change of variables. This paper studies this state-space equivalence problem for PL systems. The problem was known to be decidable, but its computational complexity was potentially exponential; here it is shown to be solvable in polynomial time.
4. W. Desch, H. Logemann, E. P. Ryan, and E.D. Sontag. Meagre functions and asymptotic behaviour of dynamical systems. Nonlinear Anal., 44(8, Ser. A: Theory Methods):1087-1109, 2001. [PDF] [doi:
http://dx.doi.org/10.1016/S0362-546X(99)00323-5] Keyword(s): invariance principle.
Abstract: A measurable function x from a subset J of R into a metric space X is said to be C-meagre if C is a non-empty subset of X and, for every closed subset K of X disjoint from C, the preimage of K under x has finite Lebesgue measure. This concept of meagreness, applied to trajectories, is shown to provide a unifying framework which facilitates a variety of characterizations, extensions or generalizations of diverse facts pertaining to asymptotic behaviour of dynamical systems.
5. M. Krichman, E.D. Sontag, and Y. Wang. Input-output-to-state stability. SIAM J. Control Optim., 39(6):1874-1928, 2001. [PDF] [doi:http://dx.doi.org/10.1137/S0363012999365352] Keyword(s): input to
state stability.
Abstract: This work explores Lyapunov characterizations of the input-output-to-state stability (IOSS) property for nonlinear systems. The notion of IOSS is a natural generalization of the standard zero-detectability property used in the linear case. The main contribution of this work is to establish a complete equivalence between the input-output-to-state stability property and the existence of a certain type of smooth Lyapunov function. As corollaries, one shows the existence of "norm-estimators", and obtains characterizations of nonlinear detectability in terms of relative stability and of finite-energy estimates.
6. E.D. Sontag. Structure and stability of certain chemical networks and applications to the kinetic proofreading model of T-cell receptor signal transduction. IEEE Trans. Automat. Control, 46
(7):1028-1047, 2001. [PDF] Keyword(s): zero-deficiency networks, systems biology, biochemical networks, nonlinear stability, dynamical systems, kinetic proofreading, T cells, immunology.
Abstract: This paper deals with the theory of structure, stability, robustness, and stabilization for an appealing class of nonlinear systems which arises in the analysis of chemical networks. The results given here extend, but are also heavily based upon, certain previous work by Feinberg, Horn, and Jackson, of which a self-contained and streamlined exposition is included. The theoretical conclusions are illustrated through an application to the kinetic proofreading model proposed by McKeithan for T-cell receptor signal transduction.
7. D. Angeli, E.D. Sontag, and Y. Wang. A note on input-to-state stability with input derivatives. In Proc. Nonlinear Control System Design Symposium, St. Petersburg, July 2001, pages 720-725, 2001.
Keyword(s): input to state stability, ISS.
8. M. Arcak, D. Angeli, and E.D. Sontag. Stabilization of cascades using integral input-to-state stability. In Proc. IEEE Conf. Decision and Control, Orlando, Dec. 2001, IEEE Publications, 2001,
pages 3814-3819, 2001. Keyword(s): nonlinear control, feedback stabilization, input to state stability.
9. M. Chaves and E.D. Sontag. An alternative observer for zero deficiency chemical networks. In Proc. Nonlinear Control System Design Symposium, St. Petersburg, July 2001, pages 575-578, 2001.
Keyword(s): observability, observers, zero-deficiency networks, systems biology, biochemical networks, nonlinear stability, dynamical systems.
10. M. Chaves and E.D. Sontag. Observers for certain chemical reaction networks. In Proc. 2001 European Control Conf., Sep. 2001, pages 3715-3720, 2001. Keyword(s): zero-deficiency networks, systems
biology, biochemical networks, nonlinear stability, dynamical systems, observability, observers.
11. M. Chyba, N.E. Leonard, and E.D. Sontag. Optimality for underwater vehicles. In Proc. IEEE Conf. Decision and Control, Orlando, Dec. 2001, IEEE Publications, 2001, pages 4204-4209, 2001. [PDF] Keyword(s): optimal control.
12. B.P. Ingalls, D. Angeli, E.D. Sontag, and Y. Wang. Asymptotic characterizations of IOSS. In Proc. IEEE Conf. Decision and Control, Orlando, Dec. 2001, IEEE Publications, 2001, pages 881-886,
2001. Keyword(s): nonlinear control, feedback stabilization, input to state stability.
13. P. Kuusela, D. Ocone, and E.D. Sontag. Remarks on the sample complexity for linear control systems identification. In IFAC Workshop on Adaptation and Learning in Control and Signal Processing,
ALCOSP2001, Cernobbio-Como, Italy, 29-31 August, 2001, pages 431-436, 2001.
14. D. Liberzon, A.S. Morse, and E.D. Sontag. Output-input stability: a new variant of the minimum-phase property for nonlinear systems. In Proc. Nonlinear Control System Design Symposium, St.
Petersburg, July 2001, pages 743-748, 2001. Keyword(s): input to state stability.
15. E.D. Sontag, B.P. Ingalls, and Y. Wang. Generalizations of asymptotic gain characterizations of ISS to input-to-output stability. In Proc. American Control Conf., Arlington, June 2001, pages
2279-2284, 2001. Keyword(s): input to state stability, ISS.
1. L. Grüne, E.D. Sontag, and F.R. Wirth. On equivalence of exponential and asymptotic stability under changes of variables. In International Conference on Differential Equations, Vol. 1, 2 (Berlin,
1999), pages 850-852. World Sci. Publishing, River Edge, NJ, 2000. Keyword(s): input to state stability.
2. D. Angeli, E.D. Sontag, and Y. Wang. A characterization of integral input-to-state stability. IEEE Trans. Automat. Control, 45(6):1082-1097, 2000. [PDF] Keyword(s): input to state stability,
integral input to state stability, iISS, ISS.
Abstract: Just as input to state stability (ISS) generalizes the idea of finite gains with respect to supremum norms, the new notion of integral input to state stability (IISS) generalizes the concept of finite gain when using an integral norm on inputs. In this paper, we obtain a necessary and sufficient characterization of the IISS property, expressed in terms of dissipation inequalities.
3. D. Angeli, E.D. Sontag, and Y. Wang. Further equivalences and semiglobal versions of integral input to state stability. Dynamics and Control, 10(2):127-149, 2000. [PDF] [doi:http://dx.doi.org/
10.1023/A:1008356223747] Keyword(s): input to state stability, integral input to state stability, iISS, ISS.
Abstract: This paper continues the study of the integral input-to-state stability (IISS) property. It is shown that the IISS property is equivalent to one which arises from the consideration of mixed norms on states and inputs, as well as to the superposition of a "bounded energy bounded state" requirement and the global asymptotic stability of the unforced system. A semiglobal version of IISS is shown to imply the global version, though a counterexample shows that the analogous fact fails for input to state stability (ISS). The results in this note complete the basic theoretical picture regarding IISS and ISS.
4. X. Bao, Z. Lin, and E.D. Sontag. Finite gain stabilization of discrete-time linear systems subject to actuator saturation. Automatica, 36(2):269-277, 2000. [PDF] Keyword(s): discrete-time,
saturation, input-to-state stability, stabilization, ISS, bounded inputs.
Abstract: It is shown that, for neutrally stable discrete-time linear systems subject to actuator saturation, finite gain lp stabilization can be achieved by linear output feedback, for all p > 1. An explicit construction of the corresponding feedback laws is given. The feedback laws constructed also result in a closed-loop system that is globally asymptotically stable, and in an input-to-state estimate.
5. W. Maass and E.D. Sontag. Neural Systems as Nonlinear Filters. Neural Comput., 12(8):1743-1772, 2000. [PDF] [doi:http://dx.doi.org/10.1162/089976600300015123] Keyword(s): neural networks,
Volterra series.
Abstract: We analyze computations on temporal patterns and spatio-temporal patterns in formal network models whose temporal dynamics arises from empirically established quantitative models for short term dynamics at biological synapses. We give a complete characterization of all linear and nonlinear filters that can be approximated by such dynamic network models: it is the class of all filters that can be approximated by Volterra series. This characterization is shown to be rather stable with regard to changes in the model. For example, it is shown that synaptic facilitation and one layer of neurons suffice for approximating arbitrary filters from this class. Our results provide a new complexity hierarchy for all filters that are approximable by Volterra series, which appears to be more closely related to the actual cost of implementing such filters in neural hardware than preceding complexity measures. Our results also provide a new parameterization for approximations to such filters in terms of parameters that are arguably related to those that are tunable in biological neural systems.
6. M. Malisoff and E.D. Sontag. Universal formulas for feedback stabilization with respect to Minkowski balls. Systems Control Lett., 40(4):247-260, 2000. [PDF] Keyword(s): nonlinear control,
feedback stabilization, saturation, control-Lyapunov functions, bounded inputs.
Abstract: This note provides explicit algebraic stabilizing formulas for clf's when controls are restricted to certain Minkowski balls in Euclidean space. Feedbacks of this kind are known to exist by a theorem of Artstein, but the proof of Artstein's theorem is nonconstructive. The formulas are obtained from a general feedback stabilization technique and are used to construct approximate solutions to some stabilization problems.
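For background, the unconstrained universal formula that this line of work generalizes to Minkowski-ball constraints can be recalled as follows (the classical formula, stated here as context; it is not the constrained formulas of this paper). For a control-affine system dx/dt = f(x) + G(x)u with control-Lyapunov function V, write a(x) = grad V(x) . f(x) and b(x) = G(x)^T grad V(x); then

```latex
k(x) \;=\;
\begin{cases}
-\dfrac{a(x) + \sqrt{a(x)^2 + |b(x)|^4}}{|b(x)|^2}\, b(x), & b(x) \neq 0,\\[1.5ex]
0, & b(x) = 0,
\end{cases}
```

is a stabilizing feedback, smooth away from the origin, and continuous at the origin when V satisfies the small-control property.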
7. L. Rosier and E.D. Sontag. Remarks regarding the gap between continuous, Lipschitz, and differentiable storage functions for dissipation inequalities appearing in H infinity control. Systems
Control Lett., 41(4):237-249, 2000. [PDF] Keyword(s): viscosity solutions, H-infinity control.
Abstract: This paper deals with the regularity of solutions of the Hamilton-Jacobi Inequality which arises in H-infinity control. It shows by explicit counterexamples that there are gaps between existence of continuous and locally Lipschitz (positive definite and proper) solutions, and between Lipschitz and continuously differentiable ones. On the other hand, it is shown that it is always possible to smooth-out solutions, provided that an infinitesimal increase in gain is allowed.
8. E.D. Sontag and Y. Wang. Lyapunov characterizations of input to output stability. SIAM J. Control Optim., 39(1):226-249, 2000. [PDF] [doi:http://dx.doi.org/10.1137/S0363012999350213] Keyword(s):
input to state stability.
Abstract: This paper presents necessary and sufficient characterizations of several notions of input to output stability. Similar Lyapunov characterizations have been found to play a key role in the analysis of the input to state stability property, and the results given here extend their validity to the case when the output, but not necessarily the entire internal state, is being regulated.
9. M. Chyba, N.E. Leonard, and E.D. Sontag. Time-optimal control for underwater vehicles. In N.E. Leonard and R. Ortega, editors, Lagrangian and Hamiltonian Methods for Nonlinear Control, pages
117-122, 2000. Pergamon Press, Oxford. [PDF]
10. D. Liberzon, A.S. Morse, and E.D. Sontag. A new definition of the minimum-phase property for nonlinear systems, with an application to adaptive control. In Proc. IEEE Conf. Decision and Control,
Sydney, Dec. 2000, IEEE Publications, 2000, pages 2106-2111, 2000.
11. T. Natschläger, W. Maass, E.D. Sontag, and A. Zador. Processing of time series by neural circuits with biologically realistic synaptic dynamics. In Todd K. Leen, T. G. Dietterich, and V. Tresp,
editors, Advances in Neural Information Processing Systems 13 (NIPS2000), pages 145-151, 2000. MIT Press, Cambridge. [PDF] Keyword(s): neural networks, Volterra series.
Abstract: Experimental data show that biological synapses are dynamic, i.e., their weight changes on a short time scale by several hundred percent depending on the past input to the synapse. In this article we explore the consequences that this synaptic dynamics entails for the computational power of feedforward neural networks. It turns out that even with just a single hidden layer such networks can approximate a surprisingly large class of nonlinear filters: all filters that can be characterized by Volterra series. This result is robust with regard to various changes in the model for synaptic dynamics. Furthermore we show that simple gradient descent suffices to approximate a given quadratic filter by a rather small neural system with dynamic synapses.
1. V.D. Blondel, E.D. Sontag, M. Vidyasagar, and J.C. Willems. Open Problems in Mathematical Systems and Control Theory (edited book). Springer-Verlag, 1999.
2. E.D. Sontag. Control-Lyapunov functions. In Open problems in mathematical systems and control theory, Comm. Control Engrg. Ser., pages 211-216. Springer, London, 1999. Keyword(s):
control-Lyapunov functions.
3. E.D. Sontag. Nonlinear feedback stabilization revisited. In Dynamical systems, control, coding, computer vision (Padova, 1998), volume 25 of Progr. Systems Control Theory, pages 223-262. Birkhäuser, Basel, 1999. Note: This is a short conference proceedings paper. Please consult the full version "Stability and stabilization: discontinuities and the effect of disturbances".
4. E.D. Sontag. Stability and stabilization: discontinuities and the effect of disturbances. In Nonlinear analysis, differential equations and control (Montreal, QC, 1998), volume 528 of NATO Sci.
Ser. C Math. Phys. Sci., pages 551-598. Kluwer Acad. Publ., Dordrecht, 1999. [PDF] Keyword(s): feedback stabilization, nonlinear control, input to state stability.
Abstract: In this expository paper, we deal with several questions related to stability and stabilization of nonlinear finite-dimensional continuous-time systems. We review the basic problem of feedback stabilization, placing an emphasis upon relatively new areas of research which concern stability with respect to "noise" (such as errors introduced by actuators or sensors). The table of contents is as follows: Review of Stability and Asymptotic Controllability, The Problem of Stabilization, Obstructions to Continuous Stabilization, Control-Lyapunov Functions and Artstein's Theorem, Discontinuous Feedback, Nonsmooth CLF's, Insensitivity to Small Measurement and Actuator Errors, Effect of Large Disturbances: Input-to-State Stability, Comments on Notions Related to ISS.
5. F. Albertini and E.D. Sontag. Continuous control-Lyapunov functions for asymptotically controllable time-varying systems. Internat. J. Control, 72(18):1630-1641, 1999. [PDF] Keyword(s):
control-Lyapunov functions.
Abstract: This paper shows that, for time varying systems, global asymptotic controllability to a given closed subset of the state space is equivalent to the existence of a continuous control-Lyapunov function with respect to the set.
6. D. Angeli and E.D. Sontag. Forward completeness, unboundedness observability, and their Lyapunov characterizations. Systems Control Lett., 38(4-5):209-217, 1999. [PDF] Keyword(s): observability,
input to state stability, dynamical systems.
Abstract: A finite-dimensional continuous-time system is forward complete if solutions exist globally, for positive time. This paper shows that forward completeness can be characterized in a necessary and sufficient manner by means of smooth scalar growth inequalities. Moreover, a version of this fact is also proved for systems with inputs, and a generalization is also provided for systems with outputs and a notion (unboundedness observability) of relative completeness. We apply these results to obtain a bound on reachable states in terms of energy-like estimates of inputs.
7. L. Grüne, E.D. Sontag, and F.R. Wirth. Asymptotic stability equals exponential stability, and ISS equals finite energy gain---if you twist your eyes. Systems Control Lett., 38(2):127-134, 1999. [PDF] Keyword(s): input to state stability, ISS.
Abstract: This paper shows that uniformly global asymptotic stability for a family of ordinary differential equations is equivalent to uniformly global exponential stability under a suitable nonlinear change of variables. The same is shown respectively for input-to-state stability, input-to-state exponential stability, and the property of finite square-norm gain ("nonlinear H-infty"). The results are shown for systems of any dimension not equal to 4 or 5.
8. Y.S. Ledyaev and E.D. Sontag. A Lyapunov characterization of robust stabilization. Nonlinear Anal., 37(7, Ser. A: Theory Methods):813-840, 1999. [PDF] Keyword(s): nonlinear control, feedback
Abstract: One of the fundamental facts in control theory (Artstein's theorem) is the equivalence, for systems affine in controls, between continuous feedback stabilizability to an equilibrium and the existence of smooth control Lyapunov functions. This equivalence breaks down for general nonlinear systems, not affine in controls. One of the main results in this paper establishes that the existence of smooth Lyapunov functions implies the existence of (in general, discontinuous) feedback stabilizers which are insensitive to small errors in state measurements. Conversely, it is shown that the existence of such stabilizers in turn implies the existence of smooth control Lyapunov functions. Moreover, it is established that, for general nonlinear control systems under persistently acting disturbances, the existence of smooth Lyapunov functions is equivalent to the existence of (possibly discontinuous) feedback stabilizers which are robust with respect to small measurement errors and small additive external disturbances.
9. W. Maass and E.D. Sontag. Analog neural nets with Gaussian or other common noise distributions cannot recognize arbitrary regular languages. Neural Comput., 11(3):771-782, 1999. [PDF] [doi:http://dx.doi.org/10.1162/089976699300016656] Keyword(s): machine learning, neural networks.
Abstract: We consider recurrent analog neural nets where the output of each gate is subject to Gaussian noise, or any other common noise distribution that is nonzero on a large set. We show that many regular languages cannot be recognized by networks of this type, and we give a precise characterization of those languages which can be recognized. This result implies severe constraints on possibilities for constructing recurrent analog neural nets that are robust against realistic types of analog noise. On the other hand we present a method for constructing feedforward analog neural nets that are robust with regard to analog noise of this type.
10. D. Nesic, A.R. Teel, and E.D. Sontag. Formulas relating KL stability estimates of discrete-time and sampled-data nonlinear systems. Systems Control Lett., 38(1):49-60, 1999. [PDF] Keyword(s):
input to state stability, sampled-data systems, discrete-time systems, sampling, ISS.
Abstract: We provide an explicit KL stability or input-to-state stability (ISS) estimate for a sampled-data nonlinear system in terms of the KL estimate for the corresponding discrete-time system and a K function describing inter-sample growth. It is quite obvious that a uniform inter-sample growth condition, plus an ISS property for the exact discrete-time model of a closed-loop system, implies uniform ISS of the sampled-data nonlinear system; our results serve to quantify these facts by means of comparison functions. Our results can be used as an alternative to prove and extend results of Aeyels et al. and extend some results by Chen et al. to a class of nonlinear systems. Finally, the formulas we establish can be used as a tool for some other problems which we indicate.
11. E.D. Sontag. Clocks and insensitivity to small measurement errors. ESAIM Control Optim. Calc. Var., 4:537-557, 1999. [PDF] Keyword(s): nonlinear control, feedback stabilization, hybrid systems,
discontinuous feedback, measurement noise.
Abstract: This paper provides a precise result which shows that insensitivity to small measurement errors in closed-loop stabilization can be attained provided that the feedback controller ignores observations during small time intervals.
12. E.D. Sontag and Y. Qiao. Further results on controllability of recurrent neural networks. Systems Control Lett., 36(2):121-129, 1999. [PDF] Keyword(s): machine learning, controllability,
recurrent neural networks, neural networks.
Abstract: This paper studies controllability properties of recurrent neural networks. The new contributions are: (1) an extension of the result in "Complete controllability of continuous-time recurrent neural networks" to a slightly different model, where inputs appear in an affine form, (2) a formulation and proof of a necessary and sufficient condition, in terms of local-local controllability, and (3) a complete analysis of the 2-dimensional case for which the hypotheses made in previous work do not apply.
13. E.D. Sontag and Y. Wang. Notions of input to output stability. Systems Control Lett., 38(4-5):235-248, 1999. [PDF] Keyword(s): input to state stability, ISS, input to output stability.
Abstract: This paper deals with several related notions of output stability with respect to inputs (which may be thought of as disturbances). The main such notion is called input to output stability (IOS), and it reduces to input to state stability (ISS) when the output equals the complete state. For systems with no inputs, IOS provides a generalization of the classical concept of partial stability. Several variants, which formalize in different manners the transient behavior, are introduced. The main results provide a comparison among these notions
14. D. Angeli and E.D. Sontag. Characterizations of forward completeness. In Proc. IEEE Conf. Decision and Control, Phoenix, Dec. 1999, IEEE Publications, 1999, pages 2551-2556, 1999.
15. L. Grüne, E.D. Sontag, and F.R. Wirth. On the equivalence between asymptotic and exponential stability, and between ISS and finite H infinity gain. In Proc. IEEE Conf. Decision and Control, Phoenix, Dec. 1999, IEEE Publications, 1999, pages 1220-1225, 1999. Keyword(s): input to state stability.
16. B.P. Ingalls, E.D. Sontag, and Y. Wang. Remarks on input to output stability. In Proc. IEEE Conf. Decision and Control, Phoenix, Dec. 1999, IEEE Publications, 1999, pages 1226-1231, 1999. Keyword(s): input to state stability, integral input to state stability, input to output stability.
17. Z-P. Jiang, E.D. Sontag, and Y. Wang. Input-to-state stability for discrete-time nonlinear systems. In Proc. 14th IFAC World Congress, Vol E (Beijing), pages 277-282, 1999. [PDF] Keyword(s): input to state stability, ISS, discrete-time.
Abstract: This paper studies the input-to-state stability (ISS) property for discrete-time nonlinear systems. We show that many standard ISS results may be extended to the discrete-time case. More precisely, we provide a Lyapunov-like sufficient condition for ISS, and we show the equivalence between the ISS property and various other properties, as well as provide a small gain theorem.
18. M. Krichman, E.D. Sontag, and Y. Wang. Lyapunov characterizations of input-output-to-state stability. In Proc. IEEE Conf. Decision and Control, Phoenix, Dec. 1999, IEEE Publications, 1999, pages
2070-2075, 1999. Keyword(s): input to state stability, ISS, detectability.
19. D. Liberzon, E.D. Sontag, and Y. Wang. On integral-input-to-state stabilization. In Proc. American Control Conf., San Diego, June 1999, pages 1598-1602, 1999. [PDF] Keyword(s): input to state
stability, integral input to state stability, iISS, ISS, control-Lyapunov functions.
Abstract: This paper continues the investigation of the recently introduced integral version of input-to-state stability (iISS). We study the problem of designing control laws that achieve iISS disturbance attenuation. The main contribution is an appropriate concept of control Lyapunov function (iISS-CLF), whose existence leads to an explicit construction of such a control law. The results are compared and contrasted with the ones available for the ISS case.
20. W. Maass and E.D. Sontag. A precise characterization of the class of languages recognized by neural nets under Gaussian and other common noise distributions. In Proceedings of the 1998 conference
on Advances in neural information processing systems II, Cambridge, MA, USA, pages 281-287, 1999. MIT Press. [PDF] Keyword(s): machine learning, neural networks.
21. M. Malisoff and E.D. Sontag. Universal formulas for CLF's with respect to Minkowski balls. In Proc. American Control Conf., San Diego, June 1999, pages 3033-3037, 1999.
22. D. Nesic, A.R. Teel, and E.D. Sontag. On stability and input-to-state stability ${\cal K}{\cal L}$ estimates of discrete-time and sampled-data nonlinear systems. In Proc. American Control Conf.,
San Diego, June 1999, pages 3990-3994, 1999. Keyword(s): input to state stability, sampled-data systems, discrete-time systems, sampling.
23. E.D. Sontag. Feedback insensitive to small measurement errors. In Proc. IEEE Conf. Decision and Control, Phoenix, Dec. 1999, IEEE Publications, 1999, pages 2661-2666, 1999.
1. E.D. Sontag. Mathematical Control Theory. Deterministic Finite-Dimensional Systems, volume 6 of Texts in Applied Mathematics. Springer-Verlag, New York, Second edition, 1998. [PDF]
Abstract: This book is copyrighted by Springer-Verlag. Springer has kindly allowed me to place a copy on the web, as a reference and for ease of web searches. Please consider buying your own hardcopy.
2. E.D. Sontag. A general approach to path planning for systems without drift. In J. Baillieul, S. S. Sastry, and H.J. Sussmann, editors, Essays on mathematical robotics (Minneapolis, MN, 1993),
volume 104 of IMA Vol. Math. Appl., pages 151-168. Springer, New York, 1998. [PDF] Keyword(s): path-planning, systems without drift, nonlinear control, controllability, real-analytic functions.
Abstract: This paper proposes a generally applicable technique for the control of analytic systems with no drift. The method is based on the generation of "nonsingular loops" that allow linearized controllability. One can then implement Newton and/or gradient searches in the search for a control. A general convergence theorem is proved.
3. E.D. Sontag. Automata and neural networks. In The handbook of brain theory and neural networks, pages 119-122. MIT Press, Cambridge, MA, USA, 1998. [PDF] Keyword(s): neural networks.
4. E.D. Sontag. VC dimension of neural networks. In C.M. Bishop, editor, Neural Networks and Machine Learning, pages 69-95. Springer, Berlin, 1998. [PDF] Keyword(s): machine learning, VC dimension,
learning, neural networks, shattering.
Abstract: The Vapnik-Chervonenkis (VC) dimension is an integer which helps to characterize distribution-independent learning of binary concepts from positive and negative samples. This paper, based on lectures delivered at the Isaac Newton Institute in August of 1997, presents a brief introduction, establishes various elementary results, and discusses how to estimate the VC dimension in several examples of interest in neural network theory. (It does not address the learning and estimation-theoretic applications of VC dimension, and the applications to uniform convergence theorems for empirical probabilities, for which many suitable references are available.)
5. P. Koiran and E.D. Sontag. Vapnik-Chervonenkis dimension of recurrent neural networks. Discrete Appl. Math., 86(1):63-79, 1998. [PDF] [doi:http://dx.doi.org/10.1016/S0166-218X(98)00014-6] Keyword(s): machine learning, neural networks, recurrent neural networks.
Abstract: This paper provides lower and upper bounds for the VC dimension of recurrent networks. Several types of activation functions are discussed, including threshold, polynomial, piecewise-polynomial and sigmoidal functions. The bounds depend on two independent parameters: the number w of weights in the network, and the length k of the input sequence. Ignoring multiplicative constants, the main results say roughly the following: 1. For architectures whose activation is any fixed nonlinear polynomial, the VC dimension is proportional to wk. 2. For architectures whose activation is any fixed piecewise polynomial, the VC dimension is between wk and w**2k. 3. For architectures with threshold activations, the VC dimension is between wlog(k/w) and the smallest of wklog(wk) and w**2+wlog(wk). 4. For the standard sigmoid tanh(x), the VC dimension is between wk and w**4 k**2.
6. D. Nesic and E.D. Sontag. Input-to-state stabilization of linear systems with positive outputs. Systems Control Lett., 35(4):245-255, 1998. [PDF] Keyword(s): input to state stability, ISS.
Abstract: This paper considers the problem of stabilization of linear systems for which only the magnitudes of outputs are measured. It is shown that, if a system is controllable and observable, then one can find a stabilizing controller, which is robust with respect to observation noise (in the ISS sense).
7. E.D. Sontag. A learning result for continuous-time recurrent neural networks. Systems Control Lett., 34(3):151-158, 1998. [PDF] [doi:http://dx.doi.org/10.1016/S0167-6911(98)00006-1] Keyword(s):
machine learning, neural networks, VC dimension, recurrent neural networks.
Abstract: The following learning problem is considered, for continuous-time recurrent neural networks having sigmoidal activation functions. Given a ``black box'' representing an unknown system, measurements of output derivatives are collected, for a set of randomly generated inputs, and a network is used to approximate the observed behavior. It is shown that the number of inputs needed for reliable generalization (the sample complexity of the learning problem) is upper bounded by an expression that grows polynomially with the dimension of the network and logarithmically with the number of output derivatives being matched.
8. E.D. Sontag. Comments on integral variants of ISS. Systems Control Lett., 34(1-2):93-100, 1998. [PDF] [doi:http://dx.doi.org/10.1016/S0167-6911(98)00003-6] Keyword(s): input to state stability,
integral input to state stability, iISS, ISS.
Abstract: This note discusses two integral variants of the input-to-state stability (ISS) property, which represent nonlinear generalizations of L2 stability, in much the same way that ISS generalizes L-infinity stability. Both variants are equivalent to ISS for linear systems. For general nonlinear systems, it is shown that one of the new properties is strictly weaker than ISS, while the other one is equivalent to it. For bilinear systems, a complete characterization is provided of the weaker property. An interesting fact about functions of type KL is proved as well.
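For reference, the two properties contrasted in this line of work can be written as trajectory estimates (standard definitions, recalled here rather than quoted from this particular note). With beta of class KL, gamma of class K, and alpha of class K-infinity:

```latex
\text{ISS:}\qquad |x(t)| \;\le\; \beta(|x(0)|,\,t) \;+\; \gamma\bigl(\|u\|_{\infty}\bigr),
\qquad\qquad
\text{iISS:}\qquad \alpha(|x(t)|) \;\le\; \beta(|x(0)|,\,t) \;+\; \int_0^t \gamma(|u(s)|)\,ds .
```

The integral variant bounds the state by the accumulated "energy" of the input rather than by its supremum, which is why it generalizes L2-type, rather than L-infinity-type, stability.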
9. E.D. Sontag and F.R. Wirth. Remarks on universal nonsingular controls for discrete-time systems. Systems Control Lett., 33(2):81-88, 1998. [PDF] [doi:http://dx.doi.org/10.1016/S0167-6911(97)00117-5] Keyword(s): discrete time, controllability, real-analytic functions.
Abstract: For analytic discrete-time systems, it is shown that uniform forward accessibility implies the generic existence of universal nonsingular control sequences. A particular application is given by considering forward accessible systems on compact manifolds. For general systems, it is proved that the complement of the set of universal sequences of infinite length is of the first category. For classes of systems satisfying a descending chain condition, and in particular for systems defined by polynomial dynamics, forward accessibility implies uniform forward accessibility.
10. D. Angeli, E.D. Sontag, and Y. Wang. A remark on integral input to state stability. In Proc. IEEE Conf. Decision and Control, Tampa, Dec. 1998, IEEE Publications, 1998, pages 2491-2496, 1998.
Keyword(s): input to state stability.
11. X. Bao, Z. Lin, and E.D. Sontag. Some new results on finite gain $l_p$ stabilization of discrete-time linear systems subject to actuator saturation. In Proc. IEEE Conf. Decision and Control,
Tampa, Dec. 1998, IEEE Publications, 1998, pages 4628-4629, 1998. Keyword(s): saturation, bounded inputs.
12. B. Dasgupta and E.D. Sontag. A polynomial-time algorithm for an equivalence problem which arises in hybrid systems theory. In Proc. IEEE Conf. Decision and Control, Tampa, Dec. 1998, IEEE
Publications, 1998, pages 1629-1634, 1998.
13. M. Krichman and E.D. Sontag. A version of a converse Lyapunov theorem for input-output to state stability. In Proc. IEEE Conf. Decision and Control, Tampa, Dec. 1998, IEEE Publications, 1998,
pages 4121-4126, 1998. Keyword(s): input to state stability.
14. P. Kuusela, D. Ocone, and E.D. Sontag. On the VC dimension of continuous-time linear control systems. In Proc. 32nd Annual Conf. on Information Sciences and Systems (CISS 98), Princeton, NJ,
pages 795-800, 1998.
15. Y.S. Ledyaev and E.D. Sontag. Stabilization under measurement noise: Lyapunov characterization. In Proc. American Control Conf., Philadelphia, June 1998, pages 1658-166, 1998.
16. D. Nesic and E.D. Sontag. Output stabilization of nonlinear systems: Linear systems with positive outputs as a case study. In Proc. IEEE Conf. Decision and Control, Tampa, Dec. 1998, IEEE
Publications, 1998, pages 885-890, 1998.
17. E.D. Sontag. Notions of integral input-to-state stability. In Proc. American Control Conf., Philadelphia, June 1998, pages 3215-321, 1998. Keyword(s): input to state stability, integral input to
state stability, iISS, ISS.
18. E.D. Sontag. Recent results on discontinuous stabilization and control-Lyapunov functions. In Proc. Workshop on Control of Nonlinear and Uncertain Systems, London, Feb. 1998, 1998. Keyword(s):
control-Lyapunov functions.
19. E.D. Sontag and Y. Qiao. Remarks on controllability of recurrent neural networks. In Proc. IEEE Conf. Decision and Control, Tampa, Dec. 1998, IEEE Publications, 1998, pages 501-506, 1998. Keyword(s): machine learning, neural networks, recurrent neural networks.
1. P. Koiran and E.D. Sontag. Vapnik-Chervonenkis dimension of recurrent neural networks. In Computational learning theory (Jerusalem, 1997), volume 1208 of Lecture Notes in Comput. Sci., pages
223-237. Springer-Verlag, London, UK, 1997. Keyword(s): machine learning, neural networks, VC dimension, recurrent neural networks.
2. Y.S. Ledyaev and E.D. Sontag. A notion of discontinuous feedback. In Control using logic-based switching (Block Island, RI, 1995), volume 222 of Lecture Notes in Control and Inform. Sci., pages
97-103. Springer, London, 1997.
3. E.D. Sontag. Recurrent neural networks: Some systems-theoretic aspects. In M. Karny, K. Warwick, and V. Kurkova, editors, Dealing with Complexity: a Neural Network Approach, pages 1-12.
Springer-Verlag, London, 1997. [PDF] Keyword(s): machine learning, neural networks, recurrent neural networks, learning, VC dimension.
Abstract: This paper provides an exposition of some recent results regarding system-theoretic aspects of continuous-time recurrent (dynamic) neural networks with sigmoidal activation functions. The class of systems is introduced and discussed, and a result is cited regarding their universal approximation properties. Known characterizations of controllability, observability, and parameter identifiability are reviewed, as well as a result on minimality. Facts regarding the computational power of recurrent nets are also mentioned.
4. F. H. Clarke, Y.S. Ledyaev, E.D. Sontag, and A.I. Subbotin. Asymptotic controllability implies feedback stabilization. IEEE Trans. Automat. Control, 42(10):1394-1407, 1997. [PDF]
Abstract: It is shown that every asymptotically controllable system can be stabilized by means of some (discontinuous) feedback law. One of the contributions of the paper is in defining precisely the meaning of stabilization when the feedback rule is not continuous. The main ingredients in our construction are: (a) the notion of control-Lyapunov function, (b) methods of nonsmooth analysis, and (c) techniques from positional differential games.
5. M. J. Donahue, L. Gurvits, C. Darken, and E.D. Sontag. Rates of convex approximation in non-Hilbert spaces. Constr. Approx., 13(2):187-220, 1997. [PDF] Keyword(s): machine learning, neural
networks, optimization, approximation theory.
Abstract: This paper deals with sparse approximations by means of convex combinations of elements from a predetermined "basis" subset S of a function space. Specifically, the focus is on the rate at which the lowest achievable error can be reduced as larger subsets of S are allowed when constructing an approximant. The new results extend those given for Hilbert spaces by Jones and Barron, including in particular a computationally attractive incremental approximation scheme. Bounds are derived for broad classes of Banach spaces. The techniques used borrow from results regarding moduli of smoothness in functional analysis as well as from the theory of stochastic processes on function spaces.
6. P. Koiran and E.D. Sontag. Neural networks with quadratic VC dimension. J. Comput. System Sci., 54(1, part 2):190-198, 1997. Note: (1st Annual Dagstuhl Seminar on Neural Computing, 1994). [PDF]
[doi:http://dx.doi.org/10.1006/jcss.1997.1479] Keyword(s): machine learning, neural networks, VC dimension.
Abstract: This paper shows that neural networks which use continuous activation functions have VC dimension at least as large as the square of the number of weights w. This result settles the open question of whether the well-known O(w log w) bound, known for hard-threshold nets, also held for more general sigmoidal nets. Implications for the number of samples needed for valid generalization are discussed.
7. R. Koplon and E.D. Sontag. Using Fourier-neural recurrent networks to fit sequential input/output data. Neurocomputing, 15:225-248, 1997. [PDF] Keyword(s): machine learning, neural networks,
recurrent neural networks.
Abstract: This paper suggests the use of Fourier-type activation functions in fully recurrent neural networks. The main theoretical advantage is that, in principle, the problem of recovering internal coefficients from input/output data is solvable in closed form.
8. E.D. Sontag. Shattering all sets of k points in `general position' requires (k-1)/2 parameters. Neural Comput., 9(2):337-348, 1997. [PDF] Keyword(s): machine learning, neural networks, VC
dimension, real-analytic functions.
Abstract: For classes of concepts defined by certain classes of analytic functions depending on k parameters, there are nonempty open sets of samples of length 2k+2 which cannot be shattered. A slightly weaker result is also proved for piecewise-analytic functions. The special case of neural networks is discussed.
9. E.D. Sontag and H.J. Sussmann. Complete controllability of continuous-time recurrent neural networks. Systems Control Lett., 30(4):177-183, 1997. [PDF] [doi:http://dx.doi.org/10.1016/S0167-6911(97)00002-9] Keyword(s): machine learning, neural networks, recurrent neural networks.
Abstract: This paper presents a characterization of controllability for the class of control systems commonly called (continuous-time) recurrent neural networks. The characterization involves a simple condition on the input matrix, and is proved when the activation function is the hyperbolic tangent.
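The model studied in this and the related recurrent-network papers is the continuous-time system dx/dt = tanh(Ax + Bu), with tanh applied componentwise. A minimal Euler-integration sketch (the matrices and input signal below are illustrative values, not taken from the paper):

```python
import numpy as np

def simulate_rnn(A, B, u, x0, dt=0.01, steps=1000):
    """Euler-integrate the continuous-time recurrent net dx/dt = tanh(A x + B u(t))."""
    x = np.array(x0, dtype=float)
    for k in range(steps):
        # tanh acts componentwise; the step size dt controls integration accuracy
        x = x + dt * np.tanh(A @ x + B @ u(k * dt))
    return x

# Illustrative two-state, single-input example (hypothetical values).
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
B = np.array([[1.0], [0.5]])
u = lambda t: np.array([np.sin(t)])
x_final = simulate_rnn(A, B, u, x0=[0.0, 0.0])
```

Controllability in the sense of this paper asks whether any state can be steered to any other by a suitable input; their characterization is, as the abstract states, a simple condition on the input matrix B. Note also that because tanh is bounded, trajectories of this model grow at most linearly in time regardless of the input.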
10. E.D. Sontag and Y. Wang. Output-to-state stability and detectability of nonlinear systems. Systems Control Lett., 29(5):279-290, 1997. [PDF] [doi:http://dx.doi.org/10.1016/S0167-6911(97)90013-X]
Keyword(s): input to state stability, integral input to state stability, iISS, ISS, detectability, output to state stability.
Abstract: The notion of input-to-state stability (ISS) has proved to be useful in nonlinear systems analysis. This paper discusses a dual notion, output-to-state stability (OSS). A characterization is provided in terms of a dissipation inequality involving storage (Lyapunov) functions. Combining ISS and OSS there results the notion of input/output-to-state stability (IOSS), which is also studied and related to the notion of detectability, the existence of observers, and output injection.
11. Y. Yang, E.D. Sontag, and H.J. Sussmann. Global stabilization of linear discrete-time systems with bounded feedback. Systems Control Lett., 30(5):273-281, 1997. [PDF] [doi:http://dx.doi.org/
10.1016/S0167-6911(97)00021-2] Keyword(s): discrete-time, saturation, bounded inputs.
Abstract: This paper deals with the problem of global stabilization of linear discrete time systems by means of bounded feedback laws. The main result proved is an analog of one proved for the continuous time case by the authors, and shows that such stabilization is possible if and only if the system is stabilizable with arbitrary controls and the transition matrix has spectral radius less or equal to one. The proof provides in principle an algorithm for the construction of such feedback laws, which can be implemented either as cascades or as parallel connections (``single hidden layer neural networks'') of simple saturation functions.
12. F. Albertini and E.D. Sontag. Control-Lyapunov functions for time-varying set stabilization. In Proc. European Control Conf., Brussels, July 1997, 1997. Note: (Paper WE-E A5, CD-ROM file
ECC515.pdf, 6 pages). Keyword(s): control-Lyapunov functions.
13. Y.S. Ledyaev and E.D. Sontag. A remark on robust stabilization of general asymptotically controllable systems. In Proc. Conf. on Information Sciences and Systems (CISS 97), Johns Hopkins,
Baltimore, MD, March 1997, pages 246-251, 1997. [PDF]
Abstract: We showed in another recent paper that any asymptotically controllable system can be stabilized by means of a certain type of discontinuous feedback. The feedback laws constructed in that work are robust with respect to actuator errors as well as to perturbations of the system dynamics. A drawback, however, is that they may be highly sensitive to errors in the measurement of the state vector. This paper addresses this shortcoming, and shows how to design a dynamic hybrid stabilizing controller which, while preserving robustness to external perturbations and actuator error, is also robust with respect to measurement error. This new design relies upon a controller which incorporates an internal model of the system driven by the previously constructed feedback.
14. E.D. Sontag. Some learning and systems-theoretic questions regarding recurrent neural networks. In Proc. Conf. on Information Sciences and Systems (CISS 97), Johns Hopkins, Baltimore, MD, March
1997, pages 630-635, 1997. Keyword(s): machine learning, neural networks, VC dimension, recurrent neural networks.
15. E.D. Sontag and Y. Wang. A notion of input to output stability. In Proc. European Control Conf., Brussels, July 1997, 1997. Note: (Paper WE-E A2, CD-ROM file ECC958.pdf, 6 pages). [PDF] Keyword(s): input to state stability, ISS, input to output stability.
Abstract: This paper deals with a notion of "input to output stability (IOS)", which formalizes the idea that outputs depend in an "asymptotically stable" manner on inputs, while internal signals remain bounded. When the output equals the complete state, one recovers the property of input to state stability (ISS). When there are no inputs, one has a generalization of the classical concept of partial stability. The main results provide Lyapunov-function characterizations of IOS.
1. R. Alur, T.A. Henzinger, and E.D. Sontag. Hybrid Systems III. Verification and Control (edited book). Springer Verlag, Berlin, 1996. Note: (LNCS 1066).
2. E.D. Sontag. Interconnected automata and linear systems: a theoretical framework in discrete-time. In R. Alur, T.A. Henzinger, and E.D. Sontag, editors, Proceedings of the DIMACS/SYCON workshop
on Hybrid systems III : verification and control, pages 436-448. Springer-Verlag New York, Inc., Secaucus, NJ, USA, 1996. [PDF] Keyword(s): hybrid systems.
Abstract: This paper summarizes the definitions and several of the main results of an approach to hybrid systems, which combines finite automata and linear systems, developed by the author in the early 1980s. Some related more recent results are briefly mentioned as well.
3. E.D. Sontag and H.J. Sussmann. General classes of control-Lyapunov functions. In Stability theory (Ascona, 1995), volume 121 of Internat. Ser. Numer. Math., pages 87-96. Birkhäuser, Basel, 1996. [PDF] Keyword(s): control-Lyapunov functions.
Abstract: Shorter and more expository version of "Nonsmooth control-Lyapunov functions".
4. B. DasGupta and E.D. Sontag. Sample complexity for learning recurrent perceptron mappings. IEEE Trans. Inform. Theory, 42(5):1479-1487, 1996. [PDF] Keyword(s): machine learning, neural networks,
VC dimension, recurrent neural networks.
Abstract: Recurrent perceptron classifiers generalize the usual perceptron model. They correspond to linear transformations of input vectors obtained by means of "autoregressive moving-average schemes", or infinite impulse response filters, and allow taking into account those correlations and dependences among input coordinates which arise from linear digital filtering. This paper provides tight bounds on sample complexity associated to the fitting of such models to experimental data. The results are expressed in the context of the theory of probably approximately correct (PAC) learning.
5. Y. Lin, E.D. Sontag, and Y. Wang. A smooth converse Lyapunov theorem for robust stability. SIAM J. Control Optim., 34(1):124-160, 1996. [PDF] [doi:http://dx.doi.org/10.1137/S0363012993259981]
Keyword(s): input to state stability.
Abstract: This paper presents a Converse Lyapunov Function Theorem motivated by robust control analysis and design. Our result is based upon, but generalizes, various aspects of well-known classical theorems. In a unified and natural manner, it (1) allows arbitrary bounded time-varying parameters in the system description, (2) deals with global asymptotic stability, (3) results in smooth (infinitely differentiable) Lyapunov functions, and (4) applies to stability with respect to not necessarily compact invariant sets.
6. W. Liu, Y. Chitour, and E.D. Sontag. On finite-gain stabilizability of linear systems subject to input saturation. SIAM J. Control Optim., 34(4):1190-1219, 1996. [PDF] [doi:http://dx.doi.org/
10.1137/S0363012994263469] Keyword(s): saturation, bounded inputs.
Abstract: This paper deals with (global) finite-gain input/output stabilization of linear systems with saturated controls. For neutrally stable systems, it is shown that the linear feedback law suggested by the passivity approach indeed provides stability, with respect to every Lp-norm. Explicit bounds on closed-loop gains are obtained, and they are related to the norms for the respective systems without saturation. These results do not extend to the class of systems for which the state matrix has eigenvalues on the imaginary axis with nonsimple (size > 1) Jordan blocks, contradicting what may be expected from the fact that such systems are globally asymptotically stabilizable in the state-space sense; this is shown in particular for the double integrator.
7. E.D. Sontag. Critical points for least-squares problems involving certain analytic functions, with applications to sigmoidal nets. Adv. Comput. Math., 5(2-3):245-268, 1996. [PDF] Keyword(s):
machine learning, subanalytic sets, semianalytic sets, critical points, approximation theory, neural networks, real-analytic functions.
Abstract: This paper deals with nonlinear least-squares problems involving the fitting to data of parameterized analytic functions. For generic regression data, a general result establishes the countability, and under stronger assumptions finiteness, of the set of functions giving rise to critical points of the quadratic loss function. In the special case of what are usually called "single-hidden layer neural networks", which are built upon the standard sigmoidal activation tanh(x) or equivalently 1/(1+exp(-x)), a rough upper bound for this cardinality is provided as well.
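The equivalence of the two activations mentioned in this abstract is just an affine rescaling: tanh(x) = 2/(1+exp(-2x)) - 1, so networks built on either function have the same expressive power. A minimal numerical check of this identity (an illustrative sketch, not taken from the paper):

```python
import math

def logistic(x):
    """The standard logistic sigmoid 1/(1+exp(-x))."""
    return 1.0 / (1.0 + math.exp(-x))

def tanh_via_logistic(x):
    """tanh expressed through the logistic function: tanh(x) = 2*logistic(2x) - 1."""
    return 2.0 * logistic(2.0 * x) - 1.0

# The two activations agree up to this affine change of variables.
for x in [-3.0, -0.5, 0.0, 1.2, 4.0]:
    assert abs(math.tanh(x) - tanh_via_logistic(x)) < 1e-12
```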
8. E.D. Sontag and Y. Wang. New characterizations of input-to-state stability. IEEE Trans. Automat. Control, 41(9):1283-1294, 1996. [PDF] Keyword(s): input to state stability, ISS.
Abstract: We present new characterizations of the Input to State Stability property. As a consequence of these results, we show the equivalence between the ISS property and several (apparent) variations proposed in the literature.
9. F.H. Clarke, Y.S. Ledyaev, E.D. Sontag, and A.I. Subbotin. Asymptotic controllability and feedback stabilization. In Proc. Conf. on Information Sciences and Systems (CISS 96), Princeton, NJ, pages 1232-1237, 1996. Keyword(s): control-Lyapunov functions, feedback stabilization.
10. B. Dasgupta and E.D. Sontag. Sample complexity for learning recurrent perceptron mappings. In D.S. Touretzky, M.C. Moser, and M.E. Hasselmo, editors, Advances in Neural Information Processing
Systems 8, pages 204-210, 1996. MIT Press, Cambridge, MA. Keyword(s): machine learning, neural networks, VC dimension, recurrent neural networks.
11. P. Koiran and E.D. Sontag. Neural networks with quadratic VC dimension. In D.S. Touretzky, M.C. Moser, and M.E. Hasselmo, editors, Advances in Neural Information Processing Systems 8, pages
197-203, 1996. MIT Press, Cambridge, MA. Keyword(s): machine learning, neural networks, VC dimension.
12. E.D. Sontag and Y. Wang. Detectability of nonlinear systems. In Proc. Conf. on Information Sciences and Systems (CISS 96), Princeton, NJ, pages 1031-1036, 1996. [PDF] Keyword(s): detectability,
input to state stability, ISS.
Abstract: Contains a proof of a technical step, which was omitted from the journal paper due to space constraints.
13. E.D. Sontag and F.R. Wirth. Remarks on universal nonsingular controls for discrete-time systems. Technical report 381, Institute for Dynamical Systems, University of Bremen, 1996.
1. E.D. Sontag. State-space and i/o stability for nonlinear systems. In Feedback control, nonlinear systems, and complexity (Montreal, PQ, 1994), volume 202 of Lecture Notes in Control and Inform.
Sci., pages 215-235. Springer, London, 1995. Note: (Expository paper, placed online per request. The paper ``Input to state stability: Basic concepts and results'' is far more up to date and
should be downloaded instead of this one!). [PDF] Keyword(s): input to state stability.
2. A.R. Teel, T.T. Georgiou, L. Praly, and E.D. Sontag. Input-Output Stability. In W. S. Levine, editor, The Control Handbook, pages 895-908. CRC Press, Boca Raton, 1995. [PDF]
Abstract: An encyclopedia-type article on foundations of input/output stability.
3. Y. Chitour, W. Liu, and E.D. Sontag. On the continuity and incremental-gain properties of certain saturated linear feedback loops. Internat. J. Robust Nonlinear Control, 5(5):413-440, 1995. [PDF]
Keyword(s): saturation, bounded inputs, incremental gains.
Abstract: This paper discusses various continuity and incremental-gain properties for neutrally stable linear systems under linear feedback subject to actuator saturation. The results complement our previous ones, which applied to the same class of problems and provided finite-gain stability.
4. M. A. Dahleh, E.D. Sontag, D. N. C. Tse, and J. N. Tsitsiklis. Worst-case identification of nonlinear fading memory systems. Automatica, 31(3):503-508, 1995. [PDF] [doi:http://dx.doi.org/10.1016/
0005-1098(94)00131-2] Keyword(s): information-based complexity, fading-memory systems, stability, system identification, structured uncertainty.
5. B. DasGupta, H.T. Siegelmann, and E.D. Sontag. On the complexity of training neural networks with continuous activation functions. IEEE Trans. Neural Networks, 6:1490-1504, 1995. [PDF] Keyword(s): machine learning, neural networks, analog computing, theory of computing, computational complexity.
Abstract: Blum and Rivest showed that any possible neural net learning algorithm based on fixed architectures faces severe computational barriers. This paper extends their NP-completeness result, which applied only to nets based on hard threshold activations, to nets that employ a particular continuous activation. In view of neural network practice, this is a more relevant result to understanding the limitations of backpropagation and related techniques.
6. Y. Lin and E.D. Sontag. Control-Lyapunov universal formulas for restricted inputs. Control Theory Adv. Tech., 10(4, part 5):1981-2004, 1995. [PDF] Keyword(s): control-Lyapunov functions,
saturation, bounded inputs.
Abstract: We deal with the question of obtaining explicit feedback control laws that stabilize a nonlinear system, under the assumption that a "control Lyapunov function" is known. In previous work, the case of unbounded controls was considered. Here we obtain results for bounded and/or positive controls. We also provide some simple preliminary remarks regarding a set stability version of the problem and a version for systems subject to disturbances.
7. Y. Lin, E.D. Sontag, and Y. Wang. Input to state stabilizability for parametrized families of systems. Internat. J. Robust Nonlinear Control, 5(3):187-205, 1995. [PDF] Keyword(s): ISS.
Abstract: This paper studies various stability issues for parameterized families of systems, including problems of stabilization with respect to sets. The study of such families is motivated by robust control applications. A Lyapunov-theoretic necessary and sufficient characterization is obtained for a natural notion of robust uniform set stability; this characterization allows replacing ad hoc conditions found in the literature by more conceptual stability notions. We then use these techniques to establish a result linking state space stability to ``input to state'' (bounded-input bounded-state) stability. In addition, the preservation of stabilizability under certain types of cascade interconnections is analyzed.
8. H. T. Siegelmann and E.D. Sontag. On the computational power of neural nets. J. Comput. System Sci., 50(1):132-150, 1995. [PDF] [doi:http://dx.doi.org/10.1006/jcss.1995.1013] Keyword(s): machine learning, neural networks, recurrent neural networks, analog computing, theory of computing, computational complexity, super-Turing computation.
Abstract: This paper deals with finite size networks which consist of interconnections of synchronously evolving processors. Each processor updates its state by applying a "sigmoidal" function to a rational-coefficient linear combination of the previous states of all units. We prove that one may simulate all Turing Machines by such nets. In particular, one can simulate any multi-stack Turing Machine in real time, and there is a net made up of 886 processors which computes a universal partial-recursive function. Products (high order nets) are not required, contrary to what had been stated in the literature. Non-deterministic Turing Machines can be simulated by non-deterministic rational nets, also in real time. The simulation result has many consequences regarding the decidability, or more generally the complexity, of questions about recursive nets.
9. E.D. Sontag. Control of systems without drift via generic loops. IEEE Trans. Automat. Control, 40(7):1210-1219, 1995. [PDF] Keyword(s): stabilization, non-holonomic systems, path-planning,
systems without drift, nonlinear control, controllability, real-analytic functions.
Abstract: This paper proposes a simple numerical technique for the steering of arbitrary analytic systems with no drift. It is based on the generation of "nonsingular loops" which allow linearized controllability along suitable trajectories. Once such loops are available, it is possible to employ standard Newton or steepest descent methods, as classically done in numerical control. The theoretical justification of the approach relies on recent results establishing the genericity of nonsingular controls, as well as a simple convergence lemma.
10. E.D. Sontag. On the input-to-state stability property. European J. Control, 1:24-36, 1995. [PDF] Keyword(s): input to state stability, ISS.
Abstract: The "input to state stability" (ISS) property provides a natural framework in which to formulate notions of stability with respect to input perturbations. In this expository paper, we review various equivalent definitions expressed in stability, Lyapunov-theoretic, and dissipation terms. We sketch some applications to the stabilization of cascades of systems and of linear systems subject to control saturation.
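For readers unfamiliar with the notion, the standard trajectory estimate defining ISS (a well-known formulation, quoted here for context rather than taken from this entry) bounds the state by a decaying transient term plus a gain on the input:

```latex
% ISS estimate: \beta is a class-KL function, \gamma a class-K function
|x(t)| \le \beta\bigl(|x(0)|,\, t\bigr)
        + \gamma\Bigl(\sup_{0 \le s \le t} |u(s)|\Bigr),
\qquad \beta \in \mathcal{KL},\; \gamma \in \mathcal{K}.
```

With u = 0 this reduces to global asymptotic stability, which is why ISS is the natural input-perturbation generalization discussed in the abstract above.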
11. E.D. Sontag and A.R. Teel. Changing supply functions in input/state stable systems. IEEE Trans. Automat. Control, 40(8):1476-1478, 1995. [PDF] Keyword(s): input to state stability, ISS, input to
state stability, Lyapunov functions.
Abstract: We consider the problem of characterizing possible supply functions for a given dissipative nonlinear system, and provide a result that allows some freedom in the modification of such functions.
12. E.D. Sontag and Y. Wang. On characterizations of the input-to-state stability property. Systems Control Lett., 24(5):351-359, 1995. [PDF] [doi:http://dx.doi.org/10.1016/0167-6911(94)00050-6]
Keyword(s): input to state stability, ISS.
Abstract: We show that the well-known Lyapunov sufficient condition for input-to-state stability is also necessary, settling positively an open question raised by several authors during the past few years. Additional characterizations of the ISS property, including one in terms of nonlinear stability margins, are also provided.
13. Y. Wang and E.D. Sontag. Orders of input/output differential equations and state-space dimensions. SIAM J. Control Optim., 33(4):1102-1126, 1995. [PDF] [doi:http://dx.doi.org/10.1137/
S0363012993246828] Keyword(s): identifiability, observability, realization theory, real-analytic functions.
Abstract: This paper deals with the orders of input/output equations satisfied by nonlinear systems. Such equations represent differential (or difference, in the discrete-time case) relations between high-order derivatives (or shifts, respectively) of input and output signals. It is shown that, under analyticity assumptions, there cannot exist equations of order less than the minimal dimension of any observable realization; this generalizes the known situation in the classical linear case. The results depend on new facts, themselves of considerable interest in control theory, regarding universal inputs for observability in the discrete case, and observation spaces in both the discrete and continuous cases. Included in the paper is also a new and simple self-contained proof of Sussmann's universal input theorem for continuous-time analytic systems.
14. E.D. Sontag. An abstract approach to dissipation. In Proc. IEEE Conf. Decision and Control, New Orleans, Dec. 1995, IEEE Publications, 1995, pages 2702-2703, 1995. Note: Full version, never
submitted, is here: http://sontaglab.org/FTPDIR/dissipation.pdf. [PDF] Keyword(s): quasimetric spaces, dissipative systems, nonlinear systems.
Abstract: We suggest that a very natural mathematical framework for the study of dissipation - in the sense of Willems, Moylan and Hill, and others - is that of indefinite quasimetric spaces. Several basic facts about dissipative systems are seen to be simple consequences of the properties of such spaces. Quasimetric spaces provide also one natural context for optimal control problems, and even for "gap" formulations of robustness.
15. E.D. Sontag. Critical points for neural net least-squares problems. In Proc. 1995 IEEE Internat. Conf. Neural Networks, IEEE Publications, 1995, pages 2949-2954, 1995. Keyword(s): neural networks.
16. E.D. Sontag. From linear to nonlinear: some complexity comparisons. In Proc. IEEE Conf. Decision and Control, New Orleans, Dec. 1995, IEEE Publications, 1995, pages 2916-2920, 1995. [PDF] Keyword
(s): theory of computing and complexity, computational complexity, controllability, observability.
Abstract: This paper deals with the computational complexity, and in some cases undecidability, of several problems in nonlinear control. The objective is to compare the theoretical difficulty of solving such problems to the corresponding problems for linear systems. In particular, the problem of null-controllability for systems with saturations (of a "neural network" type) is mentioned, as well as problems regarding piecewise linear (hybrid) systems. A comparison of accessibility, which can be checked fairly simply by Lie-algebraic methods, and controllability, which is at least NP-hard for bilinear systems, is carried out. Finally, some remarks are given on analog computation in this context.
17. E.D. Sontag. Spaces of observables in nonlinear control. In Proceedings of the International Congress of Mathematicians, Vol. 1, 2 (Zürich, 1994), Basel, pages 1532-1545, 1995. Birkhäuser. [PDF] Keyword(s): observability, dynamical systems.
Abstract: Invited talk at the 1994 ICM. Paper deals with the notion of observables for nonlinear systems, and their role in realization theory, minimality, and several control and path planning questions.
18. E.D. Sontag and H.J. Sussmann. Nonsmooth control-Lyapunov functions. In Proc. IEEE Conf. Decision and Control, New Orleans, Dec. 1995, IEEE Publications, 1995, pages 2799-2805, 1995. [PDF]
Keyword(s): control-Lyapunov functions.
Abstract: It is shown that the existence of a continuous control-Lyapunov function (CLF) is necessary and sufficient for null asymptotic controllability of nonlinear finite-dimensional control systems. The CLF condition is expressed in terms of a concept of generalized derivative (upper contingent derivative). This result generalizes to the non-smooth case the theorem of Artstein relating closed-loop feedback stabilization to smooth CLF's. It relies on viability theory as well as optimal control techniques. A "non-strict" version of the results, analogous to the LaSalle Invariance Principle, is also provided.
19. E.D. Sontag and Y. Wang. On characterizations of input-to-state stability with respect to compact sets. In Proceedings of IFAC Non-Linear Control Systems Design Symposium, (NOLCOS '95), Tahoe
City, CA, June 1995, pages 226-231, 1995. [PDF] Keyword(s): input to state stability, ISS.
Abstract: Previous characterizations of ISS-stability are shown to generalize without change to the case of stability with respect to sets. Some results on ISS-stabilizability are mentioned as well.
20. E.D. Sontag and Y. Wang. Various results concerning set input-to-state stability. In Proc. IEEE Conf. Decision and Control, New Orleans, Dec. 1995, IEEE Publications, 1995, pages 1330-1335, 1995.
Keyword(s): input to state stability, ISS.
1. B. DasGupta, H.T. Siegelmann, and E.D. Sontag. On the Intractability of Loading Neural Networks. In V. P. Roychowdhury, Siu K. Y., and Orlitsky A., editors, Theoretical Advances in Neural
Computation and Learning, pages 357-389. Kluwer Academic Publishers, 1994. [PDF] Keyword(s): analog computing, neural networks, computational complexity, machine learning.
2. W. Maass, G. Schnitger, and E.D. Sontag. A comparison of the computational power of sigmoid and Boolean threshold circuits. In V. P. Roychowdhury, Siu K. Y., and Orlitsky A., editors, Theoretical
Advances in Neural Computation and Learning, pages 127-151. Kluwer Academic Publishers, 1994. [PDF] Keyword(s): machine learning, neural networks, boolean systems.
Abstract: We examine the power of constant depth circuits with sigmoid threshold gates for computing boolean functions. It is shown that, for depth 2, constant size circuits of this type are strictly more powerful than constant size boolean threshold circuits (i.e. circuits with linear threshold gates). On the other hand it turns out that, for any constant depth d, polynomial size sigmoid threshold circuits with polynomially bounded weights compute exactly the same boolean functions as the corresponding circuits with linear threshold gates.
3. F. Albertini and E.D. Sontag. Further results on controllability properties of discrete-time nonlinear systems. Dynam. Control, 4(3):235-253, 1994. [PDF] [doi:http://dx.doi.org/10.1007/BF01985073
] Keyword(s): discrete-time, nonlinear control.
Abstract: Controllability questions for discrete-time nonlinear systems are addressed in this paper. In particular, we continue the search for conditions under which the group-like notion of transitivity implies the stronger and semigroup-like property of forward accessibility. We show that this implication holds, pointwise, for states which have a weak Poisson stability property, and globally, if there exists a global "attractor" for the system.
4. F. Albertini and E.D. Sontag. State observability in recurrent neural networks. Systems Control Lett., 22(4):235-244, 1994. [PDF] [doi:http://dx.doi.org/10.1016/0167-6911(94)90054-X] Keyword(s):
machine learning, neural networks, recurrent neural networks, observability, identifiability.
Abstract: This paper concerns recurrent networks x'=s(Ax+Bu), y=Cx, where s is a sigmoid, in both discrete time and continuous time. Our main result is that observability can be characterized, if one assumes certain conditions on the nonlinearity and on the system, in a manner very analogous to that of the linear case. Recall that for the latter, observability is equivalent to the requirement that there not be any nontrivial A-invariant subspace included in the kernel of C. We show that the result generalizes in a natural manner, except that one now needs to restrict attention to certain special "coordinate" subspaces.
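The recurrent-network model studied in this abstract can be simulated in a few lines. The sketch below runs the discrete-time version x+ = s(Ax+Bu), y = Cx with s = tanh; the matrices A, B, C are arbitrary hypothetical values chosen only for illustration, not taken from the paper:

```python
import math

def step(x, u, A, B, C):
    """One step of the discrete-time recurrent net x+ = s(Ax + Bu), y = Cx,
    with s = tanh applied componentwise. The output is read off the updated state."""
    n = len(x)
    x_next = [math.tanh(sum(A[i][j] * x[j] for j in range(n)) + B[i] * u)
              for i in range(n)]
    y = sum(C[j] * x_next[j] for j in range(n))  # scalar output
    return x_next, y

# Hypothetical 2-state, single-input, single-output net.
A = [[0.5, -0.2], [0.1, 0.3]]  # state matrix
B = [1.0, 0.0]                 # input column
C = [1.0, 1.0]                 # output row

x = [0.0, 0.0]
for u in [1.0, -0.5, 0.25]:
    x, y = step(x, u, A, B, C)
```

Because tanh maps into (-1, 1), every state component stays strictly inside the unit cube regardless of the input sequence, which is one qualitative difference from the linear case the abstract compares against.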
5. R. Koplon, E.D. Sontag, and M. L. J. Hautus. Observability of linear systems with saturated outputs. Linear Algebra Appl., 205/206:909-936, 1994. [PDF] Keyword(s): observability, saturation,
bounded inputs.
Abstract: In this paper, we present necessary and sufficient conditions for observability of the class of output-saturated systems. These are linear systems whose output passes through a saturation function before it can be measured.
6. H. T. Siegelmann and E.D. Sontag. Analog computation via neural networks. Theoret. Comput. Sci., 131(2):331-360, 1994. [PDF] [doi:http://dx.doi.org/10.1016/0304-3975(94)90178-3] Keyword(s): analog computing, neural networks, computational complexity, super-Turing computation, recurrent neural networks.
Abstract: We consider recurrent networks with real-valued weights. If allowed exponential time for computation, they turn out to have unbounded power. However, under polynomial-time constraints there are limits on their capabilities, though being more powerful than Turing Machines. Moreover, there is a precise correspondence between nets and standard non-uniform circuits with equivalent resources, and as a consequence one has lower bound constraints on what they can compute. We note that these networks are not likely to solve polynomially NP-hard problems, as the equality "P=NP" in our model implies the almost complete collapse of the standard polynomial hierarchy. We show that a large class of different networks and dynamical system models have no more computational power than this neural (first-order) model with real weights. The results suggest the following Church-like Thesis of Time-bounded Analog Computing: "Any reasonable analog computer will have no more power (up to polynomial time) than first-order recurrent networks."
7. H.J. Sussmann, E.D. Sontag, and Y. Yang. A general result on the stabilization of linear systems using bounded controls. IEEE Trans. Automat. Control, 39(12):2411-2425, 1994. [PDF] Keyword(s):
saturation, neural networks, global stability, nonlinear stability, bounded inputs.
Abstract: We present two constructions of controllers that globally stabilize linear systems subject to control saturation. We allow essentially arbitrary saturation functions. The only conditions imposed on the system are the obvious necessary ones, namely that no eigenvalues of the uncontrolled system have positive real part and that the standard stabilizability rank condition hold. One of the constructions is in terms of a "neural-network type" one-hidden layer architecture, while the other one is in terms of cascades of linear maps and saturations.
8. Y. Chitour, W. Liu, and E.D. Sontag. On the continuity and incremental gain properties of certain saturated linear feedback loops. In Proc. IEEE Conf. Decision and Control, Orlando, Dec. 1994,
IEEE Publications, 1994, pages 127-132, 1994. [PDF] Keyword(s): saturation, bounded inputs.
9. B. DasGupta, H. T. Siegelmann, and E.D. Sontag. On a learnability question associated to neural networks with continuous activations (extended abstract). In COLT '94: Proceedings of the seventh
annual conference on Computational learning theory, New York, NY, USA, pages 47-56, 1994. ACM Press. [doi:http://doi.acm.org/10.1145/180139.181009] Keyword(s): machine learning, analog computing,
neural networks, computational complexity.
10. R. Koplon and E.D. Sontag. Techniques for parameter reconstruction in Fourier-Neural recurrent networks. In Proc. IEEE Conf. Decision and Control, Orlando, Dec. 1994, IEEE Publications, 1994,
pages 213-218, 1994. Keyword(s): machine learning, neural networks, recurrent neural networks.
11. Y. Lin and E.D. Sontag. On control-Lyapunov functions under input constraints. In Proc. IEEE Conf. Decision and Control, Orlando, Dec. 1994, IEEE Publications, 1994, pages 640-645, 1994. Keyword
(s): control-Lyapunov functions.
12. Y. Lin, E.D. Sontag, and Y. Wang. Recent results on Lyapunov-theoretic techniques for nonlinear stability. In Proc. Amer. Automatic Control Conf., Baltimore, June 1994, pages 1771-1775, 1994.
13. E.D. Sontag and Y. Wang. Notions equivalent to input-to-state stability. In Proc. IEEE Conf. Decision and Control, Orlando, Dec. 1994, IEEE Publications, 1994, pages 3438-3443, 1994. Keyword(s):
input to state stability, ISS.
14. E.D. Sontag and Y. Wang. Orders of I/O equations and uniformly universal inputs. In Proc. IEEE Conf. Decision and Control, Orlando, Dec. 1994, IEEE Publications, 1994, pages 1270-1275, 1994.
Keyword(s): identifiability, observability, realization theory.
1. F. Albertini, E.D. Sontag, and V. Maillot. Uniqueness of weights for neural networks. In R. Mammone, editor, Artificial Neural Networks for Speech and Vision, pages 115-125. Chapman and Hall,
London, 1993. [PDF] Keyword(s): machine learning, neural networks, recurrent neural networks.
Abstract: In this short expository survey, we sketch various known facts about uniqueness of weights in neural networks, including results about recurrent nets, and we provide a new and elementary complex-variable proof of a uniqueness result that applies in the single hidden layer case.
2. E.D. Sontag. Neural networks for control. In H. L. Trentelman and J. C. Willems, editors, Essays on control: perspectives in the theory and its applications (Groningen, 1993), volume 14 of Progr. Systems Control Theory, pages 339-380. Birkhäuser Boston, Boston, MA, 1993. Note: A longer version (tech report with more details) is here: http://sontaglab.org/FTPDIR/neural-nets-siemens.pdf. [PDF] Keyword(s): neural networks, recurrent neural networks, machine learning.
Abstract: This paper has an expository introduction to two related topics: (a) Some mathematical results regarding "neural networks", and (b) so-called "neurocontrol" and "learning control" (each part can be read independently of the other). It was prepared for a short course given at the 1993 European Control Conference.
3. E.D. Sontag and H.J. Sussmann. Time-optimal control of manipulators (reprint of 1986 IEEE Int. Conf. on Robotics and Automation paper). In M.W. Spong, F.L. Lewis, and C.T. Abdallah, editors, Robot
Control, pages 266-271. IEEE Press, New York, 1993. Keyword(s): robotics, optimal control.
4. F. Albertini and E.D. Sontag. Discrete-time transitivity and accessibility: analytic systems. SIAM J. Control Optim., 31(6):1599-1622, 1993. [PDF] [doi:http://dx.doi.org/10.1137/0331075] Keyword
(s): controllability, discrete-time systems, accessibility, real-analytic functions.
Abstract: A basic open question for discrete-time nonlinear systems is that of determining when, in analogy with the classical continuous-time "positive form of Chow's Lemma", accessibility follows from transitivity of a natural group action. This paper studies the problem, and establishes the desired implication for analytic systems in several cases: (i) compact state space, (ii) under a Poisson stability condition, and (iii) in a generic sense. In addition, the paper studies accessibility properties of the "control sets" recently introduced in the context of dynamical systems studies. Finally, various examples and counterexamples are provided relating the various Lie algebras introduced in past work.
5. F. Albertini and E.D. Sontag. For neural networks, function determines form. Neural Networks, 6(7):975-990, 1993. [PDF] Keyword(s): machine learning, neural networks, identifiability, recurrent
neural networks, realization theory, observability.
Abstract: This paper shows that the weights of continuous-time feedback neural networks x'=s(Ax+Bu), y=Cx (where s is a sigmoid) are uniquely identifiable from input/output measurements. Under very weak genericity assumptions, the following is true: assume given two nets, whose neurons all have the same nonlinear activation function s; if the two nets have equal behaviors as "black boxes" then necessarily they must have the same number of neurons and, except at most for sign reversals at each node, the same weights. Moreover, even if the activations are not a priori known to coincide, they are shown to be also essentially determined from the external measurements.
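A minimal numerical illustration of the sign-reversal symmetry this abstract allows (not from the paper; a discrete-time tanh net is used here for simplicity): flipping the sign of individual neurons leaves the black-box behavior unchanged, so identifiability can hold only up to such flips.

```python
# Sketch: conjugating (A, B, C) by T = diag(+-1) leaves the input/output behavior
# of x+ = tanh(Ax + Bu), y = Cx unchanged (odd activation, initial state x0 = 0).
import numpy as np

def run_net(A, B, C, inputs):
    """Simulate the discrete-time recurrent net x+ = tanh(Ax + Bu), y = Cx."""
    x = np.zeros(A.shape[0])
    outputs = []
    for u in inputs:
        outputs.append(C @ x)
        x = np.tanh(A @ x + B @ u)
    return np.array(outputs)

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 1))
C = rng.standard_normal((1, 3))

T = np.diag([1.0, -1.0, -1.0])           # sign reversal at nodes 2 and 3
A2, B2, C2 = T @ A @ T, T @ B, C @ T     # T is its own inverse

inputs = rng.standard_normal((20, 1))
y1 = run_net(A, B, C, inputs)
y2 = run_net(A2, B2, C2, inputs)
print(np.max(np.abs(y1 - y2)))           # ~0: identical black-box behavior
```

The check works because tanh is odd and T is diagonal with entries ±1, so tanh(T v) = T tanh(v); the transformed state stays equal to T times the original state, and the output CT(Tx) = Cx.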
6. R. Koplon and E.D. Sontag. Linear systems with sign-observations. SIAM J. Control Optim., 31(5):1245-1266, 1993. [PDF] [doi:http://dx.doi.org/10.1137/0331059] Keyword(s): observability.
Abstract: This paper deals with systems that are obtained from linear time-invariant continuous- or discrete-time devices followed by a function that just provides the sign of each output. Such systems appear naturally in the study of quantized observations as well as in signal processing and neural network theory. Results are given on observability, minimal realizations, and other system-theoretic concepts. Certain major differences exist with the linear case, and other results generalize in a surprisingly straightforward manner.
7. F. Albertini and E.D. Sontag. Controllability of discrete-time nonlinear systems. In Systems and Networks: Mathematical Theory and Applications, Proc. MTNS '93, Vol. 2, Akad. Verlag, Regensburg,
pages 35-38, 1993.
8. F. Albertini and E.D. Sontag. Identifiability of discrete-time neural networks. In Proc. European Control Conf., Groningen, June 1993, pages 460-465, 1993. Keyword(s): machine learning, neural
networks, recurrent neural networks.
9. F. Albertini and E.D. Sontag. State observability in recurrent neural networks. In Proc. IEEE Conf. Decision and Control, San Antonio, Dec. 1993, IEEE Publications, 1993, pages 3706-3707, 1993.
Keyword(s): machine learning, neural networks, observability, recurrent neural networks.
10. F. Albertini and E.D. Sontag. Uniqueness of weights for recurrent nets. In Systems and Networks: Mathematical Theory and Applications, Proc. MTNS '93, Vol. 2, Akad. Verlag, Regensburg, pages
599-602, 1993. Note: Full version, never submitted for publication, is here: http://sontaglab.org/FTPDIR/93mtns-nn-extended.pdf. [PDF] Keyword(s): machine learning, neural networks,
identifiability, recurrent neural networks.
Abstract: This paper concerns recurrent networks x'=s(Ax+Bu), y=Cx, where s is a sigmoid, in both discrete time and continuous time. The paper establishes parameter identifiability under stronger assumptions on the activation than in "For neural networks, function determines form", but on the other hand deals with arbitrary (nonzero) initial states.
11. J. L. Balcázar, R. Gavaldà, H. T. Siegelmann, and E.D. Sontag. Some structural complexity aspects of neural computation. In Proceedings of the Eighth Annual Structure in Complexity Theory
Conference (San Diego, CA, 1993), Los Alamitos, CA, pages 253-265, 1993. IEEE Comput. Soc. Press. [PDF] Keyword(s): machine learning, analog computing, neural networks, computational complexity,
super-Turing computation, theory of computing and complexity.
Abstract: Recent work by H.T. Siegelmann and E.D. Sontag (1992) has demonstrated that polynomial time on linear saturated recurrent neural networks equals polynomial time on standard computational models: Turing machines if the weights of the net are rationals, and nonuniform circuits if the weights are real. Here, further connections between the languages recognized by such neural nets and other complexity classes are developed. Connections to space-bounded classes, simulation of parallel computational models such as Vector Machines, and a discussion of the characterizations of various nonuniform classes in terms of Kolmogorov complexity are presented.
12. C. Darken, M.J. Donahue, L. Gurvits, and E.D. Sontag. Rate of approximation results motivated by robust neural network learning. In COLT '93: Proceedings of the sixth annual conference on
Computational learning theory, New York, NY, USA, pages 303-309, 1993. ACM Press. [doi:http://doi.acm.org/10.1145/168304.168357] Keyword(s): machine learning, neural networks, optimization
problems, approximation theory.
13. R. Koplon and E.D. Sontag. Sign-linear systems as cascades of automata and continuous variable systems. In Proc. IEEE Conf. Decision and Control, San Antonio, Dec. 1993, IEEE Publications, 1993,
pages 2290-2291, 1993.
14. G.A. Lafferriere and E.D. Sontag. Remarks on control Lyapunov functions for discontinuous stabilizing feedback. In Proc. IEEE Conf. Decision and Control, San Antonio, Dec. 1993, IEEE
Publications, 1993, pages 306-308, 1993. [PDF] Keyword(s): feedback stabilization.
Abstract: We present a formula for a stabilizing feedback law under the assumption that a piecewise smooth control-Lyapunov function exists. The resulting feedback is continuous at the origin and smooth everywhere except on a hypersurface of codimension 1, assuming that certain transversality conditions are imposed there.
15. Y. Lin, E.D. Sontag, and Y. Wang. Lyapunov-function characterizations of stability and stabilization for parameterized families of systems. In Proc. IEEE Conf. Decision and Control, San Antonio,
Dec. 1993, IEEE Publications, 1993, pages 1978-1983, 1993.
16. W. Liu, Y. Chitour, and E.D. Sontag. Remarks on finite gain stabilizability of linear systems subject to input saturation. In Proc. IEEE Conf. Decision and Control, San Antonio, Dec. 1993, IEEE
Publications, 1993, pages 1808-1813, 1993. Keyword(s): saturation, bounded inputs.
17. A. Macintyre and E.D. Sontag. Finiteness results for sigmoidal neural networks. In STOC '93: Proceedings of the twenty-fifth annual ACM symposium on Theory of computing, New York, NY, USA, pages
325-334, 1993. ACM Press. [PDF] [doi:http://doi.acm.org/10.1145/167088.167192] Keyword(s): machine learning, neural networks, theory of computing and complexity, real-analytic functions.
Abstract: This paper deals with analog circuits. It establishes the finiteness of VC dimension, teaching dimension, and several other measures of sample complexity which arise in learning theory. It also shows that the equivalence of behaviors, and the loading problem, are effectively decidable, modulo a widely believed conjecture in number theory. The results, the first ones that are independent of weight size, apply when the gate function is the "standard sigmoid" commonly used in neural networks research. The proofs rely on very recent developments in the elementary theory of real numbers with exponentiation. (Some weaker conclusions are also given for more general analytic gate functions.) Applications to learnability of sparse polynomials are also mentioned.
18. H.T. Siegelmann and E.D. Sontag. Analog computation via neural networks. In Proc. 2nd Israel Symposium on Theory of Computing and Systems (ISTCS93), IEEE Computer Society Press, 1993, 1993.
Keyword(s): analog computing, neural networks, computational complexity, super-Turing computation, recurrent neural networks.
19. E.D. Sontag. Gradient techniques for systems with no drift: A classical idea revisited. In Proc. IEEE Conf. Decision and Control, San Antonio, Dec. 1993, IEEE Publications, 1993, pages 2706-2711,
1993. [PDF] Keyword(s): path-planning, systems without drift, nonlinear control, controllability, real-analytic functions.
Abstract: This paper proposes a technique for the control of analytic systems with no drift. It is based on the generation of "nonsingular loops" which allow linearized controllability. Once such loops are available, it is possible to employ standard Newton or steepest descent methods. The theoretical justification of the approach relies on results on genericity of nonsingular controls as well as a simple convergence lemma.
20. H.J. Sussmann, E.D. Sontag, and Y. Yang. A general result on the stabilization of linear systems using bounded controls. In Proc. IEEE Conf. Decision and Control, San Antonio, Dec. 1993, IEEE
Publications, 1993, pages 1802-1807, 1993. Keyword(s): saturation, bounded inputs.
21. Y. Yang and E.D. Sontag. Stabilization with saturated actuators, a worked example: F-8 longitudinal flight control. In Proc. 1993 IEEE Conf. on Aerospace Control Systems, Thousand Oaks, CA, May
1993, pages 289-293, 1993. [PDF] Keyword(s): saturation, bounded inputs, aircraft, airplanes.
Abstract: This paper develops in detail an explicit design for control under saturation limits for the linearized equations of longitudinal flight control for an F-8 aircraft, and tests the obtained controller on the original nonlinear model.
1. E.D. Sontag. Feedback stabilization using two-hidden-layer nets. IEEE Trans. Neural Networks, 3:981-990, 1992. [PDF] Keyword(s): machine learning, neural networks, feedback stabilization.
Abstract: This paper compares the representational capabilities of one hidden layer and two hidden layer nets consisting of feedforward interconnections of linear threshold units. It is remarked that for certain problems two hidden layers are required, contrary to what might in principle be expected from the known approximation theorems. The differences are not based on numerical accuracy or number of units needed, nor on capabilities for feature extraction, but rather on a much more basic classification into "direct" and "inverse" problems. The former correspond to the approximation of continuous functions, while the latter are concerned with approximating one-sided inverses of continuous functions, and are often encountered in the context of inverse kinematics determination or in control questions. A general result is given showing that nonlinear control systems can be stabilized using two hidden layers, but not in general using just one.
2. E.D. Sontag. Feedforward nets for interpolation and classification. J. Comput. System Sci., 45(1):20-48, 1992. [PDF] [doi:http://dx.doi.org/10.1016/0022-0000(92)90039-L] Keyword(s): machine
learning, neural networks, VC dimension, boolean systems.
Abstract: This paper deals with single-hidden-layer feedforward nets, studying various aspects of classification power and interpolation capability. In particular, a worst-case analysis shows that direct input to output connections in threshold nets double the recognition but not the interpolation power, while using sigmoids rather than thresholds allows doubling both. For other measures of classification, including the Vapnik-Chervonenkis dimension, the effect of direct connections or sigmoidal activations is studied in the special case of two-dimensional inputs.
3. E.D. Sontag. Universal nonsingular controls. Systems Control Lett., 19(3):221-224, 1992. Note: Erratum appeared in SCL 20 (1993), p. 77; can be found in same file. [PDF] [doi:http://dx.doi.org/
10.1016/0167-6911(92)90116-A] Keyword(s): controllability, real-analytic functions.
Abstract: For analytic systems satisfying the strong accessibility rank condition, generic inputs produce trajectories along which the linearized system is controllable. Applications to the steering of systems without drift are briefly mentioned.
4. Y. Wang and E.D. Sontag. Algebraic differential equations and rational control systems. SIAM J. Control Optim., 30(5):1126-1149, 1992. [PDF] Keyword(s): identifiability, observability,
realization theory, input/output system representations.
Abstract: It is shown that realizability of an input/output operator by a finite-dimensional continuous-time rational control system is equivalent to the existence of a high-order algebraic differential equation satisfied by the corresponding input/output pairs ("behavior"). This generalizes, to nonlinear systems, the classical equivalence between autoregressive representations and finite-dimensional linear realizability.
5. Y. Wang and E.D. Sontag. Generating series and nonlinear systems: analytic aspects, local realizability, and i/o representations. Forum Math., 4(3):299-322, 1992. [PDF] Keyword(s):
identifiability, observability, realization theory, input/output system representations, real-analytic functions.
Abstract: This paper studies fundamental analytic properties of generating series for nonlinear control systems, and of the operators they define. It then applies the results obtained to the extension of facts, which relate realizability and algebraic input/output equations, to local realizability and analytic equations.
6. F. Albertini and E.D. Sontag. For neural networks, function determines form. In Proc. IEEE Conf. Decision and Control, Tucson, Dec. 1992, IEEE Publications, 1992, pages 26-31, 1992. Keyword(s):
machine learning, neural networks, recurrent neural networks.
7. M.A. Dahleh, E.D. Sontag, D.N.C. Tse, and J.N. Tsitsiklis. Worst-case identification of nonlinear fading memory systems. In Proc. Amer. Automatic Control Conf., Chicago, June 1992, pages 241-245,
1992. [PDF] Keyword(s): information-based complexity, fading-memory systems, stability, system identification, structured uncertainty.
Abstract: Preliminary version of paper published in Automatica in 1995.
8. Y. Lin and E.D. Sontag. Gradient techniques for steering systems with no drift. In Proc. Conf. Inform. Sci. and Systems, Princeton University, March 1992, pages 1003-1008, 1992.
9. R. Schwarzschild, E.D. Sontag, and M.L.J. Hautus. Output-Saturated Systems. In Proc. Amer. Automatic Control Conf., Chicago, June 1992, pages 2504-2509, 1992.
10. H.T. Siegelmann and E.D. Sontag. On the computational power of neural nets. In COLT '92: Proceedings of the fifth annual workshop on Computational learning theory, New York, NY, USA, pages
440-449, 1992. ACM Press. [doi:http://doi.acm.org/10.1145/130385.130432] Keyword(s): analog computing, neural networks, computational complexity, super-Turing computation, recurrent neural networks.
11. H.T. Siegelmann and E.D. Sontag. Some results on computing with neural nets. In Proc. IEEE Conf. Decision and Control, Tucson, Dec. 1992, IEEE Publications, 1992, pages 1476-1481, 1992. Keyword
(s): analog computing, neural networks, computational complexity, super-Turing computation, recurrent neural networks.
12. H.T. Siegelmann, E.D. Sontag, and C.L. Giles. The Complexity of Language Recognition by Neural Networks. In Proceedings of the IFIP 12th World Computer Congress on Algorithms, Software,
Architecture - Information Processing '92, Volume 1, pages 329-335, 1992. North-Holland. Keyword(s): machine learning, neural networks, computational complexity, machine learning, recurrent
neural networks, theory of computing and complexity.
13. E.D. Sontag. Neural nets as systems models and controllers. In Proc. Seventh Yale Workshop on Adaptive and Learning Systems, Yale University, 1992, pages 73-79, 1992. [PDF] Keyword(s): machine
learning, neural networks, recurrent neural networks.
Abstract: A conference paper. Placed here because it was requested, but contains little that is not also contained in the survey on neural nets mentioned above.
14. E.D. Sontag. Systems combining linearity and saturations, and relations to neural nets. In Nonlinear Control Systems Design 1992, IFAC Symposia Series, 1993, M. Fliess Ed., Pergamon Press,
Oxford, 1993, pages 15-21, 1992. Note: (Also in Proc. Nonlinear Control Systems Design Symp., Bordeaux, June 1992, M. Fliess, Ed., IFAC Publications, pp. 242-247). Keyword(s): machine learning,
neural networks, recurrent neural networks.
15. E.D. Sontag and Y. Lin. Stabilization with respect to noncompact sets: Lyapunov characterizations and effect of bounded inputs. In Nonlinear Control Systems Design 1992, IFAC Symposia Series, M.
Fliess Ed., Pergamon Press, Oxford, 1993, pages 43-49, 1992. Note: Also in Proc. Nonlinear Control Systems Design Symp., Bordeaux, June 1992, (M. Fliess, Ed.), IFAC Publications, pp. 9-14. [PDF]
Keyword(s): saturation, bounded inputs.
16. E.D. Sontag and Y. Wang. I/O equations in discrete and continuous time. In Proc. IEEE Conf. Decision and Control, Tucson, Dec. 1992, IEEE Publications, 1992, pages 3661-3662, 1992. Keyword(s):
identifiability, observability, realization theory.
17. Y. Yang, H.J. Sussmann, and E.D. Sontag. Stabilization of linear systems with bounded controls. In Nonlinear Control Systems Design 1992, IFAC Symposia Series, 1993, M. Fliess Ed., Pergamon
Press, Oxford, 1993, pages 51-56, 1992. Note: Also in Proc. Nonlinear Control Systems Design Symp., Bordeaux, June 1992, (M. Fliess, Ed.), IFAC Publications, pp. 15-20. Keyword(s): saturation,
bounded inputs.
1. F. Albertini and E.D. Sontag. Transitivity and forward accessibility of discrete-time nonlinear systems. In Analysis of controlled dynamical systems (Lyon, 1990), volume 8 of Progr. Systems
Control Theory, pages 21-34. Birkhäuser Boston, Boston, MA, 1991.
2. E.D. Sontag. Capabilities and training of feedforward nets. In Neural networks (New Brunswick, NJ, 1990), pages 303-321. Academic Press, Boston, MA, 1991. [PDF] Keyword(s): machine learning,
machine learning, neural networks.
Abstract: This paper surveys recent work by the author on learning and representational capabilities of feedforward nets. The learning results show that, among two possible variants of the so-called backpropagation training method for sigmoidal nets, both of which are used in practice, one is a better generalization of the older perceptron training algorithm than the other. The representation results show that nets consisting of sigmoidal neurons have at least twice the representational capabilities of nets that use classical threshold neurons, at least when this increase is quantified in terms of classification power. On the other hand, threshold nets are shown to be more useful when approximating implicit functions, as illustrated with an application to a typical control problem.
3. E.D. Sontag. Input/output and state-space stability. In New trends in systems theory (Genoa, 1990), volume 7 of Progr. Systems Control Theory, pages 684-691. Birkhäuser Boston, Boston, MA, 1991. [PDF] Keyword(s): input to state stability.
Abstract: This conference paper reviews various results relating state-space (Lyapunov) stabilization and exponential stabilization to several notions of input/output or bounded-input bounded-output stabilization. It also provides generalizations of some of these results to systems with saturating controls. Some of these latter results were not included in journal papers.
4. E.D. Sontag. Kalman's controllability rank condition: from linear to nonlinear. In Mathematical system theory, pages 453-462. Springer, Berlin, 1991. [PDF] Keyword(s): controllability.
Abstract: The notion of controllability was identified by Kalman as one of the central properties determining system behavior. His simple rank condition is ubiquitous in linear systems analysis. This article presents an elementary and expository overview of the generalizations of this test to a condition for testing accessibility of discrete- and continuous-time nonlinear systems.
5. Y. Lin and E.D. Sontag. A universal formula for stabilization with bounded controls. Systems Control Lett., 16(6):393-397, 1991. [PDF] [doi:http://dx.doi.org/10.1016/0167-6911(91)90111-Q] Keyword
(s): stabilization, nonlinear systems, saturation, bounded inputs, control-Lyapunov functions, real-analytic functions.
Abstract: We provide a formula for a stabilizing feedback law using a bounded control, under the assumption that an appropriate control-Lyapunov function is known. Such a feedback, smooth away from the origin and continuous everywhere, is known to exist via Artstein's Theorem. As in the unbounded-control case treated in a previous note, we provide an explicit and "universal" formula given by an algebraic function of Lie derivatives. In particular, we extend to the bounded case the result that the feedback can be chosen analytic if the Lyapunov function and the vector fields defining the system are analytic.
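A sketch of the kind of bounded "universal formula" feedback this abstract describes, for a scalar-input system. The specific expression below, written from the Lyapunov derivatives a = LfV and b = LgV, is an assumption for illustration, not taken verbatim from the paper; it does satisfy |u| < 1 whenever the control-Lyapunov condition a < |b| holds away from the origin.

```python
# Sketch (assumption): a bounded universal-formula-style feedback for a scalar
# control-affine system x' = f(x) + g(x)u with a known clf V.
import math

def k_bounded(a, b):
    """Feedback from a = LfV, b = LgV; |u| < 1 whenever a < |b| (clf condition)."""
    if b == 0.0:
        return 0.0
    return -(a + math.sqrt(a * a + b ** 4)) / (b * (1.0 + math.sqrt(1.0 + b * b)))

# Toy unstable system x' = x + u with V = x^2/2, so a = x^2 and b = x
# (the bounded-control clf condition a < |b| holds for 0 < |x| < 1).
x, dt = 0.5, 0.01
for _ in range(2000):
    u = k_bounded(x * x, x)
    assert abs(u) < 1.0          # the control respects the unit bound
    x += dt * (x + u)            # forward-Euler step
print(round(x, 6))               # state is driven toward 0
```

The bound follows because a < |b| gives a + sqrt(a^2 + b^4) < |b|(1 + sqrt(1 + b^2)), and the same inequality shows the Lyapunov derivative a + b·u stays negative away from the origin.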
6. H. T. Siegelmann and E.D. Sontag. Turing computability with neural nets. Appl. Math. Lett., 4(6):77-80, 1991. [PDF] Keyword(s): machine learning, neural networks, computational complexity,
recurrent neural networks.
Abstract: This paper shows the existence of a finite neural network, made up of sigmoidal neurons, which simulates a universal Turing machine. It is composed of fewer than 100,000 synchronously evolving processors, interconnected linearly. High-order connections are not required. (Note: this paper was placed here by special request. The results in this paper have by now been improved considerably: see the JCSS paper, which among other aspects provides a polynomial-time simulation. This paper, based on a unary encoding, results in an exponential slowdown.)
7. E.D. Sontag and H.J. Sussmann. Back propagation separates where perceptrons do. Neural Networks, 4(2):243-249, 1991. [PDF] [doi:http://dx.doi.org/10.1016/0893-6080(91)90008-S] Keyword(s): machine
learning, neural networks.
Abstract: Feedforward nets with sigmoidal activation functions are often designed by minimizing a cost criterion. It has been pointed out before that this technique may be outperformed by the classical perceptron learning rule, at least on some problems. In this paper, we show that no such pathologies can arise if the error criterion is of a threshold LMS type, i.e., is zero for values "beyond" the desired target values. More precisely, we show that if the data are linearly separable, and one considers nets with no hidden neurons, then an error function as above cannot have any local minima that are not global. In addition, the proof gives the following stronger result, under the stated hypotheses: the continuous gradient adjustment procedure is such that from any initial weight configuration a separating set of weights is obtained in finite time. This is a precise analogue of the Perceptron Learning Theorem. The results are then compared with the more classical pattern recognition problem of threshold LMS with linear activations, where no spurious local minima exist even for nonseparable data: here it is shown that even when using the threshold criterion, such bad local minima may occur if the data are not separable and sigmoids are used.
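A toy illustration of the threshold-LMS idea from this abstract (the data, step size, and loss form below are hypothetical choices for the sketch): with no hidden neurons, gradient descent on a cost that is zero once the output is "beyond" the target finds a separating set of weights on linearly separable data, mirroring the Perceptron Learning Theorem.

```python
# Sketch: gradient descent on a threshold-LMS cost, 0.5*max(0, 1 - y*out)^2,
# for a single linear unit on linearly separable 2-D data.
pts = [((2.0, 1.0), 1), ((1.5, 2.0), 1), ((-1.0, -0.5), -1), ((-2.0, -1.5), -1)]

w = [0.0, 0.0]
b = 0.0
lr = 0.1
for _ in range(200):
    for (x1, x2), y in pts:
        out = w[0] * x1 + w[1] * x2 + b
        # threshold LMS: no penalty (and no gradient) once out is beyond target y
        err = max(0.0, 1.0 - y * out)
        w[0] += lr * err * y * x1
        w[1] += lr * err * y * x2
        b += lr * err * y

sep = all((w[0] * x1 + w[1] * x2 + b) * y > 0 for (x1, x2), y in pts)
print(sep)   # True: the learned weights separate the data
```

The key property is that the error, and hence the gradient, vanishes on correctly classified points with sufficient margin, so no spurious equilibria appear on separable data.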
8. F. Albertini and E.D. Sontag. Accessibility of discrete-time nonlinear systems, and some relations to chaotic dynamics. In Proc. Conf. Inform. Sci. and Systems, Johns Hopkins University, March
1991, pages 731-736, 1991.
9. F. Albertini and E.D. Sontag. Some connections between chaotic dynamical systems and control systems. In Proc. European Control Conf., Vol. 1, Grenoble, July 1991, pages 58-163, 1991. [PDF]
Keyword(s): chaotic systems, controllability.
Abstract: This paper shows how to extend recent results of Colonius and Kliemann, regarding connections between chaos and controllability, from continuous to discrete time. The extension is nontrivial because the results all rely on basic properties of the accessibility Lie algebra which fail to hold in discrete time. Thus, this paper first develops further results in nonlinear accessibility, and then shows how a theorem can be proved which, while analogous to the one given in the work by Colonius and Kliemann, also exhibits some important differences. A counterexample is used to show that the theorem given in continuous time cannot be generalized in a straightforward manner.
10. Y. Lin and E.D. Sontag. Further universal formulas for Lyapunov approaches to nonlinear stabilization. In Proc. Conf. Inform. Sci. and Systems, Johns Hopkins University, March 1991, pages 541-546, 1991.
11. W. Maass, G. Schnitger, and E.D. Sontag. On the computational power of sigmoid versus Boolean threshold circuits (extended abstract). In Proceedings of the 32nd annual symposium on Foundations of
computer science, Los Alamitos, CA, USA, pages 767-776, 1991. IEEE Computer Society Press. Keyword(s): machine learning, neural networks, theory of computing and complexity.
12. R. Schwarzschild and E.D. Sontag. Algebraic theory of sign-linear systems. In Proc. Amer. Automatic Control Conf., Boston, June 1991, pages 799-804, 1991.
13. R. Schwarzschild and E.D. Sontag. Quantized systems, saturated measurements, and sign-linear systems. In Proc. Conf. Inform. Sci. and Systems, Johns Hopkins University, March 1991, pages 134-139,
1991. Keyword(s): observability, saturation.
14. E.D. Sontag. Capabilities of four- vs three-layer nets, and control applications. In Proc. Conf. Inform. Sci. and Systems, Johns Hopkins University, March 1991, pages 558-563, 1991.
15. E.D. Sontag. Feedback Stabilization Using Two-Hidden-Layer Nets. In Proc. Amer. Automatic Control Conf., Boston, June 1991, pages 815-820, 1991.
16. E.D. Sontag and Y. Wang. I/O equations for nonlinear systems and observation spaces. In Proc. IEEE Conf. Decision and Control, Brighton, UK, Dec. 1991, IEEE Publications, 1991, pages 720-725,
1991. [PDF] Keyword(s): identifiability, observability, realization theory, real-analytic functions.
Abstract: This paper studies various types of input/output representations for nonlinear continuous-time systems. The algebraic and analytic i/o equations studied in previous papers by the authors are generalized to integral and integro-differential equations, and an abstract notion is also considered. New results are given on generic observability, and these results are then applied to give conditions under which the minimal order of an equation equals the minimal possible dimension of a realization, just as with linear systems but in contrast to the discrete-time nonlinear theory.
1. E.D. Sontag. Mathematical Control Theory. Deterministic Finite-Dimensional Systems, volume 6 of Texts in Applied Mathematics. Springer-Verlag, New York, 1990.
Abstract: The second edition (1998) is now online; please follow that link.
2. E.D. Sontag. Constant McMillan degree and the continuous stabilization of families of transfer matrices. In Control of uncertain systems (Bremen, 1989), volume 6 of Progr. Systems Control Theory,
pages 289-295. Birkhäuser Boston, Boston, MA, 1990. [PDF] Keyword(s): systems over rings.
3. E.D. Sontag. Feedback stabilization of nonlinear systems. In Robust control of linear systems and nonlinear control (Amsterdam, 1989), volume 4 of Progr. Systems Control Theory, pages 61-81.
Birkhäuser Boston, Boston, MA, 1990. [PDF]
Abstract: This paper surveys some well-known facts as well as some recent developments on the topic of stabilization of nonlinear systems. (Note: figures are not included in the file; they were pasted in.)
4. E.D. Sontag. Integrability of certain distributions associated with actions on manifolds and applications to control problems. In Nonlinear controllability and optimal control, volume 133 of
Monogr. Textbooks Pure Appl. Math., pages 81-131. Dekker, New York, 1990. [PDF] Keyword(s): controllability.
Abstract: Results are given on the integrability of certain distributions which arise from smoothly parametrized families of diffeomorphisms acting on manifolds. Applications to control problems, and in particular to the problem of sampling, are discussed. Pages 42-50 apply the results to the control of continuous-time systems; this is an exposition of some of the basic results of the Lie algebraic accessibility theory.
5. E.D. Sontag and Y. Wang. Input/output equations and realizability. In Realization and modelling in system theory (Amsterdam, 1989), volume 3 of Progr. Systems Control Theory, pages 125-132.
Birkhäuser Boston, Boston, MA, 1990. [PDF] Keyword(s): identifiability, observability, realization theory.
6. B. Jakubczyk and E.D. Sontag. Controllability of nonlinear discrete-time systems: a Lie-algebraic approach. SIAM J. Control Optim., 28(1):1-33, 1990. [PDF] [doi:http://dx.doi.org/10.1137/0328001]
Keyword(s): discrete-time.
Abstract: This paper presents a geometric study of controllability for discrete-time nonlinear systems. Various accessibility properties are characterized in terms of Lie algebras of vector fields. Some of the results obtained are parallel to analogous ones in continuous time, but in many respects the theory is substantially different and many new phenomena appear.
7. E.D. Sontag. Further facts about input to state stabilization. IEEE Trans. Automat. Control, 35(4):473-476, 1990. [PDF] Keyword(s): input to state stability, ISS, stabilization.
Abstract: Previous results about input to state stabilizability are shown to hold even for systems which are not linear in controls, provided that a more general type of feedback be allowed. Applications to certain stabilization problems and coprime factorizations, as well as comparisons to other results on input to state stability, are also briefly discussed.
8. E.D. Sontag and Y. Wang. Pole shifting for families of linear systems depending on at most three parameters. Linear Algebra Appl., 137/138:3-38, 1990. [PDF] Keyword(s): systems over rings.
Abstract: We prove that for any family of n-dimensional controllable linear systems, continuously parameterized by up to three parameters, and for any continuous selection of n eigenvalues (in complex conjugate pairs), there is some dynamic controller of dimension 3n which is itself continuously parameterized and for which the closed-loop eigenvalues are these same eigenvalues, each counted 4 times. An analogous result holds also for smooth parameterizations.
9. T. Asano, J. Hershberger, J. Pach, E.D. Sontag, D. Souvaine, and S. Suri. Separating bi-chromatic points by parallel lines. In Proceedings of the Second Canadian Conf. on Computational Geometry,
Ottawa, Canada, 1990, pages 46-49, 1990. [PDF] Keyword(s): computational geometry.
Abstract: Given a 2-coloring of the vertices of a regular n-gon P, how many parallel lines are needed to separate the vertices into monochromatic subsets? We prove that floor(n/2) is a tight upper bound, and also provide an O(n log n) time algorithm to determine the direction that gives the minimum number of lines. If the polygon is a non-regular convex polygon, then n-3 lines may be necessary, while n-2 lines always suffice. This problem arises in machine learning and has implications about the representational capabilities of some neural networks.
10. H. Dewan and E.D. Sontag. Extrapolatory methods for speeding up the BP algorithm. In Proc. Int. Joint Conf. on Neural Networks, Washington, DC, Jan. 1990, Lawrence Erlbaum Associates, Inc.,
Publishers, ISBN 0-8058-0775-6, pages I.613-616, 1990. [PDF] Keyword(s): machine learning, neural networks.
Abstract: We describe a speedup technique that uses extrapolatory methods to predict the weights in a Neural Network using Back Propagation (BP) learning. The method is based on empirical observations of the way the weights change as a function of time. We use numerical function fitting techniques to determine the parameters of an extrapolation function and then use this function to project weights into the future. Significant computational savings result by using the extrapolated weights to jump over many iterations of the standard algorithm, achieving comparable performance with fewer iterations.
11. E.D. Sontag. Comparing sigmoids and heavisides. In Proc. Conf. Info. Sci. and Systems, Princeton, 1990, pages 654-659, 1990. Keyword(s): machine learning, neural networks, boolean systems.
12. E.D. Sontag. Remarks on interpolation and recognition using neural nets. In NIPS-3: Proceedings of the 1990 conference on Advances in neural information processing systems 3, San Francisco, CA,
USA, pages 939-945, 1990. Morgan Kaufmann Publishers Inc.. Keyword(s): machine learning, neural networks.
13. E.D. Sontag and H.J. Sussmann. Nonlinear output feedback design for linear systems with saturating controls. In Proc. IEEE Conf. Decision and Control, Honolulu, Dec. 1990, IEEE Publications, pages 3414-3416, 1990. [PDF] Keyword(s): saturation, bounded inputs.
Abstract: This paper shows the existence of (nonlinear) smooth dynamic feedback stabilizers for linear time invariant systems under input constraints, assuming only that open-loop asymptotic controllability and detectability hold.
14. Y. Wang and E.D. Sontag. Realization of families of generating series: differential algebraic and state space equations. In Proc. 11th IFAC World Congress, Tallinn, former USSR, 1990, pages
62-66, 1990. Keyword(s): identifiability, observability, realization theory.
15. F. Albertini and E.D. Sontag. Some connections between chaotic dynamical systems and control systems. Technical report SYCON-90-13, Rutgers Center for Systems and Control, 1990.
1. B. Jakubczyk and E.D. Sontag. Nonlinear discrete-time systems. Accessibility conditions. In Modern optimal control, volume 119 of Lecture Notes in Pure and Appl. Math., pages 173-185. Dekker, New
York, 1989. [PDF]
2. A. Arapostathis, B. Jakubczyk, H.-G. Lee, S. I. Marcus, and E.D. Sontag. The effect of sampling on linear equivalence and feedback linearization. Systems Control Lett., 13(5):373-381, 1989. [PDF]
[doi:http://dx.doi.org/10.1016/0167-6911(89)90103-5] Keyword(s): discrete-time, sampled-data systems, discrete-time systems, sampling.
Abstract: We investigate the effect of sampling on linearization for continuous time systems. It is shown that the discretized system is linearizable by state coordinate change for an open set of sampling times if and only if the continuous time system is linearizable by state coordinate change. Also, it is shown that linearizability via digital feedback imposes highly nongeneric constraints on the structure of the plant, even if this is known to be linearizable with continuous-time feedback.
3. E.D. Sontag. A ``universal'' construction of Artstein's theorem on nonlinear stabilization. Systems Control Lett., 13(2):117-123, 1989. [PDF] Keyword(s): control-Lyapunov functions,
stabilization, real-analytic functions.
Abstract: This note presents an explicit proof of the theorem - due to Artstein - which states that the existence of a smooth control-Lyapunov function implies smooth stabilizability. Moreover, the result is extended to the real-analytic and rational cases as well. The proof uses a "universal" formula given by an algebraic function of Lie derivatives; this formula originates in the solution of a simple Riccati equation.
4. E.D. Sontag. Sigmoids distinguish more efficiently than Heavisides. Neural Computation, 1:470-472, 1989. [PDF] Keyword(s): machine learning, neural networks, boolean systems.
Abstract: Every dichotomy on a 2k-point set in Rn can be implemented by a neural net with a single hidden layer containing k sigmoidal neurons. If the neurons were of a hardlimiter (Heaviside) type, 2k-1 would be in general needed.
5. E.D. Sontag. Smooth stabilization implies coprime factorization. IEEE Trans. Automat. Control, 34(4):435-443, 1989. [PDF] Keyword(s): input to state stability, ISS.
Abstract: This paper shows that coprime right factorizations exist for the input to state mapping of a continuous time nonlinear system provided that the smooth feedback stabilization problem be solvable for this system. In particular, it follows that feedback linearizable systems admit such factorizations. In order to establish the result a Lyapunov-theoretic definition is proposed for bounded input bounded output stability. The main technical fact proved relates the notion of stabilizability studied in the state space nonlinear control literature to a notion of stability under bounded control perturbations analogous to those studied in operator theoretic approaches to systems; it states that smooth stabilization implies smooth input-to-state stabilization. (Note: This is the original ISS paper, but the ISS results have been much improved in later papers. The material on coprime factorizations is still of interest, but the 89 CDC paper has some improvements and should be read too.)
6. E.D. Sontag and H.J. Sussmann. Backpropagation can give rise to spurious local minima even for networks without hidden layers. Complex Systems, 3(1):91-106, 1989. [PDF] Keyword(s): machine
learning, neural networks.
Abstract: We give an example of a neural net without hidden layers and with a sigmoid transfer function, together with a training set of binary vectors, for which the sum of the squared errors, regarded as a function of the weights, has a local minimum which is not a global minimum. The example consists of a set of 125 training instances, with four weights and a threshold to be learnt. We do not know if substantially smaller binary examples exist.
7. E.D. Sontag and H.J. Sussmann. Further comments on the stabilizability of the angular velocity of a rigid body. Systems Control Lett., 12(3):213-217, 1989. [PDF] [doi:http://dx.doi.org/10.1016/
0167-6911(89)90052-2] Keyword(s): satellite control, feedback stabilization.
Abstract: We prove that the angular velocity equations can be smoothly stabilized with a single torque controller for bodies having an axis of symmetry. This complements a recent result of Aeyels and Szafranski.
8. E.D. Sontag and Y. Yamamoto. On the existence of approximately coprime factorizations for retarded systems. Systems Control Lett., 13(1):53-58, 1989. [PDF] [doi:http://dx.doi.org/10.1016/
0167-6911(89)90020-0] Keyword(s): delay-differential systems.
Abstract: This note establishes a result linking algebraically coprime factorizations of transfer matrices of delay systems to approximately coprime factorizations in the sense of distributions. The latter have been employed by the second author in the study of function-space controllability for such systems.
9. Y. Wang and E.D. Sontag. On two definitions of observation spaces. Systems Control Lett., 13(4):279-289, 1989. [PDF] [doi:http://dx.doi.org/10.1016/0167-6911(89)90116-3] Keyword(s):
observability, identifiability, observables, observation space, realization theory.
Abstract: This paper establishes the equality of the observation spaces defined by means of piecewise constant controls with those defined in terms of differentiable controls.
10. E.D. Sontag. Remarks on stabilization and input-to-state stability. In Proceedings of the 28th IEEE Conference on Decision and Control, Vol. 1--3 (Tampa, FL, 1989), New York, pages 1376-1378,
1989. IEEE. [PDF] Keyword(s): input to state stability, ISS, stabilization.
Abstract: This paper describes how notions of input-to-state stabilization are useful when stabilizing cascades of systems. The simplest result along these lines is local, and it states that a cascade of two locally asymptotically stable systems is again asymptotically stable. A global result is obtained if both systems have the origin as a globally asymptotically stable state and the "converging input bounded state" property holds for the second system. Relations to input to state stability and the "bounded input bounded state" property are mentioned as well.
11. E.D. Sontag. Remarks on the time-optimal control of a class of Hamiltonian systems. In Proceedings of the 28th IEEE Conference on Decision and Control, Vol. 1--3 (Tampa, FL, 1989), New York,
pages 217-221, 1989. IEEE. [PDF] Keyword(s): robotics, optimal control.
Abstract: This paper introduces a subclass of Hamiltonian control systems motivated by mechanical models. It deals with time-optimal control problems. The main results characterize regions of the state space where singular trajectories cannot exist, and provide high-order conditions for optimality.
12. E.D. Sontag. Some connections between stabilization and factorization. In Proceedings of the 28th IEEE Conference on Decision and Control, Vol. 1--3 (Tampa, FL, 1989), New York, pages 990-995,
1989. IEEE. [PDF]
Abstract: Coprime right fraction representations are obtained for nonlinear systems defined by differential equations, under assumptions of stabilizability and detectability. A result is also given on left (not necessarily coprime) factorizations.
13. E.D. Sontag. Some recent results on nonlinear feedback. In Proc. Conf. Info. Sciences and Systems, Johns Hopkins University Press, 1989, pages 151-156, 1989.
14. E.D. Sontag and H.J. Sussmann. Backpropagation Separates when Perceptrons Do. In Proc. IEEE Int. Conf. Neural Networks, Washington, DC, June 1989, pages 639-642, 1989. [PDF]
15. E.D. Sontag and H.J. Sussmann. Remarks on local minima in backpropagation. In Proc. Conf. Info. Sciences and Systems, Johns Hopkins University Press, 1989, pages 432-435, 1989. Keyword(s):
machine learning, neural networks.
16. Y. Wang and E.D. Sontag. A new result on the relation between differential-algebraic realizability and state space realizations. In Proc. Conf. Info. Sciences and Systems, Johns Hopkins
University Press, 1989, pages 143-147, 1989. Keyword(s): observables, observation space, identifiability, observability, realization theory.
17. Y. Wang and E.D. Sontag. Realization and input/output relations: the analytic case. In Proceedings of the 28th IEEE Conference on Decision and Control, Vol. 1--3 (Tampa, FL, 1989), New York,
pages 1975-1980, 1989. IEEE. Keyword(s): identifiability, observability, realization theory, real-analytic functions.
18. E.D. Sontag. Sigmoids distinguish more efficiently than Heavisides. Technical report SYCON-89-12, Rutgers Center for Systems and Control, 1989. Keyword(s): machine learning, neural networks.
1. B.N. Datta, C.R. Johnson, M.A. Kaashoek, R.J. Plemmons, and E.D. Sontag. Linear Algebra in Signals, Systems, and Control (edited book). SIAM, 1988.
2. E.D. Sontag. A Chow property for sampled bilinear systems. In C.I. Byrnes, C.F. Martin, and R. Saeks, editors, Analysis and Control of Nonlinear Systems, pages 205-211. North Holland, Amsterdam,
1988. [PDF] Keyword(s): discrete-time, bilinear systems.
Abstract: This paper studies accessibility (weak controllability) of bilinear systems under constant sampling rates. It is shown that the property is preserved provided that the sampling period satisfies a condition related to the eigenvalues of the autonomous dynamics matrix. This condition generalizes the classical Kalman-Ho-Narendra criterion which is well known in the linear case, and which, for observability, results in the classical Nyquist theorem.
3. E.D. Sontag. An explicit construction of the equilinearization controller. In C.I. Byrnes, C.F. Martin, and R. Saeks, editors, Analysis and Control of Nonlinear Systems, pages 483-492. North Holland, Amsterdam, 1988. [PDF]
Abstract: This paper provides further results about the equilinearization method of control design recently introduced by the author. A simplified derivation of the controller is provided, as well as a theorem on local stabilization along reference trajectories.
4. E.D. Sontag. Bilinear realizability is equivalent to existence of a singular affine differential I/O equation. Systems Control Lett., 11(3):181-187, 1988. [PDF] [doi:http://dx.doi.org/10.1016/
0167-6911(88)90057-6] Keyword(s): identification, identifiability, observability, observation space, real-analytic functions.
Abstract: For continuous time analytic input/output maps, the existence of a singular differential equation relating derivatives of controls and outputs is shown to be equivalent to bilinear realizability. A similar result holds for the problem of immersion into bilinear systems. The proof is very analogous to that of the corresponding, and previously known, result for discrete time.
5. E.D. Sontag. Controllability is harder to decide than accessibility. SIAM J. Control Optim., 26(5):1106-1118, 1988. [PDF] [doi:http://dx.doi.org/10.1137/0326061] Keyword(s): computational complexity, controllability.
Abstract: The present article compares the difficulties of deciding controllability and accessibility. These are standard properties of control systems, but complete algebraic characterizations of controllability have proved elusive. We show in particular that for subsystems of bilinear systems, accessibility can be decided in polynomial time, but controllability is NP-hard.
6. E.D. Sontag. Finite-dimensional open-loop control generators for nonlinear systems. Internat. J. Control, 47(2):537-556, 1988. [PDF]
Abstract: This paper concerns itself with the existence of open-loop control generators for nonlinear (continuous-time) systems. The main result is that, under relatively mild assumptions on the original system, and for each fixed compact subset of the state space, there always exists one such generator. This is a new system with the property that the controls it produces are sufficiently rich to preserve complete controllability along nonsingular trajectories. General results are also given on the continuity and differentiability of the input to state mapping for various p-norms on controls, as well as a comparison of various nonlinear controllability notions.
7. E.D. Sontag. Some complexity questions regarding controllability. In Proc. IEEE Conf. Decision and Control, Austin, Dec. 1988, pages 1326-1329, 1988. [PDF] Keyword(s): theory of computing and complexity, computational complexity, controllability.
Abstract: It has been known for a long time that certain controllability properties are more difficult to verify than others. This article makes this fact precise, comparing controllability with accessibility, for a wide class of nonlinear continuous time systems. The original contribution is in formalizing this comparison in the context of computational complexity. (This paper placed here by special request.)
8. E.D. Sontag. Stabilizability, i/o stability, and coprime factorizations. In Proc. IEEE Conf. Decision and Control, Austin, Dec. 1988, pages 457-458, 1988. Keyword(s): input to state stability,
coprime factorizations, stabilization.
9. E.D. Sontag. Integrability of certain distributions associated to actions on manifolds and an introduction to Lie-algebraic control. Technical report SYCON-88-04, Rutgers Center for Systems and
Control, 1988.
10. E.D. Sontag. Some remarks on the backpropagation algorithm for neural net learning. Technical report SYCON-88-02, Rutgers Center for Systems and Control, 1988. [PDF] Keyword(s): machine learning,
neural networks.
Abstract: This is a very old informal report that discusses the study of local minima of quadratic loss functions for fitting errors in sigmoidal neural net learning. It also includes several remarks concerning the growth of weights during gradient descent. There is nothing very interesting here - far better knowledge is now available - but the report was placed here by request.
1. E.D. Sontag. Reachability, observability, and realization of a class of discrete-time nonlinear systems. In Encycl. of Systems and Control, pages 3288-3293. Pergamon Press, 1987.
2. E.D. Sontag. A remark on bilinear systems and moduli spaces of instantons. Systems Control Lett., 9(5):361-367, 1987. [PDF] [doi:http://dx.doi.org/10.1016/0167-6911(87)90064-8] Keyword(s):
bilinear systems, moduli spaces, instantons.
Abstract: Explicit equations are given for the moduli space of framed instantons as a quasi-affine variety, based on the representation theory of noncommutative power series, or equivalently, the minimal realization theory of bilinear systems.
3. E.D. Sontag. Controllability and linearized regulation. IEEE Trans. Automat. Control, 32(10):877-888, 1987. [PDF]
Abstract: A nonlinear controllable plant, under mild technical conditions, admits a precompensator with the following property: along control trajectories joining pairs of states, the composite system (precompensator plus plant) is, up to first order, isomorphic to a parallel connection of integrators.
4. E.D. Sontag. Review of Multidimensional Systems Theory. Linear Alg. and Applications, 87:273-278, 1987. [PDF]
Abstract: A book review which also provides a quick introduction to questions of stability and positivity of multivariable polynomials for 2D and spatially-distributed systems.
5. B. Jakubczyk and E.D. Sontag. The effect of sampling on feedback linearization. In Proc. IEEE Conf. Decision and Control, Los Angeles, Dec. 1987, pages 1374-1379, 1987.
6. E.D. Sontag. An approach to the automatic design of first-order controllers along reference trajectories. In Proc. IEEE Conf. Decision and Control, Los Angeles, Dec. 1987, pages 1363-1367, 1987.
7. E.D. Sontag. Equilinearization: A simplified derivation and experimental results. In Proc. Conf. Info. Sciences and Systems, Johns Hopkins University Press, pages 490-495, 1987.
8. E.D. Sontag and H.J. Sussmann. Optimization algorithms for image restoration and segmentation. Technical report 34, Rutgers Center for Computer Aids for Industrial Productivity, 1987.
1. E.D. Sontag. Orbit theorems and sampling. In Algebraic and geometric methods in nonlinear control theory, volume 29 of Math. Appl., pages 441-483. Reidel, Dordrecht, 1986. [PDF]
Abstract: This paper proposes a notion of smooth action on a manifold, and establishes a general integrability result for certain associated distributions. As corollaries, various classical and new results on manifold structures of orbits are established, and the main theorem on preservation of transitivity under sampling is shown to be a simple consequence.
2. M. L. J. Hautus and E.D. Sontag. New results on pole-shifting for parametrized families of systems. J. Pure Appl. Algebra, 40(3):229-244, 1986. [PDF] Keyword(s): systems over rings,
pole-shifting, parametric classes of systems, real-analytic functions.
Abstract: New results are given on the pole-shifting problem for commutative rings, and these are then applied to conclude that rings of continuous, smooth, or real-analytic functions on a manifold X are PA rings if and only if X is one-dimensional.
3. E.D. Sontag. An eigenvalue condition for sample weak controllability of bilinear systems. Systems Control Lett., 7(4):313-315, 1986. [PDF] [doi:http://dx.doi.org/10.1016/0167-6911(86)90045-9]
Keyword(s): discrete-time.
Abstract: Weak controllability of bilinear systems is preserved under sampling provided that the sampling period satisfies a condition related to the eigenvalues of the autonomous dynamics matrix. This condition generalizes the classical Kalman-Ho-Narendra criterion which is well known in the linear case.
4. E.D. Sontag. Comments on: ``Some results on pole-placement and reachability'' [Systems Control Lett. 6 (1986), no. 5, 325--328; MR0821927 (87c:93032)] by P. K. Sharma. Systems Control Lett., 8
(1):79-83, 1986. [PDF] [doi:http://dx.doi.org/10.1016/0167-6911(86)90034-4]
Abstract: We present various comments on a question about systems over rings posed in a recent note by Sharma, proving that a ring R is pole-assignable if and only if, for every reachable system (F,G), G contains a rank-one summand of the state space. We also provide a generalization to deal with dynamic feedback.
5. E.D. Sontag. Continuous stabilizers and high-gain feedback. IMA Journal of Mathematical Control and Information, 3:237-253, 1986. [PDF] Keyword(s): adaptive control, systems over rings.
Abstract: A controller is shown to exist, universal for the family of all systems of fixed dimension n, and m controls, which stabilizes those systems that are stabilizable, if certain gains are large enough. The controller parameters are continuous, in fact polynomial, functions of the entries of the plant. As a consequence, a result is proved on polynomial stabilization of families of systems.
6. E.D. Sontag. Controllability and linearized regulation. In Proc. Conf. Info. Sci. and Systems, Princeton, 1986, pages 667-671, 1986.
7. E.D. Sontag and H.J. Sussmann. Time-optimal control of manipulators. In Proc. IEEE Int.Conf.on Robotics and Automation, San Francisco, April 1986, pages 1692-1697, 1986. [PDF] Keyword(s):
robotics, optimal control.
Abstract: This paper studies time-optimal control questions for a certain class of nonlinear systems. This class includes a large number of mechanical systems, in particular, rigid robotic manipulators with torque constraints. As nonlinear systems, these systems have many properties that are false for generic systems of the same dimensions.
1. E.D. Sontag. An introduction to the stabilization problem for parametrized families of linear systems. In Linear algebra and its role in systems theory (Brunswick, Maine, 1984), volume 47 of
Contemp. Math., pages 369-400. Amer. Math. Soc., Providence, RI, 1985. [PDF] Keyword(s): systems over rings.
Abstract: This paper provides an introduction to definitions and known facts relating to the stabilization of parametrized families of linear systems using static and dynamic controllers. New results are given in the rational and polynomial cases.
2. B.W. Dickinson and E.D. Sontag. Dynamic realizations of sufficient sequences. IEEE Trans. Inform. Theory, 31(5):670-676, 1985. [PDF] Keyword(s): realization theory, statistics, innovations,
sufficient statistics.
Abstract: Let U1, U2, ... be a sequence of observed random variables and (T1(U1), T2(U1,U2), ...) be a corresponding sequence of sufficient statistics (a sufficient sequence). Under certain regularity conditions, the sufficient sequence defines the input/output map of a time-varying, discrete-time nonlinear system. This system provides a recursive way of updating the sufficient statistic as new observations are made. Conditions are provided assuring that such a system evolves in a state space of minimal dimension. Several examples are provided to illustrate how this notion of dimensional minimality is related to other properties of sufficient sequences. The results can be used to verify the form of the minimum dimension (discrete-time) nonlinear filter associated with the autoregressive parameter estimation problem.
3. E.D. Sontag. Real addition and the polynomial hierarchy. Inform. Process. Lett., 20(3):115-120, 1985. [PDF]
Abstract: The k-th alternation level of the theory of real numbers under addition and order is log-complete for the k-th level of the polynomial hierarchy.
4. E.D. Sontag. Further results on accessibility under sampling. In Proc.Conf. Info. Sci. and Systems, Johns Hopkins University, March 1985, 1985.
5. E.D. Sontag and H.J. Sussmann. Image restoration and segmentation using the annealing algorithm. In Proc. IEEE Conf. Dec. and Control, 1985, pages 768-773, 1985. [PDF] Keyword(s): image
processing, optimization.
Abstract: We consider the problem of estimating a signal, which is known -- or assumed -- to be constant on each of the members of a partition of a square lattice into m unknown regions, from the observation of the signal plus Gaussian noise. This is a nonlinear estimation problem, for which it is not appropriate to use the conditional expectation as the estimate. We show that, at least in principle, the "maximum likelihood estimator" (MLE) proposed by Geman and Geman lends itself to numerical computation using the annealing algorithm. We argue that the MLE by itself can be, under certain conditions (low signal to noise ratio), a very unsatisfactory estimator, in that it does worse than just deciding that the signal was zero. However, if combined with a rule which we propose, for deciding when to use and when to ignore it, the MLE can provide a reasonable suboptimal estimator. We then discuss preliminary numerical data obtained using the annealing method. These results indicate that: (a) the annealing algorithm performs remarkably well, and (b) a criterion can be formulated in terms of quantities computed from the observed image (without using a priori knowledge of the signal-to-noise ratio) for deciding when to keep the MLE.
6. E.D. Sontag and H.J. Sussmann. Remarks on the time-optimal control of two-link manipulators. In Proc. IEEE Conf. Dec. and Control, 1985, pages 1646-1652, 1985. [PDF] Keyword(s): optimal control.
1. E.D. Sontag. An approximation theorem in nonlinear sampling. In Mathematical theory of networks and systems (Beer Sheva, 1983), volume 58 of Lecture Notes in Control and Inform. Sci., pages
806-812. Springer, London, 1984. [PDF]
Abstract: We continue here our investigation into the preservation of structural properties under the sampling of nonlinear systems. The main new result is that, under minimal hypotheses, a controllable system always satisfies a strong type of approximate sampled controllability.
2. C.A. Schwartz, B.W. Dickinson, and E.D. Sontag. Characterizing innovations realizations for random processes. Stochastics, 11(3-4):159-172, 1984. [PDF] Keyword(s): statistics, innovations,
sufficient statistics.
Abstract: In this paper we are concerned with the theory of second order (linear) innovations for discrete random processes. We show that the existence of a finite dimensional linear filter realizing the mapping from a discrete random process to its innovations is equivalent to a certain semiseparable structure of the covariance sequence of the process. We also show that existence of a finite dimensional realization (linear or nonlinear) of the mapping from a process to its innovations implies that the process has this semiseparable covariance sequence property. In particular, for a stationary random process, the spectral density function must be rational.
3. E.D. Sontag. A concept of local observability. Systems Control Lett., 5(1):41-47, 1984. [PDF] Keyword(s): observability.
Abstract: A notion of local observability, which is natural in the context of nonlinear input/output regulation, is introduced. A simple characterization is provided, a comparison is made with other local nonlinear observability definitions, and its behavior under constant-rate sampling is analyzed.
4. E.D. Sontag. An algebraic approach to bounded controllability of linear systems. Internat. J. Control, 39(1):181-188, 1984. [PDF] Keyword(s): saturation, bounded inputs.
Abstract: In this note we present an algebraic approach to the proof that a linear system with matrices (A,B) is null-controllable using bounded inputs iff it is null-controllable (with unbounded inputs) and all eigenvalues of A have nonpositive real parts (continuous time) or magnitude not greater than one (discrete time). We also give the analogous results for the asymptotic case. Finally, we give an interpretation of these results in the context of local nonlinear controllability.
5. E.D. Sontag. Parametric stabilization is easy. Systems Control Lett., 4(4):181-188, 1984. [PDF] Keyword(s): systems over rings.
Abstract: A polynomially parametrized family of continuous-time controllable linear systems is always stabilizable by polynomially parametrized feedback. (Note: appendix had a MACSYMA computation. I cannot find the source file for that. Please look at journal if interested, but this is not very important. Also, two figures involving root loci are not in the web version.)
6. E.D. Sontag. Remarks on input/output linearization. In Proc. IEEE Conf. Dec. and Control, Las Vegas, Dec. 1984, pages 409-412, 1984. [PDF]
Abstract: In the context of realization theory, conditions are given for the possibility of simulating a given discrete time system, using immersion and/or feedback, by linear or state-affine systems.
1. E.D. Sontag. Remarks on the preservation of various controllability properties under sampling. In Mathematical tools and models for control, systems analysis and signal processing, Vol. 3
(Toulouse/Paris, 1981/1982), Travaux Rech. Coop. Programme 567, pages 623-637. CNRS, Paris, 1983. [PDF] Keyword(s): controllability, sampling, nonlinear systems, real-analytic functions.
Abstract: This note studies the preservation of controllability (and other properties) under sampling of a nonlinear system. More detailed results are obtained in the cases of analytic systems and of systems with finite dimensional Lie algebras.
2. R.T. Bumby and E.D. Sontag. Stabilization of polynomially parametrized families of linear systems. The single-input case. Systems Control Lett., 3(5):251-254, 1983. [PDF] Keyword(s): systems over rings.
Abstract: Given a continuous-time family of finite dimensional single input linear systems, parametrized polynomially, such that each of the systems in the family is controllable, there exists a polynomially parametrized control law making each of the systems in the family stable.
3. E.D. Sontag. A Lyapunov-like characterization of asymptotic controllability. SIAM J. Control Optim., 21(3):462-471, 1983. [PDF] Keyword(s): control-Lyapunov functions.
Abstract: It is shown that a control system in Rn is asymptotically controllable to the origin if and only if there exists a positive definite continuous functional of the states whose derivative can be made negative by appropriate choices of controls.
4. E.D. Sontag. Further remarks on preservation of accessibility under sampling. In Proc. Johns Hopkins Conf. on Info. Sci. and Systems, 1983, pages 326-332, 1983.
1. E.D. Sontag. A characterization of asymptotic controllability. In A. Bednarek and L. Cesari, editors, Dynamical Systems II, pages 645-648. Academic Press, NY, 1982. [PDF] Keyword(s):
control-Lyapunov functions.
Abstract: This paper was a conference version of the SIAM paper that introduced the idea of control-Lyapunov functions for arbitrary nonlinear systems. (The journal paper was submitted in 1981 but only published in 1983.)
2. E.D. Sontag. Abstract regulation of nonlinear systems: stabilization. In Feedback control of linear and nonlinear systems (Bielefeld/Rome, 1981), volume 39 of Lecture Notes in Control and Inform.
Sci., pages 227-243. Springer, Berlin, 1982.
3. E.D. Sontag. Linear systems over commutative rings: a (partial) updated survey. In Control science and technology for the progress of society, Vol. 1 (Kyoto, 1981), pages 325-330. IFAC,
Laxenburg, 1982. Keyword(s): systems over rings.
4. P.P. Khargonekar and E.D. Sontag. On the relation between stable matrix fraction factorizations and regulable realizations of linear systems over rings. IEEE Trans. Automat. Control, 27
(3):627-638, 1982. [PDF] Keyword(s): systems over rings.
│ │Various types of transfer matrix factorizations are of interest when designing regulators for generalized types of linear systems (delay differential. 2-D, and families of systems). │
│ │This paper studies the existence of stable and of stable proper factorizations, in the context of the thery of systems over rings. Factorability is related to stabilizability and │
│ │detectability properties of realizations of the transfer matrix. The original formulas for coprime factorizations (which are valid, in particular, over the field of reals) were given │
│Abstract:│in this paper. │
5. E.D. Sontag. Remarks on piecewise-linear algebra. Pacific J. Math., 98(1):183-201, 1982. [PDF] Keyword(s): hybrid systems, piecewise linear systems.
│Abstract:│Algebraic study of functions defined by piecewise linear (generally discontinuous) equations. File obtained by scanning a reprint. │
6. E.D. Sontag. Abstract regulation of nonlinear systems: Stabilization, Part II. In Proc.Princeton Conf.on Information Sciences and Systems, Princeton, March 1982, pages 431-435, 1982. Keyword(s):
feedback stabilization.
7. E.D. Sontag. Small-input controllability. In Proc. IEEE Conf. Dec. and Control, Orlando, Dec. 1982, 1982.
8. E.D. Sontag and H.J. Sussmann. Accessibility under sampling. In Proc. IEEE Conf. Dec. and Control, Orlando, Dec. 1982, 1982. [PDF] Keyword(s): discrete-time.
│ │This note addresses the following problem: Find conditions under which a continuous-time (nonlinear) system gives rise, under constant rate sampling, to a discrete-time system which │
│Abstract:│satisfies the accessibility property. │
1. R.T. Bumby, E.D. Sontag, H.J. Sussmann, and W. Vasconcelos. Remarks on the pole-shifting problem over rings. J. Pure Appl. Algebra, 20(2):113-127, 1981. [PDF] Keyword(s): systems over rings,
systems over rings.
│ │Problems that appear in trying to extend linear control results to systems over rings R have attracted considerable attention lately. This interest has been due mainly to │
│ │applications-oriented motivations (in particular, dealing with delay-differential equations), and partly to a purely algebraic interest. Given a square n-matrix F and an n-row matrix │
│ │G. pole-shifting problems consist in obtaining more or less arbitrary characteristic polynomials for F+GK, for suitable ("feedback") matrices K. A review of known facts is given, │
│Abstract:│various partial results are proved, and the case n=2 is studied in some detail. │
2. E.D. Sontag. Conditions for abstract nonlinear regulation. Inform. and Control, 51(2):105-127, 1981. [PDF] Keyword(s): feedback stabilization, nonlinear systems, real-analytic functions.
│ │A paper that introduces a separation principle for general finite dimensional analytic continuous-time systems, proving the equivalence between existence of an output regulator (which│
│Abstract:│is an abstract dynamical system) and certain "0-detectability" and asymptotic controllability assumptions. │
3. E.D. Sontag. Nonlinear regulation: the piecewise linear approach. IEEE Trans. Automat. Control, 26(2):346-358, 1981. [PDF] Keyword(s): hybrid systems.
│Abstract:│Development of an approach to nonlinear control based on mixtures of linear systems and finite automata. File obtained by scanning. │
4. P.P. Khargonekar and E.D. Sontag. On the relation between stable matrix fraction decompositions and regulable realizations of systems over rings. In Proc. IEEE Conf.Dec. and Control, San Diego,
Dec. 1981, pages 1006-1011, 1981. Keyword(s): systems over rings.
5. E.D. Sontag and D.E. Stevenson. Remarks on multi-server, multi-priority queuing models related to MVS job scheduling. Technical report TM-81-45281-1, Bell Telephone Labs., 1981.
1. M. L. J. Hautus and E.D. Sontag. An approach to detectability and observers. In Algebraic and geometric methods in linear systems theory (AMS-NASA-NATO Summer Sem., Harvard Univ., Cambridge,
Mass., 1979), volume 18 of Lectures in Appl. Math., pages 99-135. Amer. Math. Soc., Providence, R.I., 1980. [PDF] Keyword(s): observability.
│ │This paper proposes an approach to the problem of establishing the existence of observers for deterministic dynamical systems. This approach differs from the standard one based on │
│ │Luenberger observers in that the observation error is not required to be Markovian given the past input and output data. A general abstract result is given, which special- izes to new│
│Abstract:│results for parametrized families of linear systems, delay systems and other classes of systems. Related problems of feedback control and regulation are also studied. │
2. E.D. Sontag. On quasireachable realizations of a polynomial response. In Systems analysis (Conf., Bordeaux, 1978), volume 75 of Astrisque, pages 207-217. Soc. Math. France, Paris, 1980.
3. E.D. Sontag. On generalized inverses of polynomial and other matrices. IEEE Trans. Automat. Control, 25(3):514-517, 1980. [PDF]
│ │Necessary and sufficient conditions are given for a matrix over a ring to admit a Moore-Penrose generalized inverse in a weak sense. (Attached is also a Math Review with additional │
│Abstract:│comments on strong inverses.) │
4. E.D. Sontag. On the length of inputs necessary in order to identify a deterministic linear system. IEEE Trans. Automat. Control, 25(1):120-121, 1980. [PDF]
│ │The family of m-input, n-dimensional linear systems can be globally Identified with a generic input sequence of length 2mn. This bound is the best possible. A best bound is proved │
│Abstract:│also for a corresponding local identification problem. │
5. E.D. Sontag. Nonlinear regulation, the piecewise linear approach. In Proc.Princeton Conf.on Information Sciences and Systems, Princeton, March 1980, 1980. Keyword(s): piecewise linear systems.
6. E.D. Sontag and H.J. Sussmann. Remarks on continuous feedback. In Proc. IEEE Conf. Decision and Control, Albuquerque, Dec.1980, pages 916-921, 1980. [PDF] Keyword(s): feedback stabilization.
│ │We show that, in general, it is impossible to stabilize a controllable system by means of a continuous feedback, even if memory is allowed. No optimality considerations are involved. │
│ │All state spaces are Euclidean spaces, so no obstructions arising from the state space topology are involved either. For one dimensional state and input, we prove that continuous │
│ │stabilization with memory is always possible. (This is an old conference paper, never published in journal form but widely cited nonetheless. Warning: file is very large, since it was│
│Abstract:│scanned.) │
1. E.D. Sontag. Polynomial Response Maps, volume 13 of Lecture Notes in Control and Information Sciences. Springer-Verlag, Berlin, 1979. [PDF] Keyword(s): realization theory, discrete-time, real
algebraic geometry.
│ │(This is a monograph based upon Eduardo Sontag's Ph.D. thesis. The contents are basically the same as the thesis, except for a very few revisions and extensions.) This work deals the │
│ │realization theory of discrete-time systems (with inputs and outputs, in the sense of control theory) defined by polynomial update equations. It is based upon the premise that the │
│ │natural tools for the study of the structural-algebraic properties (in particular, realization theory) of polynomial input/output maps are provided by algebraic geometry and │
│ │commutative algebra, perhaps as much as linear algebra provides the natural tools for studying linear systems. Basic ideas from algebraic geometry are used throughout in │
│ │system-theoretic applications (Hilbert's basis theorem to finite-time observability, dimension theory to minimal realizations, Zariski's Main Theorem to uniqueness of canonical │
│ │realizations, etc). In order to keep the level elementary (in particular, not utilizing sheaf-theoretic concepts), certain ideas like nonaffine varieties are used only implicitly │
│ │(eg., quasi-affine as open sets in affine varieties) or in technical parts of a few proofs, and the terminology is similarly simplified (e.g., "polynomial map" instead of "scheme │
│Abstract:│morphism restricted to k-points", or "k-space" instead of "k-points of an affine k-scheme"). │
2. Y. Rouchaleau and E.D. Sontag. On the existence of minimal realizations of linear dynamical systems over Noetherian integral domains. J. Comput. System Sci., 18(1):65-75, 1979. [PDF] Keyword(s):
systems over rings.
│ │This paper studies the problem of obtaining minimal realizations of linear input/output maps defined over rings. In particular, it is shown that, contrary to the case of systems over │
│ │fields, it is in general impossible to obtain realizations whose dimiension equals the rank of the Hankel matrix. A characterization is given of those (Noetherian) rings over which │
│Abstract:│realizations of such dimensions can he always obtained, and the result is applied to delay-differential systems. │
3. E.D. Sontag. On finitary linear systems. Kybernetika (Prague), 15(5):349-358, 1979. [PDF] Keyword(s): systems over rings.
│ │An abstract operator approach is introduced, permitting a unified study of discrete- and continuous-time linear control systems. As an application, an algorithm is given for deciding │
│ │if a linear system can be built from any fixed set of linear components. Finally, a criterion is given for reachability of the abstract systems introduced, giving thus a unified proof│
│Abstract:│of known reachability results for discrete-time, continuous-time, and delay-differential systems. │
4. E.D. Sontag. On the observability of polynomial systems. I. Finite-time problems. SIAM J. Control Optim., 17(1):139-151, 1979. [PDF] Keyword(s): observability, observability, polynomial systems.
│ │Different notions of observability are compared for systems defined by polynomial difference equations. The main result states that, for systems having the standard property of │
│ │(multiple-experiment initial-state) observability, the response to a generic input sequence is sufficient for final-state determination. Some remarks are made on results for │
│Abstract:│nonpolynomial and/or continuous-time systems. An identifiability result is derived from the above. │
5. E.D. Sontag. Realization theory of discrete-time nonlinear systems. I. The bounded case. IEEE Trans. Circuits and Systems, 26(5):342-356, 1979. [PDF] Keyword(s): discrete-time systems, nonlinear
systems, realization theory, bilinear systems, state-affine systems.
│ │A state-space realization theory is presented for a wide class of discrete time input/output behaviors. Although In many ways restricted, this class does include as particular cases │
│ │those treated in the literature (linear, multilinear, internally bilinear, homogeneous), as well as certain nonanalytic nonlinearities. The theory is conceptually simple, and │
│ │matrix-theoretic algorithms are straightforward. Finite-realizability of these behaviors by state-affine systems is shown to be equivalent both to the existence of high-order input/ │
│Abstract:│output equations and to realizability by more general types of systems. │
1. W. Dicks and E.D. Sontag. Sylvester domains. J. Pure Appl. Algebra, 13(3):243-275, 1978. [PDF]
│ │The inner rank of an m x n matrix A over a ring is defined as the least integer r such that A can be expressed as the product of an m x r and an r x n matrix. For example, over a │
│Abstract:│(skew) field this concept coincides with the usual notion of rank. This notion is studied in this paper, and is related to Sylvester's law of nullity and work by P.M. Cohn. │
2. E.D. Sontag. On first-order equations for multidimensional filters. IEEE Trans. Acoustics, Speech, and Signal Processing, 26:480-482, 1978. [PDF]
│Abstract:│A construction is given to obtain first-order equation representations of a multidimensional filter, whose dimension is of the order of the degree of the transfer function.│
3. E.D. Sontag. On split realizations of response maps over rings. Information and Control, 37(1):23-33, 1978. [PDF] Keyword(s): systems over rings.
│ │This paper deals with observability properties of realizations of linear response maps defined over commutative rings. A characterization is given for those maps which admit │
│Abstract:│realizations which are simultaneously reachable and observable in a strong sense. Applications are given to delay-differential systems. │
4. E.D. Sontag. Algebraic-geometric methods in the realization of discrete-time systems. In Proc. Conf. Inform. Sci. and Systems, John Hopkins Univ. (1978), pages 158-162, 1978.
1. E.D. Sontag. On the internal realization of nonlinear behaviors. In A. Bednarek and L. Cesari, editors, Dynamical Systems, pages 93-497. Academic Press, New York, 1977.
2. E.D. Sontag. The lattice of minimal realizations of response maps over rings. Math. Systems Theory, 11(2):169-175, 1977. [PDF] Keyword(s): systems over rings.
│ │A lattice characterization is given for the class of minimal-rank realizations of a linear response map defined over a (commutative) Noetherian integral domain. As a corollary, it is │
│ │proved that there are only finitely many nonisomorphic minimal-rank realizations of a response map over the integers, while for delay -differential systems these are classified by a │
│Abstract:│lattice of subspaces of a finite-dimensional real vector space. │
3. E.D. Sontag and Y. Rouchaleau. Sur les anneaux de Fatou forts. C. R. Acad. Sci. Paris Sr. A-B, 284(5):A331-A333, 1977. [PDF] Keyword(s): systems over rings.
│ │It is well known that principal rings are strong Fatou rings. We construct here a more general type of strong Fatou rings. We also prove that the monoid of divisor classes of a │
│Abstract:│noetherian strong Fatou ring contains only the zero element, and that the dimension of such a ring is at most two. │
1. E.D. Sontag. On the internal realization of polynomial response maps. PhD thesis, Univ. of Florida, Advisor: R.E. Kalman, 1976.
2. E.D. Sontag. Linear systems over commutative rings: A survey. Ricerche di Automatica, 7:1-34, 1976. [PDF] Keyword(s): systems over rings.
│ │An elementary presentation is given of some of the main motivations and known results on linear systems over rings, including questions of realization and control. The analogies and │
│Abstract:│differences with the more standard case of systems over fields are emphasized throughout. │
3. E.D. Sontag. On finitely accessible and finitely observable rings. J. Pure Appl. Algebra, 8(1):97-104, 1976. [PDF] Keyword(s): systems over rings, observability, noncommutative rings.
│ │Two classes of rings which occur in linear system theory are introduced and compared. Characterizations of one of them are given in terms, of integral extensions (every finite │
│Abstract:│extension of R is integral) and Cayley--Hamilton type matrix condition. A comparison is made in the case of no zero-divisors with Ore domains. │
4. E.D. Sontag and Y. Rouchaleau. On discrete-time polynomial systems. Nonlinear Anal., 1(1):55-64, 1976. [PDF] Keyword(s): identifiability, observability, polynomial systems, realization theory,
│ │Considered here are a type of discrete-time systems which have algebraic constraints on their state set and for which the state transitions are given by (arbitrary) polynomial │
│ │functions of the inputs and state variables. The paper studies reachability in bounded time, the problem of deciding whether two systems have the same external behavior by applying │
│ │finitely many inputs, the fact that finitely many inputs (which can be chosen quite arbitrarily) are sufficient to separate those states of a system which are distinguishable, and │
│Abstract:│introduces the subject of realization theory for this class of systems. │
1. E.D. Sontag. On linear systems and noncommutative rings. Math. Systems Theory, 9(4):327-344, 1975. [PDF] Keyword(s): systems over rings.
│ │This paper studies some problems appearing in the extension of the theory of linear dynamical systems to the case in which parameters are taken from noncommutative rings. Purely │
│ │algebraic statements of some of the problems are also obtained. Through systems defined by operator rings, the theory of linear systems over rings may be applied to other areas of │
│Abstract:│automata and control theory; several such applications are outlined. │
2. E.D. Sontag. On some questions of rationality and decidability. J. Comput. System Sci., 11(3):375-381, 1975. [PDF] Keyword(s): theory of computing and complexity.
│ │Some results are given in the theory of rational power series over a broad class of semirings. In particular, it is shown that for unambiguous sets the notion of rationality is │
│Abstract:│independent of the semiring over which representations are defined. The undecidability of the rationality of probabilistic word functions is also established. │
1. E.D. Sontag. Temas de Inteligencia Artificial. PROLAM, Buenos Aires, 1972. [PDF] Keyword(s): artificial intelligence.
│ │Textbook on Artificial Intelligence. Scanned 2005. The complete pdf file is 16 Megabytes. (Libro de texto con introduccion a la inteligencia artificial. El pdf file completo tiene 16 │
│Abstract:│Megabytes.) │
This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders.
Last modified: Wed Oct 30 12:09:16 2024
Author: sontag.
This document was translated from BibT[E]X by bibtex2html | {"url":"http://www.sontaglab.org/PUBDIR/Biblio/complete-bibliography.html","timestamp":"2024-11-05T18:38:17Z","content_type":"text/html","content_length":"619721","record_id":"<urn:uuid:b51d998b-3550-404a-8d4f-b75f46ed158f>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00324.warc.gz"} |
QuickCheck and the Magic of Testing
Haskell is an amazing language. With its extremely powerful type system and a pure functional paradigm, it prevents programmers from introducing many kinds of bugs that are notorious in other
languages. Despite those powers, code is still written by humans and bugs are inevitable, so writing a quality test suite is just as important as writing the application itself.
Over the course of history, buggy software has cost the industry billions of dollars in damage and even human lives, so I cannot stress enough how essential testing is for any project.
One way to test software is to write unit tests, but since it is not feasible to test all possible inputs exhaustively for most functions, we usually check some corner cases and
occasionally test with other arbitrary values. Systematic generation of random input that is biased towards corner cases can be very helpful in that scenario, and that's where QuickCheck comes
into play. This state-of-the-art property-testing library was originally invented in Haskell, and, because it turned out to be so powerful, it was later ported to other languages. However, the
real power of random testing is unleashed when it is combined with the purity of Haskell.
Let's start by looking at these example properties of the reverse function:
reverse (reverse xs) == xs
reverse (xs ++ ys) == reverse ys ++ reverse xs
We know that they will hold for all finite lists with total values. Naturally, there are ways to prove them manually, and there are even tools for Haskell, such as LiquidHaskell, that can help you
automate proving some properties. A formal proof of correctness of a program is not always possible, though: some properties are either too hard or impossible to prove. Regardless of our ability
to prove a property of a function, we at least need to check that it works correctly on some finite set of inputs.
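Before reaching for QuickCheck, the idea of checking a property on a finite set of inputs can be hand-rolled in a few lines. This is only a sketch: checkOn and the sample lists are invented here for illustration.

```haskell
-- A toy checker: evaluate a predicate on a fixed list of sample inputs.
checkOn :: [a] -> (a -> Bool) -> Bool
checkOn samples prop = all prop samples

propRevRev :: [Int] -> Bool
propRevRev xs = reverse (reverse xs) == xs

main :: IO ()
main = print (checkOn [[], [1], [1, 2, 3], [3, 2, 1, 0]] propRevRev)
```

QuickCheck automates exactly this step, except that the inputs are generated randomly, biased towards corner cases, and shrunk to minimal counterexamples on failure.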
import Test.QuickCheck
prop_RevRev :: Eq a => [a] -> Bool
prop_RevRev xs = reverse (reverse xs) == xs
prop_RevApp :: [Int] -> [Int] -> Bool
prop_RevApp xs ys = reverse (xs ++ ys) == reverse ys ++ reverse xs
We can load those properties into GHCi and run quickCheck on them. Here is a quick way to do it from a terminal; there is also a detailed guide on how to get started with stack.
$ stack --resolver lts-7.16 ghci --package QuickCheck
Configuring GHCi with the following packages:
GHCi, version 8.0.1: https://www.haskell.org/ghc/ :? for help
Loaded GHCi configuration from /tmp/ghci3260/ghci-script
Prelude> :load examples.hs
[1 of 1] Compiling Main ( examples.hs, interpreted )
Ok, modules loaded: Main.
*Main> quickCheck prop_RevRev
+++ OK, passed 100 tests.
*Main> quickCheck prop_RevApp
+++ OK, passed 100 tests.
What just happened? QuickCheck called prop_RevRev and prop_RevApp 100 times each, with random lists as arguments, and declared those tests as passing because all calls resulted in True. That is
far beyond what a common unit test could have done.
It is worth noting that, in reality, both of those properties are polymorphic (not just prop_RevRev), and quickCheck will happily work with such functions even when the type signatures are
inferred; it will run just fine in GHCi. On the other hand, while writing a test suite, we have to restrict the type signature of every property to a concrete type, such as [Int] or Char,
otherwise the type checker will get confused. For example, this program will not compile:
import Test.QuickCheck
main :: IO ()
main = quickCheck (const True)
For the sake of example, let's write a couple more self-explanatory properties:
import Data.List (isPrefixOf, isSuffixOf)

prop_PrefixSuffix :: [Int] -> Int -> Bool
prop_PrefixSuffix xs n = isPrefixOf prefix xs &&
                         isSuffixOf (reverse prefix) (reverse xs)
  where prefix = take n xs
prop_Sqrt :: Double -> Bool
prop_Sqrt x
  | x < 0            = isNaN sqrtX
  | x == 0 || x == 1 = sqrtX == x
  | x < 1            = sqrtX > x
  | x > 1            = sqrtX > 0 && sqrtX < x
  where sqrtX = sqrt x
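Guards like these are easy to get subtly wrong, so here is a quick spot check of the same property at a few hand-picked points. This is a base-only sketch: the sample values are our own choice, and the last guard is written as otherwise.

```haskell
-- Spot-checking the sqrt property on hand-picked sample points.
propSqrt :: Double -> Bool
propSqrt x
  | x < 0            = isNaN sqrtX   -- sqrt of a negative Double is NaN
  | x == 0 || x == 1 = sqrtX == x    -- fixed points of sqrt
  | x < 1            = sqrtX > x     -- sqrt grows values in (0, 1)
  | otherwise        = sqrtX > 0 && sqrtX < x
  where
    sqrtX = sqrt x

main :: IO ()
main = print (all propSqrt [-4, 0, 0.25, 1, 2, 9, 100])
```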
Now, this is great, but how did we just pass various functions, with different numbers of arguments of different types, to quickCheck, and how did it know what to do with them? Let's look at its type:
λ> :t quickCheck
quickCheck :: Testable prop => prop -> IO ()
So it seems that QuickCheck can test anything that is Testable:
λ> :i Testable
class Testable prop where
property :: prop -> Property
exhaustive :: prop -> Bool
instance [safe] Testable Property
instance [safe] Testable prop => Testable (Gen prop)
instance [safe] Testable Discard
instance [safe] Testable Bool
instance [safe] (Arbitrary a, Show a, Testable prop) => Testable (a -> prop)
The last instance is for a function (a -> prop) that returns a prop, which, in turn, must also be an instance of Testable. This magic trick of a recursive constraint in the instance definition
allows quickCheck to test a function with any number of arguments, as long as each of them is an instance of Arbitrary and Show. So here is a checklist of requirements for a testable property:
• Zero or more arguments, each of which has an instance of Arbitrary, which is used for generating random input. More on that later.
• Arguments must also be instances of Show, so that if a test fails, the offending value can be displayed back to the programmer.
• The return value is one of:
  □ True/False - to indicate pass/fail of a test case.
  □ Discard - to skip the test case (e.g. when a precondition fails).
  □ Result - to customize pass/fail/discard behavior, collect extra information about the test outcome, provide callbacks, and use other advanced features.
  □ Property - for much finer control of the test logic. Such properties can be used as combinators to construct more complex test cases.
  □ Prop - used to implement Property.
• The name starts with prop_ or prop, followed by the usual camelCase; that is just a convention, not a requirement.
• It has no side effects. Also not a requirement, but strongly suggested, since referential transparency is lost with IO and test results can be inconsistent between runs. (There are facilities
  for testing monadic code, which we will not go into here.)
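The recursive-instance trick is worth seeing in miniature. Below is a stripped-down imitation; the class and instance names are invented, and where real QuickCheck threads a random generator through Gen, this sketch just tries a fixed handful of Ints:

```haskell
{-# LANGUAGE FlexibleInstances #-}

-- A Bool is trivially "testable"; a function from Int to something
-- testable is testable too, by trying a handful of inputs.
class MiniTestable p where
  holds :: p -> Bool

instance MiniTestable Bool where
  holds = id

instance MiniTestable p => MiniTestable (Int -> p) where
  holds f = all (holds . f) [-3 .. 3]

main :: IO ()
main = do
  print (holds True)
  print (holds ((\x y -> x + y == y + x) :: Int -> Int -> Bool))
```

The second call type-checks because holds peels off arguments one at a time, which is exactly how the Testable (a -> prop) instance lets quickCheck accept properties of any arity.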
Here is another very simple property of lists, xs !! n == head (drop n xs), so let's define it as is:
prop_Index_v1 :: [Integer] -> Int -> Bool
prop_Index_v1 xs n = xs !! n == head (drop n xs)
Naturally, you can see a problem with that function: it cannot accept just any random Int to be used for indexing, and quickCheck quickly finds that problem for us and prints out the violating
input along with an error:
λ> quickCheck prop_Index_v1
*** Failed! Exception: 'Prelude.!!: index too large' (after 1 test):
Interestingly, if you try to run this example on any computer, there is a very good chance that it will give exactly the same output, so it seems that the input to properties is not completely
random. In fact, thanks to the function sized, the first input to our property will always be an empty list and the integer 0, which tend to be really good corner cases to test for. In our case,
though, !! and head are undefined for empty lists and negative numbers. We could add some guards, but there are facilities provided for such common cases:
prop_Index_v2 :: (NonEmptyList Integer) -> NonNegative Int -> Bool
prop_Index_v2 (NonEmpty xs) (NonNegative n) = xs !! n == head (drop n xs)
This version is still not quite right, since we have another precondition: n < length xs. However, it would be a bit complicated to describe this relation through the type system, so we will
specify this precondition at runtime using the implication operator (==>). Note that the return type has changed too:
prop_Index_v3 :: (NonEmptyList Integer) -> NonNegative Int -> Property
prop_Index_v3 (NonEmpty xs) (NonNegative n) =
  n < length xs ==> xs !! n == head (drop n xs)
Test cases with values that do not satisfy the precondition simply get discarded, but not to worry: it will still run 100 tests. In fact, it will generate up to 1000 candidates before giving
up. An alternative way to achieve a similar effect is to generate a valid index within the property itself:
prop_Index_v4 :: (NonEmptyList Integer) -> Property
prop_Index_v4 (NonEmpty xs) =
  forAll (choose (0, length xs - 1)) $ \n -> xs !! n == head (drop n xs)
λ> quickCheck prop_Index_v3 >> quickCheck prop_Index_v4
+++ OK, passed 100 tests.
+++ OK, passed 100 tests.
Just in case, let's quickly dissect this forAll business. It takes a random value generator (which choose happens to produce) and a property that operates on its values, and returns a Property;
in other words, it applies values from a specific generator to the supplied property.
λ> :t forAll
forAll :: (Show a, Testable prop) => Gen a -> (a -> prop) -> Property
λ> sample' $ choose (0, 3)
There is a very subtle difference between the last two versions: _v3 will discard tests that do not satisfy the precondition, while _v4 will always generate a value for n that is safe to pass to
the index function. This is not important for this example, which is good, but that is not always the case. Whenever a precondition is too strict, QuickCheck might give up early while looking
for valid values for a test; more importantly, it can give a false sense of validity, since most of the values that it does find could be trivial ones.
For this section we will use prime numbers in our examples, but rather than reinventing the wheel and writing functions for prime numbers ourselves, we will use the primes package. Just for fun,
let's write a property for primeFactors, based on the Fundamental Theorem of Arithmetic:
prop_PrimeFactors :: (Positive Int) -> Bool
prop_PrimeFactors (Positive n) = isPrime n || all isPrime (primeFactors n)
That was incredibly easy and is almost a direct translation of the theorem itself. Now consider the fact that every prime number larger than 2 is odd; from it we can easily derive the property
that the sum of any two prime numbers greater than 2 is even. Here is a naive way to test that property:
prop_PrimeSum_v1 :: Int -> Int -> Property
prop_PrimeSum_v1 p q =
  p > 2 && q > 2 && isPrime p && isPrime q ==> even (p + q)
As you can imagine, it is not too often that a random number happens to be prime, and this certainly affects the quality of the test:
λ> quickCheck prop_PrimeSum_v1
*** Gave up! Passed only 26 tests.
It only found 26 satisfiable tests out of a 1000 generated; that's bad. There is more to it: in order to convince ourselves that we are testing functions with data that resembles what we expect
in real life, we should always try to inspect the values being generated for a property. An easy way to do that is to classify them by some shared trait:
prop_PrimeSum_v1' :: Int -> Int -> Property
prop_PrimeSum_v1' p q =
  p > 2 && q > 2 && isPrime p && isPrime q ==>
    classify (p < 20 && q < 20) "trivial" $ even (p + q)
λ> quickCheck prop_PrimeSum_v1'
*** Gave up! Passed only 29 tests (96% trivial).
λ> quickCheckWith stdArgs { maxSuccess = 500 } prop_PrimeSum_v1'
*** Gave up! Passed only 94 tests (44% trivial).
Almost all values this property was tested on are in fact trivial ones. Increasing the number of tests was not much help because, by default, the values generated for integers are pretty small.
We could try to fix that with appropriate types, but this time we will also generate a histogram of the unique pairs of discovered prime numbers:
prop_PrimeSum_v2 :: (Positive (Large Int)) -> (Positive (Large Int)) -> Property
prop_PrimeSum_v2 (Positive (Large p)) (Positive (Large q)) =
  p > 2 && q > 2 && isPrime p && isPrime q ==>
    collect (if p < q then (p, q) else (q, p)) $ even (p + q)
λ> quickCheck prop_PrimeSum_v2
*** Gave up! Passed only 24 tests:
16% (3,3)
8% (11,41)
4% (9413,24019)
4% (93479,129917)
This is better; there are fewer trivial values, but the number of tests is still far from satisfactory. It is also extremely inefficient to look for prime values this way, and for any really
large value passed to the property it will take forever to check its primality. A much better approach is to choose from a list of prime values, which we have readily available:
prop_PrimeSum_v3 :: Property
prop_PrimeSum_v3 =
  forAll (choose (1, 1000)) $ \i ->
    forAll (choose (1, 1000)) $ \j ->
      let (p, q) = (primes !! i, primes !! j)
      in collect (if p < q then (p, q) else (q, p)) $ even (p + q)
λ> quickCheck prop_PrimeSum_v3
+++ OK, passed 100 tests:
1% (983,6473)
1% (953,5059)
1% (911,5471)
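As an aside, if you would rather not depend on the primes package while following along, the two pieces used above can be approximated with trial division. These are our own stand-ins, not the package's implementations, and they are far slower for large inputs:

```haskell
-- Base-only stand-ins for the primes package, using trial division
-- up to the square root of the candidate.
isPrime :: Int -> Bool
isPrime n = n > 1 && all (\d -> n `mod` d /= 0) [2 .. isqrt n]
  where isqrt = floor . sqrt . (fromIntegral :: Int -> Double)

primes :: [Int]
primes = filter isPrime [2 ..]

main :: IO ()
main = print (take 8 primes)
```

Here take 8 primes prints [2,3,5,7,11,13,17,19], which is enough to drive the examples above at small sizes.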
There could be a scenario where we need prime values for many tests; then it would be a burden to generate them this way for each property. In such cases the solution is to write an Arbitrary
instance for a newtype:
newtype Prime a = Prime a deriving Show
instance (Integral a, Arbitrary a) => Arbitrary (Prime a) where
  arbitrary = do
    x <- frequency [ (10, choose (0, 1000))
                   , (5, choose (1001, 10000))
                   , (1, choose (10001, 50000))
                   ]
    return $ Prime (primes !! x)
Calculating large prime numbers is pretty expensive, so we could simply use something like choose (0, 1000), similarly to how it was done in prop_PrimeSum_v3, but there is no reason to exclude
generating large prime numbers completely; instead, we can reduce their chance by describing a custom distribution with the frequency function.
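To see roughly what frequency is doing under the hood, here is a caricature of weighted choice (pickWeighted and the fixed roll argument are invented for illustration; the real Gen monad supplies the randomness):

```haskell
-- Pick from weighted buckets: a roll in [0, total) selects the bucket
-- whose cumulative weight range contains it.
pickWeighted :: [(Int, a)] -> Int -> a
pickWeighted buckets roll = go buckets (roll `mod` total)
  where
    total = sum (map fst buckets)
    go ((w, x) : rest) r
      | r < w     = x
      | otherwise = go rest (r - w)
    go [] _ = error "pickWeighted: empty bucket list"

main :: IO ()
main = putStrLn (pickWeighted [(10, "small"), (5, "medium"), (1, "large")] 12)
```

With weights 10:5:1, rolls 0-9 select the first bucket, 10-14 the second, and 15 the third, giving the same 10/16, 5/16, 1/16 split that frequency gives the three generators above.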
Now writing prop_PrimeSum is a piece of cake:
prop_PrimeSum_v4 :: Prime Int -> Prime Int -> Property
prop_PrimeSum_v4 (Prime p) (Prime q) =
p > 2 && q > 2 ==> classify (p < 1000 || q < 1000) "has small prime" $ even (p + q)
λ> quickCheck prop_PrimeSum_v4
+++ OK, passed 100 tests (21% has small prime).
There are quite a few instances of Arbitrary (many common data types from base have one), but the most peculiar is the instance for functions:
λ> :i Arbitrary
class Arbitrary a where
arbitrary :: Gen a
shrink :: a -> [a]
instance [safe] (CoArbitrary a, Arbitrary b) => Arbitrary (a -> b)
That's right, QuickCheck can even generate functions for us! One restriction is that the argument of such a function must be an instance of CoArbitrary, which also has an instance for functions;
consequently, functions of any arity can be generated. Another caveat is that we need an instance of Show for functions, which is not standard practice in Haskell, so wrapping a function in a
newtype would be more appropriate. For clarity, we will opt out of this suggestion and instead demonstrate this cool feature in action. One huge benefit is that it allows us to easily write
properties for higher-order functions:
instance Show (Int -> Char) where
  show _ = "Function: (Int -> Char)"

instance Show (Char -> Maybe Double) where
  show _ = "Function: (Char -> Maybe Double)"

prop_MapMap :: (Int -> Char) -> (Char -> Maybe Double) -> [Int] -> Bool
prop_MapMap f g xs = map g (map f xs) == map (g . f) xs
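As a side note, modern QuickCheck also ships a `Fun` wrapper (exported from `Test.QuickCheck`, originally from `Test.QuickCheck.Function`) that avoids hand-written Show instances entirely: generated functions come pre-equipped with showing and shrinking. A sketch of the same property using it, assuming a reasonably recent QuickCheck version:

```haskell
import Test.QuickCheck

-- Same map-fusion property, but taking functions via the Fun wrapper;
-- applyFun unwraps them, and counterexample functions are printed
-- and shrunk automatically, with no orphan Show instances needed.
prop_MapMapFun :: Fun Int Char -> Fun Char (Maybe Double) -> [Int] -> Bool
prop_MapMapFun f g xs =
  map (applyFun g) (map (applyFun f) xs) == map (applyFun g . applyFun f) xs
```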
One of the first concerns that programmers coming to Haskell from other languages usually raise is that there are situations where unit tests are invaluable, and QuickCheck does not provide an easy way to write them. Bear in mind that QuickCheck's random testing is not a limitation, but rather a priceless feature of the testing paradigm in Haskell. Regular-style unit tests and other QA functionality (code coverage, continuous integration, etc.) can be done just as easily as in any other modern language, using specialized libraries. In fact, those libraries play beautifully together and complement each other in many ways.
Here is an example of how we can use hspec to create a test suite containing all the properties we have discussed so far, plus a few extra unit tests to complete the picture.
#!/usr/bin/env stack
module Main where

import Test.Hspec
import Test.QuickCheck

main :: IO ()
main = hspec $ do
  describe "Reverse Properties" $ do
    it "prop_RevRev" $ property prop_RevRev
    it "prop_RevApp" $ property prop_RevApp
    it "prop_PrefixSuffix" $ property prop_PrefixSuffix
  describe "Number Properties" $ do
    it "prop_Sqrt" $ property prop_Sqrt
  describe "Index Properties" $ do
    it "prop_Index_v3" $ property prop_Index_v3
    it "prop_Index_v4" $ property prop_Index_v4
    it "unit_negativeIndex" $ shouldThrow (return $! ([1, 2, 3] !! (-1))) anyException
    it "unit_emptyIndex" $ shouldThrow (return $! ([] !! 0)) anyException
    it "unit_properIndex" $ shouldBe ([1, 2, 3] !! 1) 2
  describe "Prime Numbers" $ do
    it "prop_PrimeFactors" $ property prop_PrimeFactors
    it "prop_PrimeSum_v3" $ property prop_PrimeSum_v3
    it "prop_PrimeSum_v4" $ property prop_PrimeSum_v4
  describe "High Order" $ do
    it "prop_MapMap" $ property prop_MapMap
Random testing can be mistakenly regarded as an inferior way of testing software, but studies have shown that this is not the case. To quote D. Hamlet:
By taking 20% more points in a random test, any advantage a partition test might have had is wiped out.
It is very easy to start using QuickCheck to test properties of pure functions. The library also includes a very similar toolbox for testing monadic functions, allowing a straightforward way to test properties of functions that perform mutation, depend on state, run concurrently, or even perform I/O. Most importantly, this library provides yet another technique for making Haskell programs even safer.
Writing tests doesn't have to be a chore, it can be fun. We certainly find it fun at FPComplete and will be happy to provide training, consulting or development work.
Basic math MCQs For Entry Test Online Preparation
Math MCQs Quiz 18
1. 25% of 30% of 45% is equal to ______?
2. 60% of a number is added to 120; the result is the same number. Find the number.
3. 85% of a number is added to 24; the result is the same number. Find the number.
4. 40 is subtracted from 60% of a number; the result is 50. Find the number.
5. 96% of the population of a village is 23040. The total population of the village is ________?
6. If the price has fallen by 10%, by what percent must consumption be increased so that the expenditure may be the same as before?
7. If y exceeds x by 25%, then x is less than y by __________?
8. The salary of Mr. X is 30% more than that of Mr. Y. By what percent is Mr. Y's salary less than Mr. X's?
9. In an examination, 38% of students fail in English, 61% pass in Urdu and 23% fail in both. Find the actual failure percentage.
10. Two numbers are respectively 20% and 25% more than a third number. The percentage that the first is of the second is _________?
11. A sells his goods 50% cheaper than B but 50% dearer than C. The cheapest is _________?
12. The salary of a typist was first raised by 10% and then reduced by 5%. If he presently draws Rs. 1045, what was his original salary?
13. The tax on a commodity is diminished by 20% and its consumption increased by 15%. The effect on revenue is _________?
14. A candidate got 35% of the votes polled and lost to his rival by 2250 votes. How many votes were cast?
15. If the price of gold increases by 50%, find by how much the quantity of ornaments must be reduced so that the expenditure may remain the same as before.
16. Subtracting 10% from X is the same as multiplying X by what number?
17. If the numerator of a fraction is increased by 20% and its denominator is diminished by 25%, the value of the fraction is 2/15. Find the original fraction.
18. An engineering student has to secure 36% marks to pass. He gets 130 marks and fails by 14 marks. The maximum number of marks is _______?
19. 5% of the people of a village in Sri Lanka died by bombardment, and 15% of the remainder left the village on account of fear. If the population is now reduced to 3553, how much was it in the beginning?
20. A and B's salaries together amount to Rs. 2,000. A spends 95% of his salary and B spends 85% of his. If their savings are the same, what is A's salary?
Basic math MCQs For Entry Test
There are twenty questions in this Basic Math MCQs For Entry Test preparation quiz.
The quiz will help aspirants prepare the basic math MCQs asked in entry tests.
After attempting all the questions in this quiz, candidates must submit it to see their marks.
Candidates must submit the quiz at the end to see their final marks and the average scores of other candidates participating in this mathematics quiz.
After the final submission, aspirants will see the link to the next quiz in our series of math quizzes for one-paper preparation.
This quiz is an effort to help students prepare for subject specialist tests and one-paper competitive exams.
Read More: Math MCQs for preparation.
Life in Miniature
by pete the mouldmaker
I joined the world of metal miniatures in 1983, little realising it was a boom time, and I'd missed it... 30 years later, and still no wiser, this is my story.
Challenger and IT Terrain (16 May 2015)

Odd how things coalesce, and one day can define years to come...

Occasionally in this Life in Miniature I've been around successful projects. At GW late in the 80's they came thick and fast, and in the 90's it was amazing to watch Magic: the Gathering grow from nothing to a world beater, but TTG's little (micro) success stems from the summer 1983 launch of Challenger, the 1/300 scale micro-tank game set in the 'Ultra Modern' period.

Challenger, written by Bruce Rea-Taylor and published through Table Top Games, is a simulation wargame based on the supposed escalation of the NATO/WarPact Cold War to a point where hordes of Soviet Armoured Divisions roamed into Western Wargamerland.

Quite whether this was simulation, dystopian fiction or wish fulfilment, I can't really say. At what point would a Soviet invasion of Western Europe not have provoked a nuclear response, and made all these tiny tanks piles of radioactive dust? Or maybe this was after 'the bomb', and the elephant in the room was that all these little model towns, hills and road junctions were already radioactive wastelands, but still strategically important... In either case, a certain type of British gamer started to lap it up, and Ultra Modern was a big success, in TTG terms, for the remainder of the 80's.

Within a year or so it was adopted for National Championship play, making it the game of choice both for Marxist teachers and civil servants looking to advance the cause of Communism, and for camo-jacketed maths junkies holding out for Freedom.

Over the next 10 years Challenger and all its updates, digests and army lists would keep TTG in a steady stream of sales that drove the tiny company forwards, and I think this period was the happiest I ever saw Bob: he liked the rules, he loved the period, and he was great chums with Bruce, a larger-than-life figure who quite clearly loved his hobby and the (niche-sized) recognition it brought him.

Bruce's rules (published through TTG):
Challenger
Challenger Revised Edition
Challenger II
Challenger 2000
Battlezones - Scenarios for the Ultra Modern Period
Corps Commander OMG (Div Scale)
Firefight (Modern Skirmish)
Ultra Modern Army Lists and Organisation Volume 1 Challenger
Ultra Modern Army Lists & Organisation Volume 2 Challenger
Digest No. 3 - Challenger / Corps Commander
Digest 4 Ultra Modern Army Lists for Challenger II Rules
Digest 5 Ultra Modern Army Lists for Challenger II Rules
Modern Aircraft Handbook - Aircraft Details & Weapons for Challenger II
Revised Modern Aircraft Handbook - Aircraft Details & Weapons for Challenger II

[Image: Challenger II rules cover. Caption: "No pictures of Challenger I on t'interweb, unless you know better..."]

And... Bob used to clean up on the miniature sales for the games too, buying in Skytrex and Heroics and Ross tanks and vehicles, the only brands freely available in the UK, stripping them from their packaging and reselling them at just below the original price... Those camo-jacketed gamers could be seen in droves at any show TTG attended, heads down, perusing lists and scraps of paper for micro armour at 33 pence a pop...

On a couple of occasions both Skytrex and H&R put a stop on selling to Bob; he just bought around through other people before getting back into the good books of both companies, and at least once he tried to buy Skytrex's range to tie our rules to their models.

Bruce died suddenly in the late 80's, from what I suspect would be described as a lifestyle-related condition, smoking or weight related, and Bob would never be the same person after, the wind gone from his sails...

OK then, and now for the odd coincidence: on the very same day that TTG took delivery of the first batch of Challenger rules (mid-summer '83?), Bob had a visit from a chap, whose name I never knew, who brought samples of his new company's product to 'pitch': polystyrene terrain and tiles.

[Image: two polystyrene terrain tiles. Caption: "2 of the first poly tiles in the world..."]

These things would over the next few years become almost standard items for large numbers of wargamers, but as yet they were an untested idea. People used to bring ideas to Bob quite a lot; I suppose his opinion must have been valued by those wanting to get into the hobby, as the chap from Integral Terrain had done. Most were given short shrift, but Bob spotted this as a winner straight away and bought whatever samples the guy had: a few packs of small, medium and large hills, and 3 or 4 two-foot-square terrain boards...

I remember Kate saying, "What fools are going to pay £6 for 2 feet of polystyrene covered in green flock?" with complete disdain... seconds before Mark and I tore into the stock to buy up whatever Bob had acquired minutes earlier...

And that's the way it went: gamers knew a great idea when they saw it, and TTG would sell out of Integral Terrain whenever we got new stock in. What gamers had used previously, green baize or cloth, was cheap and easy but looked bad; plywood tiles with Tetrion 'hard modelled' on looked good, but needed a village hall to store them in and a team of volunteers to lift them into place; sand trays were versatile but heavy, and attracted cats (!!!)... Polystyrene solved a load of these problems: light, adaptable and uniform, it gave gamers a handy battlefield that could be changed to suit many games and was easy to break down and store in a small(ish) area.

So on that evening Mark bought half a dozen T72's and M60's, which he quickly daubed with green and olive paint, and the pair of us tried, almost in vain, to get some kind of enjoyment out of the maths equation that was Challenger...

I can't say I ever did get a great deal of mirth from the game, then or on any of the two or three occasions I've played since, but on that night, with some brand new Ultra Modern rules in hand and some state-of-the-art polystyrene terrain to play on, we could definitely say: the 80's started here...
Still clearing up '83 (12 May 2015)

OK, this blog update is dedicated to Kevin Davies (no, not that one), Shaun Watson, Steve Clark and the two or three Frothers that commented, without whom I might not have continued blogging...

[Image. Caption: "Not this Kevin Davies"]

I had hoped to deal with things in a roughly chronological order, but this has gone wholly out of the window over the last year or so; the best laid plans, and all that... So what I'm gonna do is catch up as quickly as I can with one big post about events in '84, which should bring us to the early spring of '85 and another fairly traumatic full-stop point...

But before that there are a couple of things to clear up from '83 that really need doing: one that had a personal impact on me, and one that was of more general interest for the wargames world...

So, with as much waffle as always, I'll blog on...
Series 2 games - Micro Warfare (6 September 2014)

OK, I've gone off the boil recently; work gets in the way of writing... So here's some '83 stuff to catch up on. There really was a lot going off, including Wargames Illustrated, Challenger, Tercio and mould making, and then I'll get onto whatever was going on in '84. But first I'll finish off the TTG history lessons for a while, before moving onto more time-related stuff...

[Image: Micro Napoleonic front page. Caption: "page one, slightly damaged/aged"]

The next step for TTG back then in the mid-70's was the Micro Warfare Series, known as the Series 2 games, and if what Bob had done with the Series 1 games was strip the flab from American board games of the period, what he tried to do with the Micro Series games was in a way even more radical.

The Micro Warfare Series games (Ancient Mediaeval, Napoleonic, Colonial, three naval games covering the Napoleonic period, WW1 and WW2, as well as a sci-fi ground warfare game) reduced everything from the traditional wargame, the rules, the playing pieces, the terrain, everything, to the barest minimum necessary for play, turning expensive and difficult-to-find miniatures and terrain into card counters, and giving you everything necessary to play in a 16-page A5 booklet, with record sheets and cut-out-and-keep counters in a sleeve in the dust jacket.

Once again the rules were all Bob's, although I bet the Nottingham club had helped with the play testing, and Roger Heaton supplied the artwork, and both still stand up to closer inspection. Bob had a knack for writing just enough detail into the rules to enable fun, flowing play, and Roger does wonders with 70's-printed black-and-white line drawings; character and action in such a small production can't have been easy...

I don't know whether Bob would thank me for saying this, or even acknowledge the existence of the word, but what he was doing, and continued to do all his working life, was democratize gaming. He was a great believer in trying to get everyone playing games, and this is what the Series 2 games did best. The rules were pocket-money prices, £1 each, and extra armies even less... you could have bought all 5 extra Napoleonic armies for less than a couple of quid... and the games themselves were pocket sized... micro, even... no need for a huge table, or massive amounts of toy soldiers. Everyone could play.

Which brings me to a point of contention. While I was looking at the Wikipedia page for microgames I read that:

"While small scale wargames and board games had existed before they began publishing, Metagaming Concepts first used the term 'MicroGame' when they released Ogre, MicroGame #1 in 1977."

...but TTG's Micro Series pre-dates this release by a couple of years, and it is possible that they would have been on sale, and known to American gamers, before 1977 and the Metagaming release, making Table Top Games the first to introduce the word 'Micro' into gaming...

[Image. Caption: "French and British Napoleonic ready for play"]

So I'm off to edit the Wikipedia, and when I get back I'll finish off with the 1983 stuff that I mentioned above, and then move on to 1984...
Series 1 games (9 June 2014)

I think Bob's first love was board games; even into the 80's he would argue that Avalon Hill's Stalingrad was the best game ever made, and I'm sure that at some point in the period between '73 and '81 TTG did in fact produce a board game, called Wild West, but on the whole I think that full-sized, boxed games were beyond the scope of a small company like TTG at that time.

[Image. Caption: "Cover by Rodger Heaton"]

So what Bob and Rodger did for their first game release was strip away all the flabby excess of the American tactical board game: the board (!!!), the die-cut counters (players cut out their own printed ones), and the box. Instead, they produced a game in a zip-seal bag that could be played on any flat-ish surface and packed away into a (duffel) coat pocket.

In what order Galactic War, MTB, U-Boat and Ballistic Missile were designed and released, I don't know; something makes me want to say that they were all done at the same time in '73/4, which would have been quite an organisational achievement for a small company, but regardless, these four Series 1 games were TTG's first pop at the gaming market.

[Image. Caption: "Series 1 game"]

MTB (Motor Torpedo Boat) and U-Boat show Bob's naval fascinations coming to the fore. I suppose it's not surprising that, as a child of the 40's and 50's, World War II and the Navy, with all their history and traditions, played such a part in his life. MTB is set in the English Channel in the mid-war period and U-Boat under it, but both are very (too) similar games in outlook: groups of small craft against each other in encounter battles, with weapons and damage recorded on record sheets.

[Image. Caption: "Swoosh-blam! Fun with nukes, '70's style"]

The only one of these games that I had played before working with Bob was Galactic War, a sci-fi ship-to-ship combat game, which I suppose again owed a lot to Bob's interest in naval gaming. I can't say it particularly grabbed me; the ships on both sides were too same-y and looked like ELO's spaceship from the Out of the Blue LP cover, a bit dated in the 80's, but it was a reasonably good game and played out in 45 minutes to an hour...

[Image. Caption: "Image nicked from Noble Knight Games"]

Ballistic Missile is the only one of the four basic games that was not based on naval warfare, it being a Cold-War-gone-hot shoot-out, with counters, rules and record sheets all yours for the princely sum of 75p.

The thing that strikes me now, and has really struck me before, about these games is Rodger's artwork. His covers, especially MTB and Ballistic Missile, have a good deal of 'dash' about them, which can't have been easy to achieve in the two colours that they were printed in.

I am unsure of the current availability of these TTG products; if anyone has PDF copies of them I'd be delighted to see them.
17441528010939741890noreply@blogger.com0tag:blogger.com,1999:blog-6580384122112863749.post-64485367312419199382014-05-28T10:18:00.001+01:002014-05-31T13:14:47.623+01:00Ok, well the previous posts
clear the Old School games out of the way, and I can get into the meat of the next stage of this blog.<br /> <div> <br /> Tabletop Games had been a newcomer to the wargames scene in the early 70's. Kate told us (new boys) that the company had grown out of Bob's dissatisfaction with the currently available wargames rules for the Napoleonic period that he was playing at competitive level at that time.<br /> Quite what his beef was with the sets they were using I never got to the bottom of; ask him and you got mumbled responses about "riflemen lying down" and the only way you could "kill them, was with Lancers..." Quite why this got at him I have no idea... but it did, so when in 1973 Bob won the National Napoleonic title and, I assume as part of a team, was invited to host the following year's tournament, he threw away the old rule set and wrote his own.<br /> <br /> Bob's rules for Napoleonic warfare were used at the 1974 National Championship. At first they were just given away to competitors, but reaction must have been favourable, because shortly after they were being published by the infant Tabletop Games.<br /> <table cellpadding="0" cellspacing="0"
class="tr-caption-container" style="float: right; margin-left: 1em; text-align: right;"><tbody> <tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/
AVvXsEjdBQFeRWH2K4vGuBFHzoGWvelG88c0DgI_foleaaQcDqKdENkczCmrqLhFqsFHxZ12HX2cY_oRdO5tj6pgTj6OgVB4zyVFo3C17ezvTlDBSFtoKGAIbkxj1iQwGqshZfTwIjJRobWT1Jj0/s1600/Nap_rule_cover.jpg" imageanchor="1" style=
"clear: right; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/
AVvXsEjdBQFeRWH2K4vGuBFHzoGWvelG88c0DgI_foleaaQcDqKdENkczCmrqLhFqsFHxZ12HX2cY_oRdO5tj6pgTj6OgVB4zyVFo3C17ezvTlDBSFtoKGAIbkxj1iQwGqshZfTwIjJRobWT1Jj0/s1600/Nap_rule_cover.jpg" height="320" width="216"
/></a></td></tr> <tr><td class="tr-caption" style="text-align: center;">Cover by Rodger Heaton</td></tr> </tbody></table> <br /> Tabletop, it seems, was a partnership at its inception: Bob of course wrote the rules, and the other partner, Rodger Heaton, did all the illustrations.<br /> Kate also told a tale about Richard Butler being there at the start of the company, but not becoming a partner in the business at the last moment because of the finance necessary.<br /> Richard would later go on to write his own set of Napoleonic wargames rules used for National Championship games, To the Sound of the Guns, which TTG published, and Bob and he would remain firm friends.<br /> Richard was one of the few people who wouldn't stop when arriving at the shop; he'd hustle straight on through to the back rooms, not even noticing the bemused wooden-top shop assistant sitting on the stool, with his speechless mouth open...<br /> <br /> When or how the split with Rodger occurred I don't know; all the early games (series 1 & series 2) had an address given as Ruddington, which is a couple of miles outside central Nottingham, which I'm assuming is his, and all these early games, and many of the early rule sets published, used his illustrations as covers, or scattered throughout the text. By the late 70's Rodger seems to have dropped out of TTG, leaving Bob and Kate as the sole proprietors, to run the business from the home on Acton Road in Arnold.<br /> <br /> OK then, back after a short break, with the early TTG micro-games, and all the other products of the late
70's and early 80's<br /> <br /> Interested in reading Bob's Napoleonic rules?<br /> <a href="http://www.scribd.com/doc/106898294/TTG-Rules-for-the-Napoleonic-Period-1975">Check them out here on
Scribd.</a><br /> <br /> <br /></div> pete the mouldmakerhttp://www.blogger.com/profile/
17441528010939741890noreply@blogger.com0tag:blogger.com,1999:blog-6580384122112863749.post-51254242055556588852014-04-10T10:23:00.000+01:002014-09-06T13:44:04.736+01:00<span style="font-size: small;
">As much as I was fighting shy the previous post then this one is a much more cherished task.</span><br /> <span style="font-size: small;">Of all the games that I played before leaving school, this,
a sci-fi skirmish set in the distant future, was the one that really fired me up...</span><br /> <span style="font-size: small;"><br /></span> <span style="font-size: small;">Written by Halliwell and
Priestly in 1979, it flung gamers into a universe of possibilities some of which were trailed on the inside as including...</span><br /> <span style="font-size: small;"><br /></span> <span style=
"font-size: small;">"Command a squad of Star troopers, blast your way into the Galaxies richest banks and out of the strongest and most infamous jails. Boldly go where no man had probably gone
before, swap insults with exotic aliens, then swap blows with insulted aliens..."</span><br /> <br /> <table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: right;
margin-left: 1em; text-align: right;"><tbody> <tr><td style="text-align: center;"><span style="font-size: small;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/
AVvXsEhXQXqHEW7sLpweXtit7U5k3L3yY1waOE_7GzKkuPXEiSQl6OKaP8isuxUvbuj0L381GbSx2pV1NYnfAG_1A3-QdhyphenhyphenAWEAAUx_BrzfE11CqG1UCU6CcyLok12hraBKp9p6NHh6lsgpXsuVL/s1600/C3000Cover1.jpg" imageanchor="1"
style="clear: right; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/
AVvXsEhXQXqHEW7sLpweXtit7U5k3L3yY1waOE_7GzKkuPXEiSQl6OKaP8isuxUvbuj0L381GbSx2pV1NYnfAG_1A3-QdhyphenhyphenAWEAAUx_BrzfE11CqG1UCU6CcyLok12hraBKp9p6NHh6lsgpXsuVL/s1600/C3000Cover1.jpg" height="320"
width="244" /></a></span></td></tr> <tr><td class="tr-caption" style="text-align: center;"><span style="font-size: small;">Front cover by Tony Yates</span></td></tr> </tbody></table> <span style=
"font-size: small;"><br /></span> <span style="font-size: small;">Which all sound great to me as a kid raised on Dr Who, UFO, Space 1999 and just discovering The Hitch-hikers Guide to the Galaxy,
Combat 3000 seemed like an ideal jumping off point for the whole universe...</span><br /> <span style="font-size: small;"><br /></span> <span style="font-size: small;">Not that there was a whole
universe inside the rule book, this was a TTG product and the whole thing ran to 32 pages long, with just three alien races (plus humans) for the players to get there teeth into; Trimotes, three
armed apes, lifted shamelessly from Larry Niven's 'A Mote in God's eye', Maniblax, bipedal insectoids, and Zarquins, which had a more alien hive-mind thing going on, but this was enough, along with
what seemed like an endless list (50+) of lasers and blasters to arm your soldiers, and loads of armour and secondary weaponry to add, the game lent itself to highly personal squads.</span><br />
<span style="font-size: small;"><br /></span> <span style="font-size: small;">Once again, looking back, the rules themselves were quite complex, a percentile system with everything; (range, movement,
target size and situations, types of weapon, types of fire; aimed indirect, covering, conditions etc) adding or subtracting from the chance to hit, and then all that armour and variable weapon
effects to take into account for damage, once a hit had been achieved... which lead to quite small intense games, 6 - 10 each minis a side on a 4 feet square area would take a few hours for us to get
through, with each -5% for being hotly (childishly) contested, each move/shot/throw or melee vital...</span><br /> <br /> <table cellpadding="0" cellspacing="0" class="tr-caption-container" style=
"float: left; margin-right: 1em; text-align: left;"><tbody> <tr><td style="text-align: center;"><span style="font-size: small;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/
AVvXsEidbdcVqQZypgCHaiBCnr-Z-wEXplQydfRGr0SX1CRbJu7O5ZS1dVIiSbwAb89IXjnrm2gVrmDutBc89I1JQE9g1KOVneRrctCbU_8w7zde5U4BvHnAdzNtrGeloThVsIuQtCSEcHfeVbUC/s1600/Marine3.jpg" imageanchor="1" style="clear:
left; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/
AVvXsEidbdcVqQZypgCHaiBCnr-Z-wEXplQydfRGr0SX1CRbJu7O5ZS1dVIiSbwAb89IXjnrm2gVrmDutBc89I1JQE9g1KOVneRrctCbU_8w7zde5U4BvHnAdzNtrGeloThVsIuQtCSEcHfeVbUC/s1600/Marine3.jpg" height="320" width="235" /></a>
</span></td></tr> <tr><td class="tr-caption" style="text-align: center;"><span style="font-size: small;">Space Marine by Nick Bibby</span></td></tr> </tbody></table> <span style="font-family:
inherit; font-size: small;">I suppose that I shouldn't be surprised that this game played so sweetly, and that I became so enamoured with it, Halliwell went on to become THE greatest British game
designer of his generation, with a list of credits that include; Warhammer Fantasy Battle, and it's highly regarded but less well supported sister game, Warhammer Fantasy Role-play, Battlecars, the
most entertaining car-wars game ever, and of course the classic Space Hulk.</span><br /> <br /> <span style="font-size: small;">This was also the first game I played outside school, the time needed
play meant we (Simon, Mark, a lad called Richard Purseglove and I) had to meet up on Saturdays to play at each other's houses. In fact the only time Andy Chambers ever came to my house, was to play
was a game of Combat 3000, he arrived an hour or so late, mocked my rudimentary modelling skills on a future-tank I'd made, and then nuked the playing field from some cool looking space-fighter he'd
scratch built.</span><br /> <span style="font-size: small;"><br /></span> <span style="font-size: small;">Which cuts to the heart of what I loved about Combat 3000, and the problem with Sci-fi gaming
in general. This is summed up in a quote from Ripley in the Aliens movie: presented with an insurmountable number of menacing monster aliens she says, "I say we take off and nuke the entire site from orbit." In the far future whole planets can be wiped out at the press of a button, or alien cities reduced to dust by half a dozen power-armoured Space Marines with imploding mini-nukes, so minor conflicts can't/shouldn't exist; without some kind of narrative to drive the game forward, sci-fi gaming becomes a power gamer's dream.</span><br /> <span style="font-size: small;"><br />
</span> <span style="font-size: small;"><br /></span> <br /> <table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;
text-align: center;"><tbody> <tr><td style="text-align: center;"><span style="font-size: small;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/
AVvXsEgH_lWNAY9zlkzhtuJDcyLKXZCzmGBMMcZnXhgmevZiMNUE0BRfsiv9ZYqxv7JzadBFvObgGI4tFu2mGaRnjcrQcdP7CAahvZJkDeUglbWGfFcD2GUDUIh9ja0m-azgVhdsFuyhAR6Xd8rC/s1600/Trimote2.jpg" imageanchor="1" style=
"margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/
AVvXsEgH_lWNAY9zlkzhtuJDcyLKXZCzmGBMMcZnXhgmevZiMNUE0BRfsiv9ZYqxv7JzadBFvObgGI4tFu2mGaRnjcrQcdP7CAahvZJkDeUglbWGfFcD2GUDUIh9ja0m-azgVhdsFuyhAR6Xd8rC/s1600/Trimote2.jpg" height="265" width="320" /></
a></span></td></tr> <tr><td class="tr-caption" style="text-align: center;"><span style="font-size: small;">Trimote, by Nick Bibby</span></td></tr> </tbody></table> <span style="font-size: small;"><br
/></span> <span style="font-size: small;">How much better then, not to use all those high-end future weapons (Imperial Arsenals, the standard weapon of Imperial troops, +18% to hit, +5 damage
effect!) "check your blasters at the door", and duke it out with pistols and laser sabres, rather than to fight armoured combats, with roughly man-shaped future tanks... Battletech anyone?</span><br
/> <span style="font-size: small;"><br /></span> <span style="font-size: small;">Combat 3001 was released in 1981, this time authored by Halliwell alone, and although it did add more depth to our
imaginary future worlds; gravities, vehicles, more weapon types, more Aliens, it didn't really add anything much to the gaming experience, and apart from Laserburn, British Sci-fi gaming was heading
to the doldrums for half a decade or so...</span><br /> <span style="font-size: small;"><br /></span> <span style="font-size: small;"><br /></span> <br /> <table align="center" cellpadding="0"
cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody> <tr><td style="text-align: center;"><span style="font-size: small;"><a href=
AVvXsEjJPhDHOcKVY7qwkE59ljSR_ngtUjD_E9A6k4kCC1xYG3b-D0aTVCzq1oi8kW6d-p9_Nk9NFBRFTI-aYpdcFhiJ-xFPW22WBQYkDtwz9UYh3VHIar9w4r8OsP1JX09EN6FadhwQ5V7wVBhA/s1600/Combat3001Cafe2.jpg" imageanchor="1" style=
"margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/
AVvXsEjJPhDHOcKVY7qwkE59ljSR_ngtUjD_E9A6k4kCC1xYG3b-D0aTVCzq1oi8kW6d-p9_Nk9NFBRFTI-aYpdcFhiJ-xFPW22WBQYkDtwz9UYh3VHIar9w4r8OsP1JX09EN6FadhwQ5V7wVBhA/s1600/Combat3001Cafe2.jpg" height="468" width=
"640" /></a></span></td></tr> <tr><td class="tr-caption" style="text-align: center;"><span style="font-size: small;">Future-Cafe from the inside cover of Combat3001, reportedly showing the Asgard
crowd responsible for the game</span></td></tr> </tbody></table> <span style="font-size: small;">Interested in reading these veteran rule-sets?</span><br /> <span style="font-size: small;">Check them
out here on my Scribd page. <a href="http://www.scribd.com/doc/80261502/TTG-Combat-3000-1979">Combat 3000</a>, <a href="http://www.scribd.com/doc/80142995/TTG-Combat-3001-1981">Combat 3001</a></span>
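The percentile mechanics described in the post above, with every situational factor adding to or subtracting from a base chance before a d100 roll, lend themselves to a short sketch. The snippet below is purely illustrative: the base chance, the modifier values and the function names are invented for this example, not taken from the actual Combat 3000 tables.

```python
import random

def to_hit_chance(base, modifiers):
    """Sum all situational modifiers (range, movement, target size,
    fire type, cover...) into a final percentage, clamped to 1-99 so
    a shot is never an automatic hit or an automatic miss."""
    return max(1, min(99, base + sum(modifiers)))

def roll_to_hit(chance, rng=random):
    """Roll a d100; a result at or below the chance is a hit."""
    return rng.randint(1, 100) <= chance

# Aimed fire (+10) at a running target (-15) in soft cover (-5),
# against an illustrative base chance of 50%:
chance = to_hit_chance(50, [+10, -15, -5])
print(chance)  # 40
```

Clamping to 1-99 is one common design choice for percentile systems; whether the original rules allowed automatic hits or misses at the extremes I can't say from memory.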
pete the mouldmakerhttp://www.blogger.com/profile/
17441528010939741890noreply@blogger.com0tag:blogger.com,1999:blog-6580384122112863749.post-17688962113664785312014-03-04T09:54:00.001+00:002015-06-18T10:34:43.790+01:00Ok, so then you look up and a
month has flown by and I've not updated the blog... and...<br /> <div> <br /></div> <div> Well to tell the truth I've been fighting shy of this one, coz the rule set that I found next in my lil'life
in miniatures is Middle Earth, written by the South London Warlords and published by Skytrex in 1976, a wargame in the style that WRG were producing in the same period, and obviously based on
Tolkien's world in Lord of the Rings et al. <br /> <table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: right; margin-left: 1em; text-align: right;"><tbody> <tr><td style=
"text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/
AVvXsEgCGf3BixqoCY4f2ciYLxzdXMDgA9TZ4GMu3qxBnfPBkc0Qu8NVHFhtoqB_6WG7uKq3ocDGToZht-M7OY4WNc55i25m-fE4hyrRHdq71EjMGOyOZ5xHGmrRqWQcLC0UCHAETnJtipmU3-yS/s1600/MIDDLE+EARTH+WARGAMES+RULES.jpg" imageanchor
="1" style="clear: right; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" height="400" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/
AVvXsEgCGf3BixqoCY4f2ciYLxzdXMDgA9TZ4GMu3qxBnfPBkc0Qu8NVHFhtoqB_6WG7uKq3ocDGToZht-M7OY4WNc55i25m-fE4hyrRHdq71EjMGOyOZ5xHGmrRqWQcLC0UCHAETnJtipmU3-yS/s1600/MIDDLE+EARTH+WARGAMES+RULES.jpg" width="278"
/></a></td></tr> <tr><td class="tr-caption" style="text-align: center;">the only picture from this 1976 publication</td></tr> </tbody></table> <br /></div> <div> Now I had no trouble with the rules (slightly less involved than D&D but by no means simple) or the wargames aspect; it made a change to line up the few dozen minis we had in little warbands and advance them at each other, rather than 'roleplay' with them.</div> <div> <br /></div> <div> Where the issue lay was with the whole Tolkien background, implied but not exactly explained in the rules, a world similar to the one of my D
&D experience: Orcs, Elves, Dwarves, even Dragons are allowed for in the rules, but there seemed to be so much more...</div> <div> <br /></div> <div> And this of course is the problem...<br /> For there is "so much more": Tolkien's world is huge, vast, a history and a mythology of a continent over 1000's of years, and I hadn't read a single word of it...</div> <div> I wanted to play, I loved the idea of battling armies of dwarves and elves, but I was a complete wooden-top where Tolkien was concerned. </div> <div> "No..." Simon Maze would cry "...Nazgul do <b>not</b> have wings..." as I tried to rope in my Ral Partha demons as the Dark Lord's servants...<br /> <br /> <br /> <br /> <br /> <br /> <div class="separator" style="clear: both; text-align: center;"> </div>
<table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: left; margin-right: 1em; text-align: left;"><tbody> <tr><td style="text-align: center;"><a href="https://
/The-Fellowship-Of-The-Ring-Book-Cover-by-JRR-Tolkien_1-480.jpg" imageanchor="1" style="clear: left; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" height="400" src=
The-Fellowship-Of-The-Ring-Book-Cover-by-JRR-Tolkien_1-480.jpg" width="258" /></a></td></tr> <tr><td class="tr-caption" style="text-align: center;">a better draughtsman than writer?</td></tr> </
tbody></table> <div style="text-align: right;"> </div> </div> <div> It would be another three or four years (after I left school) before I even attempted to read LotR; even then it took me a couple of attempts to get through it, and as anyone who has known me at all in the last 30 years will probably be able to tell you, I am not a fan of JRR.<br /> <br /></div> <div> I find there is just 'too much' LotR; the prose is leaden, the plot drawn out, and the dialogue stilted. I admire the scope of the book, and his work in general, bringing together all the traditional elements of the myths of Northern Europe into one whole as he does; you can't doubt he knew his stuff but, but, but...</div> <div> <br /> Like a huge Christmas dinner, where the need to accommodate everyone's favourite tradition means it gets larger and denser with every element added, until the whole thing is fit to burst... Now everyone likes a blow-out meal now and then, but Tolkien serves his spicy stodge all the time, page after page for 100's of pages...</div> <div> </div> <div> Ok, ok, ok, I know it's not a popular opinion, and in the Geek World in which I live it is almost considered heresy to say you aren't a fan, but there ya'go, I can't lie to you folks can I?</div> <div> <br /></div> <div> So, my introduction to The Master comes not from his great work, or his kids' intro book, or the film (the Bakshi version kids, go ask your Mum) or some other kind of tie-in product (like there were any), but this little fan-made rule set.</div> <div> <br /> Check out
my copy of <a href="http://www.scribd.com/doc/79079294/middle-earth-wargames-rules-1976">Middle Earth here on Scribd</a><br /> <br /> <table align="center" cellpadding="0" cellspacing="0" class=
"tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody> <tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/
AVvXsEjcwE4oYAa1adNgeHvi-Tn4T3OdkkKp29KcSom6qBoxA3o8k5-YwWSDtPPu69cKHxDkMxhr_x3-hAF6PEd9uTfftk5zVreUYJu6afFlN8TrdwHoi1knjke8rd75tO7u3V8H0gvUnFbnSMRu/s1600/rp01028agremlins2.jpg" imageanchor="1" style
="margin-left: auto; margin-right: auto;"><img border="0" height="207" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/
AVvXsEjcwE4oYAa1adNgeHvi-Tn4T3OdkkKp29KcSom6qBoxA3o8k5-YwWSDtPPu69cKHxDkMxhr_x3-hAF6PEd9uTfftk5zVreUYJu6afFlN8TrdwHoi1knjke8rd75tO7u3V8H0gvUnFbnSMRu/s1600/rp01028agremlins2.jpg" width="400" /></a></
td></tr> <tr><td class="tr-caption" style="text-align: center;">Ral Partha Demons, not Nazgul.</td></tr> </tbody></table> <br /> <br /></div> <div> Next, Combat 3000, where stealing ballpoint pens on far-away worlds is a distinct possibility.</div> <div> <br /></div> <div> <br /></div> <div> <br /></div> pete the mouldmakerhttp://www.blogger.com/profile/
17441528010939741890noreply@blogger.com0tag:blogger.com,1999:blog-6580384122112863749.post-45527264682695401882014-01-22T10:25:00.001+00:002014-09-06T13:38:14.000+01:00<div style="text-align: left;">
Well if this
blog is going to be about Games and Gaming then we might as well start with the biggy, the daddy of all modern fantasy games, the system which launched a thousand imitators and made stars of its
creators, writers and artists...</div> <div style="text-align: left;"> <br /></div> <table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: left; margin-right: 1em;
text-align: left;"><tbody> <tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/
AVvXsEjOHpeCUxp7PtEyshhLuRJgGY6WsMc0yJ2JfXPxQYhtV-xnX5IQaXc4-2H8l7sOrRYl7agwqLB5zb2fnY6BYUegHkdr2q8RXfc0IrXEMjNpIikmi2bEkZhHfba8PFH32d9m3Xr4hovLjeUh/s1600/Dave_Arneson.jpg" imageanchor="1" style=
"clear: right; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/
AVvXsEjOHpeCUxp7PtEyshhLuRJgGY6WsMc0yJ2JfXPxQYhtV-xnX5IQaXc4-2H8l7sOrRYl7agwqLB5zb2fnY6BYUegHkdr2q8RXfc0IrXEMjNpIikmi2bEkZhHfba8PFH32d9m3Xr4hovLjeUh/s1600/Dave_Arneson.jpg" height="213" width="320"
/></a></td></tr> <tr><td class="tr-caption" style="text-align: center;">Dave Arneson</td></tr> </tbody></table> <div style="text-align: left;"> Dungeons & Dragons is, this year, in its fortieth year and its fifth or sixth incarnation. Its creators, Gary Gygax and Dave Arneson, both of whom are now dead, had the genius idea of marrying role-playing, a previously little-known psychology and management tool, with escapist fantasy storytelling and traditional tabletop miniature and board games, to create a game like no other of its time...</div> <div style="text-align: left;"> <br /></div> <div style="text-align: left;"> The earliest versions of the game, simply called Dungeons & Dragons, or Chainmail, seem to have had an effect not unlike the Velvet Underground's early LP's, or the Sex Pistols' gig in Manchester in 1976: not many people bought the records or heard them play live, but everyone who did went away and started their own bands, or in this case their own game systems... The game was that inspirational.</div> <div style="text-align: left;"> <br /></div> <table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: right; margin-left:
1em; text-align: right;"><tbody> <tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/
AVvXsEgdScq5m3pD-7KxmjIglUwS5HPvSZKcXAm4lRpBkyJMseHSIbqsPJZCzKz-sGfONRNnM6wQZY2eODkQ3KcHjVlfwAvlpypMYj8SxFWoJiEOfo5bDn1i_1CyHqFtMZj7J4COq9IZNP8GvAkj/s1600/Gary_Gygax_Gen_Con_2007.JPG" imageanchor="1"
style="clear: right; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/
AVvXsEgdScq5m3pD-7KxmjIglUwS5HPvSZKcXAm4lRpBkyJMseHSIbqsPJZCzKz-sGfONRNnM6wQZY2eODkQ3KcHjVlfwAvlpypMYj8SxFWoJiEOfo5bDn1i_1CyHqFtMZj7J4COq9IZNP8GvAkj/s1600/Gary_Gygax_Gen_Con_2007.JPG" height="320"
width="240" /></a></td></tr> <tr><td class="tr-caption" style="text-align: center;">Gygax, great dress sense too...</td></tr> </tbody></table> <div style="text-align: left;"> But what was it that
inspired so many people? There was little in the way of story, background or plot for a modern role-player to get their teeth into, the rule system although quite complex for its time, and becoming
increasing dense with each new edition, were really little more than a combat system and list of spells and their effects, it's surprising to look back at those rules now and read "the DM's word is
final" and accept we live/gamed in worlds with no fixed rules.</div> <div style="text-align: left;"> <br /></div> <div style="text-align: left;"> When I got to Advanced Dungeons & Dragons as it
was called, in the late 70's TSR monstrous baby was seeming huge, they had a whole world, Grayhawk, for gamers to explore, but wafer thin, huge areas were pencilled in with mountains or dessert,
cities and kingdoms but very little detail was given, not even 'Here be Dragons' to aid the Characters or DM in their quests to adventure into this new world... and even where D&D did give you a
grand plot or over arching scenario to discover and work through, such as in the now legendary G1/2/3,D1/2/3,Q1 campaign, the action takes place outside of the Greyhawk continuum, and outside the
rest of TSR's output (other modules) completely.</div> <div style="text-align: left;"> <br /></div> <div style="text-align: left;"> <table cellpadding="0" cellspacing="0" class="tr-caption-container"
style="float: left; margin-right: 1em; text-align: left;"><tbody> <tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/
AVvXsEhZA1iS2hAHfJKgVL5aucg-K-kDWV1cxB7Z7bC5B-hk5BGOVWV9A_RvvbMpYVdCR9TyqrCfBjIz9unk49Q2126QCjVgkKtP0QlJlUTCqvSEXKZFSNj1Eds3SJRs4QAvrc8F_WR-knuPzgwP/s1600/Charactor+scan.jpg" imageanchor="1" style=
"clear: left; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/
AVvXsEhZA1iS2hAHfJKgVL5aucg-K-kDWV1cxB7Z7bC5B-hk5BGOVWV9A_RvvbMpYVdCR9TyqrCfBjIz9unk49Q2126QCjVgkKtP0QlJlUTCqvSEXKZFSNj1Eds3SJRs4QAvrc8F_WR-knuPzgwP/s1600/Charactor+scan.jpg" height="320" width="227"
/></a></td></tr> <tr><td class="tr-caption" style="text-align: center;">my first D&D character,<br /> in Andy Chambers's handwriting</td></tr> </tbody></table> Later, as the system and the Company behind it grew, TSR became more prescriptive about its worlds and the mythos within them; they spoon-fed gamers with Dragonlance, or Ravenloft, or Shadowrun or... but somehow, in giving us more detail, they restricted the imagination of the gamer. They corralled everybody, 1000's of us, all locked into the same 10' wide corridors fighting the same Liche Lords or Tentacle walls. </div> <div style="text-align: left;"> <br /></div> <div style="text-align: left;"> And although this gave people of a certain age a shared experience, it also reduced creativity at the base level of the game: the stand-alone role-play group and its DM.</div> <div style="text-align: left;"> Today Dungeons & Dragons is huge, more people play now than ever before, the conventions are better attended, gaming groups are growing, and TSR's parent company knows that if it continues to nurture its brand with new product and continued support, the game will run and run...<br /> </div>
<div style="text-align: left;"> <table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: right; margin-left: 1em; text-align: right;"><tbody> <tr><td style="text-align:
center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/
imageanchor="1" style="clear: right; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/
height="320" width="246" /></a></td></tr> <tr><td class="tr-caption" style="text-align: center;">12 pages of black & white print, and few drawings</td></tr> </tbody></table> </div> <div style=
"text-align: left;"> <br /> So if you're a gamer, or modeller, or even a collector of fantasy and sci-fi metal models, and you've not already contributed to the <a href="http://
www.gygaxmemorialfund.org/">Gygax memorial</a>, or raised a toast to Dave Arneson and those ground breaking early guys, can I suggest you do so, salute D&D; the game that changed the world.</div>
<br /> pete the mouldmakerhttp://www.blogger.com/profile/
17441528010939741890noreply@blogger.com0tag:blogger.com,1999:blog-6580384122112863749.post-36196536915892187462013-12-24T09:57:00.000+00:002013-12-24T11:19:49.609+00:00Yer yer, I know I promised you more Old School ramblings about the Fantasy and Sci-fi games I played in school, but the Christmas rush here at work kicked in, and I've had little or no time for writing, so all that
stuff about D&D, Combat 3000 etc. will have to wait until the New Year...<br /> <br /> 30 years ago today... I know exactly what I was doing...<br /> <br /> Christmas Eve that year fell on the Saturday, so although the shop was open and we did get a few customers through the door, we spent most of the day playing Shock of Impact on Bob's old dining room table in the shop...<br /> <br />
<table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: left; margin-right: 1em; text-align: left;"><tbody> <tr><td style="text-align: center;"><a href="https://
/SoI.jpg" imageanchor="1" style="clear: left; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/
AVvXsEgA3ByM1ErHlUVrVoj_WwPuv8nT5cW2YFhBcauDMPme-cFBDDJLmAYhr4EPjnYnfSFz8-DfGD8fw6r5xxQFs2eO_Hb_C3zucEumvHe3FcYDe15_JkV2lubyRfT-wPZrYa2uEQtf-ww-rHdv/s320/SoI.jpg" width="235" /></a></td></tr> <tr><td
class="tr-caption" style="text-align: center;">not WRG 6th</td></tr> </tbody></table> Shock of Impact were TTG's Ancient Wargames rules, covering warfare from the dawn of recorded time until the end
of the 11thC, written by Ian S. Beck, who wrote a lot of what was good about TTG in the late 70's, they had one or two new ideas contained with-in the rule system...<br /> Firstly they used D10
instead of D6, which in Wargames rules was a bit of a leap, and secondly they had a whole figure causality removal system, again based on D10, which stopped too much record keeping.<br /> <br /> I'd
been playing SoI over the summer and autumn of '83; it gave good games for smallish units, and I'd been enthused enough to buy a second-hand (half-finished) Late Roman army from a painter called Ted Pool who would come into the shop. It was mostly Minifigs infantry and TTG cavalry, but it gave me enough smartly painted minis to use at the club on Mondays, and start learning to play.
<br /> <br /> <br /> But on Christmas Eve we didn't use our own armies...<br /> Oh no, too easy...<br /> <br /> <br /> <table cellpadding="0" cellspacing="0" class="tr-caption-container" style=
"float: right; margin-left: 1em; text-align: right;"><tbody> <tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/
AVvXsEgnG5U4b1_lbpbLtRETUOC0epJZI_EgOMqNELNfNpo9Gx-HRdJeTgn-P81qYZZIJ0-MXMfJNclIyEBgqxjytq2dPp3t0GQ4fvCGx1pF1AKcoRu326lloKzv9hNGUrpEhNyYpKG51QnCRAqR/s1600/SoIAL.jpeg" imageanchor="1" style="clear:
right; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/
AVvXsEgnG5U4b1_lbpbLtRETUOC0epJZI_EgOMqNELNfNpo9Gx-HRdJeTgn-P81qYZZIJ0-MXMfJNclIyEBgqxjytq2dPp3t0GQ4fvCGx1pF1AKcoRu326lloKzv9hNGUrpEhNyYpKG51QnCRAqR/s1600/SoIAL.jpeg" /></a></td></tr> <tr><td class=
"tr-caption" style="text-align: center;">it does what it says on the tin...</td></tr> </tbody></table> In the week or two previously we'd rolled randomly to see not only, which army we would be using
from the 60 or 70 given in the Army List, but also the number and type of troops that each army contained... SoI had a randomisation factor built into the army list which was supposed to stop players
fielding only super armies with no dross, in actual fact all the players I played liked to pick their armies rather than take what came on the randomiser, all super troops and no dross was how we
rolled, but for Christmas we had proper random armies... and we had to find the minis out of TTG's range, with proxies standing in, where we didn't have the exact minis needed...<br /> <div style=
"text-align: right;"> <br /></div> I don't really remember which army I rolled, something with lots of Medium Calvary in it, or how the game went (which means I probably lost), but the day stays with
me... Kate bringing food and drink in between serving customers, and us four boys, head down over the green baize for the best part of the day...<br /> <br /> <br /> pete the
17441528010939741890noreply@blogger.com1tag:blogger.com,1999:blog-6580384122112863749.post-53995117100506195482013-12-03T09:06:00.000+00:002014-01-21T10:58:34.066+00:00<span style="font-family:
inherit;"><span style="font-size: small;">Well I was going to blog about a couple of my favourite games today, but I shall fly in the face for the current vogue for <a href="https://www.facebook.com/
groups/121390094630920/">Old School Gaming</a> by going back, way back, before Old School, to Pre-School... And like almost my entire generation, pre-school soldiers meant plastic, and plastic meant
Airfix. </span></span><br /> <div> <span style="font-family: inherit;"><span style="font-size: small;"><br /></span></span></div> <div> <table cellpadding="0" cellspacing="0" class=
"tr-caption-container" style="float: left; margin-right: 1em; text-align: left;"><tbody> <tr><td style="text-align: center;"><span style="font-family: inherit;"><span style="font-size: small;"><a
AVvXsEhjyFy5zU7qCtj24DAy4VN5dLHioIT10FO6jrK8XBl-LU9wJqtnEmNYo9qWUVKKVyPaRpVHlwMAlRzVaYV9Ppb7-S_Uy5B-RTyduRbizOZHglSteNDPvS1IiuiKobtZAET8JYW7pgmFJI3H/s1600/Airfix+Robin+Hood.jpg" imageanchor="1" style
="clear: left; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/
AVvXsEhjyFy5zU7qCtj24DAy4VN5dLHioIT10FO6jrK8XBl-LU9wJqtnEmNYo9qWUVKKVyPaRpVHlwMAlRzVaYV9Ppb7-S_Uy5B-RTyduRbizOZHglSteNDPvS1IiuiKobtZAET8JYW7pgmFJI3H/s320/Airfix+Robin+Hood.jpg" height="271" width=
"320" /></a></span></span></td></tr> <tr><td class="tr-caption" style="text-align: center;"><span style="font-family: inherit;"><span style="font-size: small;"><span style="font-size: x-small;">
Airfix Nottingham connection</span></span></span></td></tr> </tbody></table> <span style="font-family: inherit;"><span style="font-size: small;">Airfix were huge in the UK, and had been a staple of
British boyhood for over 20 years when I got to them in the early 70's.</span></span><br /> <br /> <span style="font-family: inherit;"><span style="font-size: small;">Little did I know it at
the time, but they had 100's of kits, and were Britain's biggest toy company, producing models as diverse as 1/144th scale airliners and 1/8th motor bikes... But all these kits were for older boys, and
I, like so many others my age, started out with a box of their 1/72 scale 'little men'.</span></span></div> <div> <span style="font-family: inherit;"><span style="font-size: small;"><br /></span></
span></div> <div> <span style="font-family: inherit;"><span style="font-size: small;">Saturday afternoons would mean a walk to my paternal Grandma's house, to be left there in front of the wrestling
on ITV or a Cowboy 'picture', whilst my Dad went into Arnold to watch the local non-league side play football... Walking to Gran's, we had to pass Berry's paper shop and as often as not, we stopped
in the shop for a treat... </span></span><br /> <br /></div> <div> <span style="font-family: inherit;"><span style="font-size: small;">I can't remember why Dad bought me the first box,
Astronauts, but after a couple of boxes, Robin Hood & Sheriff's men, I was hooked. </span></span><br /> <br /> <table cellpadding="0" cellspacing="0" class="tr-caption-container" style
="float: right; text-align: right;"><tbody> <tr><td style="text-align: center;"><span style="font-family: inherit;"><span style="font-size: small;"><a href="https://blogger.googleusercontent.com/img/
imageanchor="1" style="clear: right; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/
AVvXsEgJAjO6bjIVMY2-9XrMAWs2v8el2g2VkIdySvWbX9pN6i71qdfB0lK_Ny9oC34xv9snYrrTc8blUkWiKim_KKt4uV_Xylw6_IsLG6u4KSHgdmMEXdChhnM2siH7TU_TIauKnbQCXSFhUjb3/s320/Airfix+Astronauts.jpg" height="263" width=
"320" /></a></span></span></td></tr> <tr><td class="tr-caption" style="text-align: center;"><span style="font-family: inherit;"><span style="font-size: small;"><span style="font-size: x-small;">
Astronauts first, well it was 1970</span></span></span></td></tr> </tbody></table> <span style="font-family: inherit;"><span style="font-size: small;">Maybe it was the boxes, all the boxes had full
colour art and Airfix were very good at showing you what you were going to get inside... Or maybe it was the models, 10 or 12 different little men with a few doubles, a little diorama, or a two- or
three-part snap-fit kit... But whatever it was, there was everything in the box to create a tiny world, right there on the carpet in front of <a href="http://en.wikipedia.org/wiki/
Mick_McManus_%28wrestler%29">Mick Mcmanus</a> or John Wayne.</span></span></div> <div> <span style="font-family: inherit;"><span style="font-size: small;"></span></span><br /> <span style=
"font-family: inherit;"><span style="font-size: small;"></span></span></div> <div> <span style="font-family: inherit;"><span style="font-size: small;">Soon it was a regular feature of my weekend, a
box of soldiers on a Saturday kept me in a world of my own until Doctor Who at 6ish, and time to go home... After a while I had quite a lot, bags full in fact, and I would acquire loads more too,
including tanks and diorama sets, as other boys grew out of theirs and handed them down... </span></span></div> <div> <span style="font-family: inherit;"><span style="font-size: small;"></span>
</span><br /> <span style="font-family: inherit;"><span style="font-size: small;"></span></span></div> <div> <span style="font-family: inherit;"><span style="font-size: small;">And everybody (well
every boy) had loads. You'd go to people's houses; cousins, children of family friends, school mates, and they'd all have loads too... so we'd tip them out onto the bedroom floor, line them up, and
knock them down...</span></span><br /> </div> <div> <span style="font-family: inherit;"><span style="font-size: small;">It was in a bag of Soldiers that I inherited from somewhere that I first
learned a salutary lesson about scale... In the bag, much like the others, there were the usual British Commandos and WW2 Germans, as well as the odd stray knight or WW1 Frenchman, but there were
also some American Paras or Airborne... AND THEY WERE A DIFFERENT SIZE!</span></span></div> <div> <span style="font-family: inherit;"><span style="font-size: small;">Airfix advertised their minis as 1
/72 HO sized, and these were BIGGER! </span></span><br /> <span style="font-family: inherit;"><span style="font-size: small;">Now I wasn't daft, I knew that Airfix and, say, Action Man weren't
going to be compatible together, but what on earth was this all about? Why make soldiers like Airfix, and not make them the same size as Airfix? It was my first inkling that all was not right with
the world of tiny troopers... and I didn't like it...</span></span></div> <div> <span style="font-family: inherit;"><span style="font-size: small;"></span></span><br /> <span style="font-family:
inherit;"><span style="font-size: small;"></span></span></div> <div> <table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: left; margin-right: 1em; text-align: left;">
<tbody> <tr><td style="text-align: center;"><span style="font-family: inherit;"><span style="font-size: small;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/
AVvXsEj5LSnODSmeKxVtLRfodfHPV-OPrAsBfi6rBFppn6wHB_OMjVeyfgSMCxZRkziRI3yeRQAyQB6QhfZNxI_WiC8_7xAPZG1VyCkxG-XYgGu9WvCDoGvYR9jiJ1WjMMEBgQmCLsTCuSNidGCG/s1600/airfix+book.JPG" imageanchor="1" style=
"margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/
AVvXsEj5LSnODSmeKxVtLRfodfHPV-OPrAsBfi6rBFppn6wHB_OMjVeyfgSMCxZRkziRI3yeRQAyQB6QhfZNxI_WiC8_7xAPZG1VyCkxG-XYgGu9WvCDoGvYR9jiJ1WjMMEBgQmCLsTCuSNidGCG/s320/airfix+book.JPG" height="240" width="320" />
</a></span></span></td></tr> <tr><td class="tr-caption" style="text-align: center;"><span style="font-family: inherit;"><span style="font-size: small;"><span style="font-size: x-small;">Bruce
Quarrie's rules for WW2</span></span></span></td></tr> </tbody></table> <span style="font-family: inherit;"><span style="font-size: small;">Much later, at about 11 or 12, just before I got into D&
amp;D actually, I had come across Bruce Quarrie's rules for WW2 games, published by Airfix. These were the first rules I'd ever seen and I was on the verge of getting a few mates together to play,
when the D&amp;D bug bit, and I (we all) moved over from plastic WW2, to metal Fantasy minis and gaming.</span></span></div> pete the mouldmakerhttp://www.blogger.com/profile/
17441528010939741890noreply@blogger.com0tag:blogger.com,1999:blog-6580384122112863749.post-63138561352143534532013-11-27T09:29:00.000+00:002013-11-27T09:32:41.807+00:00 Ok Folks, sorry if the last three or four posts have been a
bit of a hatchet-job, I didn't really intend for it to be read that way. I had hoped to start the blog in June with leaving school and starting at TTG, but one thing drove out another (<a href="http:
//www.theassaultgroup.co.uk/store/home.php?cat=130">TAG Tudors</a>) and I didn't get started until September... which made getting to Nov 8th a bit of a rush... and consequently the posts do come
across as a bit... frenzied...<br /> <br /> <table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: right; margin-left: 1em; text-align: right;"><tbody> <tr><td style=
"text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/
AVvXsEh3zeVcl0bfNzrZDXo2mR7lqatOkHIwFcS1HnlnUc0bE31leibaNBYcygor6ZIA8bo0ghyveiNE7DEejU1shqh1ejxFk8lqrBgTYFX_gDQbCUyHr0PGHJVWx959fZRDf4cmcpytmTXH_rQI/s1600/Shoot-out_Chop.jpg" imageanchor="1" style=
"clear: right; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" height="318" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/
AVvXsEh3zeVcl0bfNzrZDXo2mR7lqatOkHIwFcS1HnlnUc0bE31leibaNBYcygor6ZIA8bo0ghyveiNE7DEejU1shqh1ejxFk8lqrBgTYFX_gDQbCUyHr0PGHJVWx959fZRDf4cmcpytmTXH_rQI/s320/Shoot-out_Chop.jpg" width="320" /></a></td></
tr> <tr><td class="tr-caption" style="text-align: center;">Tony Yates Illo</td></tr> </tbody></table> But this Blog is not necessarily about Citadel Miniatures, it's about my life within the minis
world, and as I said in previous posts, a break with Citadel occurred in late '83, so at that point I stopped following their mini releases as closely as I had been doing. And although TTG did
keep up a relationship with Games Workshop for a while, which I will blog about when the time comes, for the next few years most of these posts will be about TTG, their miniature range and rules, as
well as the games that we stocked in the shop and some of the people who bought them.<br /> <br /> Before that though, I would like to blog about one or two of the games that we played back at
school, that were very important to me in a couple of ways, for the worlds they created, and the way that they did so... <br /> <br /> So next time, back in full flow, with Combat 3000 and Middle
Earth.<br /> pete the mouldmakerhttp://www.blogger.com/profile/
17441528010939741890noreply@blogger.com0tag:blogger.com,1999:blog-6580384122112863749.post-21999886878768760082013-11-19T10:47:00.000+00:002013-12-03T09:22:30.164+00:00 Right-o,<br /> <br /> I didn't mean this to turn into a micro
analysis of Citadel miniatures, but what Bryan did with the production of the miniatures in the period ('82-84) after he took charge bears noting, so if this gets a little technical please stick with
me, and I'll get back to the rampant nostalgia in the next update or two...<br /> <br /> Traditionally, making white-metal minis involves a two-stage moulding process; firstly an original sculpture is
made using an epoxy putty (or even earlier carved from solder) and then moulded into what is called a Master-mould. This Master-mould might contain a few different models but of course it could only
have one copy of each original in it. This is ok for small scale production, but as moulds wear, and quite often the original would be destroyed in the stressful vulcanisation process, it is only
really a temporary mould.<br /> <table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: right; margin-left: 1em; text-align: right;"><tbody> <tr><td style="text-align:
center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/
AVvXsEgcJ7LzeQ2mF07gP6npX-RHCxsriDiae_DDDP4r9qk4LB_LkjrYLRSMMcg7RRAoT77uo2NQ_0VEt7z4OmKiBr_btHgdBwyeklP7ijqaG4VYbV_FoCRq47JKnckroWUmVwIrMNf6IZW33Tao/s1600/Mastermould.jpg" imageanchor="1" style=
"clear: right; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" height="240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/
AVvXsEgcJ7LzeQ2mF07gP6npX-RHCxsriDiae_DDDP4r9qk4LB_LkjrYLRSMMcg7RRAoT77uo2NQ_0VEt7z4OmKiBr_btHgdBwyeklP7ijqaG4VYbV_FoCRq47JKnckroWUmVwIrMNf6IZW33Tao/s320/Mastermould.jpg" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">a silicon rubber Master-mould.</td></tr> </tbody></table> <br /> So once a mini had been Master-moulded a number of Master-castings would be
taken from it, cleaned of blemishes and then these would in-turn be moulded into a Production mould, which would have a large number of each mini type on it. <br /> It would have been incredibly
difficult to provide large numbers of castings from a mould with one cavity on it, but considerably easier if the mould had 10 - 20 minis of the same type on it... simples...<br /> <br /> In this
'belt and braces' type of mini production, the expensive-to-produce original is protected firstly by having a Master-mould taken, and secondly by having the master-castings cast, and saved, to return
to when the Production-mould inevitably wore out.<br /> <br /> So what Bryan did in '82-84 to this traditional process was not only radical for the time, but also quite risky.<br /> <br /> What he
did was get the sculptors, and at this time there was only a handful of them, to make-up only the basic bodies of the miniatures before master-moulding, and then to add the final detailing onto the
Master-castings just before the production moulds were made. This allowed a great degree of variation to each mini that went into production, for example one fighter would get one type of helmet and
a bag, and the next in line might get a different helmet and a cloak, the next, a third helmet and a sword instead of an axe etc... One well known sculptor told me that his job when he first went to
Citadel in this period was to do a good deal of this type of conversion work, sitting between the moulding processes, sculpting bags and pouches, cloaks and hats that all added a huge amount of
character and colour to the minis that were being released.<br /> <br /> <table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: left; margin-right: 1em;
text-align: left;"><tbody> <tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/
AVvXsEh-TR4hEGs_duRR9gqKGLACNdLDlmXKu6G8Tu2tR4deIeevEKibHq685W8seMK58G0cQWK-Vhw6tfU5L0VYtlbYVPQgrbbkAB0MypKAVxFP5nXEI6E_USt4YCsBl24sbVVNJXP8QnkNDyGR/s1600/C02-47.jpg" imageanchor="1" style=
"margin-left: auto; margin-right: auto;"><img border="0" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/
AVvXsEh-TR4hEGs_duRR9gqKGLACNdLDlmXKu6G8Tu2tR4deIeevEKibHq685W8seMK58G0cQWK-Vhw6tfU5L0VYtlbYVPQgrbbkAB0MypKAVxFP5nXEI6E_USt4YCsBl24sbVVNJXP8QnkNDyGR/s320/C02-47.jpg" width="187" /></a></td></tr> <tr>
<td class="tr-caption" style="text-align: center;">C02 Wizard</td></tr> </tbody></table> <table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto;
margin-right: auto; text-align: center;"><tbody> <tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/
AVvXsEi1lNWQlRu0PXRcEdrmQniMMxkKXxp_s4-aG3HWuiZUr7qU1aLbEtgnKMQplebKCbjP3cQkSMjxqTztPFY-VzCmKygl-_MSzaeWXcerJ7b-6ALVmp1a-nd9x5A1Kik4Sjb8afQKL_Ut56kR/s1600/C02-48.jpg" imageanchor="1" style=
"margin-left: auto; margin-right: auto;"><img border="0" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/
AVvXsEi1lNWQlRu0PXRcEdrmQniMMxkKXxp_s4-aG3HWuiZUr7qU1aLbEtgnKMQplebKCbjP3cQkSMjxqTztPFY-VzCmKygl-_MSzaeWXcerJ7b-6ALVmp1a-nd9x5A1Kik4Sjb8afQKL_Ut56kR/s320/C02-48.jpg" width="182" /></a></td></tr> <tr>
<td class="tr-caption" style="text-align: center;">a converted C02 Wizard</td></tr> </tbody></table> But there is a problem with this method, moulds ware-out. The moulds once spun a few time start to
degrade, areas that are undercut will rip, larger items will start to flash and constant use will cause them to burn-out (lose the oils in the organic rubber compounds) and break up. In the
traditional process this is not an issue, in that it is possible to return to the master-castings, which survive the vulcanisation process, to make more moulds... But where the design team had added
extra detailing to the basic body types in Bryan's new method, the putty would be lucky to survive, and the sculptors would need to make a number of new variants to fill the new production mould
every time they were remade.<br /> As an aside, it might have been possible to take more 'master-castings' from a fresh production mould and put these aside to make more production-moulds from, but
these would have been third (fourth, fifth) generation copies of the original bodies and would be of lower quality than the first and second generation copies...<br /> <br /> For gamers, this method
of making minis produced a boom in the numbers of different models that were available and kick started the 'Collector-gene' in a lot of people, but it had an inherent problem, it required an almost
ever increasing number of sculptors to service the constant remaking of the range... and although Citadel did increase the design capacity over this period, doubling the number of sculptors they
employed, I suppose the decision was taken to move back to a more traditional method of working, and by the release of the Second Compendium ('85?) the range had settled down to less varied 'codes'
with regular numbers of set minis in each...<br /> <br /> Which all begs a couple of big questions; 1) did Bryan know what he was doing with the moulds?<br /> I suspect that he did: he knew that his
new moulds would wear out, he is an accomplished mould maker himself, and Citadel must have already been remaking loads of moulds on a regular basis, given the numbers they were selling of the old
range... And 2), did he realise the medium term problems he would create? And again my guess is that he did, taking a gamble on pumping the highly profitable miniatures side of his business as
quickly as possible to grow the whole organisation.<br /> An entrepreneurial risk.<br /> <br /> Regardless it worked, Citadel miniatures were now driving Games Workshop forward but at a cost... most
of those great minis from this period are now lost forever, torn, ripped or burnt-out long ago, never to return.<br /> <br /> Next time, a pause for breath...pete the mouldmakerhttp://www.blogger.com/profile/
17441528010939741890noreply@blogger.com0tag:blogger.com,1999:blog-6580384122112863749.post-29128090078602082272013-11-15T10:48:00.001+00:002014-01-28T10:39:24.850+00:00Citadel miniatures in 1983,
must have been a fabulous place to be.<br /> <div> <br /></div> <div> After the changes in personnel at the top of Citadel/GW in the previous year left Bryan wholly in command, the year that followed
would be one which shaped the miniature gaming hobby for the next two decades.</div> <div> <br /></div> <div> Bryan changed the way that miniatures were produced, marketed and consumed for a
generation of gamers. These changes weren't instantaneous, and some, like the production methods, had been developments of what was already going on across the previous couple of years, and of
course some developments were only temporary themselves and would be superseded in the fullness of time, but it is clear to see Bryan's direction and imagination coming to the fore in his first
full year in charge.<br /> <br /> The first noticeable move away from the sales model of the previous 4 years came in late '82; Citadel started to put out new miniatures in boxed sets. Now I don't
think this was an original idea, I had seen some American companies selling in boxes (but I can't for the life of me remember who? Dave?) in the early '80's, but these new Citadel boxes were the first
to contain minis that were any good.<br /> <br /> <div class="separator" style="clear: both; text-align: center;"> </div> <table cellpadding="0" cellspacing="0" class="tr-caption-container" style=
"clear: right; float: right; margin-bottom: 1em; margin-left: 1em; text-align: right;"><tbody> <tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/
AVvXsEgmhksTe-VN2DG6Zbt8-kXdWDBRNfp52y1TxygInhc3GmBwj78IKp64rLLOJ4Xdqob2eWfWjasZg0utrFxEMENDmm2LWe2nlur0vBuvd9crnDxci69NK5gOvE1nDksyfaSCm4BOoDGCUjxy/s1600/s2dwfkingcourtc2-01.jpg" imageanchor="1"
style="clear: right; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/
AVvXsEgmhksTe-VN2DG6Zbt8-kXdWDBRNfp52y1TxygInhc3GmBwj78IKp64rLLOJ4Xdqob2eWfWjasZg0utrFxEMENDmm2LWe2nlur0vBuvd9crnDxci69NK5gOvE1nDksyfaSCm4BOoDGCUjxy/s320/s2dwfkingcourtc2-01.jpg" height="320" width=
"222" /></a></td></tr> <tr><td class="tr-caption" style="text-align: center;">The Dwarf Kings Court</td></tr> </tbody></table> <a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/
AVvXsEiCwdKiFRXsalD_bIFMLtoaY46TRiyRSTvf-fX072vBOCLmLYbfyD4p1Tt0CPiruTGpUYRQye2af7tP5Kij_fFPJEkoqgVhJ7yhGE3rfMFc3ab9mj2o1bQXVzORQWWY_2HTtZ6baGFgoBGj/s1600/citsslogo.jpg" imageanchor="1" style=
"margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/
AVvXsEiCwdKiFRXsalD_bIFMLtoaY46TRiyRSTvf-fX072vBOCLmLYbfyD4p1Tt0CPiruTGpUYRQye2af7tP5Kij_fFPJEkoqgVhJ7yhGE3rfMFc3ab9mj2o1bQXVzORQWWY_2HTtZ6baGFgoBGj/s1600/citsslogo.jpg" /></a><br /> <br />
Previously minis had only been sold in singles, in plastic bags with folded cardboard headers, and if you wanted one mini, you paid for and got, one mini.<br /> Boxing was the first attempt to drive
gamers/collectors into buying more miniatures than they necessarily wanted. A box would contain 8 to 10 minis that you couldn't get in the main range, so if you wanted a specific mini the only option
was to spend £3.95 on the box to get it...<br /> <br /> Fortunately, for gamers, most of these early boxes contained great minis, so people were only too willing to put up with the marketing
to get the best Citadel had to offer, and most of these box sets are still very fondly remembered. <br /> <br /> <br /> The second change, visible from the outside, was the move away from catalogues
of miniatures you could buy to what came to be known as the 'C' codes.<br /> In the early years Citadel had its range divided into <a href="http://www.solegends.com/citf/citfa/index.htm">Adventures</
a>, <a href="http://www.solegends.com/citf/citff/index.htm">Monster and specials</a>, with each mini having its own specific code within these broad groups, which you could order separately.<br />
The 'C' codes stopped this, minis were grouped into 40 codes which contained many different minis.<br /> <br /> <table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: left;
margin-right: 1em; text-align: left;"><tbody> <tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/
AVvXsEjK33IRPmVTtUa2MhZsH50esifM0E9JEpj5z7ObQi1WEBeRjJvRV21ovKBXD9vkR7AdA4eZuHF0V1jFPv3MC9mpcX8xcbM5RkTjXHanu-Yg6AoM4tW5d5Mb5ZTmRgY04WTP3AKE0l-2-8oq/s1600/Citadel.Compend-1.jpg" imageanchor="1" style
="clear: left; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/
AVvXsEjK33IRPmVTtUa2MhZsH50esifM0E9JEpj5z7ObQi1WEBeRjJvRV21ovKBXD9vkR7AdA4eZuHF0V1jFPv3MC9mpcX8xcbM5RkTjXHanu-Yg6AoM4tW5d5Mb5ZTmRgY04WTP3AKE0l-2-8oq/s1600/Citadel.Compend-1.jpg" /></a></td></tr> <tr>
<td class="tr-caption" style="text-align: center;">John Blanche art from the first Compendium</td></tr> </tbody></table> I don't quite know when this change took place, <a href="http://
www.solegends.com/">The Stuff of Legend</a> gives a date as early '83, and when I got to TTG in that summer, Bob had me had me change over the figure-racks from the old codes over to the new system,
the change was defiantly complete by the release of the fist Citadel Compendium in October...<br /> <br /> In October '82, if you wanted <a href="http://www.solegends.com/citf/citfa/citfa82cat.htm">
FA-1 Fighter in Plate</a>, you got it and nothing else; in October '83 if you ordered from <a href="http://www.solegends.com/citc/c01fighters.htm">C01 Fighters</a>, you got one of sixty-plus variants.
<br /> <br /> Finally, the biggest thing at Citadel in 1983, was the release of Warhammer.<br /> The first edition of the mass battle system was launched in the summer, and was an attempt to put a
game behind the miniature range to guide players into buying more miniatures. The problem with D&amp;D as a vehicle for a miniatures range was that the miniatures themselves were an unnecessary
luxury. With role-playing most players wanted one or two miniatures, preferably ones that represented their Character as closely as possible, and no more... DM's would be expected to have a few more,
half a dozen goblins and Orcs, an Ogre or troll, or a scenario specific monster or two, but not huge numbers.<br /> Warhammer changed all that. <br /> <table cellpadding="0" cellspacing="0" class=
"tr-caption-container" style="float: right; margin-left: 1em; text-align: right;"><tbody> <tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/
warhammer_fantasy_battle_edition_1_book_cover.jpg" imageanchor="1" style="clear: right; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" src="https://
warhammer_fantasy_battle_edition_1_book_cover.jpg" height="320" width="233" /></a></td></tr> <tr><td class="tr-caption" style="text-align: center;">Mr Blanche again</td></tr> </tbody></table> <br />
I remember taking the first box home, and playing a scenario given in the back of one of the books.<br /> The adventurers had to cross one of three bridges, whilst a random assortment of 'baddies'
tried to stop them... the heroes I think were given in the rules and the baddies were generated by an encounter table... Now, Mark and I had a fair number of minis each... I must have had 50 or 60,
Mark a similar amount, but within a few rounds we'd exhausted our supply; even reusing dead'uns and throwing in proxies where we didn't have an exact match for what the random table generated, we
ran out of minis...<br /> Plus the rule system seemed retrogressive even then... saving throws! What was all that about? Mark was throwing buckets of random monsters at my five heroes, with whatever
damage done 'saved' on a roll of 3-6!<br /> Would you believe I played Warhammer on the first day it was released, and didn't play again for 5 years? I just didn't like it... but I assume lots of
people did, or were looking for something new after the D&D boom waned, as it went on to be the biggest game in the UK Fantasy market in the 80's and 90's, but you needed LOTS of miniatures...<br
/> <br /> How Citadel provided all these new and different miniatures is perhaps the most interesting thing about the growth of the company in the period, and I'll write about the radical production methods next time, but for now I hope that I've shown you how Citadel Miniatures started to dominate, firstly Games Workshop, Britain's biggest game manufacturer and retailer, and secondly
the UK market itself.</div> pete the mouldmaker<br /> <br /> [2013-11-12] It's hard to look back now at
Citadel Miniatures and not see them as the all-conquering behemoth of the miniature gaming world they were to become, but in the early 80's that particular outcome was not certain by any means;
other companies could have come to the fore or the company might not have developed in the way that it did.<br /> <div> So what happened between the formation of the company in early '79 and my
formative year of '83 to turn the casting arm of a small games company into a dominant market leader?</div> <div> <br /></div> <div> Citadel's early miniatures show their roots: all those early minis
are designed almost exclusively for use alongside Dungeons & Dragons. </div> <div> Character types are copied slavishly from the AD&D books, creatures from the Monster Manual; very little is original, and where it was, as with the few monsters that travelled from the range over into new D&D books, we all knew what we were being sold, and what we were supposed to be using them for... D&D.<br /> <br /> <table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: left; margin-right: 1em; text-align: left;"><tbody> <tr><td style=
"text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/
AVvXsEgcZ_JS-R-bOVUnZbXffEe4esIF97rXAgK1CWG_u9-TykDXICzl7ecGzhKplrZQZoQJKsMH2A8iJkVtN8pW4-ofX07uG474pDwgw217GdqEV_vG_6bBLJz1cBi_Ab-4WJMNEpOvqIvWHg2G/s1600/Citadel+Blue+Catalogue.jpg" imageanchor="1"
style="clear: right; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/
AVvXsEgcZ_JS-R-bOVUnZbXffEe4esIF97rXAgK1CWG_u9-TykDXICzl7ecGzhKplrZQZoQJKsMH2A8iJkVtN8pW4-ofX07uG474pDwgw217GdqEV_vG_6bBLJz1cBi_Ab-4WJMNEpOvqIvWHg2G/s1600/Citadel+Blue+Catalogue.jpg" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">'82 catalogue </td></tr> </tbody></table> </div> <div> Which is a bit odd really, because it wasn't until much later that Citadel had a
full AD&D license... </div> <div> Grenadier Models had that license in the late 70's and early 80's in the US, but made little impact in the UK in spite of the tie-in.</div> <div>
Even the range that Citadel were set up to produce over here, Ral Partha, could (should) have gone on to become the dominant player here, as it was in the US, but again, even with a long-standing history of being associated with D&D, it slipped into the position of also-ran.</div> <div> <br /></div> <div> It's possible to go and look at what Citadel produced in the first couple of years and pick out virtually every monster and character from the D&D pantheon or its rough equivalent, but after making everything that the D&Der needed there was a natural brake on what the company might possibly make next.</div> <div> <br /></div> <div> Obviously they looked for other markets; historical miniatures were (are) a short step away, as are minis for other game systems, and
Citadel go away and try to expand all these other revenue streams as the 80's dawn... Gangsters, sci-fi, larger scale models and movie tie-ins (Star Trek) are all explored, but with little success...
</div> <div> The only thing that did start to sell more miniatures, and I mean sell more than the one of each or the few that you needed for the D&D campaigns, was the Fantasy Tribes.<br
/> <br /> <table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: right; margin-left: 1em; text-align: right;"><tbody> <tr><td style="text-align: center;"><a href="https://
/FTD9.Show.jpg" imageanchor="1" style="clear: right; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/
AVvXsEjRMQdyLtWcyqI0Aq9IreImygRWj1132S_3e1cwfrlL6CZvkE9OFK2CxosBvrKib39H4kMVzzhsCTGrkDPjXragsGnJ4ZrnmuHXScZJ1M0CsHUyEVlPRhgds65tO4CLf8yK_CuQ3NkIzq6r/s320/FTD9.Show.jpg" height="320" width="230" /></
a></td></tr> <tr><td class="tr-caption" style="text-align: center;">One of 20 variants of FTD9 Dwarf in plate-mail with sword</td></tr> </tbody></table> </div> <div> Fantasy Tribes, I feel, have all the hallmarks of what made Citadel great in the 80's, and would show the pattern which Bryan would try to repeat whenever he started a new project.</div> <div> Firstly, they were wholly original: other manufacturers may have had a dwarf or two in their range, but only Citadel had 60 different models in a Tribe. Secondly, they were collectible: where other ranges had fixed models to buy, Tribes were, it seemed, constantly changing, so that just when you thought you had them all, new variants would turn up to keep you buying. Thirdly, and this was true of all the models that Bryan commissioned, they were full of character: no bland Orc with Sword in this range, these Orcs are attacking, swinging, charging. And finally, they were great models, in a way that lots of early Citadel or American imported minis weren't.</div> <div> <br /></div> <div> But even these stand-out collections weren't bought for very much more than extra variety on the D&D table, and I doubt that the
company could have gone on from strength to strength in the way it did with just these...</div> <div> <br /></div> <div> Which is where a little bit of luck comes in handy...</div> <div> <br /></div>
<div> Steve Jackson and Ian Livingstone, Bryan's partners in Citadel and owners of the parent company, Games Workshop, had hit on the smart idea of copying the unique feature of the also-ran fantasy role-play game Tunnels & Trolls, its solo-play option, and repackaging it for a younger market as Fighting Fantasy game books... They were hugely successful, creating a publishing phenomenon and launching a whole line of best-selling books which made their authors at least properly famous, if not quite household names.<br /> <br /> <table cellpadding="0" cellspacing="0" class=
"tr-caption-container" style="float: left; margin-right: 1em; text-align: left;"><tbody> <tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/
AVvXsEjYqDxqAYcfxp5r9ESArzup4v7J0PwdqidodxlqQvrxveTF0aSTB81x14Zc-DsT607MiDp1lltj1fq4JUyjlb_RP89prK8_WS-r4kr3usPGkVgLu69W8IXl6W08iFsZP0qj2N5JQCS72n5q/s1600/figfan03o.jpg" imageanchor="1" style="clear:
left; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/
AVvXsEjYqDxqAYcfxp5r9ESArzup4v7J0PwdqidodxlqQvrxveTF0aSTB81x14Zc-DsT607MiDp1lltj1fq4JUyjlb_RP89prK8_WS-r4kr3usPGkVgLu69W8IXl6W08iFsZP0qj2N5JQCS72n5q/s320/figfan03o.jpg" height="320" width="195" /></
a></td></tr> <tr><td class="tr-caption" style="text-align: center;">The first Fighting Fantasy book I bought</td></tr> </tbody></table> </div> <div> Which must have taken the pressure off Citadel/GW to perform financially. Bryan had made another half-hearted effort to start again with his Bryan Ansell Miniatures, but by late '82, with Steve and Ian moving into new spheres and Bryan looking for new directions, a deal is struck that gives Bryan control of Citadel AND Games Workshop and allows him to take both companies forward with his direction and control.</div> <div> <br /></div> <div> Now,
the deal that I heard was struck was that Bryan would take immediate control and pay Steve and Ian £1,000,000 within 12 months. Bryan told me at a much later date that he didn't have the money when he took control, and had to make £1M in that first year to fulfil his part of the agreement, but fulfil it he did, so we can assume that 1983 was a very good year for miniatures...<br /> <br />
Next time, Let's make a million! All aboard for boxed sets, the first Compendium and Warhammer Fantasy Battles. </div> <div> <br /></div> pete the mouldmaker<br /> <br /> [2013-11-08] (80's joke in the title...)<br /> So it came as a bit of a
shock to get to work on the 8th of November 1983 and find Bob in a terrible mood. Kate warned us (Mark and me) just to stay out of his way when he was in a foul mood, so we kept our heads down and got on with whatever we had to do...<br /> <br /> Shame really, coz I'd had a terrific weekend; for the first time I'd travelled away to help out at a wargames show... and not just any wargames show, oh no... this was the BIG one.<br /> <br /> Northern Militaire was held on the 5th and 6th of November in Oldham, at the Queen Elizabeth Hall. Bob had travelled up on the Friday evening but I went on the Saturday morning with Rees (if memory serves we went up in an escort-type hire-van rented from the place where his wife worked... it was foggy on the M1 and I remember Siouxsie and the Banshees' version of Dear Prudence on the radio...)<br /> <table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: right; margin-left:
1em; text-align: right;"><tbody> <tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/
AVvXsEisuryKdAffbGBZ4McfXYutGMXz8sGZIXpOwogU7T0C4jNvfuRgsoXCm8607cVyqB4Laihu1HKdZSXO-pyGRZmSpdrRcKNluFwbX2uhNX5Hfz5NfWPa576DjrFZpvvk53lf5tawvNWIqenX/s1600/QEH.jpg" imageanchor="1" style="margin-left:
auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/
AVvXsEisuryKdAffbGBZ4McfXYutGMXz8sGZIXpOwogU7T0C4jNvfuRgsoXCm8607cVyqB4Laihu1HKdZSXO-pyGRZmSpdrRcKNluFwbX2uhNX5Hfz5NfWPa576DjrFZpvvk53lf5tawvNWIqenX/s320/QEH.jpg" height="240" width="320" /></a></td>
</tr> <tr><td class="tr-caption" style="text-align: center;">Queen Elizabeth Hall, Oldham</td></tr> </tbody></table> <br /> TTG had, by Bob's standards, a huge stand at the event, which is why Rees and
I travelled up, and Bob roped in the willing hands of Bruce Rea-Taylor to make four of us to cover the 24 feet.<br /> My section of the stand was made up of the extra stock that Bob had arranged to
bring from Citadel.<br /> That same weekend was Games Day in London, and of course Citadel/GW were directing all their efforts toward that.<br /> <br /> A deal was struck to exchange stock between
Citadel and Bob, so that we could both have a presence in, and a profit from, both events.<br /> Rick Priestley and Richard Halliwell had come to the shop in Daybrook square to bring stock for us to
take to Oldham, and also to take away TTG rules and minis for sale in London.<br /> <br /> Northern Mil. was amazing for a young'un like me; it was so BIG, a couple or three floors, and although there were only a few games on (it was primarily a modelling event), there was a much greater variety in displays, traders and public than at a normal wargames event...<br /> And boy were we busy... now in my
time I've stood trade shows like Salute or Games Day where the public have been three or four deep at the stand, but nothing came close to the two days of Northern Mil.<br /> <br /> "Used to be
better in the old place..." grumbled my Boss, "... Never recovered from the change of venue..." But if shows did get bigger and better than this, I would have been amazed.<br /> <br /> <table align=
"center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: left; margin-right: 1em; text-align: left;"><tbody> <tr><td style="text-align: center;"><a href="https://
/Games+Day+1983.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/
AVvXsEiIsn_Ua2P1WVTPrB6mhDpD-uwbO-VvjLrlKf9qSWA8uIZ0Sl09yQ_Qfm9rLFEYOLf8ufabCe6fi9TcATOEWYI6UGXwNZkNqessbljSbT2fJEPziIvBlArAAHOjaETV2MKeas1CSa_j5PYC/s320/Games+Day+1983.jpg" height="225" width="320"
/></a></td></tr> <tr><td class="tr-caption" style="text-align: center;">Games Day '83, note the date.</td></tr> </tbody></table> I remember being driven through Oldham in the dark, heading for the
hotel, and the road ran through all the old back-to-back houses, which were lit with fires and fireworks... Punch-drunk and tired, I sat drinking cola, listening to the old Chaps joke in the bar...
perfect.<br /> <div style="text-align: left;"> Sunday, more of the same... Non-stop customers... and non-stop music too... They used to play Top 20 War Film Themes over and over, all day long on the public address system... Even Bob, who liked a movie theme, would be tiring of 633 Squadron by 11am on Sunday morning...</div> <br /> I bought some minis, a second-hand Japanese Samurai army from the Bring & Buy (more stuff for Tercio).<br /> <br /> No club on Monday, a night off after a two-day event, and then into work again on Tuesday as normal.<br /> <br /> Or not...<br /> It
transpired that Bob was fuming because all the stock, minis, rules and displays that we had sent to Citadel had not been taken to Games Day; they had been left behind, and TTG would get no presence, or profit, from the event, despite having worked double-hard to do two major events in one weekend, and having taken extra staff and space to sell the Citadel stuff at Northern Mil.<br /> <br />
I don't know if Bob even spoke to Bryan on the normal Monday 'Run' or not, but as far as Bob was concerned, that was it, The End.<br /> Over that week, Bob had me take down all the Citadel miniatures stock from the rack in the shop; other things would take its place, and we would have no more contact with Citadel.<br /> <br /> So, people often say to me, "oh, the golden age of Games Workshop was such and such... 85-87, or 88-91, or mid 90's". Well, for me the golden age of Citadel Miniatures ran from the time that they started on the Fantasy Tribes (81?) until the 8th of November 1983, the day that I found out that you couldn't trust them, and that they were only looking out for No.1.<br /> <br /> And what next, dear reader? <br /> Why, I suppose we need to judge Citadel's actions in
context, so next time I'll muse on the changes at Citadel in the years of '82 and '83...<br /> pete the mouldmaker<br /> <br /> [2013-11-07] I said I was happy to have spoken to Bryan Ansell at my first wargames show, but that wasn't the first time I'd seen him, oh no, he'd been into TTG in the summer...<br /> <table cellpadding="0" cellspacing="0" class
="tr-caption-container" style="float: right; margin-left: 1em; text-align: right;"><tbody> <tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/
AVvXsEiARz-fgph_OcdIELiIypC4op3_w1Gi6nHXohQp1DBQR3SkL9vb4sysdYQaFzZ9SnUafBzXAtwl7yqp8DxIl5GWW4sTNJkAMJSuvEZOaf5NabvrglqI75zWVKJgUD5XKXitL9cM718vzA7i/s1600/Spacelady_Chop.jpg" imageanchor="1" style=
"clear: right; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/
AVvXsEiARz-fgph_OcdIELiIypC4op3_w1Gi6nHXohQp1DBQR3SkL9vb4sysdYQaFzZ9SnUafBzXAtwl7yqp8DxIl5GWW4sTNJkAMJSuvEZOaf5NabvrglqI75zWVKJgUD5XKXitL9cM718vzA7i/s320/Spacelady_Chop.jpg" width="147" /></a></td></
tr> <tr><td class="tr-caption" style="text-align: center;">Tony Yates illo</td></tr> </tbody></table> So I suppose I need to write about why Bryan and TTG were linked in those days, and what happened to end this relationship. <br /> <br /> TTG and Bryan had history going way back into the 70's. Kate had said that Bryan had first started casting miniatures in her kitchen on Acton Road in Arnold, but I am unsure whether she meant casting for Asgard or Citadel, or even why he wasn't using his own kitchen (?!?), but hey, that was the story...<br /> <br /> Bryan had been instrumental in starting Asgard in the mid-70's, with I think at least two other people, Paul Sulley being one, and had sculpted quite a number of their early miniatures, but as always, with his eye on the main chance, he'd jumped ship in the late 70's (78?) and started to work with Steve Jackson and Ian Livingstone at Games Workshop to start Citadel Miniatures.<br /> <br /> <table cellpadding="0" cellspacing="0"
class="tr-caption-container" style="float: left; margin-right: 1em; text-align: left;"><tbody> <tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/
AVvXsEgOv3ZNnwfmtWrTzH5HLatxUIn_46Zg6wl5HeUS6z_cHUYBGkGyEGlHRSeF9jMLIQK3H0q8q1L2f8dMhqp__itWGG8rgu9GLkZ-qd-wwSeerPF3BeopK_G-CaPNSbq97SX8QfQxL9VkwK0c/s1600/Robin+Hood_chop.jpg" imageanchor="1" style=
"clear: left; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/
AVvXsEgOv3ZNnwfmtWrTzH5HLatxUIn_46Zg6wl5HeUS6z_cHUYBGkGyEGlHRSeF9jMLIQK3H0q8q1L2f8dMhqp__itWGG8rgu9GLkZ-qd-wwSeerPF3BeopK_G-CaPNSbq97SX8QfQxL9VkwK0c/s320/Robin+Hood_chop.jpg" width="212" /></a></td>
</tr> <tr><td class="tr-caption" style="text-align: center;">Bryan's Robin Hood sample piece for GW</td></tr> </tbody></table> GW had a license to produce Ral Partha in the UK, and had been
importing for the past few years. Bryan, I was told by Richard, had submitted 5 self-sculpted minis to Steve and Ian and they were keen to become involved, so Citadel was founded, and started to
produce minis from a lock-up garage off High Street in Arnold.<br /> <br /> And it was a success.<br /> By the early 80's Citadel were operating out of Newark, Notts, and making a large range of
fantasy, sci-fi and historical miniatures and growing rapidly alongside Games Workshop.<br /> <br /> In 1980 Bryan had tried to get a sci-fi game/rule-set printed through Games Workshop, and although
GW (Steve and Ian) were sold on the idea, and went on to commission Spacefarers, a rule-set based around the Citadel sci-fi range, they didn't use Bryan's rules. (<a href="http://boardgamegeek.com/
boardgame/8137/spacefarers">details here on BoardGameGeek</a>)<br /> <table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: right; margin-left: 1em; text-align: right;">
<tbody> <tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/
AVvXsEgdq2bAS0vfEnm-2l4Ek4GbmIUrbtsyKgAupAy2P4A7YxZnLo9B72oc9-Xq2YtfD8CVj3RTvIe8iABsd__pPxpIvWfPLxUIK4iIA5__IsqZtSt7omvJpaUgaDn3FGWyqGvrd4prXq6l2pY1/s1600/SpaceFarers.jpg" imageanchor="1" style=
"clear: right; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/
AVvXsEgdq2bAS0vfEnm-2l4Ek4GbmIUrbtsyKgAupAy2P4A7YxZnLo9B72oc9-Xq2YtfD8CVj3RTvIe8iABsd__pPxpIvWfPLxUIK4iIA5__IsqZtSt7omvJpaUgaDn3FGWyqGvrd4prXq6l2pY1/s320/SpaceFarers.jpg" width="226" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Spacefarers rule book cover by Tony Ackland</td></tr> </tbody></table> <br /> <br /> Quite how put-out by this Bryan was I don't really know,
but regardless, within months Bryan was back with Bob, to set up Tabletop Miniatures to print Laserburn and produce a range of miniatures to support it...<br /> <br /> Laserburn was 15mm based,
which I think was a bit of a revolutionary step back then... All GW/Citadel's miniatures were in 25mm (inc Spacefarers), and maybe Bryan switched scales as a way of reassuring his partners at GW that he wasn't competing with them... or maybe he and Bob thought 15mm was a better scale for larger sci-fi battles, or possibly the move to 15mm was a trend: economic conditions generally weren't good in the early 80's, so maybe they figured a change to a smaller scale would get people buying, and 15mms were a growing part of the fantasy/sci-fi market; Asgard also produced their own 15mm ranges.<br
/> <br /> Laserburn was published in late 1980, and was quickly followed by a large miniatures range, covering all the types of troops necessary for the game. Looking back it was quite derivative: the basic game, as Bryan says on the BGG page given above, owed a lot to Western Gunfight games, and the background to many other then-current 70's sci-fi staples; the Law Officers were borrowed from 2000AD's Judge Dredd, the Imperialists were classic Heinlein Starship Troopers, and the Red Redemptionists owed more than a little to the Fremen in Dune.<br /> <br /> <table cellpadding="0"
cellspacing="0" class="tr-caption-container" style="float: left; margin-right: 1em; text-align: left;"><tbody> <tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b
/R29vZ2xl/AVvXsEjlVxs48Mrj4GZIa00gXffJoXZ3Vapkxx3Mj2BYUPaNkx7iHgRR5NQpLDZuPvFgDAYr32acLkgYMsU_PkaM5NqNw9WyPGAS0_FPmteeTwXXYB4mjSZTbeuYrAmMg_qBrIVTPagHFfsaRi9b/s1600/100lb_1ws.jpg" imageanchor="1"
style="clear: left; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/
AVvXsEjlVxs48Mrj4GZIa00gXffJoXZ3Vapkxx3Mj2BYUPaNkx7iHgRR5NQpLDZuPvFgDAYr32acLkgYMsU_PkaM5NqNw9WyPGAS0_FPmteeTwXXYB4mjSZTbeuYrAmMg_qBrIVTPagHFfsaRi9b/s320/100lb_1ws.jpg" width="263" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Law Officer (not Judge Dredd)</td></tr> </tbody></table> <br /> Tabletop Miniatures started casting this range out of the Daybrook shop, with a
machine bought from Citadel, although I think the early miniatures were both moulded and cast in Newark, with Bryan doing the sculpting duties on all the minis, including TTM's range of historical as
well...<br /> <br /> By '83, when I got to TTG, the range was going cold; Bryan had stopped sculpting and writing for Laserburn, and although he did bring 5 new miniatures when he came to the shop in July or August, these were the first to have seen the light of day for a year or so, and would be the last he did with Bob. I was told after the event that Bryan had come to sign off with TTM, handing ownership fully to Bob (& Kate) in exchange for a royalty on all his work.<br /> <br /> At this point, from my view of it in the back kitchen, it looked like an amicable split: TTM had served its purpose, Bryan was moving on to bigger things, and TTM had inherited a lot of Citadel 'staff' to work on side projects, including Rick Priestley, Tony Yates and Tony Ackland on sculpting
duties...<br /> <br /> But this wasn't really the end of Bob and Bryan's relationship, that comes tomorrow, 30 years ago...<br /> <br /> (Interested in reading my copy of Spacefarers, <a href="http:/
/www.scribd.com/doc/81678591/Spacefarers-1981">check it out here, on my Scribd page</a>)<br /> pete the mouldmaker<br /> <br /> [2013-11-06] <span style="font-size: small;">Wargames Conventions are, I
assume, as old as Wargaming as a hobby...</span><br /> <span style="font-size: small;"> </span><span style="font-size: small;">Wargaming, by its nature, and unlike say model railways or model
flight, needs groups of people to make it worthwhile, so where two or three are gathered together, then a 'Show' and the accompanying Trade are bound to follow.</span><br /> <br /> <span style=
"font-size: small;"> </span><span style="font-size: small;">TTG used to have a big calendar in the back room on which were displayed all the events that Bob would be attending in the year. The Season started in late January or early February and ran through the whole year, with a few weeks off in the summer, until the last week in November or the first in December... There would be a show almost every weekend, and Bob would attend most of them.</span><br /> <span style="font-size: small;"> </span><span style="font-size: small;">At the time names like Triples and Midland Militaire were
all new to me, I didn't really know what went off at these events, all I really knew was that on these Saturdays, Bob would be out of the shop on the weekend, taking half of the shop with him, and
Kate and the other Robert would be left alone to hold the fort.</span><br /> <br /> <span style="font-size: small;"> </span><span style="font-size: small;">Once Mark and I had started work it became
obvious what a large part of TTG's business Conventions were. We would spend the latter part of most Show-weeks getting the stock ready: rules and games all counted and boxed, miniature stock filled
to the brim and display cabinets repaired and updated with new items... and by Friday afternoon, there would be a large pile of heavily taped brown card-board boxes stacked by the door waiting for
the command from His Lordship to load-up so that he could be away that evening, or early next morning.</span><br /> <br /> <table cellpadding="0" cellspacing="0" class="tr-caption-container" style=
"float: right; text-align: center;"><tbody> <tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/
AVvXsEgyY0ulQojthlIDklgPChnd91NdvBxzsUX8wLk6UJdQmx0Ejf4TosXNBKZPcGEwf_gX74uosC8XDpzh9KhTvhlqgP4jot_dpK-chZfIJOb1OzqPnON6Vh44GNIfqmnUR9JvxMXrfVrMcbSN/s640/blogger-image-235783903.jpg" imageanchor="1"
style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/
AVvXsEgyY0ulQojthlIDklgPChnd91NdvBxzsUX8wLk6UJdQmx0Ejf4TosXNBKZPcGEwf_gX74uosC8XDpzh9KhTvhlqgP4jot_dpK-chZfIJOb1OzqPnON6Vh44GNIfqmnUR9JvxMXrfVrMcbSN/s640/blogger-image-235783903.jpg" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Balrog in constant need of fixing...</td></tr> </tbody></table> <span style="font-size: small;"> </span><span style="font-size: small;">Tuesdays
the reverse would happen: all the stock piled near the door would have to be counted, filled or repaired again and stacked away, waiting for the same to happen over and over again... Mark, whose job it had become to repair the mini display cases, would become thoroughly sick of constantly having to re-stick dragon wings or hydra heads to the fantasy range display, or tank turrets that had 'taken a knock' in transit... </span><br /> <div> <span style="font-size: x-small;"></span><br /> <span style="font-size: small;">Kate had promised Mark and me that when it came to the bigger two-day shows later in the year, Bob would take one of us with him to help, which would mean a weekend away from home.</span><br /> <div> <span style="font-size: x-small;"></span><br /> <span style=
"font-size: x-small;"></span> <span style="font-size: small;"> </span><span style="font-size: small;">But...</span><br /> <span style="font-size: small;"> </span><span style="font-size: small;">The
first major show, after the small summer pause, would not require us to go very far, as the British Nationals Championship would be held on our doorstep in Nottingham.</span><br /> <span style=
"font-size: small;"> </span><span style="font-size: small;"><br /></span></div> <div> <span style="font-size: small;">Arena, as I'm sure the event was called, was a result of the Sherwood
Foresters (of whom more later) winning the team prize at the previous year's event, and opting to host the event themselves, as was the tradition at that stage... </span><br /> <table cellpadding="0"
cellspacing="0" class="tr-caption-container" style="float: right; margin-left: 1em; text-align: right;"><tbody> <tr><td style="text-align: center;"><span style="font-size: small;"><a href="https://
/vlc-large.jpg" imageanchor="1" style="clear: right; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" height="243" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl
/AVvXsEgOyZqO8BHtU8ozPzEwYA32_Rkl199SilQL8-gMJhZ8kIvvuZ1xtHwdN4OQVbxvW9KB52BdTDikbb26jcs_Nvr31hWeOfdy1aVEZWwEQZ9w4jxRGOT_VhBKS_O7SbfVunQfY8MIuheVUSRW/s320/vlc-large.jpg" width="320" /></a></span></
td></tr> <tr><td class="tr-caption" style="text-align: center;"><span style="font-size: small;">Victoria Leisure Centre </span></td></tr> </tbody></table> <br /> <span style="font-size: small;"> </
span><span style="font-size: small;">The event itself was held at <a href="http://www.nottinghamcity.gov.uk/vlc">Victoria Leisure Centre</a> on the outskirts of central Nottingham, less than a mile
from the city centre, over two days on I think the 17th and 18th of September 1983. The venue was split into two main halls, with games and Trade in the sports hall and more games and the Bring &
Buy in the (covered) swimming pool hall...</span></div> <div> <span style="font-size: x-small;"><br /></span> <span style="font-size: small;"> </span><span style="font-size: small;">From what I
recall, Bob had set the trade stand up on the Friday evening so that when I got there on the Saturday morning there was very little in the way of work required of me for the first hour or so until
the event opened, and I had a chance to wander around... </span><br /> <br /> <span style="font-size: small;"> </span><span style="font-size: small;">The centre of the hall was given over to the
games championship, with those grass-green 6x4's borrowed from Notts Wargames Club featuring... but around the outside were other traders like TTG. Bob introduced me to Paul and Teresa Bailey, who
had the Minifigs stand; next to them were Jacobite Miniatures, who had travelled from Scotland for the weekend, and also in the hall were Dixon Miniatures, whose adverts I had seen in White Dwarf
magazine and many others.</span><br /> <br /> <span style="font-size: small;"> </span><span style="font-size: small;">Also there, taking up one side of the hall, were Citadel Miniatures. I was very pleased to speak to Bryan Ansell for the first time. He said hello and asked was I Bob's 'new boy'; I was wearing a hand-knitted jumper with the logo on, so I guess it wasn't too big a leap for him to make. I asked about what new minis they had along that day... and in front of the Citadel stand was a huge siege game run by The Players Guild using the new Warhammer Fantasy rules.</span><br
/> <br /> <span style="font-size: small;"> </span><span style="font-size: small;">And outside, were the Treasure Trap, Live Role Play people, who were offering a free weekend to anyone who could
defeat their Champion in hand to hand combat. Mark had about 10 goes at doing this and I think eventually they just gave him the prize for persistence...</span><br /> <br /> <span style="font-size:
small;"> </span><span style="font-size: small;">I don't really remember much about the weekend other than I spent a long time on my feet, serving customers with minis and rules, I left Bob to serve
the people wanting the tanks, planes and ships, as these were well beyond my knowledge... and I came away on the Sunday afternoon with a Jacobite 15mm English Civil War royalist army (with which I
hoped to start playing Tercio when I had some painted) </span><br /> <br /> <span style="font-size: small;"> </span><span style="font-size: small;">I think John Blanche won the painting
competition, with an Asgard half-troll stood on the most elaborate base I'd ever seen, it had resin as a water effect at the lower levels of it and I just had to (just HAD to) touch it to prove to
myself that it wasn't real water... </span><br /> <br /> <span style="font-size: small;"> <span style="font-family: inherit;">I can't really remember much about the games, Ancient and Medieval
using WRG 6th, Renaissance using the new edition of Tercio, Napoleonic with To The Sound of the Guns, ACW using the Newbury rules and Modern and WW2 using WRG or maybe Challenger... Who won? can't
remember... Not the Foresters, or Nottingham club, I think the over all Champions were The Bun Shop a London club, so the next years event would be theirs to organize.</span></span><!--[if gte mso
9]><xml> <w:LatentStyles DefLockedState="false" LatentStyleCount="156"> </w:LatentStyles> </xml><![endif]--><!--[if !mso]><img src="//img2.blogblog.com/img/video_object.png" style="background-color:
#b2b2b2; " class="BLOGGER-object-element tr_noresize tr_placeholder" id="ieooui" data-original-id="ieooui" /> <style> st1\:*{behavior:url(#ieooui) } </style> <![endif]--><!--[if gte mso 10]> <style>
/* Style Definitions */ table.MsoNormalTable {mso-style-name:"Table Normal"; mso-tstyle-rowband-size:0; mso-tstyle-colband-size:0; mso-style-noshow:yes; mso-style-parent:""; mso-padding-alt:0cm 5.4pt
0cm 5.4pt; mso-para-margin:0cm; mso-para-margin-bottom:.0001pt; mso-pagination:widow-orphan; font-size:10.0pt; font-family:"Times New Roman"; mso-ansi-language:#0400; mso-fareast-language:#0400;
mso-bidi-language:#0400;} </style> <![endif]--></div> </div> pete the mouldmakerhttp://www.blogger.com/profile/
17441528010939741890noreply@blogger.com0tag:blogger.com,1999:blog-6580384122112863749.post-20925712792395857882013-11-05T10:44:00.003+00:002014-03-04T10:07:53.439+00:00<!--[if gte mso 9]><xml>
<w:WordDocument> <w:View>Normal</w:View> <w:Zoom>0</w:Zoom> <w:PunctuationKerning/> <w:ValidateAgainstSchemas/> <w:SaveIfXMLInvalid>false</w:SaveIfXMLInvalid> <w:IgnoreMixedContent>false</
w:IgnoreMixedContent> <w:AlwaysShowPlaceholderText>false</w:AlwaysShowPlaceholderText> <w:Compatibility> <w:BreakWrappedTables/> <w:SnapToGridInCell/> <w:WrapTextWithPunct/> <w:UseAsianBreakRules/>
<w:DontGrowAutofit/> </w:Compatibility> <w:BrowserLevel>MicrosoftInternetExplorer4</w:BrowserLevel> </w:WordDocument> </xml><![endif]--> Tabletop's casting operation was tiny...<br /> <br /> The
whole place, shop, warehouse, casting room and Bob and Kate's living space, was situated in two three-storey Edwardian terraced houses with shop fronts... When I started they didn't need all the space
they had, so the 'house' above the second shop (55 Mansfield Rd Daybrook) was rented out to a couple of Police officers, who would come and go through the shop or back rooms at will.<br /> <br /> The
shop fronts were backed by a small room with a fireplace which I assume would have been the main living room in days gone by, but either Bob or the previous occupants had had the living space moved
upstairs and added a kitchen and bathroom on the first floor... But at the back of the old main room was what might have been the old kitchen, or scullery, a room 4 yards square that opened on to
the 'back-yard'... this was where all the casting was done.<br /> <br /> When I started they had one casting machine, an ex-Citadel, swinging-weight thing powered by a belt-driven motor. White metal
casting works by spinning a circular mould at a few hundred RPM and then dropping the hot metal into the central feed-hole and letting gravity and centrifugal force do their work...<br />
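As a purely illustrative aside (none of the figures below are from the post), the outward pull on the metal at the rim of a spinning mould can be estimated with a = &omega;&sup2;r; the RPM and mould radius here are assumed values, picked to sit in the "few hundred RPM" range mentioned above:

```python
import math

# Back-of-envelope sketch (illustrative only; RPM and radius are assumed
# values, not figures from the post): the outward acceleration a = omega^2 * r
# that flings molten metal from the central feed-hole out to the mould rim.
def rim_acceleration(rpm: float, radius_m: float) -> float:
    omega = rpm * 2.0 * math.pi / 60.0  # convert RPM to angular speed in rad/s
    return omega ** 2 * radius_m        # outward acceleration magnitude, m/s^2

# e.g. a mould spun at 400 RPM with an assumed 11.5 cm radius
a = rim_acceleration(rpm=400, radius_m=0.115)
print(f"{a:.0f} m/s^2, roughly {a / 9.81:.0f} times gravity")
```

Even at these modest speeds the metal at the rim feels tens of g, which is why, as described below, real pressure is needed to keep the mould halves clamped shut.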
<table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: right; margin-left: 1em; text-align: right;"><tbody> <tr><td style="text-align: center;"><a href="https://
/$T2eC16ZHJG8E9nyfoTqqBRFOSItg0g~~60_35.JPG" imageanchor="1" style="clear: right; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" src="https://
/$T2eC16ZHJG8E9nyfoTqqBRFOSItg0g~~60_35.JPG" /></a></td></tr> <tr><td class="tr-caption" style="text-align: center;">Saunders Spinning weight machine similar to the TTG one</td></tr> </tbody></table>
The only real issue with this is that it works too well, and pressure is needed to stop the still-hot metal from shooting out the sides of the mould. Early machines, like TTG's, had three 'towers'
placed evenly on the spinning plate, from which swung levers with weights attached which, when spinning, swung outward and levered the top plate shut. <br /> Clever huh?<br /> <br /> Well, to a certain
extent it was an ideal way to work, but unfortunately it had one big drawback... The areas in between the swinging weights would receive less pressure than the rest of the mould, and as a result
these areas would flash (excessively fill the cavities), and if these cavities were large or particularly close to the edge of the mould, the still-molten metal would shoot out of the mould and spray
the inside of the machine... and as the lids on these machines were quite low to the spinning plate, and never shut satisfactorily, the metal would spray from the machine and blast a line of cooling
lead alloy across the crotch of the operative... It didn't hurt, fortunately, but it would leave a line of metal embedded in the trousers of every caster in town... For years after, it was possible to
tell people who were working in the same job as me for other companies by the 'Caster's Crotch' they all had...<br /> <br /> In the middle of the summer of '83 TTG took delivery of another new
casting machine, and this one was a bit different. The new machine, with an electric motor driving its spinning plate directly and its pressure controlled by a pneumatic ram, was a huge step
forward. Speed and pressure were now controlled by the caster, allowing for minor adjustments to keep a warming mould spinning for longer in a day. Previously a mould would have to be rested to cool
once the swing weights could no longer apply enough pressure to keep it running without the flash becoming too bad...<br /> <br /> The new machine was delivered by MCP (Multi-Coupling Pneumatic), with
a gentleman called Ray Tutt doing the fitting, whilst his boss Mike chatted with Bob and Richard. Ray said that the machine being delivered was the first in a run of new machines which
Citadel had ordered to replace their old spinning-weight machines, and we were getting the prototype model ahead of them.<br /> <br /> The new machine was fabbo; the pressure controller cured the
flash and spitting issues almost instantly, and the dropping of the plates well below the lid height meant that if you did get a mould that spat, the metal no longer splashed into your groin...
<br /> <br /> <table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: left; margin-right: 1em; text-align: left;"><tbody> <tr><td style="text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/
AVvXsEhy8sso_aEsKHr1LrCrztOnn79yRVulJZQTkfYqUFPU-4U8NXLp3wr75MwXzEZV4zi5qsCPDIA42m-8FlUJYeBRKNLJW5tIDOkV5K1-feSdn7TgFGsWL0MlQs7qDXU3xOSnqNKRxvNebI3R/s1600/byc7.jpg" imageanchor="1" style=
"margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/
AVvXsEhy8sso_aEsKHr1LrCrztOnn79yRVulJZQTkfYqUFPU-4U8NXLp3wr75MwXzEZV4zi5qsCPDIA42m-8FlUJYeBRKNLJW5tIDOkV5K1-feSdn7TgFGsWL0MlQs7qDXU3xOSnqNKRxvNebI3R/s320/byc7.jpg" height="319" width="320" /></a></
td></tr> <tr><td class="tr-caption" style="text-align: center;">BYC7 sculpted by Ali Morrison</td></tr> </tbody></table> TTG kept the old machine, so across that summer there were pretty much always
two of us working in the tiny room... As I mentioned in the last post, Rees Taylor was one of the other casters, who I was told was just making up a little pin-money whilst being a full-time Father,
but my main workmate was Richard Evans, a 27-year-old local man, who I don't think actually spoke to me for over a week or so once I'd started...<br /> He and I would become firm friends over
the next four years while I worked at TTG.<br /> (more on Richard later, a very interesting character, who I was to discover had his own history in the Wargames-world)<br /> <br /> Casting isn't a
bad job, it's not difficult to master, but it is hot and heavy work, and requires long spells at the machine if the job is to be done efficiently... and although I'd jump at a chance to do anything
else at TTG if the chance arose, I didn't mind if I had to stand casting all day... it gave the two of us in the tiny room time to chat, and listen to music... <br /> <br /> Oh, and the first mini I
ever cast... Well, it was one of these... BYC7 Asiatic light-horsemen with bow and javs.pete the mouldmakerhttp://www.blogger.com/profile/
17441528010939741890noreply@blogger.com0tag:blogger.com,1999:blog-6580384122112863749.post-18644484606368200512013-11-04T10:12:00.000+00:002013-12-03T09:16:59.111+00:00They'll be there tonight you
know...<br /> <br /> And not just tonight, as if I'd chosen to write this on the one night a month or for the first time in ages, when they would be there....<br /> <br /> They are always there...
well maybe not always... Christmases, and Bank Holiday Mondays, there wouldn't be a club, but every other week, like clock-work at 6.30pm the place is open for walk in gamers and regulars
alike.<br /> <br /> <a href="http://nottinghamwargames.co.uk/home/4578685829">Nottingham Wargame club</a> dates back to the late 1960's, but I first started to go when I started work at TTG in the
summer of '83. Bob would offer to take me and Mark, as he was virtually driving past where we lived, to collect Richard from his mom's, to take him to the club... so, keen-as-mustard types that we
were, we hitched along to see what the crack was...<br /> <br /> The club itself was at that time on the top floor of this building, <span id="page_meetings_fJVPm6pR_rTRg9ZIOrhOI_P5_C0"><a href=
"http://www.ournottinghamshire.org.uk/page_id__518_path__0p31p39p67p120p.aspx">Queens Walk Community Centre</a>, in Nottingham's less than salubrious Meadows area... up three or four flights of
stairs, which didn't help those members who were carting 25mm cavalry armies in large tool boxes, to the large room at the rear of the building, where there were plenty of trestle tables on which
grass-green 6' x 4' chip-board tops were placed for the games to take place... <table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: right; margin-left: 1em;
text-align: right;"><tbody> <tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/
imageanchor="1" style="clear: right; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/
width="239" /></a></td></tr> <tr><td class="tr-caption" style="text-align: center;">Queen's Walk view of the Community centre</td></tr> </tbody></table> </span><br /> <br /> <span id=
"page_meetings_fJVPm6pR_rTRg9ZIOrhOI_P5_C0">...and to tell you the truth, this place was probably the first place I'd seen grown men playing wargames... I think I'd seen the <a href="http://
en.wikipedia.org/wiki/Callan_%28TV_series%29">Callan movie</a> at this point, and I was kind of expecting retired Brigadier types with tweedy jackets and pip-pip attitudes, but this was all a bit
different... Blokes, normal blokes, some of whom I'd seen from the shop, sitting behind units of tiny troops, measuring with expandable tape measures and either cursing their luck or looking smugly
at their dice... </span><br /> <br /> <span id="page_meetings_fJVPm6pR_rTRg9ZIOrhOI_P5_C0">I think that Mark and I just watched the first week, I don't remember playing, but in the following
weeks he and I would bring Bob's old Airfix Napoleonic (of which more later) and we'd have a game or two with those...</span><br /> <br /> <span id="page_meetings_fJVPm6pR_rTRg9ZIOrhOI_P5_C0">In
those days there could be 20 to 30 people gaming on any one week, and of course this meant that there was a large variety of games to get involved in; there was of course WW2 with more Airfix plastics,
and other periods that were new, Ancients, Medieval and Renaissance games, as well as others with metal Napoleonics and micro tank games of 'Ultra-Modern' and more WW2... Loads of stuff, and to add to
the variety, each of the major periods also had a choice of rule-sets to use; Wargames Research Group (WRG) and TTG had rules for all, and others would surface from Newbury or Skytrex or other
independent Wargames groups... </span><br /> <br /> <span id="page_meetings_fJVPm6pR_rTRg9ZIOrhOI_P5_C0">Players tended to like one rule-set for one period, which could reduce your choice of
opponent, but usually if you fancied a game against a particular player you could find something compatible to play, or if you wanted to play a particular type of game there were plenty of players
willing to play along...</span><br /> <span id="page_meetings_fJVPm6pR_rTRg9ZIOrhOI_P5_C0"><br /></span> <br /> <table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: left;
margin-right: 1em; text-align: left;"><tbody> <tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/
AVvXsEii2jg-WJT3tN1Y-JQ3O0GWD-peP2keq1xMrMhqB4zv30mu-dg9-oyuzlTZ8464wewAFy0-iPrBzEYDPm4sWoKGIa0COVSglKOn0pdiXNaIjvu3hlqXM0hCs_ZuU01wj2RkkuJ5yjfllDYW/s1600/callanmovie.jpg" imageanchor="1" style=
"clear: left; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/
AVvXsEii2jg-WJT3tN1Y-JQ3O0GWD-peP2keq1xMrMhqB4zv30mu-dg9-oyuzlTZ8464wewAFy0-iPrBzEYDPm4sWoKGIa0COVSglKOn0pdiXNaIjvu3hlqXM0hCs_ZuU01wj2RkkuJ5yjfllDYW/s1600/callanmovie.jpg" /></a></td></tr> <tr><td
class="tr-caption" style="text-align: center;">Callan and Lonely</td></tr> </tbody></table> <span id="page_meetings_fJVPm6pR_rTRg9ZIOrhOI_P5_C0">Now I suppose that every largish town in the UK, and
every city, has its equivalent club to NWG. London has at least two, Birmingham and Manchester a couple, Liverpool, Leeds, Glasgow and many more, and by the very fact that they are open to new members,
they act as a starting point for very many gamers, young and old, who might otherwise struggle to get into the hobby, but it is this very openness which is the downside of them. For every cool Callan
or retired Brigadier, there has to be a Lonely, and this bottom end can be a bit off-putting... </span><br /> <span id="page_meetings_fJVPm6pR_rTRg9ZIOrhOI_P5_C0"><br /></span> <br /> <div style
="text-align: left;"> <br /></div> <div style="text-align: left;"> <br /></div> <div style="text-align: left;"> <span id="page_meetings_fJVPm6pR_rTRg9ZIOrhOI_P5_C0">But...</span></div> <span id=
"page_meetings_fJVPm6pR_rTRg9ZIOrhOI_P5_C0">If you could stand the smell of 'ripe' wargamer on a hot summer evening then NWC was a great place to be, and I'd meet loads of folks to game with, many of
whom, like Steve Bruce, Keith Tate, Karl Tebbe (who was running a role-play group downstairs) and Steve Clark, will crop up again in connection with my future working life, and others; Andy Revel,
Andy 'Nick' Nicholson, Gary, Chris Thorn, who I still say hello to as I pass them at shows... </span><br /> <span id="page_meetings_fJVPm6pR_rTRg9ZIOrhOI_P5_C0"><br /></span> <span id=
"page_meetings_fJVPm6pR_rTRg9ZIOrhOI_P5_C0">And after the club... off to the pub... the Queen's Hotel near the station, which is now a carpet warehouse, for a Britvic 55 (well, I was only 16) and a
debrief of the evening's games... before Bob ran us home in the van... perfect... </span><br /> <span id="page_meetings_fJVPm6pR_rTRg9ZIOrhOI_P5_C0"><br /></span> <span id=
"page_meetings_fJVPm6pR_rTRg9ZIOrhOI_P5_C0">Ok, so then, for the next four years of this blog, you can take it that on any given Monday evening, I'll be there... a tool box full of soldiers,
expandable tape-measure in hand, either cursing or hooting with joy, at the dice rolls in front of me... I am a Wargamer.</span><br /> <span id="page_meetings_fJVPm6pR_rTRg9ZIOrhOI_P5_C0"><br /></
span> <span id="page_meetings_fJVPm6pR_rTRg9ZIOrhOI_P5_C0">Strange coincidence time... Whilst looking on t'web for details of Nott's Club I noticed that the name given for contact is Rees Taylor, who
I think is now Chair of the club, but in '83 he was one of the two people I worked alongside in my first few days as a Caster at TTG... It's not a small world, it's a Miniature world...</span><br />
<span id="page_meetings_fJVPm6pR_rTRg9ZIOrhOI_P5_C0"><br /></span> <span id="page_meetings_fJVPm6pR_rTRg9ZIOrhOI_P5_C0">Next time... Casting...</span>pete the mouldmakerhttp://www.blogger.com/profile
/17441528010939741890noreply@blogger.com0tag:blogger.com,1999:blog-6580384122112863749.post-31450988810798978492013-11-01T10:39:00.000+00:002014-04-11T09:42:43.370+01:00As I implied in the last post,
TTG was shut on Mondays, Bob would often not arrive back from Wargame Shows until Sunday afternoon, and I suppose that it was time-off to do banking and paperwork, without the shop bell ringing, or
interruptions on the telephone...<br /> <br /> Also on a Monday, Bob would get back in the van and take advantage of Nottingham's place in the Wargames world to get out to see other companies in our
area.<br /> <br /> <a href="http://en.wikipedia.org/wiki/Nottingham">Nottingham</a> was not, as yet, known as the British Lead-Belt, a term which I don't think I heard first until the advent of the
internet in the late '90's, but it was ideally placed, in the middle of three or four other little centres of miniature production.<br /> Asgard, as I mentioned, were in the City, not quite the centre
but in the city nevertheless, as were TTG's printer, Trent Printers, in the Meadows area. To the south was Loughborough, home to Skytrex, manufacturer of small-scale tanks, boats and planes for the
wargames trade, as well as the agents for Heritage minis in this country, and of course to the north-west in Newark were Citadel miniatures, the big-boys of the hobby even then...<br /> <br /> <table
align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: left; margin-right: 1em; text-align: left;"><tbody> <tr><td style="text-align: center;"><a href="https://
/vw_transporter_2.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/
AVvXsEgnsKApCaA_7MKsMmpKgjZ6m2gMCflaf1zWeJ-s9spLeyX8V8O8-TtaKrqcASPCX6Z1rZu4cncDG6naduJlRFnfNbeVGh8AlIw-5hOe1BQrBmLoi7Fp9XK2s6HLtz09or7JLVN_hPPQC01W/s320/vw_transporter_2.jpg" height="240" width=
"320" /></a></td></tr> <tr><td class="tr-caption" style="text-align: center;">VW T2 Transporter</td></tr> </tbody></table> So Bob would jump in the van, and trundle off to see these other companies
on a Monday afternoon, bringing back rumours and news from them, as well as picking up new stuff and out-of-stock items to fulfil mail-orders back at HQ.<br /> <br /> <div style="text-align: left;
"> It strikes me now just how much Bob devoted his life to the wargames industry, working all week at mail-order, spending his evenings typing rules in preparation for them being printed,
driving on Friday or Saturday to a show, standing all day (sometimes two days), and then driving home, only to jump into the van again on Monday to head-out on to The Run, to see all these other
people.</div> <br /> Amazing...<br /> <br /> But there was one more thing to fit into Bob's day-off (?!?), and that was Nottingham Wargame Club... and that dear reader is where our travels will
lead us, next time...pete the mouldmakerhttp://www.blogger.com/profile/
17441528010939741890noreply@blogger.com0tag:blogger.com,1999:blog-6580384122112863749.post-55872901844564778772013-10-31T10:40:00.000+00:002013-12-03T09:17:45.869+00:00So after two or three weeks of
working on Saturdays, I knew I wanted a job at TTG.<br /> <br /> I wrote a letter asking if they had any vacancies, despite the fact that it would have been easier just to ask face-to-face, but that
was what I'd been told in school, so that's what I did, best hand-writing and everything...<br /> <table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: right; margin-left:
1em; text-align: right;"><tbody> <tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/
AVvXsEjV7JW9ScfzRgg56tIWHViMYxrp9IlSHwc2wG6gj8CCN_dN9jEdX6gFWm1XXoHXsiX9rAAV2dT1Ti05-LdRJy_XxDWyKOLI6lyolduyJLMBRNNBrNU_IE0gm06izThtFDlUDijEkGKkoiQy/s1600/Thulg2.jpg" imageanchor="1" style="clear:
right; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/
AVvXsEjV7JW9ScfzRgg56tIWHViMYxrp9IlSHwc2wG6gj8CCN_dN9jEdX6gFWm1XXoHXsiX9rAAV2dT1Ti05-LdRJy_XxDWyKOLI6lyolduyJLMBRNNBrNU_IE0gm06izThtFDlUDijEkGKkoiQy/s320/Thulg2.jpg" width="249" /></a></td></tr> <tr>
<td class="tr-caption" style="text-align: center;">Thulg, illo by Tony Ackland.</td><td class="tr-caption" style="text-align: center;"><br /></td><td class="tr-caption" style="text-align: center;">
<br /></td></tr> </tbody></table> <div style="text-align: right;"> <br /></div> The next time I was in the shop after school, Kate said...<br /> "I got your letter... it's left me in a bit of a
dilemma..."<br /> <br /> "Why?" I whimpered, preparing myself for a big dose of rejection...<br /> <br /> "Well," she continued, "your mate Mark asked me for the same thing yesterday as well..."<br />
<br /> My life hung on one sentence...<br /> <br /> "...but, I think there is something we can do..."<br /> <br /> TTG were planning on going through a bit of an expansion; they had started a
miniatures range, which was making the 15mm Laserburn minis, a few 25mm sci-fi and a small range of Dark Age 15mm's,<br /> which they had been licensing in the USA to a company called Alliance
Miniatures.<br /> <br /> Now, it had come down the grape-vine that another US company, Heritage Miniatures, were going bust, and that Alliance in the States wanted to buy up the failed company and
license them back to Tabletop for production here in the UK. Bob was already selling quite large numbers of the Heritage Napoleonics through the shop and through mail-order, so picking up on an
existing range would have doubled their miniature output in one swoop.<br /> So if the deal went through, Kate was sure that there would be work for both Mark and myself, in the newly expanded
Tabletop Miniatures.<br /> <br /> As far as I remember, the deal was still to be finalized in the US, but Kate said if I could do a few days casual work, in the casting room, to see if I was up to
the task, then the job would be mine when I finished school...<br /> Casting???<br /> Well I'd seen the machine and moulds in the back-room but I'd never done it at that time, but yes, " I can do
that" as Yosser Hughes would have said, "Gizza job."<br /> <br /> As it turned out, the deal with Heritage fell through; someone else bought the failing/failed company and their big-selling Napoleonic
range would remain with Skytrex (the UK agent) for a while yet, but Bob, indomitable as he was, made his mind up overnight, with the aid of Alliance in the US, that TTG would start their own range of
15mm Napoleonics, using their great young sculptor Aly Morrison, who was already working on a Medieval range of 15mms.<br /> <br /> If anyone reading this has any more details about Alliance or
Heritage in the early 80's I would be delighted to hear from you... I owe my Life in Miniatures almost directly to these two American companies, and I'd love to find out just exactly what went off in
June or July of '83. I heard that Alliance were outbid; is this the truth? Who did pick up Heritage? What happened to them? I don't think that they are still out there... all information gratefully
received...<br /> <br /> So, short of doing a few trial days in the casting room, I had a job... £35 a week for 5 days, 40 hours Tuesday to Saturday.pete the mouldmakerhttp://www.blogger.com/profile/
17441528010939741890noreply@blogger.com2tag:blogger.com,1999:blog-6580384122112863749.post-25847709066487128242013-10-30T09:52:00.000+00:002013-12-03T09:18:07.422+00:00I did already have a Saturday
job, in late February of '83, Mark had got me a job on <a href="http://en.wikipedia.org/wiki/Arnold,_Nottinghamshire">Arnold</a> Market, working for a flower seller, but in truth I hated it...
fetching and carrying for a couple of '80's barrow boys in all weathers was no fun at all, so after working on the stock take, I asked Kate if they needed a new Saturday-boy to fill in for Robert
(whose second name I can't remember), the lad who had done the job for the last couple of years and who was finishing his A levels and heading off to Bristol for Uni...<br /> <br /> <table cellpadding=
"0" cellspacing="0" class="tr-caption-container" style="float: left; margin-right: 1em; text-align: left;"><tbody> <tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/
imageanchor="1" style="clear: left; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/
AVvXsEgAqur5E51DuRw7MNwjS2Dfo1WN7kYUWs_FkkGjsGFOfK3hax-7G50HnEsVoEk84mg3CN8ZlD8hKCkyWDGoRH0QOQsxZNx8yTvTofe9UYc-_Zf7O5FVsW13mAd4t_9vjlbOGetfHoL6v6zU/s320/GoldLabelBarleyWine.jpg" width="188" />
</a></td><td style="text-align: center;"></td><td style="text-align: center;"></td></tr> <tr><td class="tr-caption" style="text-align: center;">an afternoon tipple</td></tr> </tbody></table> <br />
Kate gave it a thought, I assume asked Bob, and said yes. The Saturday post was all about watching the shop whilst Kate got on with day-to-day life, which from what I remember was sitting with her
feet up, drinking coffee, or after 2pm a barley-wine, and reading the paper... Bob, most weekends, would be away at a Wargame Show (a what?), so Kate liked the idea of not sitting in the shop all day
whilst he was gone.<br /> <br /> 'Watching the shop' suited me down to the ground...<br /> Plonked on a tall stool behind the counter, I would sit and read game books, or White Dwarf, or whatever
came to hand, and wait for the bell to ring to announce the entrance of a customer...<br /> Saturday Mornings wouldn't be too busy, opening at 9, Kate would fulfill whatever mail-order she could that
had arrived that morning, but mostly it was only a light trip to the post office before the last collection at 11.00am, and then a day of waiting for customers...<br /> <br /> Trying to remember back
to those early weekends, I don't think we ever took over £150, some weeks much less, which doesn't sound like a lot of money these days, but it could be quite hard work when we were selling 15mm
minis for 7p (25's for 30p)... and even a big sale, a boxed game or D&D book, might only be £8 - £10, getting to £150 wasn't easy...<br /> <br /> The thing that made it for me was the customers,
mostly they were fabulous; people wanted to be enjoying themselves when they arrived at the shop, so every time the bell went, there would be another happy Wargamer delighted to have found a little
Aladdin's cave of stuff... <br /> And OK, we did have our share of 'characters' through the place (more of whom later), but mostly customers were bright, knowledgeable and funny, and I couldn't
think of a better way to spend my time on Saturdays than fishing around in the figure cabinets for missing T72 turrets, or dusting down copies of Starship Troopers, or whatever else the customers
wanted...<br /> <br /> Not only was I doing something well within my skill range, I had started to enjoy it too.pete the mouldmakerhttp://www.blogger.com/profile/
17441528010939741890noreply@blogger.com9tag:blogger.com,1999:blog-6580384122112863749.post-85937411767379899802013-10-29T10:00:00.003+00:002013-10-29T10:45:30.595+00:00<div style="text-align:
justify;"> <table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: right; text-align: right;"><tbody> <tr><td style="text-align: center;"><a href="https://
/Stocktaker.gif" imageanchor="1" style="clear: right; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" height="240" src="https://blogger.googleusercontent.com/img/b/
R29vZ2xl/AVvXsEg38EfGbnnV4xJPWzs87A-04vmKsC4ioC1sECIK3Zau_Bt4sCbfGVWtZwkhK50ZWIgD0n_qLmCvW5zkjSJfwUtbp_7BhezDMvoU_tM_9WcrELuO-2G_ShkyuKL_8i_d-AZcAWEvUf9XEIGo/s320/Stocktaker.gif" width="320" /></a></
td></tr> <tr align="center"><td class="tr-caption">Pathetic Stock-taker!</td></tr> </tbody></table> <div style="text-align: left;"> Right-o then, fast forward a couple of years, '81 to '83: me and my
best mate Mark Weston were in and out of TTG two or three times a week. Several things happened in this two-year period, expansion and Laserburn to mention two, but I'll get to these later... </div> </
div> <div style="text-align: left;"> <br /></div> <div style="text-align: left;"> In March of '83 Kate Connor asked me and Mark if we'd like to help with the stock taking in the shop. He and I leapt
at the chance. I think that we did two days, Tuesday the 29th and Thursday the 31st, just before the UK Tax year-end in April...</div> <div style="text-align: left;"> <br /></div> <div style=
"text-align: left;"> We arrived at 9.00 and after coffee and a chat about why we were doing the count, we set to totaling up box games, and tiny tanks, Citadel minis and Davco ships, and everything
else they had in what amounted to the warehouse space in the back of the place. TTG did quite a number of rule sets and micro-games, and all these had to have their components counted: books, QR,
record and counter sheets...</div> <br /> At lunch time Kate fed us all, Mark and me, and the other chap who was working full-time in the casting room, Richard Evans, something she would continue to
do whilst we worked for Her/Bob.<br /> It didn't really strike me at the time, but it was this kind of small thing that made work feel like home, they didn't have to do it, but they did, and even in
later years when we 'workers' stopped using Kate's kitchen and living room as a canteen, they continued to provide cash for us to buy food, to cook in the work's kitchen...<br /> <br /> I don't
remember what we got paid for the two days, or why we weren't in school for that matter, maybe we were on holiday or maybe school was winding down for 5th year exams, but what I do remember is
finishing on the second day and being given a little handful of folding cash.<br /> <br /> <table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: left; margin-right: 1em;
text-align: left;"><tbody> <tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/
AVvXsEhY8ZePyqxsDUWeVeQMWvyIMAvPYfCwQeppJsh5A-9ilrw8nQgLNHSFxiDbwk6UPjXrcSW7IxZS99VCby3vnxfMeUZAhw-atNFkWoPDbyNmRu4ECJ3tpWLkKxkKCiYJOkcZ5ltqJ5XklqZM/s1600/Sewerville.jpg" imageanchor="1" style=
"clear: left; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/
AVvXsEhY8ZePyqxsDUWeVeQMWvyIMAvPYfCwQeppJsh5A-9ilrw8nQgLNHSFxiDbwk6UPjXrcSW7IxZS99VCby3vnxfMeUZAhw-atNFkWoPDbyNmRu4ECJ3tpWLkKxkKCiYJOkcZ5ltqJ5XklqZM/s1600/Sewerville.jpg" /></a></td></tr> <tr><td
class="tr-caption" style="text-align: center;">illo by Tony Yates</td></tr> </tbody></table> On the way out that evening, I grabbed a couple of Laserburn scenarios that I wanted, Sewerville
shootout and Tarrim Towers heist, and asked Bob<br /> "...how much?"<br /> "Oh you can have those", the great man said...<br /> <br /> Money and free games, just for standing around in the shop all
day, looking at whatever they had...<br /> I think I'd found something in my skill-range...<br /> Result!<br /> <br /> Now the only thing was to turn a couple of days casual work into a Career...pete
the mouldmakerhttp://www.blogger.com/profile/
17441528010939741890noreply@blogger.com2tag:blogger.com,1999:blog-6580384122112863749.post-60187456064549960172013-10-25T10:40:00.002+01:002014-03-04T10:19:57.309+00:00I've always been of the opinion
that <a href="http://en.wikipedia.org/wiki/Daybrook">Daybrook Square</a> was the centre of the whole wide world, a fact proved to me in early '81, when a wargames shop opened there, right on my
doorstep, within 100 yards of where I had first played with Airfix Knights and Astronauts on my Grandma's front room carpet...<br /> <br /> <div class="separator" style="clear: both; text-align:
center;"> <a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/
AVvXsEgmmQvKwrPpDCGhXeByH3Y02sosqSKu6TMh6ThvuooeFrEBWhnjkpRZ8H3TcDio1O6Akbn4UpBt8Lpxj4dgyL_hgv_cXF5T9g2WlheySWsBgbJ3iGyfgaaWVRSKsjTPyCgtBAPjer6Z3Z2U/s1600/TTGlogo2.jpg" imageanchor="1" style=
"margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/
AVvXsEgmmQvKwrPpDCGhXeByH3Y02sosqSKu6TMh6ThvuooeFrEBWhnjkpRZ8H3TcDio1O6Akbn4UpBt8Lpxj4dgyL_hgv_cXF5T9g2WlheySWsBgbJ3iGyfgaaWVRSKsjTPyCgtBAPjer6Z3Z2U/s1600/TTGlogo2.jpg" /></a></div> <div style=
"text-align: center;"> <br /></div> <br /> Once again I think Andy Black was the bringer of the great news, he must have had to walk past it that morning to get to school and by the time D&D club
started at dinner time it was pretty much old news that we had our own shop within walking distance...<br /> <br /> <table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float:
left; margin-right: 1em; text-align: left;"><tbody> <tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/
imageanchor="1" style="clear: left; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/
height="200" width="123" /></a></td></tr> <tr><td class="tr-caption" style="text-align: center;">Images stolen from Richard Scott</td></tr> </tbody></table> My first memory of going inside was
one tea-time after a dentist appointment with my mother...<br /> ding-ding-ding, went the door bell on entry and we were greeted by a friendly blonde lady, Kate Connor, behind the counter who
explained that they had just opened, after working out of their house on Acton Rd, Arnold for years.<br /> <br /> My Mum and Kate chatted for a while whilst I shot to the figure racks to see what they
had...<br /> <br /> And they had loads of stuff, everything Citadel had; Adventures, Monsters, Historicals, plus loads of Ral Partha and others...<br /> <br /> <table cellpadding="0"
cellspacing="0" class="tr-caption-container" style="float: right; margin-left: 1em; text-align: right;"><tbody> <tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/
imageanchor="1" style="clear: right; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/
AVvXsEgMGsgPhT-AmK_pz0emXmWwj8Ski4RNJ_9nT0QdoVkNEfbDF8T9Xb37BqqG050l5ymjKJU2R5k-HJOWQb-8lzpc855h73RvujXrpmo6CwcFFo0ibZrOG5gGZy0FHd5y8-bcn9G3MBmbaf9e/s200/fa01-1-fighter-1-rs%5B1%5D.jpg" height="195"
width="200" /></a></td></tr> <tr><td class="tr-caption" style="text-align: center;">NOT my painting</td></tr> </tbody></table> The shop was also full of other stuff, plastic kits and modelling
supplies which Kate later told me had been bought in to fill out the space, and also Dungeons and Dragons books and Modules, rules from other people, and 'Wargames Miniatures', tanks for WW2,
soldiers for Napoleonics and ACW, none of which I'd ever seen before... and board games, loads of them...<br /> <br /> But, on that first visit I only had eyes for the fantasy figures, Kate lent me a
chair to stand on, so I could reach the top of the rack and from there I picked my first ever Citadel miniatures... a slime beast with sword (FF2), and a Fighter in plate-mail (FA1), amongst them...
<br /> <br /> From then on, for the next couple of years, I'd cycle through Arnold Park and down to TTG after school and spend half an hour or so, going through everything fantasy and sci-fi they
had... I knew their stock as well as they did... which was handy...<br /> <br /> Next, Stock taker!<br /> <br />pete the mouldmakerhttp://www.blogger.com/profile/
17441528010939741890noreply@blogger.com2tag:blogger.com,1999:blog-6580384122112863749.post-31654649652843593592013-10-23T10:54:00.000+01:002013-12-03T09:18:30.185+00:00<div style="text-align: center;
"> <br /></div> <div class="separator" style="clear: both; text-align: center;"> <a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/
AVvXsEi7kGjdXp5dJyY5HFboaeehzaw4-Bllg2-SJeJPN9uYm0qIJUSI-3bEIpMJCDNvbahx7IlBGy313YmpzGPYr9EcFJuchpf8v7w97_s1RmosP3Xyfdz0I4-qmf6SpMVvFvmpuX65l_1RNBqX/s1600/Asgardlogo.jpg" imageanchor="1" style=
"margin-left: 1em; margin-right: 1em;"><img border="0" height="194" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/
AVvXsEi7kGjdXp5dJyY5HFboaeehzaw4-Bllg2-SJeJPN9uYm0qIJUSI-3bEIpMJCDNvbahx7IlBGy313YmpzGPYr9EcFJuchpf8v7w97_s1RmosP3Xyfdz0I4-qmf6SpMVvFvmpuX65l_1RNBqX/s320/Asgardlogo.jpg" width="320" /></a></div> <br
/> Ok, so my first visit to Asgard was a bust... <br /> <br /> But luckily I was making better friends with the lads at the D&amp;D club and on one Saturday Simon Maze suggested that I go with him
into Nottingham to have a look at the place...<br /> <br /> <a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/
AVvXsEgO7-9VgFuoDHc_40pGVZJNnCJj4rjVEjwWD6zpfauNuDyj8ehXEJcFzB-rveaWEaTqLDUWRlFAYBmCwpbeye2OOPtY99OMo-fnRXrtrhuf3C-I3SFmDB5e8YuuDyBTzoL2bwqo5zolrNZe/s1600/EdwardsTheGhoul.jpg" imageanchor="1" style=
"clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="400" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/
AVvXsEgO7-9VgFuoDHc_40pGVZJNnCJj4rjVEjwWD6zpfauNuDyj8ehXEJcFzB-rveaWEaTqLDUWRlFAYBmCwpbeye2OOPtY99OMo-fnRXrtrhuf3C-I3SFmDB5e8YuuDyBTzoL2bwqo5zolrNZe/s400/EdwardsTheGhoul.jpg" width="254" /></a><br />
Now from what I remember this was probably my first trip into Nottingham on my own, OK I wasn't on my own, I was with Simon, but without a parent, if you see what I mean...<br /> <br /> I went to his
house in the morning, he lived a mile or so from where I did, and we caught the bus into the City Centre... Simon had said that we should save our bus fare and walk, but Nottingham seemed like a
million miles away to a lad of 13 so we spent the 7 or 8p that was the cost of the ride and got into the city as quickly as the bus would carry us...<br /> <br /> The Asgard shop I first visited was
on Commerce Square, which I was led to believe was their second shop in roughly the same area of the City, off High Pavement, in what was then quite a run-down area called The Lace Market.<br />
The Shop, which was really nothing more than a front to a warehouse or old mill, was up a couple of big stone steps, with what I assumed was a little workshop and storage space to the rear...<br />
The walls were lined, as was the fashion in those days, with a large areas of 'peg-board' racks, on which were hung all the miniatures they had in stock... Some Citadel, mostly fantasy adventures,
some Ral Partha, and loads of Asgard minis they had made on the premises... and that was about it... No painted minis that I can remember, no gaming tables, no racks of rules and modules, just minis
and a few old copies of White Dwarf magazine...<br /> The chap behind the counter, I later learnt, was Paul Sulley, who at this time owned Asgard...<br /> <br /> I'd seen White Dwarf at the D&amp;D
club, someone would always have the latest copy, but a back issue took my fancy, so I came away with one mag and one mini... The front cover of the mag that had taken my interest was issue 20
something, with this excellent Les Edwards Ghoul on the Cover...<br /> <br /> And the mini... well it was this Ogre, <a href="http://www.miniatures-workshop.com/lostminiswiki/index.php?title=
Image:Asgard-fmonsters-fm63.jpg">FM63</a>, a cracking model with tonnes of character...<br /> <br /> <div class="separator" style="clear: both; text-align: center;"> <a href="https://
imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/
AVvXsEiaEnEXsKAbLSOEaZhyphenhyphenOdk-EdaB9IgOqSIMma9N-wl6wG_3BJp-HbFHgViJ4-a6haa10j_K8i1EvCKqrbRsYYDp_1xfMziEYQ3pjTJ-U9QlFEJk7m_s2GJ6jiEzzjBpqdyO1WeMXw7Vvce1/s320/Asgard-fmonsters-fm63.jpg" width=
"174" /> </a></div> <div class="separator" style="clear: both; text-align: center;"> <br /></div> <div class="separator" style="clear: both; text-align: left;"> So, I'd broken my duck with
Asgard, it really did seem like a cool place, hidden away as it was, filled with all this stuff, and inhabited by what looked like an 'interesting' crowd of people... but little did I know, at
this point, that the next time I was going to set foot in a wargames shop it wouldn't be Asgard but a new shop, almost on my doorstep... TTG was just about to come into my life..</div> pete the | {"url":"https://life-in-miniature.blogspot.com/feeds/posts/default","timestamp":"2024-11-10T01:00:53Z","content_type":"application/atom+xml","content_length":"251210","record_id":"<urn:uuid:f3f69e5e-846a-4ba8-ac15-01cd4f65c68b>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.3/warc/CC-MAIN-20241110005602-20241110035602-00142.warc.gz"} |
The architecture of M. Vitruvius Pollio
Full text: Vitruvius: The architecture of M. Vitruvius Pollio
quantity of water it had caused to overflow was not so great; but was as much less as the
magnitude of the mass of gold was less than that of the same weight of silver: lastly, filling
again the same vase with water, he put therein the crown itself, and found that more water
was displaced by the crown, than by the mass of gold of the same weight: so that from
the water displaced by the crown, more than that by the mass, he discovered by calculation
the quantity of silver mixed with the gold; and thus detected the fraud of the workman.
Let us now transfer our attention to the inventions of Architas the Tarentine, and of
Eratosthenes of Cyrene, who have by their mathematical knowledge made many discoveries
useful to mankind: and although for other inventions they may be applauded, for the solution
of the following problem they are chiefly celebrated. Each undertook to solve, by different
methods, the response uttered by Apollo in Delos, to make an altar like his, but containing double
the number of cubic feet; and that, thereafter, those who might be in that island should be
freed by the religion. This Architas solved by the description of the hemicylinder, and
Eratosthenes by the mechanism of the mesolabium.
(2*) The altar of Apollo at Delos was a cube, and
the proposition was to find the measure of another cube,
whose quantity should be exactly double that of the given cube.
If a cube be formed, whose side is double that of the
given cube, it will contain eight times the cubical quantity,
being the fourth number of a geometrical series increasing
in a duplicate ratio, as 1. 2. 4. 8, and of which
the cube required is the second number of the series. It
is said, that Hippocrates, reflecting upon this principle,
reduced the proposition to the finding two geometric mean
proportionals between two given lines, one equal to the
side of the given cube, and the other, double thereof; the
least of which mean proportionals will be the measure of
the side of a cube that shall be exactly double in quantity
to the given cube.
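Hippocrates' reduction can be checked in a few lines of modern notation (a restatement added for clarity, not part of the original translation). If x and y are the two mean proportionals between the side a of the given cube and its double 2a, then

```latex
% Two mean proportionals between a and 2a:
\[
  \frac{a}{x} = \frac{x}{y} = \frac{y}{2a}
  \quad\Longrightarrow\quad
  x^{2} = a y, \qquad y^{2} = 2 a x,
\]
% hence
\[
  x^{4} = a^{2} y^{2} = 2 a^{3} x
  \quad\Longrightarrow\quad
  x^{3} = 2 a^{3},
\]
```

so the cube whose side is the lesser mean proportional x has exactly double the quantity of the given cube.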
To find these two mean proportions, several methods
were discovered by the antients. Architas, Eratosthenes,
Plato, Nicomedes, Apollonius, and others, have invented
different methods; but they are all tentative: these may
be met with in books of geometry, to which I refer the
reader. I shall, however, here describe the principle on
which the mechanical contrivance of Eratosthenes was founded.
Fig. LXVIII.
Let AB and CD be the two given lines, drawn parallel
to each other; join AC and BD, draw the line AF to any
point F, and from the point F raise FE parallel to AB;
then from E draw the line EH parallel to the former line
AF, and from H raise HG parallel also to AB: lastly, from
G draw GD parallel to the former lines AF and EH, and
if the line GD intersects the line CD in D, the operation
is right; if not, the direction of the lines AF, EH, and
GD, must be altered till it so happens; when EF, and
GH, will be the two mean proportionals sought. This
method is founded upon the 4th proposition of the 6th
book of Euclid, which demonstrates that the correspond-
ing sides of equiangular triangles are proportionals.
By the same process any number of mean proportionals
may be found between the two given lines AB and CD.
As every addition to the several inventions left us by the
antients for finding two mean proportionals, may not be
without its use, I shall here annex one that I have discovered.
BD, Fig. LXVIII. is the longest given line, and DE is
the shortest, perpendicular to the former: continue BD
toward C indefinitely; also draw the perpendicular BA | {"url":"https://dlc.mpg.de/fulltext/868388572/276/","timestamp":"2024-11-11T21:26:55Z","content_type":"application/xhtml+xml","content_length":"114044","record_id":"<urn:uuid:1868d899-07ce-42f7-b88e-de4a114fb439>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00776.warc.gz"} |
Math Is Fun Forum
I'll give you some tips. First prove that the first number (the 4-digit number minus its reverse) is divisible by 9. We know (or you can prove that too) that a number is divisible by 9 iff its sum of digits is
divisible by 9. So when we first add the digits it must be a number divisible by 9. What possible numbers are there?
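As a quick numerical sanity check of the hint (a throwaway Python sketch; the helper name is just for illustration): algebraically, abcd − dcba = 999(a − d) + 90(b − c), so the difference is always a multiple of 9.

```python
# For every 4-digit number, the number minus its digit-reverse is a
# multiple of 9 (abcd - dcba = 999*(a-d) + 90*(b-c)).
def diff_with_reverse(n: int) -> int:
    """Difference between n and the integer formed by reversing its digits."""
    return n - int(str(n)[::-1])

assert diff_with_reverse(4321) == 3087 and 3087 % 9 == 0
assert all(diff_with_reverse(n) % 9 == 0 for n in range(1000, 10000))
```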
I'm not following. You can't just substitute expressions for inequalities in other inequalities.
TheDude wrote:
Substitute these values into our previous inequality:
The right-hand side is OK, but not the left side. You have:
if I'm not missing anything.
Look at mathsyperson's proof of the sum of squares:
This proof can be generalized: if you know the formula for all sums with powers up to k, we can express the sum to the power k+1 in terms of the other sums.
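That generalization can be sketched in a few lines of Python (names are illustrative): the telescoping identity (n+1)^(k+1) − 1 = Σ_{j=0}^{k} C(k+1, j)·S_j(n), where S_j(n) = 1^j + … + n^j, lets you solve for S_k(n) once the lower sums are known.

```python
from math import comb
from fractions import Fraction

def power_sum(k: int, n: int) -> Fraction:
    """S_k(n) = 1^k + ... + n^k via the telescoping recurrence:
    (n+1)^(k+1) - 1 = sum_{j=0}^{k} C(k+1, j) * S_j(n),
    solved for the top term S_k(n)."""
    if k == 0:
        return Fraction(n)
    rhs = Fraction((n + 1) ** (k + 1) - 1)
    for j in range(k):
        rhs -= comb(k + 1, j) * power_sum(j, n)
    return rhs / (k + 1)  # divide by C(k+1, k) = k+1

# Cross-check against direct summation:
for k in range(5):
    for n in range(1, 10):
        assert power_sum(k, n) == sum(i**k for i in range(1, n + 1))
```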
Replies: 4
This is an easy problem if you use the standard approach with trigonometry (the problem is for ~17 years old students). But I found a much nicer geometrical method, without using any trigonometry.
So my challenge for you is to find it.
(or solving it any other way without trigonometry is fine too)
TheDude wrote:
No. You forget that there is a minus sign. We have 3c<-b. But:
there is a minus sign in front of the 3c/2, so we can't apply the inequality there.
1a2b3c2212 wrote:
HOW TO READ THAT LANGUAGE??? can translate into english pls? thx
I did translate the important parts of the text:
Kurre wrote:
Although I should maybe have written that the figure consists of three squares, but that is kind of obvious.
JaneFairfax wrote:
Vi har och . Om så . .
Det är ett skönt problem. Tack så mycket, Kurre.
Yes that is the trigonometric approach
Good job!
Have you been studying Swedish? I think I remember that you wrote that you are interested in languages.
Anyway here is my method:
Reflect AB in the horizontal line and draw line BC as in 2. Now reflect the triangle ABC in AC to make a quadrilateral as in 3. Now, all angles in the quadrilateral ABCD are equal, and all sides are
equal, thus it is a square, and since AC is a diagonal,
must be half of a right angle, i.e. 45°.
the +b is just a translation, the hard part is the az. Try polar coordinates.
2691= 2^4 + 3^4 + 3^4 + 5^4 + 5^4 + 5^4 + 5^4 + 13
We first use the cosine theorem on each of the squared sides:
Thus we get:
solving for and using the area formula (etc.):
To minimize this sum we can for example use Jensen's inequality.
We first assume that the triangle is acute, so all angles are less than . Then is convex, since and . Thus Jensen's inequality applies and we get:
What if one angle is obtuse?
Then we can argue as follows. WLOG we assume
. Then there exists a corresponding triangle with sides a',b',c' and and b=b' and c=c'. This triangle must have the same area since but a'<a since it corresponds to a smaller angle. the new triangle
is acute, so we can apply the result there, thus
use the formula on http://en.wikipedia.org/wiki/Trig_identities#Linear_combinations
Ricky wrote:
I guess he means that, because we assumed that p, p+2 were the largest pair, there do not exist any numbers with factors p+2k, p+2(k+1).
As I posted before, this statement is false. (p+2k)*(p+2(k+1)) has precisely those numbers as factors.
Hm, I meant prime factors, but maybe he does not use that they are prime factors; idk, I'm too lazy to read through the proof atm.
There are two different accounts posting here, both RICKisanidiot and RICKYisanidiot, imo a clear violation of the multiple account rule (which I think exists on this forum?)
Ricky wrote:
I'm having a lot of trouble understanding what it is you're trying to go for.
If we suppose that there exist a larger pair of the form p+2k and p+2(k+1) Than the set of numbers that have as factors p+2k and p+2(k+1) must be equal to 0 (since they do not exist).
We are supposing that there exists a larger pair where p+2k and p+2(k+1) are both prime? This doesn't make sense, we've already supposed that p and p+2 were the largest pair with this property.
And you say that the number of integers that have p+2k and p+2(k+1) as factors must be zero. This is false, just look at the number:
This number has both those as a factor.
I guess he means that, because we assumed that p, p+2 were the largest pair, there do not exist any numbers with factors p+2k, p+2(k+1). I'm not sure though and I have not tried to understand the
proof either.
I wanted to find a solution that uses the result from problem 3, which took a while and is probably not the easiest one, but here it is anyway:
Thanks for the tips!
Actually I made this thread to see what kinds of programs existed, since I got the idea to create such a program myself. I have tried Geometers Sketchpad and Cabri now, and Cabri was really nice
but was not exactly the type of program I had in mind, although I guess I will not be able to make a program nearly as good. But I'll maybe give it a try anyway
Replies: 6
Are there any good programs for drawing geometrical pictures? I mean one that has functions for, let's say, creating circumcircles, marking centroids/circumcentres/medians etc., basically a program that is
designed for creating proofs/problems/solutions in euclidean geometry??
#15 Let k, n be positive integers, a a nonzero real, k < n+1. Show that:
both with real analysis and by using residue calculus
edit: I made a mistake so I don't know if it's possible to do this using residues, but that does not mean it must be impossible
what do you mean by block walking?
Write it as C(n,0)*C(n,n)+C(n,1)*C(n,n-1)+...+C(n,k)*C(n,n-k)+...+C(n,n)*C(n,0).
Then assume you have a set of 2n persons, and divide it into two sets A and B of n persons in each. Now you want to choose a comittee of n persons. Then for each choice there will be say k persons in
set A, and n-k persons in set B, and summing will yield exactly the sum above.
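The committee argument can be checked numerically in a couple of lines (a Python sketch, added for illustration):

```python
from math import comb

# Numerical check of the committee argument:
#   sum_{k=0}^{n} C(n, k) * C(n, n-k) = C(2n, n)
# (choose k members from set A and n-k members from set B).
for n in range(13):
    lhs = sum(comb(n, k) * comb(n, n - k) for k in range(n + 1))
    assert lhs == comb(2 * n, n)
```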
eh how does one define length, width and height for a triangular prism?? :s
kean wrote:
sure! it's right
\sum_{k=1}^n \zeta_k^m =0
it does not hold for all m and n | {"url":"https://www.mathisfunforum.com/search.php?action=show_user_posts&user_id=4486","timestamp":"2024-11-09T22:30:31Z","content_type":"application/xhtml+xml","content_length":"58306","record_id":"<urn:uuid:4a814f45-4b9f-4581-8281-dcca8147400c>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.10/warc/CC-MAIN-20241109214337-20241110004337-00272.warc.gz"} |
The integer hull of a convex rational polytope
Given $A\in Z^{m\times n}$ and $b\in Z^m$, we consider the integer program $\max \{c'x\vert Ax=b;x\in N^n\}$ and provide an equivalent and explicit linear program $\max \{\widehat{c}'q\vert M q=r;q\
geq 0\}$, where $M,r,\widehat{c}$ are easily obtained from $A,b,c$ with no calculation. We also provide an explicit algebraic characterization of the integer hull of the convex polytope $P=\{x\in\R^n
\vert Ax=b;x\geq0\}$. All strong valid inequalities can be obtained from the generators of a convex cone whose definition is explicit in terms of $M$.
Technical report #03018, LAAS, Toulouse, January 2003. Discr. Comput. Geom. 32 (2004), 129--139 | {"url":"https://optimization-online.org/2003/04/645/","timestamp":"2024-11-07T06:08:46Z","content_type":"text/html","content_length":"81896","record_id":"<urn:uuid:70311b30-a855-4884-9e2b-1f25d4de5092>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00747.warc.gz"} |
Programming assignment service: Finite State Automaton A:00010 B:00001 C:0001 PARITY:ODD 1
You have been assigned your own individual codes for the letters A B C and
also a parity property.
You can obtain your codes and parity property by following the FSA codes and
parity property link below.
You are the central hub for a communication system. Messages come to you as
sequences of As Bs and Cs but coded in binary. Each such binary message is
to be followed by a check digit. This is a final 0 or 1 so that the entire binary
message satisfies your parity property.
The parity properties are:
Even 0: The entire message including the check digit has an even number of 0s.
Odd 0: The entire message including the check digit has an odd number of 0s.
Even 1: The entire message including the check digit has an even number of 1s.
Odd 1: The entire message including the check digit has an odd number of 1s.
For example if your codes are A = 101 B = 1101 C = 001 and your parity
property is Odd0 the message ABAC would get encoded as 10111011010011.
The final character is the check digit. It is a 1 because we want an odd
number of 0s. So 10111011010011 is valid but 10111011010010 and
10111011011100 are not. Make sure you correctly understand this example
before you go further. ABAC is 1011101101001. It has 5 0s so it already has an
odd number of 0s. We have to add a check digit to keep this number odd so
the check digit in this case is 1. If the parity property had been Even0 the check
digit would have been 0.
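The worked example above (codes A = 101, B = 1101, C = 001, parity Odd0) can be cross-checked with a short brute-force script; a hand-built FSA could be tested against it. This is not the required automaton itself, it uses the example codes from this page rather than the individually assigned ones, and all names are illustrative.

```python
# Brute-force validity checker for the worked example
# (codes A=101, B=1101, C=001, parity property Odd0).

CODES = ("101", "1101", "001")  # A, B, C from the example

def decodable(s: str) -> bool:
    """True if s is a concatenation of one or more codewords."""
    ok = [False] * (len(s) + 1)  # ok[i]: prefix s[:i] splits into codewords
    ok[0] = True
    for i in range(1, len(s) + 1):
        ok[i] = any(
            i >= len(c) and s[i - len(c):i] == c and ok[i - len(c)]
            for c in CODES
        )
    return len(s) > 0 and ok[len(s)]

def valid_message(s: str) -> bool:
    """Codeword sequence + one check digit, with an odd total number of 0s."""
    return len(s) >= 2 and decodable(s[:-1]) and s.count("0") % 2 == 1

# The three strings discussed in the text:
assert valid_message("10111011010011")      # ABAC + check digit 1: valid
assert not valid_message("10111011010010")  # even number of 0s: rejected
assert not valid_message("10111011011100")  # not codewords + one digit: rejected
```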
Your task is to design a binary finite state automaton FSA to accept all strings
that represent valid messages for your particular codes and parity property
and reject all others. This FSA must be DETERMINISTIC REDUCED and must be
in STANDARD FORM.
This project is machine marked. You can submit your attempts as many times
as you like and your submission will be marked immediately. You will obtain one
of 4 responses:
Your machine does not work. It does not process the string …
correctly. The string that your machine processes incorrectly may assist
you in understanding why your machine does not work. 0 marks
Your machine processes all strings correctly but is not in
reduced form. This means that your machine accepts precisely those
messages that are valid but has states which are equivalent. 5 marks
Your machine processes all strings correctly. It is reduced but is
not in standard form. This means that your machine accepts precisely
those messages that are valid has the right number of states but they are
not named in the correct order for standard form. 6 marks
Your machine processes all strings correctly and is in reduced
standard form. Your machine is completely correct. 8 marks
You should submit an answer once you think you have found a deterministic
machine for your particular codes and parity property. If it is right you will be
told that it works but is not in reduced form. You can then reduce it and check
that you are still right so far. Once it is correctly reduced you can then put it in
standard form if necessary and submit that answer — hopefully finding that it is
completely correct.
Submit your answers using the submission link below.
*The late penalty will reduce the mark for that submission by 1 mark for each
day or part thereof after the deadline. Your best score counts. So if you have a
score of 6 out of 8 before the deadline you can still improve that score to 7 out
of 8 during the 24 hours after the deadline by making a submission that is
completely correct.
Incorrect submissions after the deadline will not lower any score you have
already obtained. | {"url":"https://www.cscodehelp.com/c-c-%E4%BB%A3%E5%86%99/%E7%A8%8B%E5%BA%8F%E4%BB%A3%E5%86%99%E4%BB%A3%E5%81%9A%E4%BB%A3%E8%80%83-finite-state-automaton-a00010-b00001-c0001-parityodd-1/","timestamp":"2024-11-10T12:35:40Z","content_type":"text/html","content_length":"52785","record_id":"<urn:uuid:800d0d26-5f12-411d-8016-762ec06c9faa>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00772.warc.gz"} |
On the kinetic theory of rarefied gases pdf download
The problem of sound propagation in highly rarefied monatomic gases is investigated from the point of view of general orthogonal polynomial solutions in velocity space of the Boltzmann equation. A
gas is a simpler system than either a liquid or a solid. Approximate kinetic equations in rarefied gas theory. Superaerodynamics, mechanics of rarefied gases journal. Dec 26, 2018 download fulltext
pdf download fulltext pdf. Lecture 14 ideal gas law and terms of the motion of molecules. Ideal gas: the number of molecules is large; the average separation between molecules is large; molecules move
randomly; molecules obey Newton's laws; molecules collide elastically with each other and with the wall; the gas consists of identical molecules. In the other limit, for extremely rarefied gases, the gradients in
bulk properties are not small. His generalized hydrodynamic equations are consistent with the laws of thermodynamics.
Unit 6 kinetic theory of gases linkedin slideshare. We shall see the need of a new, modern, quantum physics. Tartar relies on his H-measures, a tool created for homogenization, to explain some of the
weaknesses, e. Revision notes on kinetic theory of gases askiitians. Enter your email address below and we will send you your username. We follow the kinetic theory to investigate the irreversible.
Harold grad search for more papers by this author harold grad search for more papers by this author. Back to categories a it was a commonly given for ideal gases b this theory was proposed by
bernoulli and next to developed by clausius, maxwell, kroning and boltzmann. They concern the basic equations in kinetic theory, written by boltzmann and landau, describing rarefied gases and weakly
interacting plasmas respectively. The kinetic theory of gases is valid for gases themselves, gaseous mixtures, and plasma. The molecules of a gas are identical spherical, rigid and perfectly elastic
point masses. Assumptions of kinetic theory of gases a all the gases are made up of molecules moving randomly in all the direction. The force of attraction between any two molecules of a solid is
very large. Pdf macroscopic transport equations for rarefied gas flows.
Kinetic theory of gases article about kinetic theory of. This book, unique in the literature, presents working knowledge, theory, techniques, and typical phenomena in rarefied gases for theoretical
development and applications. Jan 22, 2019 Eu-type generalized hydrodynamic equations have been derived from the Boltzmann kinetic theory and applied to investigate continuum and/or rarefied gas flows.
Rarefied gas dynamics is a collection of selected papers presented at the eighth international symposium on rarefied gas dynamics, held at stanford university in july 1972. Effect of abrupt change of
the wall temperature in the kinetic theory. Kinetic theory 3 parts to kinetic theory all particles are in constant random motion the motion of one particle is unaffected by the motion of other
particles unless they collide. Class 11 kinetic theory of gases notes freeguru helpline. Pdf unsteady plasma flow near an oscillating rigid plane. The molecules of a given gas are all identical but
are different from those of another gas. Gas kinetic theory the gas kinetic theory aims to explain and compute the macroscopic. But we shall see that the kinetic theory, based as it is on classical
newtonian physics, is limited in what it can describe.
The locality of ret is required from the outset and leads to the quasilinear hyperbolic structure of equations. Myong, gyeongsang national university, south korea feb. Introduction to thermodynamics
and kinetic theory of. Ideal gas law and kinetic theory of gases chapter 20 entropy and the second law of thermodynamics now we to look at temperature, pressure, and internal energy in terms of the
motion of molecules and atoms. This is possible as the interatomic forces, which are short range forces that are important for solids and liquids, can be neglected for gases. Chapter 18 kinetic
theory of gases uva public people. In the rarefied gas phase, ketoenol tautomerization of neutral species can only occur via. Effect of abrupt change of the wall temperature in the. In this chapter
and the next, we will develop the kinetic theory of gases and examine some of its consequences. The discussion should include calculations of viscosity and heat conduction for ideal gases and explain
how the kinetic theory is rigorously derived from the boltzmann equation. The kinetic theory of gases as we know it dates to the paper of boltzmann in 1872.
It is the type of matter which has got fixed volume but no fixed shape. Boltzmann equation kinetic theory knudsen number collision cross section internal degree. Apr 22, 2019 assumptions of kinetic
theory of gases. We shall assume that the state of a molecule is fully determined by its coordinates x i by the vector x and by the components of its translation velocity. These lecture notes provide
the material for a short introductory course on effective equations for classical particle systems. Lectures on kinetic theory of gases and statistical physics.
It is shown that this equation reproduces the experimental results of knudsen and others over the entire range of laminar, slip, and knudsen flow within the accuracy that might be expected from.
Kinetic theory of gases grade 11 physics notes khullakitab. The kinetic theory of gases is a historically significant, but simple, model of the thermodynamic behavior of gases, with which many
principal concepts of thermodynamics were established. Macroscopic transport equations for rarefied gas flows. Kinetic theory of sound propagation in rarefied gases. The molecular kinetic theory of
gases the properties of a perfect ideal gas can be rationalized qualitatively in terms of a model in which the molecules of the gas are in continuous chaotic motion. They move rapidly and
continuously and make collisions with each other and the walls. The flow of rarefied gases scott 1962 aiche journal. According to the kinetic theory of rarefied gases, the rate of transfer of
momentum of mass.
In this paper, two different theories are discussed and compared to each other, namely the kinetic theorybased rational extended. The kinetic theory of gases is a historically significant, but
simple, model of the thermodynamic. However, strict balance law form might be lost in model reduction methods, e. The aim of kinetic theory is to account for the properties of gases in terms of the
forces between the molecules, assuming that their motions are described by the laws of mechanics usually classical newtonian mechanics, although quantum mechanics is needed in some cases. They move
rapidly and continuously and make collisions with each other and. Kinetic theory of rarefied atmospheres what a realistic kinetic theory of ionized gases prescribes about. Lecture 6 gas kinetic
theory and boltzmann equation. After introducing the models studied, numerical simulations carried out in various collisional regimes are presented and illustrate the interest in considering angular.
The kinetic theory of gases provides methods for calculating Lyapunov exponents and other quantities, such as Kolmogorov-Sinai entropies, that characterize the chaotic behavior of hard-ball gases. The
flow of rarefied gases the flow of rarefied gases scott, d. The fundamental kinetic equation of gas theory, the Boltzmann equation, is a complex integrodifferential equation. From hyperbolic systems
to kinetic theory springerlink.
In kinetic theory, as seen above, balance law form arises naturally from taking moments of the boltzmann equation. Kinetic theory of gases, a theory based on a simplified molecular or particle
description of a gas, from which many gross properties of the gas can be derived. The main postulates of kinetic theory of gases are as. Aug 30, 2012 rarefied gas flow simulations using highorder gas
kinetic unified algorithms for boltzmann model equations progress in aerospace sciences, vol. Chapman and cowling, the mathematical theory of nonuniform gases lifshitz and pitaevskii, physical
kinetics both of these are old school. Our digital library saves in multiple countries, allowing you to get the most less latency time to download any of our books like this one. Every gas consists
of extremely small particles known as molecules.
Kinetic theory of gases, an account of gas properties in terms of motion and interaction of submicroscopic particles in gases phonon, explaining properties of solids in terms of quantal collection
and interactions of submicroscopic particles. Basic theory is developed in a systematic way and presented in a form easily applied to practical use. Macroscopic transport equations for rarefied gas
flowsapproximation methods in kinetic theory. On the apparent permeability of porous media in rarefied. It is described by differential hyperbolic systems of balance laws with local constitutive
equations. Lecture 14 ideal gas law and terms of the motion of. The foundations of the kinetic theory of gases were formulated by l. Note 17 the kinetic theory of gases university of toronto. This
page contains revision notes on kinetic theory of gases. Equations of state are not always effective in continuum mechanics. Schekochihiny the rudolf peierls centre for theoretical physics,
university of oxford, oxford ox1 3np, uk merton college, oxford ox1 4jd, uk compiled on april 2020 these are the notes for my lectures on kinetic theory and statistical physics. The justification and
context of this equation has been clarified over the past half century to the extent that it comprises one of the most complete examples of manybody analyses exhibiting the contraction from a. It
compares rarefied and condensed matter, classical and quantum systems, and real and ideal gases. A forced convection heat transfer correlation of rarefied gases crossflowing over a circular cylinder
experimental thermal and fluid science, vol.
The book is a record of the significant advances in the broad field of rarefied gas dynamics that are considered to be of general and continuing interest. Macroscopic transport equations for rarefied
gas flows springer. The book takes a fresh approach to its subject matter, focusing equally on condensed matter and gases. The difficulties associated with its solution are the result not only of the
large number of independent variables, seven in the general case, but also of the very complicated structure of the collision integral.
Rational extended thermodynamics beyond the monatomic gas. Formulation of moment equations for rarefied gases within. Forces of attraction among particles in a gas can be ignored under ordinary
conditions. An ideal gas a gas that exactly follows the statements of the kinetic theory. They concern the basic equations in kinetic theory, written by boltzmann and landau, describing rarefied.
Mention should also be made of the Chapman-Enskog approach and Grad's moment approach, which apply kinetic theory to nonideal gases. A computational method for eus generalized
hydrodynamic. Ideal gases experiment shows that 1 mole of any gas, such as helium, air, hydrogen, etc at the same volume and temperature has almost the same pressure. Kinetic theory of gases a it was
a commonly given for ideal gases b this theory was proposed by bernoulli and next to developed by clausius, maxwell, kroning and boltzmann. The steady behaviour of a rarefied gas around a rotating
sphere is studied. The theory that achieves this is a marvel of classical physics. Relate to the 1st law of thermodynamics thermal expansion cracking the nut. Free electron model, a model for the
behavior of charge carriers in a metallic solid.
The aim is to explain the macroscopic properties of gases, described in section 1. However, for the mechanics of rarefied gases the primary interest. Maxwell and boltzmann created a kinetic theory of
gases, using classical mechanics. Many gases deviate slightly from agreeing perfectly with the kinetic theory of gases. Fundamental solutions greens functions are derived for the regularised moment
system r of rarefied gas dynamics, for small departures from equilibrium. A new equation for the flow of gases in capillaries is presented, in which all flow constants can be calculated from the
simple kinetic theory of gases. In case of ret it is inherited from kinetic theory and due to the balance form of the boltzmann equation. Kinetic theory and irreversible thermodynamics, 1992 for
modeling the motion of gases far removed from equilibrium. Kinetic theory of gases this is a statistical treatment of the large ensemble of molecules that make up a gas. The solution is furthermore
analytic for the transition domain, as well as for situations where the two gases are in different flow domains, because of different densities.
Macroscopic transport equations for rarefied gas flows approximation methods in kinetic theory 1st e access to it is set as public so you can get it instantly. Sphere oscillating in a rarefied gas
journal of fluid mechanics. It is shown that the usual expansion solutions of the boltzmann equation chapman. We shall now see how this model can be expressed quantitatively in terms of the kinetic
theory of gases.
Nov 29, 20 hi guys, im studying the kinetic theory of gases from paulis book vol. Standard references about the kinetic theory of rarefied gases and the boltzmann equation are the books by boltzmann
93, carleman 119, chapman and cowling 154, uhlenbeck and ford 433, truesdell and muncaster 430, cercignani 141, 148, cercignani, illner and pulvirenti 149, as well as the survey paper by grad 250,
the book by. From hyperbolic systems to kinetic theory a personalized. In addition to the general theory, specific details are worked out for two molecular potentials, namely, maxwell molecules and
rigid spheres. Superaerodynamics, mechanics of rarefied gases journal of. On the kinetic theory of rarefied gases on the kinetic theory of rarefied gases grad, harold 19491201 00. General remarks
relating to the boltzmann equation 4. The actual atomic theory got established more than 150 years later. Consider a gas consisting of n monatomic molecules.
According to this theory, gases are made up of tiny particles in random, straight line motion. This was the first theory to describe gas pressure in terms of collisions with the walls of the
container. Reif ends with a much wider ranging discussion of kinetic theory, transport and stochastic processes. The model describes a gas as a large number of identical submicroscopic particles
atoms or molecules, all of which are in constant, rapid, random motion. The practical importance of truncated moment hierarchies in rarefied gas dynamics and microfluidics motivates us to develop a
new strategy for establishing the full generic structure of truncated moment equations, based on nonentropyproducing irreversible processes associated with casimir symmetry. The kinetic theory of
gases also known as kineticmolecular theory is a law that explains the behavior of a hypothetical ideal gas. The equations of the kinetic theory of gases springerlink. The present discussion focuses
on dilute ideal gases, in which molecular collisions. It is the type of matter which has got fixed shape and volume.
Yet maxwell and boltzmann only used trajectories like hyperbolas, reasonable for rarefied gases, but wrong without bound trajectories if the mean free path between collisions tends to 0. Here he
describes a section on the mean free path, where the probability of two particles with speed v and v colliding is described as. Kinetic theory explains the behaviour of gases based on the idea that
the gas consists of rapidly moving atoms or molecules. A kinetic theory of gases and liquids the main object of this book is to formulate a kinetic theory of certain properties of matter, which shall
apply equally well to matter in any state. Sphere oscillating in a rarefied gas volume 794 ying wan yap, john e. As ret has been strictly related to the kinetic theory through the closure method of
moment hierarchy associated to the boltzmann equation, the applicability range of the theory has been restricted within rarefied monatomic gases. Such a model describes a perfect gas and its
properties and is a reasonable approximation to a real gas. The book has also been brought up to date in matters not connected with molecular collision, and has been treated in a way so that the
results are connected. At low densities the pressures become even closer and obey the ideal gas law. Standard references about the kinetic theory of rarefied gases and the boltzmann equation are the
books by boltzmann 93, carleman 119, chapman and. Lectures on kinetic theory of gases and statistical physics oxford physics paper a1 alexander a. Ideal gasthe number of molecules is largethe average
separation between molecules is largemolecules moves randomlymolecules obeys newtons lawmolecules collide elastically with each other and. Kinetic theory of rarefied atmospheres what a realistic. Buy
introduction to thermodynamics and kinetic theory of matter. | {"url":"https://grounorafvan.web.app/610.html","timestamp":"2024-11-07T18:55:59Z","content_type":"text/html","content_length":"22762","record_id":"<urn:uuid:8cbcbdf5-f75f-47cb-986a-92e759d12f14>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00068.warc.gz"} |
Number line with 'Piñata Fever'
Ages 9-14
• Extending the number line
• Adding negative numbers
• Adding and subtracting negative numbers
Use your number line addition and subtraction skills to travel towards a piñata and smash it!
1. Piñatas appear randomly along the top of the screen and start to descend toward the number line
2. Click on the number line at the position of the descending piñata
3. Maths question appears at the bottom of the screen
4. Answer the question using the keypad on the right.
If you answer:
• correctly - you see the hero walk to your chosen position. From here, click anywhere on the screen to jump towards the piñata to smash it! Watch the strike meter to maximise your jump!
• incorrectly - you see the hero walk to the incorrect position, shake their head, and then move back to their original position - costing you precious time.
Strike meter - appears over the hero’s head. Click anywhere on the screen to stop the meter and make your hero jump to smash the piñata. Get your timing right and aim to stop in the green zone:
• red zone - you miss the strike, wasting time. But don’t worry you can try again!
• yellow zone - you whack the piñata, earning you points and candy.
• green zone - you smash the piñata’s critical ‘Sweet Spot’, winning you points and even more candy. Sometimes you also make a special Gift Box fly out
In this maths game you’re the host of a crazy piñata party. To keep the party pumping you need to keep the candy jar topped up – and the only way to get more candy is by smashing piñatas!
Gift Boxes and Power-Ups - If you hit a piñata’s ‘Sweet Spot’ (i.e. stop the strike meter in the green zone), sometimes you will see a Gift Box fly out and land on the number line.
To collect a Gift Box you simply move the hero to its position. The box then opens automatically to reveal a present ranging from bonus points to power-ups:
• ×1 Bonus – Points equivalent to the current stage’s piñata score
• ×2 Bonus – Points equivalent to the current stage’s piñata score × 2
• ×3 Bonus – Points equivalent to the current stage’s piñata score × 3
• ×5 Bonus – Points equivalent to the current stage’s piñata score × 5
• Magic Trainers – Increases the hero’s movement speed for a few moments
• Shockwave – Pushes all piñatas in view back up toward the top of the screen
• Super Striker – Temporarily improves the hero’s accuracy, making it easier to hit piñatas
• Cold Snap – Freezes the piñatas for a few moments, stopping them descending.
The present you receive is chosen randomly. However, the fancier the Gift Box, the more valuable the present inside.
To get fancier Gift Boxes, you need to hit multiple Sweet Spots in a row. Each time you hit a Sweet Spot, the gauge in the top-right corner of the screen fills by one, meaning that your next Gift Box
will be even better.
But be warned: as the gauge fills, the strike meter gets faster too! If you miss a piñata or perform a ‘normal’ smash (i.e. stop the strike meter in the yellow zone), the gauge resets.
El Nibblo - If a piñata gets too close to the number line it will be stolen by El Nibblo, the notorious sweet-toothed piñata thief, and you lose precious candy.
There are 3 modes in this game. Complete the relevant mode and beat the Target Score to earn a medal (Bronze, Silver, Gold).
• Mode 1: Extending the number line >> Finish the game + beat the Target Score >>> 8,000 (B), 20,000 (S), 40,000 (G)
• Mode 2: Adding negative numbers >> Finish the game + beat the Target Score >>> 12,000 (B), 30,000 (S), 60,000 (G)
• Mode 3: Adding and subtracting negative numbers >> Finish the game + beat the Target Score >>> 16,000 (B), 40,000 (S), 80,000 (G)
Scoring calculation
On each Stage the score for each piñata smashed is calculated as:
Piñata Score = {50 + 20 × (Stage Number + 24 × Loop)} × Mode Multiplier
Example 1: You play Mode 1 (Mode Multiplier = 1) and reach Stage 16 (Stage Number = 16), but have not looped the game (Loop = 0). So your score for each piñata smashed = {50 + 20 × (16 + 24 × 0)} × 1
= 50 + 20 × 16 = 370 points.
Example 2: You play Mode 3 (Mode Multiplier = 3) and reach Stage 8 (Stage Number = 8) having looped the game once (Loop = 1). So your score for each piñata smashed = {50 + 20 × (8 + 24 × 1)} × 3 =
{50 + 20 × 32} × 3 = 2,070 points.
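The two worked examples pin down the scoring rule, so it can be checked with a short script (the function name is ours, and the formula is inferred from the examples rather than stated outright in the guide):

```python
# Scoring rule implied by the two worked examples:
# score = (50 + 20 * (stage + 24 * loops)) * mode_multiplier
def pinata_score(stage, loops, mode_multiplier):
    return (50 + 20 * (stage + 24 * loops)) * mode_multiplier

print(pinata_score(16, 0, 1))  # 370  (Example 1)
print(pinata_score(8, 1, 3))   # 2070 (Example 2)
```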
After each game, your best score is recorded automatically.
Within each Mode there are 24 individual Stages to progress through. On each Stage the number line varies to make the maths gradually harder. The higher the Stage, the more points you earn for each
piñata smashed, enabling good players to rack up really high scores!
Answer the maths problems correctly and consistently to move up through the Stages. The fewer errors you make, the faster you progress to the higher Stages and the more points you can make.
Get relegated to a lower stage if you’re consistently wrong – which is good for letting you practise and improve your maths skills, but not so good for your score!
‘Loop’ the game if you’re a fantastic player – move past Stage 24 and return to the earlier Stages. When this happens the game becomes faster and tougher, so don’t get complacent! Each time you loop
the game, an asterisk (*) appears after the Stage number.
The maths problems take the form of either A + ? = B or A - ? = B, where A is the hero’s current position on the number line, B is the position you want the hero to move to, and ? is the value you
need to input.
Mode 1 Strategies: Adding/Subtracting Positive Numbers
• When moving to the right all problems are A + ? = B (? is always a positive number)
• When moving to the left all problems are A - ? = B (? is always a positive number).
• Solving the problems by counting the gap: When solving a problem, you could start by looking at where your character is standing. Next, count how far it is to the piñata you want to smash or gift
you want to grab. Your number line might be marked in 1s so count 1, 2, 3, 4, etc. But some number lines go in 2s or 5s so you might need to count 2, 4, 6, 8, etc, or 5, 10, 15, 20, etc. Once you
work out how far your character has to travel, you know what size number you need to enter.
• Solving problems by counting on: When solving a problem, you may need to move your character to the right. This means counting on (adding) from the position you are standing at. For example: -3 +
? = 5. You can solve this problem in your head by ‘counting on’. Start by counting on 3 more from -3 to give -3 + 3 = 0. This gets you to zero. Then think of counting on 5 more to give 0 + 5 = 5.
So all together you added on 3 and 5 to give 8. So -3 + ? = 5 becomes -3 + 8 = 5.
• Solving problems by counting back: When solving a problem, you may need to move your character to the left. This means counting back (subtracting) from the position you are standing at. For
example: 4 - ? = -3. You can solve this problem in your head by ‘counting back’. Start by counting back 4 from 4 to give 4 - 4 = 0. This gets you to zero. Then think of counting back 3 more to
give 0 - 3 = -3. So all together you subtracted 4 and 3 to give 7. So 4 - ? = -3 becomes 4 - 7 = -3.
Mode 2 Strategies (Advanced): Adding Negative Numbers
• When moving to the right all problems are A + ? = B (? is always a positive number)
• When moving to the left problems are either: A - ? = B (? is a positive number), or… A + ? = B (? is a negative number)
• In Mode 2 there are two ways to move your character to the left, that is, to perform a subtraction. For example: if you want to move from position 4 to position -3, you might have to solve this
problem [4 - ? = -3] or this problem [4 + ? = -3]. To solve 4 - ? = -3 you need to know that you subtract 7 to get from 4 to -3. So the answer is 4 - 7 = -3. To solve 4 + ? = -3 you need to know
that adding (-7) is the same thing as subtracting 7. So the answer is 4 + (-7) = -3.
Mode 3 Strategies (Advanced): Adding/Subtracting Positive and Negative Numbers
• When moving to the right problems are either: A + ? = B (? is a positive number), or… A - ? = B (? is a negative number).
• When moving to the left problems are either: A - ? = B (? is a positive number), or… A + ? = B (? is a negative number).
• In Mode 3 there are two ways to move your character to the right, that is, to perform an addition. For example: if you want to move from position -2 to position 3, you might have to solve this
problem [-2 + ? = 3] or this problem [-2 - ? = 3]. To solve -2 + ? = 3 you need to know that you add 5 to get from -2 to 3. So the answer is -2 + 5 = 3. To solve -2 - ? = 3 you need to know that
subtracting (-5) is the same thing as adding 5. So the answer is -2 - (-5) = 3.
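All of the strategies above come down to one line of arithmetic each; a sketch (the helper names `solve_add`/`solve_sub` are ours, not the game's):

```python
def solve_add(a, b):
    """Solve A + ? = B: the missing number is B - A."""
    return b - a

def solve_sub(a, b):
    """Solve A - ? = B: the missing number is A - B."""
    return a - b

print(solve_add(-3, 5))   # 8: counting on, -3 + 8 = 5
print(solve_sub(4, -3))   # 7: counting back, 4 - 7 = -3
print(solve_add(4, -3))   # -7: Mode 2, 4 + (-7) = -3
print(solve_sub(-2, 3))   # -5: Mode 3, -2 - (-5) = 3
```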
• Generally you should aim to smash the lowest piñatas first. However, if you’re starting to get swamped and running all over the place then it may be wise to sacrifice a piñata or two so you can
catch your breath and plan a new strategy
• If you have several piñatas descending together in a row, start at the left-most one and work your way right – this will mean you get easier maths
• The fancier the gift box, the better the present inside. So if you have several gift boxes to choose from, go for the fanciest-looking one first
• If a gift box starts flashing, that means it’s about to disappear. If you haven’t started moving yet to collect the box, probably best to forget it – unless you’re super-fast, chances are you
won’t reach it before it disappears.
Learn how to assign a game activity here. | {"url":"https://support.mangahigh.com/l/en/article/9qn9or2p9i-game-guide-pinata-fever","timestamp":"2024-11-03T22:10:02Z","content_type":"text/html","content_length":"225106","record_id":"<urn:uuid:e9b6a785-b552-4cb9-a706-d758c1c41a34>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00178.warc.gz"} |
[Python-Dev] Re: PEP 218 (sets); moving set.py to Lib
Guido van Rossum guido@python.org
Tue, 20 Aug 2002 23:47:11 -0400
> Here's the pre-generator version I wrote using lists as the underlying
> representation. Should be trivially transformable into a generator
> version. I'd do it myself but I'm heads-down on bogofilter just now
> def powerset(base):
>     "Compute the set of all subsets of a set."
>     powerset = []
>     for n in xrange(2 ** len(base)):
>         subset = []
>         for e in xrange(len(base)):
>             if n & 2 ** e:
>                 subset.append(base[e])
>         powerset.append(subset)
>     return powerset
> Are you slapping your forehead yet? :-)
Yes! I didn't actually know that algorithm.
Here's the generator version for sets (still requires a real set as argument):
def powerset(base):
    size = len(base)
    for n in xrange(2**size):
        subset = []
        for e, x in enumerate(base):
            if n & 2**e:
                subset.append(x)
        yield Set(subset)
I would like to write n & (1<<e) instead of n & 2**e; but 1<<e drops
bits when e is > 31. Now, for a set with that many elements, there's
no hope that this will ever complete in finite time, but does that
mean it shouldn't start? I could write 1L<<e and avoid the issue, but
then I'd be paying for long ops that I'll only ever need in a case
that's only of theoretical importance.
A variation: rather than calling enumerate(base) 2**size times,
concretize it into a list. We know it can't be very big or else
the result isn't representable:
def powerset(base):
    size = len(base)
    pairs = list(enumerate(base))
    for n in xrange(2**size):
        subset = []
        for e, x in pairs:
            if n & 2**e:
                subset.append(x)
        yield Set(subset)
Ah, and now it's trivial to write this so that base can be an
arbitrary iterable again, rather than a set (or sequence):
def powerset(base):
    pairs = list(enumerate(base))
    size = len(pairs)
    for n in xrange(2**size):
        subset = []
        for e, x in pairs:
            if n & 2**e:
                subset.append(x)
        yield subset
This is a generator that yields a series of lists whose values are the
items of base. And again, like cartesian product, it's now more a
generator thing than a set thing.
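For anyone running this thread's code today: `xrange` and the old `Set` class are Python 2 era. A Python 3 rendering of the final version (an editorial sketch, not part of the original thread) can use `1 << e` safely, since Python 3 integers are arbitrary precision:

```python
def powerset(base):
    pairs = list(enumerate(base))
    for n in range(2 ** len(pairs)):
        # bit e of n decides whether element e belongs to this subset
        yield [x for e, x in pairs if n & (1 << e)]

print(list(powerset([1, 2])))  # [[], [1], [2], [1, 2]]
```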
BTW, the correctness of all my versions trivially derives from the
correctness of your version -- each is a very simple transformation of
the previous one. My mentor Lambert Meertens calls this process
Algorithmics (and has developed a mathematical notation and theory for
program transformations).
--Guido van Rossum (home page: http://www.python.org/~guido/) | {"url":"https://mail.python.org/pipermail/python-dev/2002-August/028121.html","timestamp":"2024-11-03T23:18:34Z","content_type":"text/html","content_length":"5185","record_id":"<urn:uuid:d2452b1a-6296-4a92-aee2-9eebae711a90>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00635.warc.gz"} |
Programming and Proving with Total Functions
The core design philosophy of F* is that the type of a term (a program fragment) is a specification of its runtime behavior. We write e : t to mean that a term e has type t. Many terms can have the
same type and the same term can have many types.
One (naive but useful) mental model is to think of a type as describing a set of values. For instance, the type int describes the set of terms which compute integer results, i.e., when you have e :
int, then when e is reduced fully it produces a value in the set {..., -2, -1, 0, 1, 2, ...}. Similarly, the type bool is the type of terms that compute or evaluate to one of the values in the set
{true,false}. Unlike many other languages, F* allows defining types that describe arbitrary sets of values, e.g., the type that contains only the number 17, or the type of functions that factor a
number into its primes.
When proving a program e correct, one starts by specifying the properties one is interested in as a type t and then trying to convince F* that e has type t, i.e., deriving e : t.
The idea of using a type to specify properties of a program has deep roots in the connections between logic and computation. You may find it interesting to read about propositions as types, a concept
with many deep mathematical and philosophical implications. For now, it suffices to think of a type t as a specification, or a statement of a theorem, and e : t as computer-checkable claim that the
term e is a proof of the theorem t.
In the next few chapters we’ll learn about how to program total functions and prove them correct. | {"url":"https://fstar-lang.org/tutorial/book/part1/part1.html","timestamp":"2024-11-08T00:51:10Z","content_type":"text/html","content_length":"12616","record_id":"<urn:uuid:356a0728-2145-41e1-af33-3d23443093ff>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00347.warc.gz"} |
Machine Learning (CS 6140) Homework 2 solved
1. Logistic Regression. We consider the following models of logistic regression for binary classification with the sigmoid function σ(z) = 1/(1 + e^(−z)):
Model 1: P(Y = 1|X, w1, w2) = σ(w1X1 + w2X2)
Model 2: P(Y = 1|X, w0, w1, w2) = σ(w0 + w1X1 + w2X2)
We have three training examples:

(x^(1), y_1) = ([1, 1]^T, 1),  (x^(2), y_2) = ([1, 0]^T, −1),  (x^(3), y_3) = ([0, 0]^T, 1)
A. How does the learned value of w = (w1, w2) change if we change the label of the third example to −1? How about in Model 2? Explain. (Hint: think of the decision boundary in the 2D plane.)
B. Now, suppose we train the logistic regression model (Model 2) based on the N training examples x^(1), . . . , x^(N) and labels y^(1), . . . , y^(N) by maximizing the penalized log-likelihood of the labels:

max_w  Σ_{i=1}^N log P(y^(i) | x^(i), w) − (λ/2) ‖w‖^2
For large λ (strong regularization), the log-likelihood terms will behave as linear functions of w:

log σ(y^(i) w^T x^(i)) ≈ log(1/2) + (1/2) y^(i) w^T x^(i)
Express the penalized log-likelihood using this approximation (with Model 1), and derive the expression for the MLE w in terms of λ and the training data {(x^(i), y_i)}_{i=1}^N. Based on this, explain how w
behaves as λ increases.
2. Support Vector Machine. Consider a binary classification problem in one-dimensional space
where the sample contains four data points S = {(1, −1),(−1, −1),(2, 1),(−2, 1)} as shown in
Fig. 1.
Figure 1: Red points represent instances from class +1 and blue points represent instances from class -1.
A. Define H_t = [t, ∞). Consider a class of linear separators H = {H_t : t ∈ R}, i.e., for ∀H_t ∈ H, H_t(x) = 1 if x ≥ t, otherwise −1. Is there any linear separator H_t ∈ H that achieves 0
classification error on this sample? If yes, show one of the linear separators that achieves 0 classification error on this example. If not, briefly explain why there cannot be such a linear separator.
B. Now consider a feature map φ : R → R^2, where φ(x) = (x, x^2). Apply the feature map to all the instances in sample S to generate a transformed sample S′ = {(φ(x), y) : (x, y) ∈ S}. Let H′ = {ax_1 + bx_2 + c ≥ 0 : a^2 + b^2 ≠ 0} be a collection of half-spaces in R^2. More specifically, H_{a,b,c}((x_1, x_2)) = 1 if ax_1 + bx_2 + c ≥ 0, otherwise −1. Is there any half-space H′ ∈ H′ that achieves 0 classification error on the transformed sample S′? If yes, give the equation of the max-margin linear separator and compute the corresponding margin.
C. What is the kernel corresponding to the feature map φ(·) in the last question, i.e., give the
kernel function K(x, z) : R × R → R.
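A numerical sanity check for parts B and C (a sketch, not a solution key; the choice x2 >= 2.5 is just one separating half-space, and whether it is max-margin is left to the exercise):

```python
S = [(1, -1), (-1, -1), (2, 1), (-2, 1)]

def phi(x):
    # feature map from part B
    return (x, x * x)

def K(x, z):
    # kernel induced by phi: <phi(x), phi(z)> = x*z + x^2 * z^2
    return x * z + (x * x) * (z * z)

def h(x):
    # one separating half-space in feature space: predict +1 iff x2 >= 2.5
    # (a = 0, b = 1, c = -2.5)
    x1, x2 = phi(x)
    return 1 if x2 - 2.5 >= 0 else -1

print(all(h(x) == y for x, y in S))  # True: zero error after the map
print(K(2, 3))                       # 42, equals phi(2).phi(3)
```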
3. Constructing Kernels. In this question you will be asked to construct new kernels from existing kernels. Suppose K_1(x, z) : R^d × R^d → R and K_2(x, z) : R^d × R^d → R are both kernels; show the following functions are also kernels:
A. K(x, z) = c1K1(x, z) + c2K2(x, z) with c1, c2 ≥ 0.
B. K(x, z) = K1(x, z) · K2(x, z).
C. Let q(t) = Σ_{i=0}^p c_i t^i be a polynomial function with nonnegative coefficients, i.e., c_i ≥ 0, ∀i. Show that K(x, z) = q(K_1(x, z)) is a kernel.
D. K(x, z) = exp(K1(x, z)). (Hint: you can use the previous results to prove this.)
E. Let A be a positive semidefinite matrix and define K(x, z) = x^T A z.
F. K(x, z) = exp(−‖x − z‖^2).
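Parts A and B can be spot-checked numerically: any kernel's Gram matrix on a finite point set must be positive semidefinite, so v^T G v >= 0 for every v. A sketch (the sample points and the coefficients c1 = 2, c2 = 3 are arbitrary choices of ours):

```python
import random

def k1(x, z):
    # linear kernel on R^2 points
    return sum(a * b for a, b in zip(x, z))

def k2(x, z):
    # product of kernels (part B): here K1 * K1
    return k1(x, z) ** 2

def k_sum(x, z):
    # part A with c1 = 2, c2 = 3
    return 2 * k1(x, z) + 3 * k2(x, z)

def gram(k, pts):
    return [[k(x, z) for z in pts] for x in pts]

def psd_spot_check(G, trials=200):
    """Necessary condition for PSD: v^T G v >= 0 for every v."""
    n = len(G)
    for _ in range(trials):
        v = [random.uniform(-1, 1) for _ in range(n)]
        q = sum(v[i] * G[i][j] * v[j] for i in range(n) for j in range(n))
        if q < -1e-9:
            return False
    return True

pts = [(1.0, 2.0), (-1.0, 0.5), (3.0, -2.0), (0.0, 1.0)]
print(psd_spot_check(gram(k_sum, pts)))  # True
```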
4. Support Vectors. In question 2, we explicitly constructed the feature map and found the corresponding kernel to help classify the instances using a linear separator in the feature space. However, in most cases it is hard to manually construct the desired feature map, and the dimensionality of the feature space can be very high, even infinite, which makes explicit computation in the feature space infeasible in practice. In this question we will develop the dual of the primal optimization problem to avoid working in the feature space explicitly. Suppose we have a sample set
S = {(x_1, y_1), …, (x_n, y_n)} of labeled examples in R^d with label set {+1, −1}. Let φ : R^d → R^D be a feature map that transforms each input example to a feature vector in R^D. Recall from the lecture notes that the primal optimization of SVM is given by
min_{w, ξ}  (1/2) ‖w‖^2 + C Σ_{i=1}^n ξ_i
subject to  y_i (w^T φ(x_i)) ≥ 1 − ξ_i,  ∀i = 1, . . . , n
            ξ_i ≥ 0,  ∀i = 1, . . . , n
which is equivalent to the following dual optimization:

max_α  Σ_{i=1}^n α_i − (1/2) Σ_{i=1}^n Σ_{j=1}^n α_i α_j y_i y_j φ(x_i)^T φ(x_j)
subject to  0 ≤ α_i ≤ C,  ∀i = 1, . . . , n
            Σ_{i=1}^n α_i y_i = 0
Recall from the lecture notes ξ1, …, ξn are called slack variables. The optimal slack variables have
intuitive geometric interpretation as shown in Fig. 3. Basically, when ξi = 0, the corresponding
feature vector φ(xi) is correctly classified and it will either lie on the margin of the separator or on
the correct side of the margin. Feature vector with 0 < ξi ≤ 1 lies within the margin but is still be correctly classified. When ξi > 1, the corresponding feature vector is misclassified. Support
vectors correspond to the instances with ξi > 0 or instances that lie on the margin. The optimal
vector w can be represented in terms of αi
, i = 1, · · · , n as w =
i=1 αiyiφ(xi).
A. Suppose the optimal ξ1, …, ξn have been computed. Use the ξi
to obtain an upper bound
on the number of misclassified instances.
B. In the primal optimization of SVM, what's the role of the coefficient C? Briefly explain your
answer by considering two extreme cases, i.e., C → 0 and C → ∞.
C. Explain how to use the kernel trick to avoid the explicit computation of the feature vector
φ(x_i)? Also, given a new instance x, how to make a prediction on the instance without explicitly
computing the feature vector φ(x)?
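A sketch of the kernelized prediction asked for in part C (variable names are ours): since w = Σ_i α_i y_i φ(x_i), the score w^T φ(x) = Σ_i α_i y_i K(x_i, x) needs only kernel evaluations, never φ itself:

```python
def predict(alphas, ys, xs, K, x_new):
    # w^T phi(x_new) = sum_i alpha_i * y_i * K(x_i, x_new); phi never appears
    score = sum(a * y * K(xi, x_new) for a, y, xi in zip(alphas, ys, xs))
    return 1 if score >= 0 else -1

# toy 1-D check with a linear kernel (numbers are illustrative)
K_lin = lambda x, z: x * z
xs, ys, alphas = [2.0, -1.0], [1, -1], [0.5, 0.5]
print(predict(alphas, ys, xs, K_lin, 3.0))   # 1
print(predict(alphas, ys, xs, K_lin, -3.0))  # -1
```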
5. Generalized Lagrangian Function. Consider the optimization problem
    min_w f(w)   s.t.   gj(w) ≤ 0, ∀ j = 1, …, m,   hj(w) = 0, ∀ j = 1, …, p    (1)

Show that for the generalized Lagrangian function, defined by

    L(w, α, β) ≜ f(w) + Σ_{j=1}^m αj gj(w) + Σ_{j=1}^p βj hj(w),

the following always holds

    max_{α ≥ 0, β} min_w L(w, α, β) ≤ min_w max_{α ≥ 0, β} L(w, α, β)
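One short route to this inequality (the max–min inequality) is sketched below; this is a proof outline, not the full required argument:

```latex
% For any fixed (\alpha, \beta) with \alpha \ge 0 and any point w',
%   \min_{w} \mathcal{L}(w, \alpha, \beta)
%     \le \mathcal{L}(w', \alpha, \beta)
%     \le \max_{\alpha \ge 0,\, \beta} \mathcal{L}(w', \alpha, \beta).
% The outer terms no longer depend on the inner variables, so taking
% \max over (\alpha, \beta) on the left and \min over w' on the right gives
\max_{\alpha \ge 0,\, \beta}\; \min_{w}\; \mathcal{L}(w, \alpha, \beta)
\;\le\;
\min_{w}\; \max_{\alpha \ge 0,\, \beta}\; \mathcal{L}(w, \alpha, \beta).
```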
6. Dual Optimization. Consider the optimization program
    min_x  xᵀP0 x + q0ᵀx + r0
    s.t.   xᵀPi x + qiᵀx + ri ≤ 0,   i = 1, …, m

where P0 and all Pi are assumed to be positive semi-definite matrices. A) Form the generalized
Lagrangian function. B) Compute the Lagrange dual function. C) Derive the dual maximization
problem.
7. Logistic Regression Implementation.
A) Write code in Python whose input is a training dataset {(x^(1), y^(1)), …, (x^(N), y^(N))} and its
output is the weight vector w in the logistic regression model y = σ(wᵀx).
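A minimal sketch of such a trainer using batch gradient descent on the log-loss; the toy dataset below is invented and merely stands in for 'dataset1':

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(X, y, lr=0.5, epochs=2000):
    """X: list of feature tuples, y: labels in {0, 1}. Returns w (bias last)."""
    d = len(X[0])
    w = [0.0] * (d + 1)                  # last entry is the bias term
    for _ in range(epochs):
        grad = [0.0] * (d + 1)
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + w[-1]
            err = sigmoid(z) - yi        # gradient factor of the log-loss
            for j in range(d):
                grad[j] += err * xi[j]
            grad[-1] += err
        for j in range(d + 1):
            w[j] -= lr * grad[j] / len(X)
    return w

# Toy, linearly separable data (hypothetical; not the course dataset).
X = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (2.0, 2.0), (2.0, 3.0), (3.0, 2.0)]
y = [0, 0, 0, 1, 1, 1]
w = train_logistic(X, y)
preds = [1 if sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + w[-1]) >= 0.5
         else 0 for xi in X]
print(preds)
```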
B) Download the dataset on the course webpage. Use ‘dataset1’. Run the code on the training
dataset to compute w and evaluate on the test dataset. Report w, classification error on the training set and classification error on the test set. Plot the data (use different colors for data in
classes) and plot the decision boundary found by the logistic regressions.
C) Repeat part B using ‘dataset2’. Explain the differences in results between parts B and C and
justify your observations/results.
8. SVM Implementation.
Implement SVM with the SMO algorithm and train it on the provided dataset. For your implementation, you only have to use the linear kernel. In addition, run SVM using the LIBSVM package and compare
the results. You can implement the simplified SMO, as described
in http://cs229.stanford.edu/materials/smo.pdf
A) Apply the SVM on the ‘dataset1’ and report the classification error (on both training and test
sets) as a function of the regularization parameter C.
B) Repeat part A using ‘dataset2’. Explain the differences in results between part A and B and
justify your observations/results.
Homework Submission Instructions:
– Submission of Written Part: You must submit your written report in class BEFORE CLASS
STARTS. The written part must contain the plots and results of running your code on the provided
datasets.
– Submission of Code: You must submit your Python code (.py file) via email, BEFORE CLASS
STARTS. For submitting your code, please send an email to me and CC both TAs.
– The title of your email must be "CS6140: Code: HW2: Your First and Last Name".
– You must attach a single zip file to your email that contains all python codes and a readme file
on how to run your files.
– The name of the zip file must be "HW2-Code: Your First and Last Name".
Semi-parametric Hidden Markov Out-Tree (HMOT) models for cell lineage analysis
Drawing tree-indexed data
The aim of drawing tree-indexed data is to produce informative geometric representations automatically for visualization purposes. Since tree-indexed data can be viewed as directed graphs, they
could be drawn using the conventions and algorithms previously presented (see sub-section 1.1.2 page 10). But it is common practice to modify the directed graph drawing standards for trees to take
account of their topological particularities.
Aesthetics of tree drawings Trees and forests are by definition sparse graphs – same number of edges as vertices minus the number of roots – therefore node and link diagrams are preferred to
adjacency plots. Usually, node and link diagrams for drawing trees are compared by considering qualitative and quantitative aesthetic criteria:
area, defined as the surface area of the enclosing rectangle of the vertex drawing.
ratio, defined as the ratio of length of shortest side to length of longest side of the enclosing rectangle of the vertex drawing.
subtree separation property. A drawing of T satisfies the subtree separation property defined by Chan et al. (1997) if, for any two distinct vertices u and v of T, the enclosing rectangles of the
drawings of Tu and Tv do not overlap one with the other.
Moreover, considering the simplicity of tree topology compared to general directed graphs, two non-exclusive types of drawing are of considerable interest:
Directional drawing. Considering a directed axis in the coordinate system used to draw the trees, no child is placed before its parent. With such a convention, and for clarity, a switch may be made
from arrows to lines in order to represent a directed edge since there is no confusion about edge direction.
Planar drawing. A planar drawing is a drawing in which edges do not intersect. Planar drawings are normally easier to understand than non-planar drawings (i.e. with edge-crossings). Since any tree
admits a planar drawing, it is desirable to obtain planar drawings for trees.
Both Eades (1991) and Fruchterman and Reingold (1991) reported that difficulties are encountered when drawing trees without edge-crossing by force-directed algorithms. Even when using the magnetic
extension of Sugiyama and Misue (1995), if the drawing tends to be directional, there is no guarantee that the result will be planar. Hereinafter we present only two classes of tree layout
algorithms among many others (see Tamassia, 2007, chapter 5 and references therein). We focus on these two classes since they are relatively easy to understand and implement, while producing high
quality layouts.
Level-based layouts The level-based approach in tree drawing is characterized by the fact that vertices at the same depth are aligned on the same straight line (i.e. the level) and for two given
depths these straight lines are parallel (Bloesch, 1993; Reingold and Tilford, 1981; Buchheim et al., 2002; Walker, 1990). Algorithms based on this approach produce intuitive drawings that clearly
display symmetries and comply with both planarity and directionality conventions (see algorithm 2 and its results in figure 2.2).
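The level-based convention can be sketched with a minimal layout routine (a simplification that centres each parent over its children, rather than a full implementation of Reingold and Tilford (1981)); the tree used below is a made-up example:

```python
# Minimal level-based tree layout: vertices at the same depth share a
# y-coordinate, leaves get consecutive x slots, and every internal vertex
# is centred over its children. The result is directional and planar.

def layout(tree, root):
    """tree: dict vertex -> list of children. Returns {vertex: (x, y)}."""
    pos = {}
    next_x = [0]                        # next free leaf slot

    def place(v, depth):
        children = tree.get(v, [])
        if not children:                # a leaf takes the next free slot
            x = next_x[0]
            next_x[0] += 1
        else:                           # a parent is centred over its children
            xs = [place(c, depth + 1) for c in children]
            x = (xs[0] + xs[-1]) / 2
        pos[v] = (x, depth)
        return x

    place(root, 0)
    return pos

# Perfect binary tree of depth 2 (hypothetical example).
tree = {"r": ["a", "b"], "a": ["c", "d"], "b": ["e", "f"]}
pos = layout(tree, "r")
print(pos["r"], pos["a"], pos["b"])
```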
Radial layouts While drawing using the level-based layouts in most cases respects the usual tree drawing conventions and the subtree separation property (this is not the case for the algorithms
proposed by Buchheim et al. (2002) and Walker (1990)), quantitative aesthetic criteria are not satisfactory. For instance, let T be a perfect binary tree of depth d. In a perfect binary tree every
non-leaf vertex has two children, so there are 2^d leaves. The drawing of T produced using algorithm 2 has:
• an area of d · 2^d units,
• a ratio of 2^−d.
As presented in Tamassia (2007, Chapter 5), by considering a geometric transformation from Cartesian to polar coordinates, a level-based layout yields a radial layout (see algorithm 3 and its results
in figure 2.2).
Compared to the level drawing of T , the radial drawing produced by algorithm 3 has:
• an area of d^2 units,
• a ratio of 1.
Such values are far more satisfactory for these criteria. Given a level-based layout respecting planarity and directionality conventions such as those produced by algorithm 2, the geometric
transformation from Cartesian to polar coordinates preserves these conventions. Moreover, although algorithm 3 drawings do not respect sensu stricto the subtree separation property, if the enclosing
rectangle is changed to an enclosing triangle, the subtree separation property is respected.
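The quoted criteria can be checked numerically. Note that with unit level spacing the level-based bounding box is 2^d wide and d tall, so the exact side ratio is d/2^d — of the same exponential order as the 2^−d quoted above:

```python
# Aesthetic criteria for a perfect binary tree of depth d: a level-based
# drawing occupies roughly d * 2^d area with an exponentially bad ratio,
# while the radial drawing occupies about d^2 area with ratio 1.

def level_layout_metrics(d):
    width, height = 2 ** d, d           # 2^d leaves on the deepest level
    return width * height, min(width, height) / max(width, height)

def radial_layout_metrics(d):
    side = d                            # enclosing square of radius ~ d
    return side * side, 1.0

for d in (3, 6, 9):
    print(d, level_layout_metrics(d), radial_layout_metrics(d))
```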
Tree-indexed data and plants
Tree-indexed data are commonly used in signal processing (Crouse et al., 1998; Dasgupta et al., 2001) and in 2D and 3D images (Choi and Baraniuk, 2001) as multi-scale representations of path or
grid-indexed data. This thesis does not include a broad spectrum of tree-indexed data but focuses on those which can be collected in studies on plant development. In particular, we present the
usefulness of tree-indexed data representation on two scales:
A microscopic scale where tree-indexed data represent cell lineages in tissues throughout their development (Olariu et al., 2009),
A macroscopic scale where tree-indexed data represent whole plant architecture (Durand et al., 2005).
Tree-indexed data on a cellular scale
The study of morphogenesis A major challenge in developmental biology is to understand how multi-cellular tissues can give rise to complex shapes in animals or plants. It is therefore crucial to be
able to quantify and explain the cellular and tissular patterns that arise during morphogenesis. Although several studies have provided profound insight into the molecular regulatory networks that
play a role during development, the effects of such networks on shape transformations are often only described qualitatively. Describing size and shape changes as a geometrical output of gene activity
requires the quantification of growth patterns at a cellular resolution. Obtaining accurate geometric information about cell position and shape is key to developing quantitative models of
morphogenesis. The identification of groups of cells is also essential, not only based on their differentiation state, but also on the outcome of the mechanical, genetic and hormonal events that drive morphogenesis.
Meristems and tree representation of tissues In plants, meristems drive morphogenesis. A meristem is a set of embryonic cells that organize the construction of the plant. It creates new tissues by
successive divisions of its stem cells. This division process is coupled with:
• a phenomenon that maintains certain cells obtained by division in a totipotent state,
Divisions occur in such a way that cells entering the differentiating process will become part of new tissues and organs while the meristem does not disappear since the totipotent cells of which it
is composed are constantly regenerating. Given the tissues and organs produced, and location within the plant, three main types of meristems can be considered:
The Shoot Apical Meristem (SAM) located at the apex of the stem (see figure 2.3). It is responsible for genesis of the aerial part of the plant, i.e. leaves, stems and inflorescences. Inflorescence
set up is the result of the transformation of a Shoot Apical Meristem (SAM) into a floral meristem.
The Root Apical Meristem (RAM) located at the tip of the root and responsible for genesis of the below ground part of the plant, i.e. the roots.
The secondary meristems which are responsible – when located inside stems – for the thickening of stems or roots when located inside roots.
The idea of representing meristem development in a tree structure is to follow meristem cells over time, represented by vertices, and to use edges to represent lineage relationships (see figure 2.3).
Tree-indexed data collection by 3D + t meristem imaging Fernandez et al. (2010) presented a method to generate 3D digitized tissues at cell resolution with automatic tracking of cell lineage during
growth. To create a digitized tissue that can be used to quantitatively analyze growth in four dimensions, they developed an experimental pipeline comprising two key steps:
Multi-angle Acquisition, 3 dimensional Reconstruction and Segmentation (MARS). The multi-angle acquisition produces stacks of 2D images that are transformed into 3D images. Each voxel in the images
stores the intensity of the signal associated with cell walls or other sources, for instance genetic markers. At the end of the MARS step, the segmentation associates each voxel with a given cell in
the meristem, or with the background (see figure 2.4).
Automated Lineage Tracking (ALT). Once the cells have been identified in the MARS step, the goal of the ALT step is to perform cell tracking throughout the experiment. At the end of the ALT step
lineage trees of cells are obtained.
Available data In this thesis we focus on joint work concerning flower morphogenesis in Arabidopsis thaliana conducted with J. Legrand, another Ph.D. student in the team (Legrand, 2014). We were
interested in SAMs of Arabidopsis thaliana transformed into floral meristems. In contrast to the original SAM, a floral meristem follows a determinate growth process. This transformation is
controlled by the expression of particular genes called identity genes, specifying floral organs and causing determinate growth. This work focused on the L1 cell layer (see figure 2.4) and the
lineage tree was produced using the Fernandez et al. (2010) MARS-ALT method (see figure 2.3).
Under the assumption that the cell differentiation process in floral meristems can be assimilated to a succession of finite unobservable cell identities, we aimed to recover these identities on the
basis of genetic and geometric cell characteristics (Legrand, 2014) such as:
• volume,
• surface areas (internal L1/L2 and external L1),
• inertia values (on three axes),
• principal and secondary curvatures,
• AHP6 concentration.
Moreover, in order to understand the early mechanisms involved during flower morphogenesis, we aimed to identify and characterize cell identity motifs.
Tree-indexed data on the whole plant scale
Plant architecture analysis The importance of topological structure in understanding and analyzing the development of plants was underlined by Hallé et al. (1978) and Gatsuk et al. (1980) who were
the first to analyze plant architecture. Architectural analysis was at first essentially developed as a qualitative method for describing plants (Barthélemy et al., 1989). Subsequently, a major
research effort was devoted to validating and refining architectural concepts and to studying their application in agronomic contexts. This led researchers to study progressively how to quantify
plant architecture and to develop corresponding concepts and tools (see Godin and Caraglio, 1998, and references therein). The quantitative approach rapidly ran into the problem of obtaining
computational representations of plants that are consistent with field observations. This problem raises the question of measuring and formally representing plant topological structures.
SAM activity, plant modularity and tree representation of plants The notion of plant topological structure is based on the idea of decomposing a plant into elementary constituents and describing
their connections. To obtain natural decompositions, it is possible to take advantage of the fact that the outcome of the plant growth process is modular: a stem is a succession of metamers
constituted by an internode, the upper node, leaves and axillary buds attached to the node (see figure 2.5).
The topological structure stemming from a modular organism such as a plant consists of a description of the connections between its elementary constituents. Considering only one SAM activity leads to
considering stems that can be viewed as sequences: a metamer is connected to an anterior metamer – called the predecessor metamer – and possibly to a posterior metamer – called the successor metamer.
But as a SAM produces buds containing other SAMs, as soon as an axillary meristem of a stem develops into a lateral axis, a metamer may have more than one child, counting the successor and the
lateral metamer(s), but has only one predecessor. The whole plant can thus be viewed as a tree-like structure (see figure 2.6).
Figure 2.5 – (A) Shoot apical meristem and (B) stem organization (Barthélemy and Caraglio, 2007). Each leafy axis (B) ends in an apical meristem frequently protected by an apical bud (A). Each stem
comprises a succession of metamers (in gray on (B)), constituted by an internode, the upper node, leaves and axillary buds attached to the node.
Tree-indexed data collection by retrospective measurements Plant growth is often a cyclic phenomenon: metamer set-up may be interrupted by resting phases corresponding for instance to winter in
temperate species. It is thus interesting when collecting data on plant topological structure to consider meristematic activity at different scales depending on the growth strategy of the plant.
Indeed, if the metamer is the basic unit of plant architecture, according to the plant growth cycles the tree-indexation of data can be considered at different scales (see figure 2.7):
On the Growth Unit (GU) scale. The GU is composed of the metamers established in an uninterrupted phase of growth.
On the annual shoot scale. The annual shoot corresponds to the GUs established over a year.
On the axis scale. The axis corresponds to the succession of annual shoots or GUs produced by the same meristem.
When studying the architecture of a plant, appropriate selection of the botanical entity
– the elementary constituent on the considered scale – is therefore essential in order to describe the plant's growth strategy (see figure 2.6):
• The common walnut (Sabatier et al., 1998) possesses two types of annual shoots. Monocyclic annual shoots are preformed in the winter bud. The annual shoot and the GU are thus the same. Conversely,
bicyclic annual shoots are made up of two GUs. Depending on the objectives of the analysis, the botanical entity chosen could be the GU or the annual shoot.
Figure 2.6 – Tree-indexed data representation of plants (Durand et al., 2005). (A) The plant is represented on the Growth Unit (GU) scale where each GU is denoted by ev with v ∈ {0, …, 13}. (B) Formal
tree graph representation of the same plant: each GU ev is represented by a vertex v. Part of the topological information is not encoded in the graph but can be stored as a property (the three shoots
borne by e1). A few other vertex properties can be defined, such as the length of GUs and their top and bottom diameters, depending on the study.
• For some tropical plants, growth may be almost continuous. As a consequence, GUs are no longer relevant. A reasonable choice could therefore be to consider the metamers or the axis as the botanical entity.
Moreover, it is noteworthy that although axis scales are defined for all plants, the GU and annual shoot scales are mainly defined for temperate species.
Morphological markers, which reflect past meristem activity, enable the botanist to reconstruct the life of a plant by identifying a posteriori growth periods. The tree graph
T is therefore constructed with respect to the plant growth strategy (see figure 2.6). Over the same period the indexed set x̄ = (xt)_{t∈T} – univariate or, more generally, multivariate – is collected considering
characteristics – depending on the experiment – of botanical entities such as length, diameter, number (or presence) of flowers, and number (or presence) of fruits.
Here we only consider the collection of tree-indexed data. But since there is more than one pertinent scale on the same plant, data are actually collected using the Multiscale Tree Graph (MTG)
data structure defined by Godin and Caraglio (1998). This MTG data structure can be seen as tree-indexed data where scales are represented by a recursive quotienting of the tree on a finer scale (see
Godin and Caraglio (1998) for more details). The choice of scale is therefore made a posteriori in order to produce tree-indexed data and can depend on the growth aspect of the plant studied.
Figure 2.7 – Plant modularity (Barthélemy and Caraglio, 2007). This diagram represents the main scales of organization (botanical entity) and repetition phenomena (terms in italics or in boxes) in seed plants.
Available data In this thesis we focus on joint work on mango tree phenology conducted with Annaëlle Dambreville, Pierre-Éric Lauri and Frédéric Normand. We used mango MTGs containing 15 trees
belonging to 5 cultivars collected during the thesis work by Dambreville (2012) to highlight and characterize the mango tree patchiness phenomenon. Like other tropical trees, mango is characterized by
marked phenological asynchronisms between and within trees, entailing patchiness (Chacko, 1986, see figure 2.8). Patchiness is characterized by clumps of either vegetative or reproductive GUs within
the canopy: while some parts of the tree canopy develop vegetative GUs, other parts may remain at rest or produce inflorescences at the same time. These asynchronisms concern more or less large
branching systems (Ramírez and Davenport, 2010). They entail various agronomic problems, such as the repeated use of pesticides to protect recurrent susceptible phenological stages from pests, or
an excessively extended period of fruit maturity which may lead to difficulties in organizing fruit harvesting.
Previous studies by Dambreville et al. (2013) showed that the fate and burst date of a daughter GU are strongly affected by those of ancestor GUs, indicating that patchiness pattern formation could
be studied using spatio-temporal analysis. Our twofold objective here was to:
• Characterize tree patchiness. As stated above, patchiness corresponds to more or less large branching systems sharing similar GU fates. We therefore aimed to recover a quotient tree of tree-indexed
data on the GU scale in which quotients were roughly homogeneous in terms of GU fates.
• Identify the mechanisms responsible for the set-up of tree patchiness. An inquiry into fate alternations along paths within the tree or successions of homogeneous zones in mango trees could reveal
the mechanisms involved in this set-up. To this end, we aimed to highlight particular fate motifs in mango trees on the GU scale.
Figure 2.8 – Illustration of mango tree patchiness (Dambreville, 2012). This mango tree is separated into two parts. The left one in dark green is a clump of old GUs wherein fruits can be found. On
the contrary, the right one in light green is a clump of new vegetative GUs.
Markov models for tree-indexed data
Here we assume that the indexed set x̄ = (xt)_{t∈T} – univariate or, more generally, multivariate – is the outcome of a random process.
Here we consider that T is sensu stricto a tree; the only root of the tree is noted r. In a forest, trees are considered as independent and identically distributed.
Markov models
Let us first consider the simple case where x̄ is the realization of an 𝒳-valued stochastic process X̄ = (Xt)_{t∈T}, where the set 𝒳 ⊂ ℕ is called the state space. We are interested here in modeling the
distribution of the random process

    P(X̄ = x̄).    (2.1)
When considering the case of tree-indexed data, the simplest dependent model that can be constructed is the one which directly considers the tree graph of the data as a graphical model, combined with a
usual homogeneity assumption. Combining both hypotheses leads to the following factorization of (2.1):

    P(X̄ = x̄) = P(Xr = xr) ∏_{t∈T\{r}} P(Xt = xt | Xpa(t) = xpa(t)).    (2.2)
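A direct computation of factorization (2.2) for a small labelled tree can be sketched as follows; the tree topology, state labels and parameters are invented for illustration:

```python
# Joint probability of a tree labelling under factorization (2.2):
# P(X = x) = P(X_r = x_r) * prod over non-root t of P(X_t | X_pa(t)),
# with a homogeneous child-given-parent transition matrix A.

def tree_likelihood(children, labels, root, pi, A):
    """children: dict vertex -> child list; labels: dict vertex -> state;
    pi: initial state distribution; A[i][j] = P(child in j | parent in i)."""
    p = pi[labels[root]]
    stack = [root]
    while stack:
        t = stack.pop()
        for c in children.get(t, []):
            p *= A[labels[t]][labels[c]]  # one factor per (parent, child) edge
            stack.append(c)
    return p

# Hypothetical two-state example.
children = {"r": ["u", "v"], "u": ["w"]}
labels = {"r": 0, "u": 0, "v": 1, "w": 1}
pi = [0.6, 0.4]
A = [[0.7, 0.3],   # transitions out of parent state 0
     [0.2, 0.8]]   # transitions out of parent state 1
print(tree_likelihood(children, labels, "r", pi, A))
```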
Given factorization (2.2), classical Markovian models for path-indexed data were easily adapted to tree-indexed data. These models are called Independent Markov Out-Tree (IMOT) models, where "independent"
means that for such models siblings are assumed to be independent given their parent. Considering the mango tree application, we aimed to highlight GU fate motifs assuming that at some point a switch
occurred from a homogeneous tree to a heterogeneous patchy tree. In order to detect such patterns, we assumed that for a given parent fate:
• and a given growth period, only a few different state combinations could be observed for children,
• and for a generation, all children states could be observed.
Under these assumptions we sought to model dependencies among children fates in order to obtain such inclusion/exclusion patterns. Since in (2.2) children fates are assumed to be independent given
their parent fate, we had to consider other models (see Durand et al., 2005, for a discussion of available models):
• Markov In-Tree (MIT) models. Instead of modeling siblings given their parent as in IMOT, the parent is modeled given its children, introducing the following factorization of (2.1):

    P(X̄ = x̄) = ∏_{l∈L} P(Xl = xl) ∏_{t∈T\L} P(Xt = xt | Xch(t) = xch(t)),    (2.3)

where L denotes the set of leaves and siblings are marginally independent but conditionally dependent given their parent.
• Multi-Type Branching Process (MTBP). Under a permutation invariance property (see Haccou et al., 2005; Kimmel and Axelrod, 2002, for more details), an extension of Markov Out-Tree (MOT) models that
considers dependencies between children and where tree topology is partially represented through the parametrization of vertex out-degree combinatorics. The following factorization of (2.1) is
therefore introduced:

    P(X̄ = x̄) ∝ P(Xr = xr) ∏_{t∈T} P(Nt = nt | Xt = xt),    (2.4)

where Nt is the discrete random vector of the number of children of vertex t in each state.
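Factorization (2.4) can be sketched numerically. Here P(Nt | Xt) is taken, purely for illustration, as independent Poisson counts per child state — precisely the kind of rigid parametric choice that the mixed graphical models discussed below seek to relax:

```python
import math

def poisson_pmf(k, lam):
    """P(K = k) for a Poisson random variable with mean lam."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

def mtbp_score(counts, root_state, pi, rates):
    """Unnormalized probability (2.4).
    counts: list of (vertex_state, child_count_vector) for each vertex;
    rates[s][j]: Poisson mean of children in state j given parent state s."""
    score = pi[root_state]
    for s, n in counts:
        for j, k in enumerate(n):
            score *= poisson_pmf(k, rates[s][j])
    return score

# Hypothetical two-state example: root in state 0 with one child in each
# state; both children are leaves (zero children of either state).
pi = [0.5, 0.5]
rates = [[1.0, 0.5],   # parent in state 0
         [0.2, 1.5]]   # parent in state 1
counts = [(0, (1, 1)), (0, (0, 0)), (1, (0, 0))]
print(mtbp_score(counts, 0, pi, rates))
```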
In the context of our mango tree analysis, the assumption of unordered children and the combinatorics induced by the variable and large number of child vertices in each state inflate the number of
model parameters. We therefore focused on parametric versions of these models. Since parametric MIT models are not suitable for left-right cases (see chapter 5.4), we focused on MTBP models. The
issue of specifying parametric MTBPs reduces to the problem of defining parametric models for discrete multivariate counts. The classical discrete multivariate distributions catalog (Johnson et al.,
1997) only proposes rigid dependence and covariance patterns. The next step toward modeling mango tree patchiness was thus to derive flexible discrete multivariate distributions with complex
dependency patterns. This was dealt with by introducing mixed graphical models for multivariate discrete random vectors.
Hidden Markov Tree (HMT) models
When confronted with tree-indexed data that contain not a few discrete outcomes, as in the mango tree case, but multidimensional heterogeneous outcomes, as in the floral meristem case, MIT and MTBP
models cannot be considered as they stand. A widespread extension of Markov Tree (MT) models in such cases is to consider Hidden Markov Tree (HMT) models. HMT models, introduced by Crouse et al.
(1998), are to MT models what Hidden Markov Chain (HMC) models are to Markov Chain (MC) models. As for HMC models (see Ephraim and Merhav, 2002, for more details), HMT models are no longer restricted
to categorical variables but deal with any type of random variable or vector at a low cost in terms of parameters.
Table of contents :
1 Graphs and graphical models frameworks
1.1 Introduction to graph theory
1.1.1 Definitions
1.1.2 Drawings
1.1.3 Graph properties
1.2 Graphical model framework
1.2.1 Random vectors and independencies
1.2.2 From graphs to distributions
1.2.3 From distributions to graphs
1.3 Gaussian graphical models
1.3.1 Parametrizations
1.3.2 Inference
2 Tree-indexed data and Markov Tree (MT) models
2.1 Introduction to tree-indexed data
2.1.1 Definitions
2.1.2 Drawing tree-indexed data
2.2 Tree-indexed data and plants
2.2.1 Tree-indexed data on a cellular scale
2.2.2 Tree-indexed data on the whole plant scale
2.3 Markov models for tree-indexed data
2.3.1 Markov models
2.3.2 Hidden Markov Tree (HMT) models
3 Semi-parametric Hidden Markov Out-Tree (HMOT) models for cell lineage analysis
3.1 Introduction
3.2 Definitions
3.2.1 Markov Out-Tree (MOT) models
3.2.2 Hidden Markov Tree (HMT) models
3.3 Computational methods for Hidden Markov Out-Tree (HMOT) models
3.3.1 Upward-downward smoothing algorithm
3.3.2 Application of the EM algorithm
3.3.3 Dynamic programming restoration algorithm
3.4 Application to cell lineage trees
3.4.1 Results
3.4.2 Discussions
4 Inference of Mixed Acyclic Graphical Models (MAGMs) in Multi-Type Branching Processes (MTBPs)
4.1 Introduction
4.2 Definitions
4.2.1 Multi-Type Branching Processes (MTBPs)
4.2.2 Poisson Mixed Acyclic Graphical Models (PMAGMs)
4.2.3 Discrete Parametric Mixed Acyclic Graphical Models (DPMAGMs)
4.3 Discrete Parametric Mixed Acyclic Graphical Models (DPMAGMs) inference
4.3.1 Parameter inference
4.3.2 Structure inference
4.4 Application to Multi-Type Branching Processes (MTBPs): the case of mango tree asynchronisms
4.5 Concluding remarks
5 Quantification of plant patchiness via tree-structured statistical models: a tree-segmentation/clustering approach
5.1 Introduction
5.2 Material and methods
5.2.1 Tree-structured representation of plants
5.2.2 Modeling plant patchiness with tree segmentation/clustering models
5.2.3 Plant material
5.3 Results
5.3.1 Tree segmentation
5.3.2 Subtree clustering
5.3.3 Cultivar comparisons
5.4 Discussion
Work in progress and perspectives
StatisKit: graphical model inference in C++ and Python
Hidden Markov In-Tree (HMIT) models
Multivariate mixture models in Multi-Type Branching Processes (MTBPs)
Integrative models for deciphering mango tree asynchronisms
Index of references
This study aims to determine the effect of language literacy and basic Mathematics competence on students' ability to solve word problems. The research was done by giving three sets of questions –
a language literacy (LL) set, a basic Mathematics competence (BM) set, and a word problems (WP) set – to the research sample, which consisted of 315 tenth-grade students from five schools in Jakarta.
Each student's score on each set was analyzed as research data: the score on the LL set was treated as data for independent variable 1, the score on the BM set as data for independent variable 2, and the score on the
WP set as data for the dependent variable. Preliminary data analyses, such as normality, validity, and reliability tests, were done. Then, the data were analyzed using the Wilcoxon test and calculation of R-square.
The result shows that each independent variable affects the dependent variable, with the BM variable having more effect on the WP variable.
Keywords: word problem; basic Mathematics competence; language literacy
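The R-square calculation mentioned in the abstract can be sketched as follows; the scores below are invented and the study's actual data are not reproduced:

```python
# Coefficient of determination R^2 = 1 - SS_res / SS_tot, measuring how
# much of the variation in word-problem (WP) scores is explained by a fit
# from the LL and BM scores.

def r_squared(y, y_pred):
    mean_y = sum(y) / len(y)
    ss_tot = sum((yi - mean_y) ** 2 for yi in y)          # total variation
    ss_res = sum((yi - fi) ** 2 for yi, fi in zip(y, y_pred))  # residual
    return 1 - ss_res / ss_tot

# Hypothetical WP scores and values fitted from LL/BM scores.
wp = [60, 70, 80, 90]
fitted = [62, 69, 81, 88]
print(round(r_squared(wp, fitted), 3))
```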
Dear partners,
Welcome to the Hidden World of Parabolas project.
We hope that this journey into the world of parabolas will be interesting to both the students and their teachers. Let's introduce ourselves!
At the beginning, we can write the names of our schools, the subject we teach, and when we have holidays during this school year, to help us plan the course of activities.
Hi All! Long time, no blog...
I've been spending most of my 'spare' time on Twitter and my Facebook Fan Page but I decided to share some of the activities I've developed with the few faithful readers who might still remember me!
For those of you looking for richer problem-solving which conforms to the Mathematical Practices of the State Standards, here's one you might find useful. Please let me know how you adapt it for your
students and how they responded. I hope to be writing more of these.
Note: If the image below is cut off, click on it and zoom in if needed.
If interested in purchasing my NEW 2012 Math Challenge Problem/Quiz book, click on BUY NOW at top of right sidebar. 175 problems divided into 35 quizzes with answers at back. Suitable for SAT I, Math I/II Subject Tests, Math Contest practice and Daily/Weekly Problems of the Day. Includes multiple choice, case I/II/III type and constructed response items. Price is $9.95 and includes detailed solutions, strategies, tips, hints and key facts for the first 8 quizzes as well as answers for all quizzes. Secured pdf will be emailed when purchase is verified. DON'T FORGET TO SEND ME AN EMAIL (dmarain "at gmail dot com") FIRST SO THAT I WILL BE ABLE TO SEND THE ATTACHMENT!
Excerpt from my local paper, The Record, 9-3-12
For the first time there is broad national consensus on the most necessary skills, so a third grader in Paramus and his camp buddy in Peoria will face roughly the same expectations when they walk
back through the school doors this week.
"It's almost the entire country coming to an agreement about what kids should learn." Many parents in affluent suburbs might assume that widespread worries about weaknesses in the American education
system relate to poverty and think their own kids' schools are doing just fine but this shift aims to raise the bar for everybody. Some studies for example conclude that even advantaged U.S. children
with college educated parents can barely compete internationally in math.
How long have I waited to read these words?
How long have I been advocating this and supporting Prof. Wm. Schmidt's recommendations?
How many of my blog posts have been dedicated to this topic in the past 6 years?
How many of you supported me? Opposed me? Argued that this will never happen?
So do I feel vindicated? I'll let my readers guess!
Mr. Canastar had 3 identical decks of cards, and told his class that each contained more than 100 but less than 200 cards. He told Yohan to count the 1st deck by 3's and there were 2 left over. He
told Matt to count the 2nd deck by 5's and there were 3 left over. CC counted the 3rd deck by 7's and there were 2 left.
He challenged the rest of the class to figure out how many cards were in each deck. Can you? And explain your method too!
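For teachers who want a quick way to check answers (or to generate variants with different remainders), a brute-force search works; this sketch assumes the usual reading of the problem, with the deck size strictly between 100 and 200.

```python
# Deck size n: 100 < n < 200, with n % 3 == 2, n % 5 == 3, n % 7 == 2.
solutions = [n for n in range(101, 200)
             if n % 3 == 2 and n % 5 == 3 and n % 7 == 2]
print(solutions)  # [128]
```

The lone solution, 128, is 23 + 105; since 3·5·7 = 105, the Chinese Remainder Theorem guarantees exactly one answer in any window of width 105.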
If interested in purchasing my NEW 2012 Math Challenge Problem/Quiz book, Revised Version 1.1, click on BUY NOW at top of right sidebar. 175 problems divided into 35 quizzes with answers at back.
There are now detailed solutions, hints and strategies for the first 8 quizzes. Suitable for SAT I, Math I/II Subject Tests, Math Contest practice and Daily/Weekly Problems of the Day. Includes
multiple choice, case I/II/III type and constructed response items. Special Limited Price $7.99. Secured pdf will be emailed when purchase is verified. DON'T FORGET TO SEND ME AN EMAIL (dmarain "at
gmail dot com") FIRST SO THAT I CAN SEND THE ATTACHMENT!
Never too late in the school year to review percents, right? Well, even if you don't agree, here goes...
First a problem similar to the one I posted on Twitter the other day.
Middle School Level?
The cost of a meal including a 10% tip was $13.75. What was the tip, in dollars?
Ans: $1.25
SAT-type (Higher level of difficulty)
The cost of a meal is $M. With an x% tip included, the bill came to $T. Which of the following is an expression for x in terms of M and T? (A) T/M (B) (T-M)/M (C) 100T/M (D) 100(T-M)/M (E) (T-M)/
Ans: D
Thoughts and Questions...
What % of your middle school students could handle the first question? For that matter, what % of your secondary students would solve it?
Can you predict which of your students would be able to solve the first question mentally or with some quick trial-and-error (ok, G-T-R), using their calculators. I chose 10% to make this possible.
Do you get upset when students do this? Should you?
What do you predict would be the difficulties your algebra students might confront in the 2nd problem?
Is it easy to eliminate some of the answer choices and to make an educated guess from the rest?
(NOTE: I composed the question and the answer choices and I know some of you could improve upon my efforts!)
NOTE: The 2nd question is representative of the harder problems on the SATs and there are many of these in my new Challenge Math Problem/Quiz Book mentioned below.
No matter where you introduce a lesson on geometric sequences, we can always begin with definitions, rules and formulas.
The alternative is to build on student intuition and natural curiosity by asking them to write their own observations and questions they would like to have answered.
Imaginary Scenario (or is it?)
Jack: Mom, all the terms are just powers of 3 or their opposites, right?
Mom (Jane): Write your hypothesis, test it and let me know.
If your students or your son is not 15 year old Jack Andraka, here are some suggestions...
1. What are the next 3 terms?
2. If the 99th term is x, write an expression for the 100th term? (Recursive thinking)
3. Which terms are positive? Negative?
4. Write an expression for the nth term.
5. How would we graph the sequence?
6. Are the terms of the sequence increasing? Decreasing? Both? Neither?
7. Which terms of the sequence are greater than a million? A trillion? Less than -1000000?
Another Imaginary Scenario (or is it?)
Uh, show me where this topic is in the CCSSM.
Uh, where does it say I have to ask all these questions?
Jack who?
Sent from my Verizon Wireless 4GLTE Phone
I'm the host, you're the player.
I shuffle 3 cards, 2 of which have the word "LOSE" on them, one has "WIN".
You randomly select a card but you're not allowed to turn it over and I do not turn over my 2 cards.
AT THIS POINT, WHO IS MORE LIKELY TO HOLD THE WINNING CARD?
I look at my cards and reveal a losing card.
NOW, WHO IS MORE LIKELY TO HOLD THE WINNING CARD!
I ALLOW YOU TO SWITCH TO THE REMAINING FACE DOWN CARD. SHOULD YOU?
I WOULD!
Hey, I figured I'd try my "hand" at this classic too! An important point here is whether my model of the original puzzle is equivalent.
Your thoughts?
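One way to test whether this card version really matches the classic puzzle is a quick simulation; this sketch (not from the original post) models the host always revealing a losing card from his own pair.

```python
import random

def play(switch, trials=100_000):
    """Simulate the 3-card game; return the player's winning frequency."""
    wins = 0
    for _ in range(trials):
        cards = ["WIN", "LOSE", "LOSE"]
        random.shuffle(cards)
        player, host = cards[0], cards[1:]
        host.remove("LOSE")          # host reveals one of his losing cards
        final = host[0] if switch else player
        wins += (final == "WIN")
    return wins / trials

print(round(play(switch=False), 2))  # stays near 1/3
print(round(play(switch=True), 2))   # switching wins about 2/3 of the time
```

The simulation backs up the intuition: the player's original card keeps its 1/3 chance, so the remaining face-down card carries the other 2/3.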
Which growth rate will make the Hulk taller?
A growth of 60% for the year OR
1% growth per week?
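Assuming "1% per week" means weekly compounding over 52 weeks, a one-line check settles it:

```python
weekly = 1.01 ** 52       # 1% growth compounded for 52 weeks
print(round(weekly, 4))   # 1.6777, i.e. about 67.8% for the year -- beats 60%
```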
Fresh fruit is so expensive these days. I cannot find a _ _ _ _ _ _ _ _ _ _.
Fill in blanks, with two 5-letter words which are anagrams of each other.
First 3 correct answers to the math problem (with explanation) and the PolyAnagram will win my Challenge Math Book. Email me at dmarain at gmail dot com.
The x- and y-intercepts of a line are 2t^3 and 3t respectively. If the slope of a perpendicular line is 3/2, the positive value of t is ?
Ans: 3/2
1) I've received several thoughts re my PolyAnagrams. I'm a word puzzle fanatic as you might have guessed by now and I enjoy writing these. Let me know if you'd like to see more or restrict a math blog to math!
2) I'm actually thinking of writing 50 of these and offering it on Amazon for a couple of bucks. My question for my readers is, would you buy it?
3) I'm still frustrated by reviews of this blog that no one comments that it is essentially intended for teachers. I use the problems as a vehicle for deeper reflection about our practice. That's why
I usually ask a series of questions after the problem. Does anyone actually read these!
4) I noticed that my post about an explanation of one of my problems drew more readers than all others combined! Should I interpret that to mean that my readers want to see solutions more than
answers? Pls comment!
For how many pairs (x,y) of positive integers is 2x+3y<24?
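The post leaves the answer open; a brute-force count (a sketch, assuming x and y are both at least 1) gives 37:

```python
# Positive integer pairs with 2x + 3y < 24; x < 12 and y < 8 bound the search.
count = sum(1 for x in range(1, 12) for y in range(1, 8) if 2*x + 3*y < 24)
print(count)  # 37
```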
Ans to QuadAnagram:
Today's PentAnagram!
Complete the sentence with FIVE 4-letter words which are anagrams of each other.
Mr. Jones' students watched with ---- attention when he took a ----fall onto the ----. But this was just ---- of a ---- he was setting.
First, here's a restatement of yesterday's probability question :
Compare these 2 probabilities and explain method:
(a) Prob of rolling exactly 3 sixes in 5 rolls of a fair die.
(b) Prob of rolling exactly 3 sevens in 5 rolls of a pair of fair dice
Discussion :
Both are examples of binomial probability because they involve repeated independent trials each of which has 2 outcomes. The following explanation is intentionally detailed and 'repetitious'.
The prob of a 6 on each roll is 1/6. Each roll produces only 2 outcomes, either a 6 (prob=1/6) or not a 6 (prob = 5/6).
The prob of a 7 on each roll of a pair of dice is 6/36 or 1/6. Each roll of the pair has only 2 outcomes, either a 7 (prob=1/6) or not a 7 (prob=5/6).
Therefore, the probabilities of getting 3 successes in 5 trials is the same. Since the question asks for a comparison, we're done.
The actual prob is C(5,3)(1/6)^3•(5/6)^2 where C(5,3) is the 'MathNotation' for the number of ways of arranging 5 objects, one group of 3 identical objects and a separate group of 2 identical
objects. This is not the usual way of defining combinations but I like this interpretation.
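The value itself is easy to confirm in a few lines of exact arithmetic, with `math.comb` playing the role of C(5,3):

```python
from fractions import Fraction
from math import comb

# P(exactly 3 successes in 5 trials), success probability 1/6 per trial
p = comb(5, 3) * Fraction(1, 6)**3 * Fraction(5, 6)**2
print(p, float(p))  # 125/3888, about 0.0321
```

As argued above, the same number answers both (a) and (b), since a "7" with two dice is exactly as likely as a "6" with one.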
I guess the QuadAnagram was a bit challenging. Here's a hint for the ending:
...he's a bored L---R.
Email me at dmarain at gmail dot com with your answer.
Well if you tried yesterday's TriAnagram you know the rules. This time we're looking for FOUR 5-letter words to fill in the blanks. The words are all anagrams of each other.
John was so bored with being a ----- that he took his -----, went to the airport, saw his boss who was a regular ----- and now John is a bored -----.
Ok, some math..
Compare these 2 probabilities and explain method:
(a) Prob of rolling exactly 3 sixes in 5 rolls of a fair die.
(b) Prob of rolling exactly 3 sevens in 5 rolls of a pair of fair dice
We had 2 winners yesterday and each received my new New Math Challenge Book.
FIRST 3 TO SOLVE TODAY'S ANAGRAM AND MATH PUZZLE WILL RECEIVE MY BOOK AND THEIR NAME WILL BE PUBLISHED!
EMAIL ME AT dmarain at gmail dot com
Mark James is our first winner today and he already has received his prize! Two to go...
Charles Drake Poole is our 2nd winner!
Joshua Zucker is our 3rd and final winner! Congratulations! First, if you haven't seen my QuadAnagrams and TriAnagrams on Twitter, I'll start you off with a fairly easy Triple- or TriAnagram.
I opened my mouth ----- but my ----- braces still felt -----.
Object: Replace the dashes with 3 different 5-letter words which are anagrams of each other.
First 3 to email me at dmarain at gmail dot com with the solution to my TriAnagram and the unique property shared by 135 and 144 will receive a free copy of my new Math Challenge Problem Quiz Book.
Ok, back to asking your students the bigger question:
What makes 135 and 144 so special!
1) Have them work individually or in pairs?
2) Use calculator?
3) Get them started or ask someone for an idea?
4) What if they say 144 is a perfect square? Does the question imply that the properties must apply to both? Should I have made it clearer in the wording of the problem or is the word and sufficient
to convey that?
5) The really unusual property I'm looking for is only shared by 0,1,135 and 144. Good luck finding it!
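If a class wants to hunt by computer, here is one striking property (a spoiler, so stop reading if you'd rather find it by hand): at least up to 100,000, the only numbers equal to the sum of their digits times the product of their digits are 0, 1, 135 and 144. A sketch of the search:

```python
def sum_times_product(n):
    digits = [int(d) for d in str(n)]
    total, prod = sum(digits), 1
    for d in digits:
        prod *= d
    return total * prod

matches = [n for n in range(100_000) if n == sum_times_product(n)]
print(matches)  # [0, 1, 135, 144]
```

Check 135 by hand: (1 + 3 + 5) × (1 × 3 × 5) = 9 × 15 = 135, and for 144, 9 × 16 = 144.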
For exercise, a prisoner was chained to one corner (lower) of a 10 ft concrete cube located in the center of the yard. If the chain was 16 ft long and was not obstructed except for the cube, over how
many sq ft of ground could he roam?
Ans: 210π sq ft
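The 210π comes from three circle sectors: three quarters of a circle of radius 16 around the chained corner, plus a quarter circle of radius 16 − 10 = 6 around each of the two adjacent corners. A quick arithmetic check:

```python
from math import pi

# 3/4 circle of radius 16, plus two 1/4 circles of radius 6
roam = (3/4) * pi * 16**2 + 2 * (1/4) * pi * 6**2
print(roam / pi)  # 210.0
```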
1. Give the students the diagram or have them draw it themselves?
2. Have them work individually or in groups?
3. How much time would you give them to work on this in class?
4. After discussion, how would you know if they 'got' it? Assessment?
5. Makes more sense to give them a variant of the problem for HW or ask them to design their own and solve it?
A fairly common standardized test question for Algebra 1,2 or SATs is something like
The sum of 2 numbers is 20 and their product is 64. What is the larger number?
This question requires the student to actually find the numbers as opposed to a question with the same given info but asking for the positive difference of the numbers.
Do you suggest to students that many of these types of questions can be handled by inspection with mental math? This is because the majority of standardized math questions involve simple integer
values or adhere to the "Keep it Simple" philosophy!
From either of the given relationships students should be able to arrive at 16 and 4 as the values and proceed from there. For the 25% or so of questions which do not admit a simple solution there's
always straight algebra or the "test each answer choice" strategy for Multiple Choice. By the way this is why item writers often shy away from direct "solve for x" types, preferring the "find the
positive difference " type.
Please don't forget to make that critical connection to the graph of a linear-quadratic system. A quick sketch of the line x+y=20 and the rectangular hyperbola xy=64 suggests there are 2 pairs of
solutions which involve the same numbers by symmetry, i.e., (4,16) and (16,4).
Well, SATs are now over for this month but anytime we can exercise students' minds is not a waste of time IMO.
If x=2.76, what is the value of
(x-3)/(x-2) - (1-x)/(x-2)?
NO CALCULATORS - 30 sec...
(1) Would students think "there must be a trick here"?
(2) Do you see value in this quickie?
(3) It might be fun to have half the class use pencil, paper and calculator while other half does it mentally.
(4) Of course most students should be careful when doing standardized test questions so we're not advocating quick mental math methods for all questions!
Once students learn the strategy for doing these kinds of questions, the SAT and other standardized tests seem rather easy!
Find x^3+y^3
Ans: -350
(1) Before giving students this question you may wish to scaffold with finding xy first.
Ans: 45
(2) To promote connection-making and to deepen their thought processes, give them the answer -350 and ask:
(a) Without graphing. explain why the graphs of the 2 given eqns DO NOT INTERSECT!
(b) Then how can there be a solution!
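The given system appears to have been lost when this post was archived. Working backward from the stated answers (xy = 45, x³ + y³ = −350) and from hint (a), the givens were presumably x + y = 10 and x² + y² = 10; the sketch below assumes exactly that.

```python
# Assumed givens (reconstructed, not from the original post):
#   x + y = 10  and  x^2 + y^2 = 10
s, q = 10, 10
xy = (s**2 - q) // 2        # from (x+y)^2 = x^2 + 2xy + y^2
cubes = s**3 - 3 * s * xy   # from x^3 + y^3 = (x+y)^3 - 3xy(x+y)
print(xy, cubes)            # 45 -350
```

Under that assumption, (a) follows because the line x + y = 10 sits at distance 10/√2 ≈ 7.07 from the origin, outside the circle x² + y² = 10 of radius √10 ≈ 3.16; the "solutions" are a complex-conjugate pair, which resolves (b).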
Vi Hart must be having an effect on me! After proudly explaining for over 40 years why 0.9999... must equal 1 using the Density Property of the Reals (see my post Another Proof that 0.9999...=1), I
just had an epiphany of sorts.
If 0.9999...=1, then (0.9999...)^2 must also equal 1 from the properties of the reals. But squaring a finite string of 9's (with or without a decimal point) produces a fascinating result:
(0.9999)^2=0.99980001 etc...
This sequence of decimals seems to suggest the existence of a non-real number which differs from 1 by an infinitesimal amount, so-called hyperreal numbers, leading to the non-standard analysis of
Abraham Robinson. Who knows where the teaching of calculus might be today if Dr. Robinson had not died at the age of 55 from the disease that took my wife 2 months ago -- pancreatic cancer.
Well, maybe it's healthy to have one's roots shaken after many years. After all, my tag line for this blog for a couple of years involved how new ideas are often at first ridiculed, then vehemently opposed and finally accepted as obvious ...
NOTE: I omitted the hyperlinks in this article. I was getting too 'hyper'!
Here's another quick exploration for middle schoolers and beyond. I believe it builds mental math and number sense skills and more.
With your partner write as many "interesting" observations about the number 841 as you can in the next 5 minutes. Yes, calculators are permitted.
If they've learned the Pyth Thm, you may want to suggest afterward that 841 could be the square of the hypotenuse of a right triangle - let them find the 3 sides (unless a team comes up with that!). Talk about making connections!
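If no team spots it, the hypotenuse connection can be found by machine; 841 = 29², and a short search recovers the legs:

```python
# Find positive legs a <= b with a^2 + b^2 = 841 (= 29^2)
legs = [(a, b) for a in range(1, 30) for b in range(a, 30) if a*a + b*b == 841]
print(legs)  # [(20, 21)] -- the 20-21-29 right triangle
```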
How many positive integers less than 1000 have exactly
(a) 3 positive integer factors
Ans: 11
(b) 5 pos int factors
Ans: 3
(c) 7 factors
Ans: 2
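A brute-force check of all three answers (a sketch students could write themselves after tabulating divisor counts):

```python
def num_divisors(n):
    return sum(1 for d in range(1, n + 1) if n % d == 0)

hits = {k: [n for n in range(1, 1000) if num_divisors(n) == k] for k in (3, 5, 7)}
print({k: len(v) for k, v in hits.items()})  # {3: 11, 5: 3, 7: 2}
print(hits[5], hits[7])  # [16, 81, 625] [64, 729] -- prime powers p^4 and p^6
```

The pattern worth discovering: an odd divisor count forces a perfect square, and a prime count k forces a prime power p^(k-1).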
Is this topic in the middle school core standards? Under divisibility? Factors?
Have you seen questions like these on state tests? SATs?
What strategy would you like your 6th-8th graders to use? Assuming they don't know a 'rule' for this problem, how can they best discover a pattern? Would it make sense for students to make a 2-column
table of integers and number of factors?
Why am I addressing middle school curriculum when the title of this post refers to SATs?
Is this question not worth all the time it would consume?
Do you believe this question is only for the 'mathletes' who take math contests?
1) To show myself that I still can
2) To let my faithful readers and fellow/sister bloggers know that I'm back
3) To have that feeling of accomplishment seeing my posts ranked #1 on Alltop again
4) To keep busy and distract my mind from other thoughts
Don't worry if you can't keep up with my manic publishing pace. I will soon be slowing down!
Dave Marain
Ever wonder about practical applications of those 'some liquid is being drained from a conical tank' calculus problems?
Well, they do manufacture storage tanks with cylindrical tops and cone-shaped bottoms. Ask your students why, then share the following excerpt 'borrowed' from the website of a company which makes them:
"Cone bottoms provide for quick and complete drainage."
Alright already - enough motivation for a geometry problem! No calculus needed!
A conical storage tank with a maximum depth of 10 feet is completely filled with a chemical solution. Some of its contents are then drained from the bottom.
Ask your students:
(a) When depth of liquid falls to 5 ft, explain intuitively (no calculations) why much more than half the contents has drained out.
(b) Now for the geometry application...
What % of the total liquid has been drained when depth drops to 5 ft?
Ans: 87.5%
(c) (More challenging) What should depth be for tank to be half full? Give both one place approx and 'exact' answer.
Ans: approx 7.9 ft
I'll leave exact answer to my astute readers!
Note for instructor: You may want to explore different depths like 6', 7', 8' first to see how close we can come to half full.
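The arithmetic behind both answers uses the fact that liquid in a vertex-down cone of depth 10 occupies a similar cone, so its volume scales as (h/10)³:

```python
drained = 1 - (5 / 10)**3        # fraction drained when the depth is 5 ft
print(drained)                   # 0.875, i.e. 87.5%

half_depth = 10 / 2**(1/3)       # solve (h/10)^3 = 1/2; exactly 5 * 2^(2/3)
print(round(half_depth, 1))      # 7.9 ft
```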
WHAT ARE THE BIG IDEAS HERE?
DO YOU BELIEVE THIS CONCEPT IS ASSESSED ON SATs?
Show that the area of a 13-14-15 triangle is 84. Compute mentally - 30 seconds tick tick tick...
I'm being silly with the ticking clock but it is possible to do this if you choose the "right" base! Unless of course you can mentally apply Heron's formula which is doable! Ok, so there's more than
one way as always!
So what makes it special!? Somebody out there knows...
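Both routes in a few lines: the base of 14 splits the triangle into 5-12-13 and 9-12-15 right triangles (height 12), and Heron's formula agrees:

```python
from math import sqrt

print(0.5 * 14 * 12)  # 84.0 -- base 14, height 12

a, b, c = 13, 14, 15
s = (a + b + c) / 2
print(sqrt(s * (s - a) * (s - b) * (s - c)))  # 84.0 by Heron's formula
```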
If you like these challenges consider purchasing my new Math Challenge Problem/Quiz Book - 175 questions - SAT format - with answers. Go to top of right sidebar to order.
A diagonal of length x of a rectangle makes a 30° angle with the base.
(a) Show that the area of the rectangle is
(b) The formula in (a) is also the area of an equilateral triangle of side length x. What triangle is this the area of? Explain!
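The displayed formula in (a) seems to have been dropped in transcription; presumably it is (√3/4)x², since with the diagonal x at 30° to the base the sides of the rectangle are x cos 30° and x sin 30°:

```latex
A = (x\cos 30^\circ)(x\sin 30^\circ)
  = x^2 \cdot \frac{\sqrt{3}}{2} \cdot \frac{1}{2}
  = \frac{\sqrt{3}}{4}\,x^2
```

which is the familiar area formula for an equilateral triangle of side x.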
I just tweeted this flight of fancy...
The next time a student says, "When are we ever going to use this?", try
"If you're referring to your brain, I was thinking the same thing!"
The mean of 3^(m+2) and 3^(m+4) can be expressed as b•3^(m+3). If m>0, then b=?
Ans: 5/3
On an actual College Board test, this would likely be multiple choice and perhaps a bit easier, but a similar question appeared on the October 2008 exam.
Would you recommend to your students 'plugging in' say m=1?
Even if students avoid an algebraic approach, we as educators can still use this example to review exponent skills, yes?
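A plug-in check with exact arithmetic (any positive m gives the same b, since the factor 3^(m+3) cancels):

```python
from fractions import Fraction

for m in (1, 2, 3):
    b = Fraction(3**(m + 2) + 3**(m + 4), 2) / 3**(m + 3)
    print(m, b)  # 5/3 every time
```

Algebraically, the mean is 3^(m+2)(1 + 9)/2 = 5·3^(m+2) = (5/3)·3^(m+3).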
I took this picture of a section of the floor of the hospital where I volunteer and fortunately I wasn't dragged to the psych ward. Students see tiling patterns every day yet rarely think of applying their knowledge of geometry.
Assume each white square has side length 2 and that the shaded square is obtained by rotating one of the white squares 45 degrees.
Show that the overlap is a regular octagon of side length 2√2 - 2.
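A numeric check of the claimed side length: each corner of the overlap cuts s/√2 off the square's side of length 2, so s + 2(s/√2) = 2, giving s = 2/(1 + √2):

```python
from math import sqrt

s = 2 / (1 + sqrt(2))               # solve s + s*sqrt(2) = 2
print(round(s, 6))                  # 0.828427
print(round(2 * sqrt(2) - 2, 6))    # 0.828427 -- the same number
```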
If the number of left-handers is 11 1/9% of the number of right-handers, what % of the total population is left-handed? (disregard ambidextrous)
Questions for Middle School Teachers
1) At what grade level would this kind of problem be introduced?
2) Would you allow use of calculator here or expect students to change 11 1/9% to 100/9% and 111 1/9% to 1000/9%? More importantly, am I out of my mind to think that students at any grade level
including secondary would do this!
3) WHAT ARE THE BIG IDEAS HERE?
4) WHERE DOES THIS TYPE OF QUESTION FIT INTO CORE STANDARDS?
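The fraction arithmetic, done exactly: 11 1/9% is 100/9 out of 100, i.e. 1/9, so L = R/9 and L/(L + R) = 1/10:

```python
from fractions import Fraction

ratio = Fraction(100, 9) / 100     # 11 1/9% as an exact fraction: 1/9
left_share = ratio / (1 + ratio)   # L / (L + R), taking R = 1
print(left_share)                  # 1/10 -> 10% of the total population
```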
If p is prime, which of the following could be prime?
I. p+7
II. 4p^2-4p+1
III. p^2-p
(A) I only (B) II only (C) III only
(D) I,II,III (E) none
What KNOWLEDGE must middle/secondary students have to solve this? In what grade is this taught?
Ask students: If "could" was replaced by "must" would the answer change? Explain.
For homework, ask students to write their own version of this problem. You may get some awesome questions you can use later on!
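A scan over small primes supports answer (C): I is never prime (p = 2 gives 9, and any odd p makes p + 7 an even number greater than 2), II is the perfect square (2p − 1)², and III = p(p − 1) is prime exactly once, at p = 2:

```python
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

hits = {"I": [], "II": [], "III": []}
for p in (n for n in range(2, 500) if is_prime(n)):
    if is_prime(p + 7):             hits["I"].append(p)
    if is_prime(4*p*p - 4*p + 1):   hits["II"].append(p)   # = (2p - 1)^2
    if is_prime(p*p - p):           hits["III"].append(p)  # = p(p - 1)
print(hits)  # {'I': [], 'II': [], 'III': [2]}
```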
If a^2 = b^2 = c^2 = 4 and abc ≠ 0, how many different values are possible for a+b+c?
(A) 1 (B) 2 (C) 3 (D) 4 (E) 5
Ans: 4
For those of you who find this question trivial, remember that difficulty is very subjective.
What is (are) the BIG IDEA(S) here?
Can you predict which of your students will choose an algebraic approach vs "plugging in"?
Extension for students: Suppose we use 5 variables instead of 3.
Ans: 6
Generalize and explain!
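An exhaustive check of both counts; each variable is ±2, so the sum is determined by how many of them are positive:

```python
from itertools import product

def distinct_sums(k):
    return sorted({sum(v) for v in product((2, -2), repeat=k)})

print(distinct_sums(3))  # [-6, -2, 2, 6] -> 4 values
print(distinct_sums(5))  # [-10, -6, -2, 2, 6, 10] -> 6 values
```

In general, k variables give k + 1 distinct sums, one for each possible count of +2's.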
An equal number of Democrats and Republicans are locked in a room (at least 2 of each). If 2 are released at random, what is the probability that there will be one from each party? Remember, your
answer must be both mathematically and politically correct.
(a) What questions should your students ask before starting the problem? And if they don't..
(b) Is it worthwhile to give students 10 sec to make an intuitive guess?
(c) Do you think 1/2 will be intuitively guessed by a majority?
(d) What strategies do you want your students to use with an open-ended question like this?
(e) Would you have your students solve problem if there were originally 2 from each party, then, say, 3 from each?
(f) Show that if there are originally n from each party, the desired probability is n/(2n-1).
(g) As n increases beyond all bound...
(h) What do you see as the benefits of this inquiry?
(i) How would you extend this investigation? (j) How would you have done it differently with middle schoolers vs secondary students?
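Part (f) can be confirmed exactly for small n: there are n·n mixed pairs out of C(2n, 2) equally likely pairs, and the fraction reduces to n/(2n − 1):

```python
from fractions import Fraction
from math import comb

def p_mixed(n):
    """n Democrats and n Republicans; release 2 at random."""
    return Fraction(n * n, comb(2 * n, 2))

for n in (2, 3, 10, 1000):
    print(n, p_mixed(n))  # 2/3, 3/5, 10/19, 1000/1999 -> approaches 1/2
```

The printed values make (g) concrete: the probability decreases toward, but never reaches, 1/2.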
Twitter Problem posted 4-21-12
How many pos integers less than 1000 are not multiples of 7?
Middle school problem?
Strategies you teach your students?
Calculator appropriate?
"Big Ideas" here?
Ans: 857
Sketch of one possible method:
1000/7=142.857... ---> 142 multiples of 7 less than 1000 ---> 999-142 = 857 non-mult
The devil is in the details of course which I intentionally omitted! Why didn't I mention that the largest mult of 7 less than 1000 is 994? Would most solutions involve finding 994 first?
Someone out there is thinking about the repeating decimal expansion of 1/7 = 0.142857142857… and why the ans to our problem is 857. A coincidence?
Too bad we have no time in our classrooms to explore and go in depth. If we spend time doing that we'll never cover all the required topics in the Core Curriculum. Yes?
I saw a problem similar to the following on some website recently. The problem stayed in my head but not the site. If you recognize it, please comment so that I can provide proper attribution.
Circle I and circle II each of radius 10 are tangent to each other and to a common external tangent line T.
A square ABCD is drawn between the circles such that A,B are on circles I and II respectively and C,D are on line T.
(a) Draw the diagram from the above description.
(b) Show that a side of the square is 4.
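A coordinate check of part (b): put tangent line T on the x-axis, the centers at (±10, 10), and a symmetric square with top corners (±s/2, s). Then s = 4 puts corner B = (2, 4) exactly on circle II:

```python
s = 4
x, y = s / 2, s                       # corner B of the square
print((x - 10)**2 + (y - 10)**2)      # 100.0 = 10^2, so B lies on circle II
```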
I've posted the following geometry classic before but it seems relevant now with SATs and other standardized tests looming.
Given 2 concentric circles, segment AB is a chord of one and a tangent segm of the other. If AB=10, show that the pos difference of the areas of the circles CAN BE DETERMINED!
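The reason it can be determined (a sketch): the inner radius r drawn to the point of tangency is perpendicular to AB and bisects the chord, so r, the half-chord 5, and the outer radius R form a right triangle:

```latex
R^2 = r^2 + 5^2
\quad\Longrightarrow\quad
\pi R^2 - \pi r^2 = \pi\left(R^2 - r^2\right) = 25\pi
```

so the difference of the areas is 25π regardless of the individual radii.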
Posted on Twitter 4-14: I have 3 cards with a blue dot and 3 cards with a red dot. If I have no other cards, how many cards "do" I have?
Too easy for most secondary students?
Too ambiguous?
How would it be modified for SATs?
How would 3rd or 4th graders respond?
What do you think my underlying purpose is?
What are the "Big Ideas" here?
How would you present this in a 4th grade vs a 10th grade classroom?
After discussion how would you assess understanding?
A set of 5 playing cards consists of a 10, Jack, Queen, King and Ace. If 2 of the cards are chosen at random, what is the probability that neither card is a king nor an ace?
Ans: 3/10
Explain using at least 3 different methods!
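One of the three methods, counting directly by machine: the favorable pairs are exactly the C(3, 2) = 3 pairs drawn from {10, J, Q}:

```python
from itertools import combinations

cards = ["10", "J", "Q", "K", "A"]
good = [p for p in combinations(cards, 2) if "K" not in p and "A" not in p]
print(len(good), "of", len(list(combinations(cards, 2))))  # 3 of 10
```

The other methods students might offer: the complement (1 minus the probability of at least one K or A), or the product (3/5)(2/4) for two draws without replacement.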
Interested in seeing 175 more of these kinds of problems with answers? Look at the top right sidebar for info on my new Math Challenge Quiz/Problem Book...
Students taking May SAT MAY want to try today's Twitter problem I just posted at twitter.com/dmarain
If n is a positive integer, then the expression n(n+3) + (n+3)(n+8) must be divisible by
I. 2
II. 4
III. 8
This is a typical "cases" type but I omitted the usual choices like
(A) I only
Might be worth some discussion to consider more than the typical student's "plug-in" approach. That's why I added "EXPLAIN! "
There is some rich mathematics to be unearthed here IMO...
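Part of that rich mathematics: the expression factors as (n+3)(2n+8) = 2(n+3)(n+4), and (n+3)(n+4) is a product of consecutive integers, hence even, so the expression is always divisible by 4. A quick check, including the counterexample that rules out 8:

```python
def expr(n):
    return n * (n + 3) + (n + 3) * (n + 8)

# Factored form: (n+3)(2n+8) = 2(n+3)(n+4); the consecutive factors
# (n+3)(n+4) include an even number, so 4 divides expr(n) always.
for n in range(1, 200):
    assert expr(n) == 2 * (n + 3) * (n + 4)
    assert expr(n) % 4 == 0

print(expr(2))  # 60 -- divisible by 4 but not by 8, ruling out III
```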
Interested in 175 more of these types with answers? Try my new Math Challenge Problem/Quiz Book. Look at top of right sidebar.
Two wheels with diameters 18 and 8 are touching and are on level ground. Show that the diameter of a 3rd wheel on the ground which touches the other 2 is 2.88.
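One standard route (a sketch, not necessarily the intended method): two circles of radii a and b resting on a common line and tangent to each other have contact points 2√(ab) apart, which leads to 1/√r₃ = 1/√r₁ + 1/√r₂ for the small wheel nestled between them:

```python
import math

r1, r2 = 18 / 2, 8 / 2  # radii of the two given wheels

# Circles tangent to a common line and to each other have contact
# points 2*sqrt(product of radii) apart; the nestled wheel satisfies
#   2*sqrt(r1*r3) + 2*sqrt(r2*r3) = 2*sqrt(r1*r2),
# i.e. 1/sqrt(r3) = 1/sqrt(r1) + 1/sqrt(r2).
r3 = (1 / (1 / math.sqrt(r1) + 1 / math.sqrt(r2))) ** 2

print(2 * r3)  # ≈ 2.88
```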
For info on my NEW MATH CHALLENGE QUIZ/PROBLEM BOOK check top of right sidebar!
The answer may be "no" in some parallel universe but here on earth the title of this post is rhetorical.
So we show children a diagram of 4 identical pizzas each divided into 8 equal slices or for the younger set we have manipulatives. We would probably not use so many pieces when introducing this but I
needed an example which could also appear on the next state test.
We cross out or shade all the slices in 3 of the pizzas and 5 of the slices in the 4th pizza, representing what a group of kids ate.
What are the questions we ask or might appear in the text or on the worksheets or on the state mandated tests?
What do you believe are the major stumbling blocks for most children and what can we as educators or parents or tutors do to help?
Here are some thoughts...
Is the issue more conceptual or procedural?
How would you rank the importance of how each question is worded?
You want the answer to be both the improper fraction 29/8 and the mixed numeral 3 5/8. How should the questions be worded? Hey, there's no universal remedy here! Some children will misunderstand the
questions no matter how they're expressed or simply have not yet made sense of the ideas. BUT on an assessment the wording must be mathematically correct and age-appropriate, right?
How would you react to the child who responds 29/32? Is (s)he wrong? How could the question be asked so that this is correct? Is the child confused or was it the question itself?
Whether you're a 3rd grade teacher, a professor of math/math ed, a math staff developer or coordinator/administrator I hope you'll weigh in on this with your reflections and/or anecdotal experiences.
I consider this issue to be of vital importance in the development of the concepts and skills of fractions and part vs whole.
What do you think?
Share your teaching methods for the classic problem in the title:
If a hen and a half can lay an egg and a half in a day and a half, how many eggs can 3 hens lay in 3 days?
Would you ask students for an immediate intuitive guess and expect many to say 3?
The answer is 6 so the purpose of this post is reflection on sharing instructional strategies.
What are the BIG IDEAS here? Do you use one basic strategy in teaching all ratio problems? Does dimensional analysis work for middle schoolers? Should students always reduce everything to a single
unit ratio like 1 hen per day? How many ways could your students devise if we tell them the answer is not 3?
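One unit-rate approach, sketched with exact fractions:

```python
from fractions import Fraction

# 1.5 hens lay 1.5 eggs in 1.5 days.
rate = Fraction(3, 2) / (Fraction(3, 2) * Fraction(3, 2))  # eggs per hen per day

answer = rate * 3 * 3  # 3 hens laying for 3 days
print(answer)  # 6
```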
Not much sharing going on like in the old days of this blog, but maybe it's time...
Shameless self-promotion!
Consider buying my Challenge Math Problem book designed for standardized tests, Problems of the Day, etc. Go to top of right sidebar for more info.
This is that special time of the year when some districts, particularly in elementary grades, hand out practice materials for the test and teachers are expected to devote the majority of class time
to it.
Here's my question...
Does anyone out there feel that these materials seem to be somewhat different and of a higher level of difficulty than regular classroom materials/tests? If this is the case then what are the
implications for the child, the teacher and the district? I do have strong feelings about this but I'll wait for your comments first. WHAT ARE YOUR THOUGHTS?
Also I have to repeat a tweet I just saw from the brilliant Timandra Harkness from the UK. She made my day...
"I'm decorating my bathroom with those new fractal tiles. I think it's going to take forever."
Two thin cylindrical steel disks have diameters of 35 in and 25 in. The area of the base of the larger is what % more than the smaller?
We would hope juniors in Alg 2 or Precalc would know the basic setup for % more, % increase/decrease or % change types, particularly since this is a middle school concept. Of course we know this is
often not the case!
After having students work in small groups for a few minutes and watching them pushing calculator buttons you have someone come up and explain, asking questions and reviewing basics. As is typical,
some students will use the diameters instead of the radii and get the right answer anyway. What are the "BIG IDEAS" here?
Write on the board (35/25)^2 = 49/25, then 24/25 = 96%. No explanation. You give students in small groups 2 minutes to make sense of this and have 2 groups take turns explaining it to the class. A mental calculation?
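A quick check of the board work with approximate arithmetic:

```python
import math

d_large, d_small = 35, 25
area_large = math.pi * (d_large / 2) ** 2
area_small = math.pi * (d_small / 2) ** 2

# Percent more: (larger - smaller) / smaller. The pi and the
# radius-vs-diameter factor cancel, so (35/25)^2 - 1 gives the same value.
pct_more = (area_large - area_small) / area_small * 100

assert math.isclose(pct_more, ((35 / 25) ** 2 - 1) * 100)
print(pct_more)  # 96.0 (up to floating-point noise)
```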
Before you reflexively react to this with "Even some of my honors students would struggle with that," I would like my readers to reflect on our obligation to stretch their minds and promote conceptual understanding.
Of cou
In square ABCD of side 1, E is the point on diagonal AC such that AE=1.
(a) Explain without numerical calculation why
√2 < BE + DE < 2
(b) Show that BE+DE = 2(√(2-√2)) ≈ 1.531 without using Law of Cosines
(c) Be a math researcher! How might you generalize this?
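A numeric check of part (b), placing the square on coordinates (an illustration, not a substitute for the requested synthetic argument):

```python
import math

# Square ABCD of side 1 with A = (0,0); E lies on diagonal AC with AE = 1.
B, D = (1, 0), (0, 1)
E = (1 / math.sqrt(2), 1 / math.sqrt(2))  # unit distance from A along AC

def dist(P, Q):
    return math.hypot(P[0] - Q[0], P[1] - Q[1])

total = dist(B, E) + dist(D, E)
print(total)  # ≈ 1.5307

assert math.isclose(total, 2 * math.sqrt(2 - math.sqrt(2)))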
If interested in purchasing my NEW 2012 Math Challenge Problem/Quiz book, click on BUY NOW at top of right sidebar. 175 problems divided into 35 quizzes with answers at back. Suitable for SAT I, SAT
Math I/II, Math Contest practice or daily/weekly Problems of the Day. Questions include multiple choice, cases I/II/III type and constructed response. Price is $9.95 and secured pdf will be emailed
when purchase is verified. DON'T FORGET TO SEND ME AN EMAIL FIRST SO THAT I CAN SEND THE ATTACHMENT! (dmarain "at" "geemail dot com")
My beloved wife of 42 yrs passed away on 2-28-12 after battling pancreatic cancer over 8 months. She gave tirelessly to those in need all of her life and never asked for anything for herself. She can
finally have the rest she has earned. I, her 7 children, 4 grandchildren and all those whose lives she touched feel a gaping hole in our hearts...
Visit this link for rich applications of math:
The particular article in the link addresses dimensional analysis and fundamental science in relation to math. MOST IMPORTANTLY IT MAY ENGAGE EVEN THE LESS MOTIVATED!
Try it with your prealgebra students as an application of unit conversions, exponents, scientific notation and basic geometry concepts.
Students will need to convert mass (kg) to volume (liters) using the specific gravity of gold, then to cm^3. This will allow them to see how the world's estimated annual production of gold will
approximately fill up a cube 14 ft on an edge or, equivalently, a rectangular room 14 ft by 28 ft with 7 ft high ceilings.
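A sketch of the conversion chain with hypothetical inputs. The tonnage below is an assumed figure chosen to reproduce the article's 14-ft cube, not a sourced statistic; students should research the actual production data themselves:

```python
# Hypothetical inputs -- the tonnage is an assumed value, not real data.
mass_tonnes = 1500            # assumed annual gold production, metric tons
density_g_per_cm3 = 19.3      # specific gravity of gold (g per cm^3)

mass_g = mass_tonnes * 1_000_000
volume_cm3 = mass_g / density_g_per_cm3

edge_cm = volume_cm3 ** (1 / 3)
edge_ft = edge_cm / 30.48     # 1 ft = 30.48 cm exactly

print(round(edge_ft, 1))      # ~14 ft for this assumed tonnage
```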
By requiring students to research the data and conversion constants, you can integrate the web into the assignment.
A scientific calculator is an appropriate tool here, however, require students to WRITE all steps. It is very easy to lose track of the process when we do a series of 5 or more calculations. The
result will appear to be incorrect but the student will be hard pressed to find the error. It's human nature to press CLEAR and keep pressing more buttons until it works, then forget the exact
sequence of keystrokes!
Let me know if you try it and how it works out.
If interested in purchasing my NEW 2012 Math Challenge Problem/Quiz book, click on BUY NOW at top of right sidebar. 175 problems divided into 35 quizzes with answers at back. Suitable for SAT I, Math
I/II Subject Tests, Math Contests and Daily/Weekly Problems of the Day. Includes multiple choice, cases I/II/III type and constructed response items.
Price is $9.95. Secured pdf will be emailed when purchase is verified. DON'T FORGET TO SEND ME AN EMAIL (dmarain "at gmail dot com") FIRST SO THAT I CAN SEND THE ATTACHMENT!
DON'T FORGET TO VISIT ME ON TWITTER AT twitter.com/dmarain
A regular octagon is formed by cutting congruent isosceles right triangles from the corners of
a square of side 1. What is the length of a side of the octagon?
[Ans: ≈ 0.414; also give "exact" answer!]
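A quick verification of both the decimal and exact answers, setting the cut legs to x and equating the leftover side 1 − 2x with the cut hypotenuse x√2:

```python
import math

# Cut legs of length x from each corner of a unit square; a regular
# octagon requires the leftover side 1 - 2x to equal the hypotenuse x*sqrt(2).
x = 1 / (2 + math.sqrt(2))
side = 1 - 2 * x

assert math.isclose(side, x * math.sqrt(2))
assert math.isclose(side, math.sqrt(2) - 1)  # the exact answer
print(side)  # ≈ 0.414
```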
If interested in purchasing my new Math Challenge Problem/Quiz book, click on BUY NOW at top of right sidebar. 175 problems divided into 35 quizzes with answers at back. Suitable for SAT/Math Contest
practice or Problems of the Day/Week.
Price is $9.99 and secured pdf will be emailed when purchase is verified. DON'T FORGET TO SEND ME AN EMAIL FIRST SO THAT I CAN SEND THE ATTACHMENT!
"All Truth passes through Three Stages: First, it is Ridiculed... Second, it is Violently Opposed... Third, it is Accepted as being Self-Evident." - Arthur Schopenhauer (1788-1860) You've got to
be taught To hate and fear, You've got to be taught From year to year, It's got to be drummed In your dear little ear You've got to be carefully taught. --from South Pacific
First, congrats to the NY/NJ Giants on another extraordinary accomplishment!
Kudos to both teams and Eli-te and Tom B in particular.
My opinion, but it drives me nuts when I hear that the Miracle Catch 4 years ago was just luck. It wasn't luck that enabled David Tyree to jump higher than Rodney Harrison. It wasn't luck which
enabled him to hold onto the ball despite every attempt by the defender to rip the ball from his hand or his helmet. Not to mention Eli's "refuse to lose" attitude. The throw by Eli and catch by
Mario Manningham last night was the result of endless practice, skill, and a will that was stronger than the opposition's. Why was Manningham able to make the HIGH PRESSURE play but Welker was not? I
say it was DRILL, SKILL, WILL -- necessary ingredients for success in life!
Alright, you haven't read a blog post from me in eons so I cannot disappoint...
I do hope y'all have been following me on twitter.com/dmarain. I've tweeted numerous challenges for your students. Hope you've enjoyed them...
MATH CHALLENGE GRADES 3-8
LIST THE WAYS A FOOTBALL TEAM CAN SCORE 21 PTS. DO THE SAME FOR 17 PTS.
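A brute-force enumeration, assuming one common set of scoring values (safety 2, field goal 3, TD 6, TD + PAT 7, TD + two-point conversion 8); other scoring conventions would change the list:

```python
from itertools import product

# One common set of scoring plays, order of scores ignored:
# safety 2, field goal 3, TD 6, TD + PAT 7, TD + 2-pt conversion 8.
values = [2, 3, 6, 7, 8]

def ways(total):
    ranges = [range(total // v + 1) for v in values]
    return [counts for counts in product(*ranges)
            if sum(c * v for c, v in zip(counts, values)) == total]

for target in (21, 17):
    print(target, len(ways(target)))
```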
Repondez s'il vous plait! (supply your own accents!)
By the way, one of my daughter's friends had boxes of 9-0 and 9-3 in either order. She won $75 for the 1st quarter and $150 for quarter 2! My buddy had 0-8 in either order for the final score. If the
Giants kick the field goal making it 18-17 and the Patriots kick a field goal, making the final score 20-18, he would have won $2500! He's a big Giants fan but talk about mixed emotions!
| {"url":"https://mathnotations.blogspot.com/2012/","timestamp":"2024-11-08T13:44:14Z","content_type":"application/xhtml+xml","content_length":"414782","record_id":"<urn:uuid:8f5d6c1f-7c50-45eb-bff2-8a01b9a0febc>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00033.warc.gz"}
Improved dual-level direct dynamics method for reaction rate calculations with inclusion of multidimensional tunneling effects and validation for the reaction of H with trans-N<sub>2</sub>H<sub>2</sub>
A new scheme for carrying out dual-level direct dynamics calculations is presented in this paper. A better estimate of the barrier width is obtained by using the high-level imaginary frequency at the
saddle point as well as high-level values of the energies of three stationary points (i.e., reactants, products, and saddle point). Furthermore, a more robust formula is introduced for incorporating
high-level vibrational frequency corrections on the generalized normal modes along the reaction path. Incorporating these improvements, we carry out dual-level calculations of the reaction rate of H
+ N₂H₂ → H₂ + N₂H by employing variational transition-state theory with optimized multidimensional tunneling. Dual-level calculations at the level of zero-curvature tunneling (ZCT) show
excellent agreement with an earlier calculation involving high-level computations at 11 times as many geometries. Having validated the dual-level approach at the ZCT level, we next extend the
dual-level calculations to include small-curvature, large-curvature, and optimized multidimensional tunneling approximations. Four choices of low-level surface are used to gauge the sensitivity to
these choices.
| {"url":"https://experts.umn.edu/en/publications/improved-dual-level-direct-dynamics-method-for-reaction-rate-calc","timestamp":"2024-11-07T17:54:22Z","content_type":"text/html","content_length":"54030","record_id":"<urn:uuid:f5035035-4c15-4429-80d6-efbbd470c1a4>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00892.warc.gz"}
The Matrix Biology Definition of Darwin’s Matrices
Even the Matrix Biology Definition is a set of guidelines for choosing involving Darwin matrices. The sentence is a matrix is an collection of equations by which the variables just about every should
always be multiplied by each one of the other individuals. By matriceswe signify any assortment of equations which often can be utilised to compute the cure into a linear equation. The Matrix
Biological Definition says any matrices that almost always do not possess some in the letters I, U, V, and W needs to be excluded.
The Matrix Biology Definition promises that those people policies won’t be able to be broken . The fact is that there are. As a substitute, they crossed out or can not at any time be researched.
Despite the fact that you are going to see a large number of other individuals, the expression”matrix” is used as it isn’t utilized even when within the sphere of evolutionary biology. It’s purely
the distinguish of this meaning employed by laptop builders and geologists to reflect a program of the”matrix” might be an assortment of arrays usedto become a symbol of factors or even the not known
The Matrix Biology Definition Might be Described as the Wildtype Definition Biology. That really is as it’s utilized to help identify no matter if a gene was typed in a type organism. It is not
really the matrix utilised in evolutionary biology. It is the most ordinarily used matrices in evolutionary sciences.
The Matrix Definition states that U, V, and W need to be integrated. This expression is described as the Matrix Definition Biology Considering accounting homework help that the Matrix Biology
Definition is exactly what will likely be utilised in literary biology. ” the Matrix Definition is usually referred to.
We are aware that type organisms ordinarily do not take advantage of the Matrix BiologyDefinition. They have the only wild multitude and also at each copy quantities. For picking out somewhere
between Darwin matrices other laws could be used by them, on the other hand, those who are probably not matrices. They are only portions of specimens that are distinct.
You’ll find it particularly necessary to take note that uncontrolled sort organisms don’t use the precise Matrix Biology Definition. Individuals copies may perhaps improve, just relish the two of the
copies can range, even if they have 3 duplicates.
” the Matrix Biology Definition is important to this biology’s mathematical overall performance. It is instead hard to be able of applying math to decide on from Darwin’s matrices.
Receives got the course of action of utilizing the team techniques, or even just and so the matrix economics definition. The typical group techniques are very complicated, but they are not relevant
in the corporation of evolutionary biology.
Can go in addition to most of the matrix arithmetic definition to benefit it turn out to be effortless to work out matrix solutions to a couple of dilemmas in biology. This absolutely may perhaps be
the sole will mean to avert implementing classification solutions.
Even the Matrix Biology Definition is just a word participate in by laptop programmers. It can be simply a word produced by programmers so as to confuse people in the particular this means
of”matrix”. As a result, which the matrices will be scientific, in spite of this in addition they will likely not be just simply clarified by a phrase.
| {"url":"https://utexavantes.com.br/the-matrix-biology-definition-of-darwins-matrices/","timestamp":"2024-11-08T12:17:37Z","content_type":"text/html","content_length":"39285","record_id":"<urn:uuid:64ade5df-5b3e-4ddb-aaa6-0ba1ce72f951>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00250.warc.gz"}
Lesson 7
From Parallelograms to Triangles
7.1: Same Parallelograms, Different Bases (5 minutes)
This warm-up reinforces students’ understanding of bases and heights in a parallelogram. In previous lessons, students calculated areas of parallelograms using bases and heights. They have
also determined possible bases and heights of a parallelogram given a whole-number area. They saw, for instance, that finding possible bases and heights of a parallelogram with an area of 20 square
units means finding two numbers with a product of 20. Students extend that work here by working with decimal side lengths and area.
As students work, notice students who understand that the two identical parallelograms have equal area and who use that understanding to find the unknown base. Ask them to share later.
Give students 2 minutes of quiet work time and access to their geometry toolkits.
Students should be adequately familiar with bases and heights to begin the warm-up. If needed, however, briefly review the relationship between a pair of base and height in a parallelogram, using
questions such as:
• “Can we use any side of a parallelogram as a base?” (Yes.)
• “Is the height always the length of one of the sides of the parallelogram?” (No.)
• “Once we have identified a base, how do we identify a height?” (It can be any segment that is perpendicular to the base and goes from the base to the opposite side.)
• “Can a height segment be drawn outside of a parallelogram?” (Yes.)
Student Facing
Here are two copies of a parallelogram. Each copy has one side labeled as the base \(b\) and a segment drawn for its corresponding height and labeled \(h\).
1. The base of the parallelogram on the left is 2.4 centimeters; its corresponding height is 1 centimeter. Find its area in square centimeters.
2. The height of the parallelogram on the right is 2 centimeters. How long is the base of that parallelogram? Explain your reasoning.
Anticipated Misconceptions
Some students may not know how to begin answering the questions because measurements are not shown on the diagrams. Ask students to label the parallelograms based on the information in the task statement.
Students may say that there is not enough information to answer the second question because only one piece of information is known (the height). Ask them what additional information might be needed.
Prompt them to revisit the task statement and see what it says about the two parallelograms. Ask what they know about the areas of two figures that are identical.
Students may struggle to find the unknown base in the second question because the area of the parallelogram is a decimal and they are unsure how to divide a decimal. Ask them to explain how they
would reason about it if the area was a whole number. If they understand that they need to divide the area by 2 (since the height is 2 cm), see if they could reason in terms of multiplication (i.e.,
2 times what number is 2.4?) or if they could reason about the division using fractions (i.e., 2.4 can be seen as \(2\frac{4}{10}\) or \(\frac {24}{10}\); what is 24 tenths divided by 2?).
Activity Synthesis
Select 1–2 previously identified students to share their responses. If not already explained by students, emphasize that we know the parallelograms have the same area because they are identical,
which means that when one is placed on top of the other they would match up exactly.
Before moving on, ask students: “How can we verify that the height we found is correct, or that the two pairs of bases and heights produce the same area?" (We can multiply the values of each pair and
see if they both produce 2.4.)
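A quick arithmetic check of the synthesis point:

```python
# Left copy: base 2.4 cm, height 1 cm.
area = 2.4 * 1

# The copies are identical, so the right copy (height 2 cm) has the
# same area; its base is area / height.
base_right = area / 2
print(base_right)  # 1.2

assert base_right * 2 == area  # both base-height pairs produce 2.4
```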
7.2: A Tale of Two Triangles (Part 1) (15 minutes)
In earlier lessons, students saw that a square can be decomposed into two identical isosceles right triangles. They concluded that the area of each of those triangles is half of the area of the
square. They used this observation to determine the area of composite regions.
This activity helps students see that parallelograms other than squares can also be decomposed into two identical triangles by drawing a diagonal. They check this by tracing a triangle on tracing
paper and then rotating it to match the other copy. The process prepares students to see any triangle as occupying half of a parallelogram, and consequently, as having one half of its area. To
generalize about quadrilaterals that can be decomposed into identical triangles, students need to analyze the features of the given shapes and look for structure (MP7).
There are a number of geometric observations in this unit that must be taken for granted at this point in students' mathematical study. This is one of those instances. Students have only seen
examples of a parallelogram being decomposable into two copies of the same triangle, or have only verified this conjecture through physical experimentation, but for the time being it can be
considered a fact. Starting in grade 8, they will begin to prove some of the observations they have previously taken to be true.
Arrange students in groups of 3–4. Give students access to geometry toolkits and allow for 2 minutes of quiet think time for the first two questions. Then, ask them to share their drawings with their
group and discuss how they drew their lines. If group members disagree on whether a quadrilateral can be decomposed into two identical triangles, they should note the disagreement, but it is not
necessary to come to an agreement. They will soon have a chance to verify their responses.
Next, ask students to use tracing paper to check that the pairs of triangles that they believe to be identical are indeed so (i.e., they would match up exactly if placed on top of one another). Tell
students to divide the checking work among the members of their group to optimize time.
Though students have worked with tracing paper earlier in the unit, some may not recall how to use it to check the congruence of two shapes; some explicit guidance might be needed. Encourage students
to work carefully and precisely. A straightedge can be used in tracing but is not essential and may get in the way. Once students finish checking the triangles in their list and verify that they are
identical (or correct their initial response), ask them to answer the last question.
Students using the digital activity can decompose the shapes using an applet. Encourage students to use the segment tool rather than free-drawing a segment to divide the shapes.
Representation: Internalize Comprehension. Represent the same information through different modalities by providing access to a hands-on alternative. To determine which quadrilaterals can be
decomposed into two identical triangles, some students may benefit from enlarged cut-outs of the quadrilaterals that they can manipulate, fold, or cut.
Supports accessibility for: Conceptual processing; Visual-spatial processing
Conversing, Representing: MLR2 Collect and Display. To help students reason about and use the mathematical language of decompose, diagonal, and identical, listen to students talk about how they are
making their drawings. Record and display common or important phrases you hear students say as well as examples of their drawings. Continue to update collected student language throughout the lesson,
and remind students to borrow language from the display as needed.
Design Principle(s): Support sense-making; Maximize meta-awareness
Student Facing
Two polygons are identical if they match up exactly when placed one on top of the other.
1. Draw one line to decompose each polygon into two identical triangles, if possible. If you choose to, you can also draw the triangles.
2. Which quadrilaterals can be decomposed into two identical triangles?
Pause here for a small-group discussion.
3. Study the quadrilaterals that were, in fact, decomposable into two identical triangles. What do you notice about them? Write a couple of observations about what these quadrilaterals have in common.
Student Facing
Are you ready for more?
Draw some other types of quadrilaterals that are not already shown. Try to decompose them into two identical triangles. Can you do it? Come up with a general rule about what must be true if a
quadrilateral can be decomposed into two identical triangles.
Anticipated Misconceptions
It may not occur to students to rotate triangles to check congruence. If so, tell students that we still consider two triangles identical even when one needs to be rotated to match the other.
Activity Synthesis
The discussion should serve two goals: to highlight how quadrilaterals can be decomposed into triangles and to help students make generalizations about the types of quadrilaterals that can be
decomposed into two identical triangles. Consider these questions:
• How did you decompose the quadrilaterals into two triangles? (Connect opposite vertices, i.e. draw a diagonal.)
• Did the strategy of drawing a diagonal work for decomposing all quadrilaterals into two triangles? (Yes.) Are all of the resulting triangles identical? (No.)
• What is it about C and E that they cannot be decomposed into two identical triangles? (They don't have equal sides or equal angles. Their opposite sides are not parallel.)
• What do A, B, and D have that C and E do not? (A, B, and D have two pairs of parallel sides that are of equal lengths. They are parallelograms.)
Ask students to complete this sentence starter: For a quadrilateral to be decomposable into two identical triangles it must be (or have) . . .
If time permits, discuss how students verified the congruence of the two triangles.
• How did you check if the triangles are identical? Did you simply stack the traced triangle or did you do something more specific? (They may notice that it is necessary to rotate one triangle—or
to reflect one triangle twice—before the triangles could be matched up.)
• Did anyone use another way to check for congruence? (Students may also think in terms of the parts or composition of each triangle. E.g. “Both triangles have all the same side lengths; they both
have a right angle”).
7.3: A Tale of Two Triangles (Part 2) (15 minutes)
Previously, students decomposed quadrilaterals into two identical triangles. The work warmed them to the idea of a triangle as a half of a familiar quadrilateral. This activity prompts them to think
the other way—to compose quadrilaterals using two identical triangles. It helps students see that two identical triangles of any kind can always be joined to produce a parallelogram. Both
explorations prepare students to make connections between the area of a triangle and that of a parallelogram in the next lesson.
A key understanding to uncover here is that two identical copies of a triangle can be joined along any corresponding side to produce a parallelogram, and that more than one parallelogram can be formed from the same pair of triangles.
As students work, look for different compositions of the same pair of triangles. Select students using different approaches to share later.
When manipulating the cutouts students are likely to notice that right triangles can be composed into rectangles (and sometimes squares) and that non-right triangles produce parallelograms that are
not rectangles. Students may not immediately recall that squares and rectangles are also parallelograms. Consider preparing a reference for students to consult. Here is an example:
As before, students make generalizations here that they don't yet have the tools to justify. This is appropriate at this stage. Later in their mathematical study they will learn to verify what
they now take as facts.
For students using the digital activity, an applet can be used to compose triangles into other shapes.
Keep students in the same groups. Give each group one set of triangles labeled P–U (two copies of each triangle) from the blackline master and access to scissors if the triangles are not pre-cut. The
set includes different types of triangles (isosceles right, scalene right, obtuse, acute, and equilateral). Ask each group member to take 1–2 pairs of triangles.
Reiterate that students learned that certain types of quadrilaterals can be decomposed into two identical triangles. Explain that they will now see if it is possible to compose quadrilaterals out of
two identical triangles, and, if so, to find out what types of quadrilaterals would result.
Give students 1–2 minutes of quiet work time, and then 5 minutes to discuss their responses and answer the second question with their group.
Conversing: MLR8 Discussion Supports. To reinforce use of the language that students have previously learned about quadrilaterals, create and display the reference chart, as described, for students
to consult. Use this display to help students visualize the different types of quadrilaterals. Ask students to discuss with a partner, “What is the same and different about the different types of
quadrilaterals?” Tell students to take turns sharing what they notice or remember from previous lessons, then call on different groups to share what they notice with the whole class. When recording
and displaying students’ observations, listen for opportunities to re-voice the mathematical terms that students used.
Design Principle(s): Support sense-making; Cultivate conversation
Student Facing
This applet has eight pairs of triangles. Each group member should choose 1–2 pairs of triangles. Use them to help you answer the following questions.
1. Which pair(s) of triangles do you have? Can each pair be composed into a rectangle? A parallelogram?
2. Discuss your responses to the first question with your group. Then, complete each of the following statements with all, some, or none. Sketch 1–2 examples to illustrate each completed statement.
1. ________________ of these pairs of identical triangles can be composed into a rectangle.
2. ________________ of these pairs of identical triangles can be composed into a parallelogram.
Student Facing
Your teacher will give your group several pairs of triangles. Each group member should take 1 or 2 pairs.
1. Which pair(s) of triangles do you have? Can each pair be composed into a rectangle? A parallelogram?
2. Discuss with your group your responses to the first question. Then, complete each statement with All, Some, or None. Sketch 1 or 2 examples to illustrate each completed statement.
1. ________________ of these pairs of identical triangles can be composed into a rectangle.
2. ________________ of these pairs of identical triangles can be composed into a parallelogram.
Anticipated Misconceptions
Students may draw incorrect conclusions if certain pieces of their triangles are turned over (to face down), or if it did not occur to them that the pieces could be moved. Ask them to try
manipulating the pieces in different ways.
Seeing that two copies of a triangle can always be composed into a parallelogram, students might mistakenly conclude that any two copies of a triangle can only be composed into a parallelogram (i.e.,
no other quadrilaterals can be formed from joining two identical triangles). Showing a counterexample may be a simple way to help students see that this is not the case.
Activity Synthesis
The focus of this discussion would be to clarify whether or not two copies of each triangle can be composed into a rectangle or a parallelogram, and to highlight the different ways two triangles
could be composed into a parallelogram.
Ask a few students who composed different parallelograms from the same pair of triangles to share. Invite the class to notice how these students ended up with different parallelograms. To help them
see that a triangle can be joined along any side of its copy to produce a parallelogram, ask questions such as:
• Here is one way of composing triangles S into a parallelogram. Did anyone else do it this way? Did anyone obtain a parallelogram a different way?
• How many different parallelograms can be created with any two copies of a triangle? Why? (3 ways, because there are 3 sides along which the triangles could be joined.)
• What kinds of triangles can be used to compose a rectangle? How? (Right triangles, by joining two copies along the side opposite of the right angle.)
• What kinds of triangles can be used to compose a parallelogram? How? (Any triangle, by joining two copies along any side with the same length.)
Lesson Synthesis
Display and revisit representative works from the two main activities. Draw out key observations about the special connections between triangles and parallelograms.
First, we tried to decompose or break apart quadrilaterals into two identical triangles.
• “What strategy allowed us to do that?” (Drawing a segment connecting opposite vertices.)
• “Which types of quadrilaterals could always be decomposed into two identical triangles?” (Parallelograms.)
• “Can quadrilaterals that are not parallelograms be decomposed into triangles?” (Yes, but the resulting triangles may not be identical.)
Then, we explored the relationship between triangles and quadrilaterals the other way around. We tried to compose or create quadrilaterals from pairs of identical triangles.
• “What types of quadrilaterals were you able to compose with a pair of identical triangles?” (Parallelograms—some of them are rectangles.)
• “Does it matter what type of triangles was used?” (No. Any two copies of a triangle could be composed into a parallelogram.)
• “Was there a particular side along which the two triangles must be joined to form a parallelogram?” (No. Any of the three sides could be used.)
We saw how two identical copies of a triangle can be combined to make a parallelogram. This is true for any triangle. The reverse is also true: any parallelogram can be split into two identical
triangles. In grade 8 we will acquire some tools to prove these observations. For now, we will take the special relationships between triangles and parallelograms as a fact. We will use them to find
the area of any triangle in upcoming lessons.
7.4: Cool-down - A Tale of Two Triangles (Part 3) (5 minutes)
Student Facing
A parallelogram can always be decomposed into two identical triangles by a segment that connects opposite vertices.
Going the other way around, two identical copies of a triangle can always be arranged to form a parallelogram, regardless of the type of triangle being used.
To produce a parallelogram, we can join a triangle and its copy along any of the three sides, so the same pair of triangles can make different parallelograms.
Here are examples of how two copies of both Triangle A and Triangle F can be composed into three different parallelograms.
This special relationship between triangles and parallelograms can help us reason about the area of any triangle.
Optimal Partitioning for Parallel Matrix Computation on a Small Number of Abstract Heterogeneous Processors
Title: Optimal Partitioning for Parallel Matrix Computation on a Small Number of Abstract Heterogeneous Processors
Publication Type: Thesis
Year of Publication: 2014
Authors: DeFlumere, A.
Thesis Type: PhD
Advisor: Lastovetsky, A.
Academic Unit: School of Computer Science and Informatics
University: University College Dublin
City: Dublin
Number of Pages: 161
Date: 09/2014
Abstract: High Performance Computing (HPC) has grown to encompass many new architectures and algorithms. The Top500 list, which ranks the world's fastest supercomputers every six months, shows this trend towards a variety of heterogeneous architectures - particularly multicores and general purpose Graphical Processing Units (GPUs). Heterogeneity, whether it is in computational power or communication interconnect, provides new challenges in programming and algorithm development. The general trend has been to adapt algorithms used on homogeneous parallel systems for use in the new heterogeneous parallel systems. However, assumptions carried over from those homogeneous systems are not always applicable to heterogeneous systems. Linear algebra matrix operations are widely used in scientific computing and are an area of significant HPC study. To parallelise matrix operations over many nodes in an HPC system, each processor is given a section of the matrix to compute. These sections are collectively called the data partition. Linear algebra operations, such as matrix-matrix multiplication (MMM) and LU factorisation, use data partitioning based on the original homogeneous algorithms. Specifically, each processor is assigned a rectangular sub-matrix. The primary motivation of this work is to question whether the rectangular data partitioning is optimal for heterogeneous systems. This thesis will show the rectangular data partitioning is not universally optimal when applied to the heterogeneous case. The major contribution will be a new method for creating optimal data partitions, called the Push Technique. This method is used to make small, incremental changes to a data partition, while guaranteeing not to worsen it. The end result is a small number of potentially optimal data partitions, called candidates. These candidates are then analysed for differing numbers of processors and topologies. The validity of the Push Technique is verified analytically and experimentally. The optimal data partition for matrix operations is found for systems of two and three heterogeneous processors, including differing communication topologies. A methodology is outlined for applying the Push Technique to matrix computations other than MMM, such as LU Factorisation, and for larger numbers of heterogeneous processors.
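As background to the partitioning question, the conventional approach assigns each processor an amount of the matrix proportional to its relative speed. A minimal sketch of such a speed-proportional row partition (illustrative only; this is the baseline approach the thesis questions, not the Push Technique itself):

```python
def partition_rows(n_rows, speeds):
    """Assign contiguous row blocks proportional to each processor's
    relative speed, using largest-remainder rounding so the block
    sizes sum exactly to n_rows."""
    total = sum(speeds)
    exact = [n_rows * s / total for s in speeds]
    counts = [int(e) for e in exact]
    # Hand any leftover rows to the largest fractional remainders
    leftover = n_rows - sum(counts)
    for i in sorted(range(len(speeds)),
                    key=lambda i: exact[i] - counts[i],
                    reverse=True)[:leftover]:
        counts[i] += 1
    return counts

print(partition_rows(10, [3, 1, 1]))   # → [6, 2, 2]
```

The heterogeneous setting studied in the thesis asks whether such rectangular, speed-proportional partitions remain optimal once communication topology is taken into account.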
What’s Really Going On in Machine Learning? Some Minimal Models
The Mystery of Machine Learning
It’s surprising how little is known about the foundations of machine learning. Yes, from an engineering point of view, an immense amount has been figured out about how to build neural nets that do
all kinds of impressive and sometimes almost magical things. But at a fundamental level we still don’t really know why neural nets “work”—and we don’t have any kind of “scientific big picture” of
what’s going on inside them.
The basic structure of neural networks can be pretty simple. But by the time they’re trained up with all their weights, etc. it’s been hard to tell what’s going on—or even to get any good
visualization of it. And indeed it’s far from clear even what aspects of the whole setup are actually essential, and what are just “details” that have perhaps been “grandfathered” all the way from
when computational neural nets were first invented in the 1940s.
Well, what I’m going to try to do here is to get “underneath” this—and to “strip things down” as much as possible. I’m going to explore some very minimal models—that, among other things, are more
directly amenable to visualization. At the outset, I wasn’t at all sure that these minimal models would be able to reproduce any of the kinds of things we see in machine learning. But, rather
surprisingly, it seems they can.
And the simplicity of their construction makes it much easier to “see inside them”—and to get more of a sense of what essential phenomena actually underlie machine learning. One might have imagined
that even though the training of a machine learning system might be circuitous, somehow in the end the system would do what it does through some kind of identifiable and “explainable” mechanism. But
we’ll see that in fact that’s typically not at all what happens.
Instead it looks much more as if the training manages to home in on some quite wild computation that “just happens to achieve the right results”. Machine learning, it seems, isn’t building structured
mechanisms; rather, it’s basically just sampling from the typical complexity one sees in the computational universe, picking out pieces whose behavior turns out to overlap what’s needed. And in a
sense, therefore, the possibility of machine learning is ultimately yet another consequence of the phenomenon of computational irreducibility.
Why is that? Well, it’s only because of computational irreducibility that there’s all that richness in the computational universe. And, more than that, it’s because of computational irreducibility
that things end up being effectively random enough that the adaptive process of training a machine learning system can reach success without getting stuck.
But the presence of computational irreducibility also has another important implication: that even though we can expect to find limited pockets of computational reducibility, we can’t expect a
“general narrative explanation” of what a machine learning system does. In other words, there won’t be a traditional (say, mathematical) “general science” of machine learning (or, for that matter,
probably also neuroscience). Instead, the story will be much closer to the fundamentally computational “new kind of science” that I’ve explored for so long, and that has brought us our Physics
Project and the ruliad.
In many ways, the problem of machine learning is a version of the general problem of adaptive evolution, as encountered for example in biology. In biology we typically imagine that we want to
adaptively optimize some overall “fitness” of a system; in machine learning we typically try to adaptively “train” a system to make it align with certain goals or behaviors, most often defined by
examples. (And, yes, in practice this is often done by trying to minimize a quantity normally called the “loss”.)
And while in biology there’s a general sense that “things arise through evolution”, quite how this works has always been rather mysterious. But (rather to my surprise) I recently found a very simple
model that seems to do well at capturing at least some of the most essential features of biological evolution. And while the model isn’t the same as what we’ll explore here for machine learning, it
has some definite similarities. And in the end we’ll find that the core phenomena of machine learning and of biological evolution appear to be remarkably aligned—and both fundamentally connected to
the phenomenon of computational irreducibility.
Most of what I’ll do here focuses on foundational, theoretical questions. But in understanding more about what’s really going on in machine learning—and what’s essential and what’s not—we’ll also be
able to begin to see how in practice machine learning might be done differently, potentially with more efficiency and more generality.
Traditional Neural Nets
Note: Click any diagram to get Wolfram Language code to reproduce it.
To begin the process of understanding the essence of machine learning, let’s start from a very traditional—and familiar—example: a fully connected (“multilayer perceptron”) neural net that’s been
trained to compute a certain function f[x]:
If one gives a value x as input at the top, then after “rippling through the layers of the network” one gets a value at the bottom that (almost exactly) corresponds to our function f[x]:
Scanning through different inputs x, we see different patterns of intermediate values inside the network:
And here’s (on a linear and log scale) how each of these intermediate values changes with x. And, yes, the way the final value (highlighted here) emerges looks very complicated:
So how is the neural net ultimately put together? How are these values that we’re plotting determined? We’re using the standard setup for a fully connected multilayer network. Each node (“neuron”) on
each layer is connected to all nodes on the layer above—and values “flow” down from one layer to the next, being multiplied by the (positive or negative) “weight” (indicated by color in our pictures)
associated with the connection through which they flow. The value of a given neuron is found by totaling up all its (weighted) inputs from the layer before, adding a “bias” value for that neuron, and
then applying to the result a certain (nonlinear) “activation function” (here ReLU or Ramp[z], i.e. If[z < 0, 0, z]).
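The layer-by-layer computation just described can be sketched in a few lines. (The essay's own examples use Wolfram Language; this is an illustrative Python version, with a made-up architecture and random weights standing in for trained ones.)

```python
import random

def relu(z):
    # ReLU activation: If[z < 0, 0, z]
    return max(z, 0.0)

def forward(x, weights, biases):
    """Fully connected net: each neuron totals its weighted inputs from
    the previous layer, adds its bias, and applies the activation
    function.  The final output neuron is left linear."""
    activations = [x]
    for layer_index, (W, b) in enumerate(zip(weights, biases)):
        last = layer_index == len(weights) - 1
        z = [sum(w_ij * a for w_ij, a in zip(row, activations)) + b_i
             for row, b_i in zip(W, b)]
        activations = z if last else [relu(v) for v in z]
    return activations[0]

# Hypothetical architecture: 1 input, two hidden layers of width 4, 1 output
random.seed(0)
sizes = [1, 4, 4, 1]
weights = [[[random.gauss(0, 1) for _ in range(n)] for _ in range(m)]
           for n, m in zip(sizes, sizes[1:])]
biases = [[random.gauss(0, 1) for _ in range(m)] for m in sizes[1:]]
print(forward(0.5, weights, biases))
```

Scanning `forward` over a range of x values would trace out the kind of function plots shown in the figures.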
What overall function a given neural net will compute is determined by the collection of weights and biases that appear in the neural net (along with its overall connection architecture, and the
activation function it’s using). The idea of machine learning is to find weights and biases that produce a particular function by adaptively “learning” from examples of that function. Typically we
might start from a random collection of weights, then successively tweak weights and biases to “train” the neural net to reproduce the function:
We can get a sense of how this progresses (and, yes, it’s complicated) by plotting successive changes in individual weights over the course of the training process (the spikes near the end come from
“neutral changes” that don’t affect the overall behavior):
The overall objective in the training is progressively to decrease the “loss”—the average (squared) difference between true values of f[x] and those generated by the neural net. The evolution of the
loss defines a “learning curve” for the neural net, with the downward glitches corresponding to points where the neural net in effect “made a breakthrough” in being able to represent the function
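The loss as described is just a mean squared difference between the target function and the network's output; a minimal sketch (the sample points and the "network" here are arbitrary stand-ins):

```python
def loss(f_true, f_net, xs):
    """Average squared difference between the true values of f[x]
    and those generated by the network, over sample points xs."""
    return sum((f_true(x) - f_net(x)) ** 2 for x in xs) / len(xs)

xs = [i / 10 - 1 for i in range(21)]   # 21 points on [-1, 1]
# A "network" that is off from the target by a constant 0.1 everywhere
print(loss(lambda x: x * x, lambda x: x * x + 0.1, xs))  # ≈ 0.01
```

Plotting this quantity after each training update is exactly what produces the learning curves shown here.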
It’s important to note that typically there’s randomness injected into neural net training. So if one runs the training multiple times, one will get different networks—and different learning
curves—every time:
But what’s really going on in neural net training? Effectively we’re finding a way to “compile” a function (at least to some approximation) into a neural net with a certain number of (real-valued)
parameters. And in the example here we happen to be using about 100 parameters.
But what happens if we use a different number of parameters, or set up the architecture of our neural net differently? Here are a few examples, indicating that for the function we’re trying to
generate, the network we’ve been using so far is pretty much the smallest that will work:
And, by the way, here's what happens if we change our activation function from ReLU to ELU:
Later we’ll talk about what happens when we do machine learning with discrete systems. And in anticipation of that, it’s interesting to see what happens if we take a neural net of the kind we’ve
discussed here, and “quantize” its weights (and biases) in discrete levels:
The result is that (as recent experience with large-scale neural nets has also shown) the basic “operation” of the neural net does not require precise real numbers, but survives even when the numbers
are at least somewhat discrete—as this 3D rendering as a function of the discreteness level δ also indicates:
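Quantizing the weights to discrete levels, as in these pictures, amounts to snapping each weight to the nearest multiple of the discreteness level δ; a minimal sketch (sample weights and grid spacing are arbitrary):

```python
def quantize(weights, delta):
    """Snap each weight to the nearest multiple of delta."""
    return [delta * round(w / delta) for w in weights]

print(quantize([0.37, -1.42, 0.08], 0.25))   # → [0.25, -1.5, 0.0]
```

Running a trained net with `quantize`d weights, for a range of δ, is what produces the kind of discreteness-level rendering described here.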
Simplifying the Topology: Mesh Neural Nets
So far we’ve been discussing very traditional neural nets. But to do machine learning, do we really need systems that have all those details? For example, do we really need every neuron on each layer
to get an input from every neuron on the previous layer? What happens if instead every neuron just gets input from at most two others—say with the neurons effectively laid out in a simple mesh? Quite
surprisingly, it turns out that such a network is still perfectly able to generate a function like the one we’ve been using as an example:
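One simple way to realize a mesh topology in code is to have each neuron combine just two neighboring values from the layer above, so the mesh narrows by one neuron per layer until a single output remains. (An illustrative sketch; the exact mesh wiring in the figures may differ. ELU is used here since, as noted below, ReLU doesn't work well for mesh nets.)

```python
import math
import random

def elu(z):
    # ELU activation
    return z if z > 0 else math.exp(z) - 1.0

def mesh_forward(inputs, weight_pairs, biases):
    """Mesh net: neuron i on each layer receives input from only two
    neurons of the layer above (i and i+1), so each layer is one
    neuron narrower than the one before it."""
    a = list(inputs)
    for (wl, wr), b in zip(weight_pairs, biases):
        a = [elu(wl[i] * a[i] + wr[i] * a[i + 1] + b[i])
             for i in range(len(a) - 1)]
    return a

random.seed(1)
width = 5                            # hypothetical mesh width
weight_pairs = [([random.gauss(0, 1) for _ in range(width - 1 - k)],
                 [random.gauss(0, 1) for _ in range(width - 1 - k)])
                for k in range(width - 1)]
biases = [[random.gauss(0, 1) for _ in range(width - 1 - k)]
          for k in range(width - 1)]
out = mesh_forward([0.3] * width, weight_pairs, biases)
print(out)                           # narrows down to a single output value
```

Because every intermediate value sits at a definite mesh position, the "internal behavior" of such a net can be visualized directly, as the figures show.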
And one advantage of such a “mesh neural net” is that—like a cellular automaton—its “internal behavior” can readily be visualized in a rather direct way. So, for example, here are visualizations of
“how the mesh net generates its output”, stepping through different input values x:
And, yes, even though we can visualize it, it’s still hard to understand “what’s going on inside”. Looking at the intermediate values of each individual node in the network as a function of x doesn’t
help much, though we can “see something happening” at places where our function f[x] has jumps:
So how do we train a mesh neural net? Basically we can use the same procedure as for a fully connected network of the kind we saw above (ReLU activation functions don’t seem to work well for mesh
nets, so we’re using ELU here):
Here’s the evolution of differences in each individual weight during the training process:
And here are results for different random seeds:
At the size we’re using, our mesh neural nets have about the same number of connections (and thus weights) as our main example of a fully connected network above. And we see that if we try to reduce
the size of our mesh neural net, it doesn’t do well at reproducing our function:
Making Everything Discrete: A Biological Evolution Analog
Mesh neural nets simplify the topology of neural net connections. But, somewhat surprisingly at first, it seems as if we can go much further in simplifying the systems we’re using—and still
successfully do versions of machine learning. And in particular we’ll find that we can make our systems completely discrete.
The typical methodology of neural net training involves progressively tweaking real-valued parameters, usually using methods based on calculus, and on finding derivatives. And one might imagine that
any successful adaptive process would ultimately have to rely on being able to make arbitrarily small changes, of the kind that are possible with real-valued parameters.
But in studying simple idealizations of biological evolution I recently found striking examples where this isn’t the case—and where completely discrete systems seemed able to capture the essence of
what’s going on.
As an example consider a (3-color) cellular automaton. The rule is shown on the left, and the behavior one generates by repeatedly applying that rule (starting from a single-cell initial condition)
is shown on the right:
The rule has the property that the pattern it generates (from a single-cell initial condition) survives for exactly 40 steps, and then dies out (i.e. every cell becomes white). And the important
point is that this rule can be found by a discrete adaptive process. The idea is to start, say, from a null rule, and then at each step to randomly change a single outcome out of the 27 in the rule
(i.e. make a “single-point mutation” in the rule). Most such changes will cause the “lifetime” of the pattern to get further from our target of 40—and these we discard. But gradually we can build up
“beneficial mutations”
that through “progressive adaptation” eventually get to our original lifetime-40 rule:
We can make a plot of all the attempts we made that eventually let us reach lifetime 40—and we can think of this progressive “fitness” curve as being directly analogous to the loss curves in machine
learning that we saw before:
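The adaptive procedure just described (start from the null rule, make random single-point mutations, and discard any that move the lifetime further from the target) is easy to sketch. (The grid size, step cap, and target here are illustrative choices; the text's example targets lifetime 40.)

```python
import random

K = 3                # colors; neighborhoods of 3 cells give 3**3 = 27 cases

def lifetime(rule, width=60, max_steps=40):
    """Number of steps before the pattern grown from a single-cell seed
    dies out (all cells 0); capped at max_steps."""
    cells = [0] * width
    cells[width // 2] = 1
    for t in range(max_steps):
        if not any(cells):
            return t
        cells = [rule[9 * cells[i - 1] + 3 * cells[i] + cells[(i + 1) % width]]
                 for i in range(width)]
    return max_steps

def adapt(target, seed=0, tries=800):
    """Adaptive evolution by single-point mutation: randomly change one
    rule outcome, keeping the change only if the lifetime gets no
    further from the target.  (Case 0, the all-white neighborhood, is
    kept white so the background stays quiescent.)"""
    rng = random.Random(seed)
    rule = [0] * 27                      # start from the null rule
    best = abs(lifetime(rule) - target)
    for _ in range(tries):
        i = rng.randrange(1, 27)
        old = rule[i]
        rule[i] = rng.randrange(K)
        d = abs(lifetime(rule) - target)
        if d <= best:
            best = d
        else:
            rule[i] = old                # discard a harmful mutation
    return rule, best

rule, best = adapt(target=20)
print(best, lifetime(rule))
```

Recording the lifetime at each accepted mutation traces out exactly the kind of fitness curve shown above; different seeds give different paths and different "solutions".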
If we make different sequences of random mutations, we’ll get different paths of adaptive evolution, and different “solutions” for rules that have lifetime 40:
Two things are immediately notable about these. First, that they essentially all seem to be “using different ideas” to reach their goal (presumably analogous to the phenomenon of different branches
in the tree of life). And second, that none of them seem to be using a clear “mechanical procedure” (of the kind we might construct through traditional engineering) to reach their goal. Instead, they
seem to be finding “natural” complicated behavior that just “happens” to achieve the goal.
It’s nontrivial, of course, that this behavior can achieve a goal like the one we’ve set here, as well as that simple selection based on random point mutations can successfully reach the necessary
behavior. But as I discussed in connection with biological evolution, this is ultimately a story of computational irreducibility—particularly in generating diversity both in behavior, and in the
paths necessary to reach it.
But, OK, so how does this model of adaptive evolution relate to systems like neural nets? In the standard language of neural nets, our model is like a discrete analog of a recurrent convolutional
network. It’s “convolutional” because at any given step the same rule is applied—locally—throughout an array of elements. It’s “recurrent” because in effect data is repeatedly “passed through” the
same rule. The kinds of procedures (like “backpropagation”) typically used to train traditional neural nets wouldn’t be able to train such a system. But it turns out that—essentially as a consequence
of computational irreducibility—the very simple method of successive random mutation can be successful.
Machine Learning in Discrete Rule Arrays
Let’s say we want to set up a system like a neural net—or at least a mesh neural net—but we want it to be completely discrete. (And I mean “born discrete”, not just discretized from an existing
continuous system.) How can we do this? One approach (that, as it happens, I first considered in the mid-1980s—but never seriously explored) is to make what we can call a “rule array”. Like in a
cellular automaton there’s an array of cells. But instead of these cells always being updated according to the same rule, each cell at each place in the cellular automaton analog of “spacetime” can
make a different choice of what rule it will use. (And although it’s a fairly extreme idealization, we can potentially imagine that these different rules represent a discrete analog of different
local choices of weights in a mesh neural net.)
As a first example, let’s consider a rule array in which there are two possible choices of rules: k = 2, r = 1 cellular automaton rules 4 and 146 (which are respectively class 2 and class 3):
A particular rule array is defined by which of these rules is going to be used at each (“spacetime”) position in the array. Here are a few examples. In all cases we’re starting from the same
single-cell initial condition. But in each case the rule array has a different arrangement of rule choices—with cells “running” rule 4 being given a different background color:
We can see that different choices of rule array can yield very different behaviors. But (in the spirit of machine learning) can we in effect “invert this”, and find a rule array that will give some
particular behavior we want?
A simple approach is to do the direct analog of what we did in our minimal modeling of biological evolution: progressively make random “single-point mutations”—here “flipping” the identity of just
one rule in the rule array—and then keeping only those mutations that don’t make things worse.
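This mutation search over rule arrays can be sketched concretely: assign rule 4 or rule 146 to every spacetime cell, flip single entries at random, and keep a flip only if the pattern's lifetime gets no further from the target. (The grid dimensions and target here are arbitrary illustrative choices.)

```python
import random

def rule_table(n):
    # Outcome table for an elementary (k = 2, r = 1) cellular automaton rule
    return [(n >> i) & 1 for i in range(8)]

R4, R146 = rule_table(4), rule_table(146)

def array_lifetime(choices, width, max_steps):
    """Run the rule array from a single-cell seed: cell i at step t uses
    rule 146 where choices[t][i] == 1 and rule 4 otherwise.  Returns the
    step at which the pattern dies out (capped at max_steps)."""
    cells = [0] * width
    cells[width // 2] = 1
    for t in range(max_steps):
        if not any(cells):
            return t
        row = choices[t % len(choices)]
        cells = [(R146 if row[i] else R4)
                 [4 * cells[i - 1] + 2 * cells[i] + cells[(i + 1) % width]]
                 for i in range(width)]
    return max_steps

def adapt_array(target, width=40, depth=30, seed=0, tries=600):
    """Single-point mutations on the rule array, keeping those that don't
    increase the distance of the lifetime from the target."""
    rng = random.Random(seed)
    choices = [[0] * width for _ in range(depth)]   # all rule 4 to start
    best = abs(array_lifetime(choices, width, depth) - target)
    for _ in range(tries):
        t, i = rng.randrange(depth), rng.randrange(width)
        choices[t][i] ^= 1
        d = abs(array_lifetime(choices, width, depth) - target)
        if d <= best:
            best = d
        else:
            choices[t][i] ^= 1                      # undo harmful mutation
    return choices, best

choices, best = adapt_array(target=20)
print(best)
```

Since rule 4 keeps a single cell alive indefinitely, a single well-placed rule-146 entry at the cell's position can cut the pattern off at any chosen step, which is the kind of "simple solution" discussed below.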
As our sample objective, let’s ask to find a rule array that makes the pattern generated from a single cell using that rule array “survive” for exactly 50 steps. At first it might not be obvious that
we’d be able to find such a rule array. But in fact our simple adaptive procedure easily manages to do this:
As the dots here indicate, many mutations don’t lead to longer lifetimes. But every so often, the adaptive process has a “breakthrough” that increases the lifetime—eventually reaching 50:
Just as in our model of biological evolution, different random sequences of mutations lead to different “solutions”, here to the problem of “living for exactly 50 steps”:
Some of these are in effect “simple solutions” that require only a few mutations. But most—like most of our examples in biological evolution—seem more as if they just “happen to work”, effectively by
tapping into just the right, fairly complex behavior.
Is there a sharp distinction between these cases? Looking at the collection of “fitness” (AKA “learning”) curves for the examples above, it doesn’t seem so:
It’s not too difficult to see how to “construct a simple solution” just by strategically placing a single instance of the second rule in the rule array:
But the point is that adaptive evolution by repeated mutation normally won’t “discover” this simple solution. And what’s significant is that the adaptive evolution can nevertheless still successfully
find some solution—even though it’s not one that’s “understandable” like this.
The cellular automaton rules we’ve been using so far take 3 inputs. But it turns out that we can make things even simpler by just putting ordinary 2-input Boolean functions into our rule array. For
example, we can make a rule array from And and Xor functions (r = 1/2 rules 8 and 6):
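An And+Xor rule array of this kind is easy to evaluate: each cell applies its chosen 2-input Boolean function to the two cells above it, so each row is one cell shorter than the last. (The top row and function choices below are hand-picked for illustration.)

```python
def run_rule_array(top_row, choice_layers,
                   funcs=(lambda a, b: a & b,    # And (r = 1/2 rule 8)
                          lambda a, b: a ^ b)):  # Xor (r = 1/2 rule 6)
    """Each successive row is one cell shorter: cell i applies its chosen
    2-input function to cells i and i+1 of the row above."""
    rows = [list(top_row)]
    for layer in choice_layers:       # layer[i] selects And (0) or Xor (1)
        prev = rows[-1]
        rows.append([funcs[c](prev[i], prev[i + 1])
                     for i, c in enumerate(layer)])
    return rows

rows = run_rule_array([1, 0, 1, 1], [[0, 1, 0], [1, 0], [1]])
print(rows[-1])   # → [0]
```

Mutating the entries of `choice_layers` and keeping mutations that reduce the error on training examples gives the same kind of adaptive search used for the cellular automaton rule arrays above.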
Different And+Xor rule arrays show different behavior:
But are there for example And+Xor rule arrays that will compute any of the 16 possible (2-input) functions? We can't get Not or any of the 8 other functions whose output is 1 when both inputs are 0: And and Xor each give 0 on an all-0 input, so any rule array built from them does too.
And in fact we can also set up And+Xor rule arrays for all other “even” Boolean functions. For example, here are rule arrays for the 3-input rule 30 and rule 110 Boolean functions:
It may be worth commenting that the ability to set up such rule arrays is related to functional completeness of the underlying rules we're using—though it's not quite the same thing. Functional completeness is about setting up arbitrary formulas, that can in effect allow long-range connections between intermediate results. Here, all information has to explicitly flow through the array. So, for example, the functional completeness of Nand (r = 1/2 rule 7), or of combinations involving First (r = 1/2 rule 12), does not by itself settle what a rule array built from those rules can compute.
OK, but what happens if we try to use our adaptive evolution process—say to solve the problem of finding a pattern that survives for exactly 30 steps? Here’s a result for And+Xor rule arrays:
And here are examples of other “solutions” (none of which in this case look particularly “mechanistic” or “constructed”):
But what about learning our original function f[x]? Well, first we have to decide how we’re going to represent the numbers x and f[x] in our discrete rule array system. And one approach is to do this simply in terms of the position of a black cell (“one-hot encoding”). So, for
example, in this case there’s an initial black cell at a position corresponding to about x = –1.1. And then the result after passing through the rule array is a black cell at a position corresponding
to f[x] = 1.0:
So now the question is whether we can find a rule array that successfully maps initial to final cell positions according to the mapping x → f[x] we want. Well, here’s an example that comes at least
close to doing this (note that the array is taken to be cyclic):
So how did we find this? Well, we just used a simple adaptive evolution process. In direct analogy to the way it’s usually done in machine learning, we set up “training examples”, here of the form:
Then we repeatedly made single-point mutations in our rule array, keeping those mutations where the total difference from all the training examples didn’t increase. And after 50,000 mutations this
gave the final result above.
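This mutate-and-keep loop is simple enough to sketch in a few lines of Python. The setup below is illustrative, not the exact one used above: a cyclic row, each cell combining its two neighbors, and And and Xor as the two possible rules per cell.

```python
import random

# Two possible 2-input rules for each cell of the rule array
AND = lambda a, b: a & b
XOR = lambda a, b: a ^ b

def run(rule_array, row):
    """Pass a row of bits through the rule array, one layer per step."""
    for rules in rule_array:
        n = len(row)
        row = [rules[i](row[i - 1], row[(i + 1) % n]) for i in range(n)]
    return row

def loss(rule_array, examples):
    """Total number of output cells disagreeing with the training examples."""
    return sum(sum(a != b for a, b in zip(run(rule_array, x), y))
               for x, y in examples)

def evolve(rule_array, examples, steps, seed=0):
    """Single-point mutations; keep a mutation only if the loss doesn't grow."""
    rng = random.Random(seed)
    best = loss(rule_array, examples)
    for _ in range(steps):
        r = rng.randrange(len(rule_array))
        c = rng.randrange(len(rule_array[0]))
        old = rule_array[r][c]
        rule_array[r][c] = XOR if old is AND else AND  # flip the rule
        new = loss(rule_array, examples)
        if new <= best:
            best = new          # keep the mutation
        else:
            rule_array[r][c] = old  # revert it
    return best
```

The loss here plays the role of the “total difference from all the training examples” in the text; by construction the returned loss never exceeds the starting one.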
We can get some sense of “how we got there” by showing the sequence of intermediate results where we got closer to the goal (as opposed to just not getting further from it):
Here are the corresponding rule arrays, in each case highlighting elements that have changed (and showing the computation of f[0] in the arrays):
Different sequences of random mutations will lead to different rule arrays. But with the setup defined here, the resulting rule arrays will almost always succeed in accurately computing f[x]. Here
are a few examples—in which we’re specifically showing the computation of f[0]:
And once again an important takeaway is that we don’t see “identifiable mechanism” in what’s going on. Instead, it looks more as if the rule arrays we’ve got just “happen” to do the computations we
want. Their behavior is complicated, but somehow we can manage to “tap into it” to compute our f[x].
But how robust is this computation? A key feature of typical machine learning is that it can “generalize” away from the specific examples it’s been given. It’s never been clear just how to
characterize that generalization (when does an image of a cat in a dog suit start being identified as an image of a dog?). But—at least when we’re talking about classification tasks—we can think of
what’s going on in terms of basins of attraction that lead to attractors corresponding to our classes.
It’s all considerably easier to analyze, though, in the kind of discrete system we’re exploring here. For example, we can readily enumerate all our training inputs (i.e. all initial states containing
a single black cell), and then see how frequently these cause any given cell to be black:
By the way, here’s what happens to this plot at successive “breakthroughs” during training:
But what about all possible inputs, including ones that don’t just contain a single black cell? Well, we can enumerate all of them, and compute the overall frequency for each cell in the array to be black:
As we would expect, the result is considerably “fuzzier” than what we got purely with our training inputs. But there’s still a strong trace of the discrete values for f[x] that appeared in the
training data. And if we plot the overall probability for a given final cell to be black, we see peaks at positions corresponding to the values 0 and 1 that f[x] takes on:
But because our system is discrete, we can explicitly look at what outcomes occur:
The most common overall is the “meaningless” all-white state—that basically occurs when the computation from the input “never makes it” to the output. But the next most common outcomes correspond
exactly to f[x] = 0 and f[x] = 1. After that is the “superposition” outcome where f[x] is in effect “both 0 and 1”.
But, OK, so what initial states are “in the basins of attraction of” (i.e. will evolve to) the various outcomes here? The fairly flat plots in the last column above indicate that the overall density
of black cells gives little information about what attractor a particular initial state will evolve to.
So this means we have to look at specific configurations of cells in the initial conditions. As an example, start from the initial condition
which evolves to:
Now we can ask what happens if we look at a sequence of slightly different initial conditions. And here we show in black and white initial conditions that still evolve to the original “attractor”
state, and in pink ones that evolve to some different state:
What’s actually going on inside here? Here are a few examples, highlighting cells whose values change as a result of changing the initial condition:
As is typical in machine learning, there doesn’t seem to be any simple characterization of the form of the basin of attraction. But now we have a sense of what the reason for this is: it’s another
consequence of computational irreducibility. Computational irreducibility gives us the effective randomness that allows us to find useful results by adaptive evolution, but it also leads to changes
having what seem like random and unpredictable effects. (It’s worth noting, by the way, that we could probably dramatically improve the robustness of our attractor basins by specifically including in
our training data examples that have “noise” injected.)
Multiway Mutation Graphs
In doing machine learning in practice, the goal is typically to find some collection of weights, etc. that successfully solve a particular problem. But in general there will be many such collections
of weights, etc. With typical continuous weights and random training steps it’s very difficult to see what the whole “ensemble” of possibilities is. But in our discrete rule array systems, this
becomes more feasible.
Consider a tiny 2×2 rule array with two possible rules. We can make a graph whose edges represent all possible “point mutations” that can occur in this rule array:
In our adaptive evolution process, we’re always moving around a graph like this. But typically most “moves” will end up in states that are rejected because they increase whatever loss we’ve defined.
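For a rule array this tiny, the whole graph can be enumerated directly. Here’s a minimal sketch (rules encoded as 0 and 1; the helper name is my own):

```python
from itertools import product

def mutation_graph(shape=(2, 2), num_rules=2):
    """All rule arrays of the given shape, plus edges for single-point mutations."""
    cells = shape[0] * shape[1]
    nodes = list(product(range(num_rules), repeat=cells))
    # An edge connects two arrays that differ in exactly one position
    edges = [(a, b)
             for i, a in enumerate(nodes)
             for b in nodes[i + 1:]
             if sum(x != y for x, y in zip(a, b)) == 1]
    return nodes, edges
```

For the 2×2 case with two rules there are 2⁴ = 16 possible arrays, each with 4 single-point neighbors, giving 32 edges in all.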
Consider the problem of generating an And+Xor rule array in which we end with lifetime-4 patterns. Defining the loss as how far we are from this lifetime, we can draw a graph that shows all possible
adaptive evolution paths that always progressively decrease the loss:
The result is a multiway graph of the type we’ve now seen in a great many kinds of situations—notably our recent study of biological evolution.
And although this particular example is quite trivial, the idea in general is that different parts of such a graph represent “different strategies” for solving a problem. And—in direct analogy to our
Physics Project and our studies of things like game graphs—one can imagine such strategies being laid out in a “branchial space” defined by common ancestry of configurations in the multiway graph.
And one can expect that while in some cases the branchial graph will be fairly uniform, in other cases it will have quite separated pieces—that represent fundamentally different strategies. Of
course, the fact that underlying strategies may be different doesn’t mean that the overall behavior or performance of the system will be noticeably different. And indeed one expects that in most
cases computational irreducibility will lead to enough effective randomness that there’ll be no discernable difference.
But in any case, here’s an example starting with a rule array that contains both And and Xor—where we observe distinct branches of adaptive evolution that lead to different solutions to the problem
of finding a configuration with a lifetime of exactly 4:
Optimizing the Learning Process
How should one actually do the learning in machine learning? In practical work with traditional neural nets, learning is normally done using systematic algorithmic methods like backpropagation. But
so far, all we’ve done here is something much simpler: we’ve “learned” by successively making random point mutations, and keeping only ones that don’t lead us further from our goal. And, yes, it’s
interesting that such a procedure can work at all—and (as we’ve discussed elsewhere) this is presumably very relevant to understanding phenomena like biological evolution. But, as we’ll see, there
are more efficient (and probably much more efficient) methods of doing machine learning, even for the kinds of discrete systems we’re studying.
Let’s start by looking again at our earlier example of finding an And+Xor rule array that gives a “lifetime” of exactly 30. At each step in our adaptive (“learning”) process we make a single-point
mutation (changing a single rule in the rule array), keeping the mutation if it doesn’t take us further from our goal. The mutations gradually accumulate—every so often reaching a rule array that
gives a lifetime closer to 30. Just as above, here’s a plot of the lifetime achieved by successive mutations—with the “internal” red dots corresponding to rejected mutations:
We see a series of “plateaus” at which mutations are accumulating but not changing the overall lifetime. And between these we see occasional “breakthroughs” where the lifetime jumps. Here are the
actual rule array configurations for these breakthroughs, with mutations since the last breakthrough highlighted:
But in the end the process here is quite wasteful; in this example, we make a total of 1705 mutations, but only 780 of them actually contribute to generating the final rule array; all the others are
discarded along the way.
So how can we do better? One strategy is to try to figure out at each step which mutation is “most likely to make a difference”. And one way to do this is to try every possible mutation in turn at
every step (as in multiway evolution)—and see what effect each of them has on the ultimate lifetime. From this we can construct a “change map” in which we give the change of lifetime associated with
a mutation at every particular cell. The results will be different for every configuration of rule array, i.e. at every step in the adaptive evolution. But for example here’s what they are for the
particular “breakthrough” configurations shown above (elements in regions colored gray won’t affect the result if they are changed; ones colored red will have a positive effect (with more intense red being more positive), and ones colored blue a negative one):
Let’s say we start from a random rule array, then repeatedly construct the change map and apply the mutation that it implies gives the most positive change—in effect at each step following the “path
of steepest descent” to get to the lifetime we want (i.e. reduce the loss). Then the sequence of “breakthrough” configurations we get is:
And this in effect corresponds to a slightly more direct “path to a solution” than our sequence of pure single-point mutations.
By the way, the particular problem of reaching a certain lifetime has a simple enough structure that this “steepest descent” method—when started from a simple uniform rule array—finds a very
“mechanical” (if slow) path to a solution:
What about the problem of learning our function f[x] from above?
So what happens in this case if we follow the “path of steepest descent”, always making the change that would be best according to the change map? Well, the results are actually quite unsatisfactory.
From almost any initial condition the system quickly gets stuck, and never finds any satisfactory solution. In effect it seems that deterministically following the path of steepest descent leads us
to a “local minimum” from which we cannot escape. So what are we missing in just looking at the change map? Well, the change map as we’ve constructed it has the limitation that it’s separately
assessing the effect of each possible individual mutation. It doesn’t deal with multiple mutations at a time—which could well be needed in general if one’s going to find the “fastest path to
success”, and avoid getting stuck.
But even in constructing the change map there’s already a problem. Because at least the direct way of computing it scales quite poorly. In an n×n rule array we have to check the effect of flipping
about n^2 values, and for each one we have to run the whole system—taking altogether about n^4 operations. And one has to do this separately for each step in the learning process.
So how do traditional neural nets avoid this kind of inefficiency? The answer in a sense involves a mathematical trick. And at least as it’s usually presented it’s all based on the continuous nature
of the weights and values in neural nets—which allow us to use methods from calculus.
Let’s say we have a neural net like this
that computes some particular function f[x]:
We can ask how this function changes as we change each of the weights in the network:
And in effect this gives us something like our “change map” above. But there’s an important difference. Because the weights are continuous, we can think about infinitesimal changes to them. And then
we can ask questions like “How does f[x] change when we make an infinitesimal change to a particular weight w[i]?”—or equivalently, “What is the partial derivative of f with respect to w[i] at the
point x?” But now we get to use a key feature of infinitesimal changes: that they can always be thought of as just “adding linearly” (essentially because ε^2 can always be ignored compared to ε). Or,
in other words, we can summarize any infinitesimal change just by giving its “direction” in weight space, i.e. a vector that says how much of each weight should be (infinitesimally) changed. So if we
want to change f[x] (infinitesimally) as quickly as possible, we should go in the direction of steepest descent defined by all the derivatives of f with respect to the weights.
In machine learning, we’re typically trying in effect to set the weights so that the form of f[x] we generate successfully minimizes whatever loss we’ve defined. And we do this by incrementally
“moving in weight space”—at every step computing the direction of steepest descent to know where to go next. (In practice, there are all sorts of tricks like “ADAM” that try to optimize the way to do this.)
But how do we efficiently compute the partial derivative of f with respect to each of the weights? Yes, we could do the analog of generating pictures like the ones above, separately for each of the
weights. But it turns out that a standard result from calculus gives us a vastly more efficient procedure that in effect “maximally reuses” parts of the computation that have already been done.
It all starts with the textbook chain rule for the derivative of nested (i.e. composed) functions:
This basically says that the (infinitesimal) change in the value of the “whole chain” d[c[b[a[x]]]] can be computed as a product of (infinitesimal) changes associated with each of the “links” in the
chain. But the key observation is then that when we get to the computation of the change at a certain point in the chain, we’ve already had to do a lot of the computation we need—and so long as we
stored those results, we always have only an incremental computation to perform.
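The bookkeeping behind this reuse is easy to see in code. Here’s a small sketch with an illustrative three-link chain (the particular functions are arbitrary): a forward pass stores each intermediate value, and a backward pass reuses them, so each link costs only one extra multiplication.

```python
import math

# An illustrative chain c(b(a(x))) and the derivatives of its links
fns    = [math.sin, math.tanh, lambda v: v * v]
derivs = [math.cos,
          lambda v: 1 - math.tanh(v) ** 2,
          lambda v: 2 * v]

def value_and_derivative(x):
    vals = [x]
    for f in fns:                 # forward pass: store every intermediate
        vals.append(f(vals[-1]))
    grad = 1.0
    # backward pass: multiply link derivatives, reusing the stored values
    for f_prime, v in zip(reversed(derivs), reversed(vals[:-1])):
        grad *= f_prime(v)
    return vals[-1], grad
```

This is the same pattern of “maximal reuse” that backpropagation applies layer by layer in a neural net.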
So how does this apply to neural nets? Well, each layer in a neural net is in effect doing a function composition. So, for example, our d[c[b[a[x]]]] is like a trivial neural net:
But what about the weights, which, after all, are what we are trying to find the effect of changing? Well, we could include them explicitly in the function we’re computing:
And then we could in principle symbolically compute the derivatives with respect to these weights:
For our network above
the corresponding expression (ignoring biases) is
where ϕ denotes our activation function. Once again we’re dealing with nested functions, and once again—though it’s a bit more intricate in this case—the computation of derivatives can be done by
incrementally evaluating terms in the chain rule and in effect using the standard neural net method of “backpropagation”.
So what about the discrete case? Are there similar methods we can use there? We won’t discuss this in detail here, but we’ll give some indications of what’s likely to be involved.
As a potentially simpler case, let’s consider ordinary cellular automata. The analog of our change map asks how the value of a particular “output” cell is affected by changes in other cells—or in
effect what the “partial derivative” of the output value is with respect to changes in values of other cells.
For example, consider the highlighted “output” cell in this cellular automaton evolution:
Now we can look at each cell in this array, and make a change map based on seeing whether flipping the value of just that cell (and then running the cellular automaton forwards from that point) would
change the value of the output cell:
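The flip-and-rerun construction just described can be sketched directly (a cyclic row is assumed here for simplicity):

```python
def rule30_step(row):
    """One step of rule 30: new cell = left XOR (center OR right)."""
    n = len(row)
    return [row[i - 1] ^ (row[i] | row[(i + 1) % n]) for i in range(n)]

def evolve(row, steps):
    for _ in range(steps):
        row = rule30_step(row)
    return row

def change_map(init, steps, out_cell):
    """1 where flipping that initial cell changes the chosen output cell."""
    base = evolve(init, steps)[out_cell]
    flags = []
    for i in range(len(init)):
        flipped = list(init)
        flipped[i] ^= 1
        flags.append(int(evolve(flipped, steps)[out_cell] != base))
    return flags
```

Cells outside the light cone of the output cell necessarily get a 0, since influence propagates only one cell per step.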
The form of the change map is different if we look at different “output cells”:
Here, by the way, are some larger change maps for this and a couple of other cellular automaton rules:
But is there a way to construct such change maps incrementally? One might have thought there would be—at least for cellular automata that (unlike the cases here) are fundamentally reversible. But actually such reversibility doesn’t seem to help much—because although it allows us to “backtrack” whole states of the cellular automaton, it doesn’t allow us to trace the separate
effects of individual cells.
So how about using discrete analogs of derivatives and the chain rule? Let’s for example call the function computed by one step of rule 30 cellular automaton evolution w[x, y, z]. We can think of the “partial derivative” of this function with respect to x—at particular values of x, y, z—as representing whether the output of w changes when x is flipped from the value given:
(Note that “no change” is indicated as False, and “change” as True.)
One can compute a discrete analog of a derivative for any Boolean function. For example, we have
which we can write as:
We also have:
And here is a table of “Boolean derivatives” for all 2-input Boolean functions:
And indeed there’s a whole “Boolean calculus” one can set up for these kinds of derivatives. And in particular, there’s a direct analog of the chain rule:
where Xnor[x,y] is effectively the equality test x == y:
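In code, the Boolean derivative of a 2-input function is simply f(x, y) XOR f(not x, y)—it answers “does the output flip when this input flips?” A minimal sketch:

```python
def d_dx(f):
    """Boolean derivative with respect to the first input."""
    return lambda x, y: f(x, y) ^ f(1 - x, y)

AND = lambda x, y: x & y
XOR = lambda x, y: x ^ y
```

For And, flipping x matters exactly when y = 1 (so the derivative is just y); for Xor, flipping x always flips the output (the derivative is identically 1).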
But, OK, how do we use this to create our change maps? In our simple cellular automaton case, we can think of our change map as representing how a change in an output cell “propagates back” to
previous cells. But if we just try to apply our discrete calculus rules we run into a problem: different “chain rule chains” can imply different changes in the value of the same cell. In the
continuous case this path dependence doesn’t happen because of the way infinitesimals work. But in the discrete case it does. And ultimately we’re doing a kind of backtracking that can really be
represented faithfully only as a multiway system. (Though if we just want probabilities, for example, we can consider averaging over branches of the multiway system—and the change maps we showed
above are effectively the result of thresholding over the multiway system.)
But despite the appearance of such difficulties in the “simple” cellular automaton case, such methods typically seem to work better in our original, more complicated rule array case. There’s a bunch
of subtlety associated with the fact that we’re finding derivatives not only with respect to the values in the rule array, but also with respect to the choice of rules (which are the analog of
weights in the continuous case).
Let’s consider the And+Xor rule array:
Our loss is the number of cells whose values disagree with the row shown at the bottom. Now we can construct a change map for this rule array both in a direct “forward” way, and “backwards” using our
discrete derivative methods (where we effectively resolve the small amount of “multiway behavior” by always picking “majority” values):
The results are similar, though in this case not exactly the same. Here are a few other examples:
And, yes, in detail there are essentially always local differences between the results from the forward and backward methods. But the backward method—like in the case of backpropagation in ordinary
neural nets—can be implemented much more efficiently. And for purposes of practical machine learning it’s actually likely to be perfectly satisfactory—especially given that the forward method is
itself only providing an approximation to the question of which mutations are best to do.
And as an example, here are the results of the forward and backward methods for the problem of learning our function f[x] from above:
What Can Be Learned?
We’ve now shown quite a few examples of machine learning in action. But a fundamental question we haven’t yet addressed is what kind of thing can actually be learned by machine learning. And even
before we get to this, there’s another question: given a particular underlying type of system, what kinds of functions can it even represent?
As a first example consider a minimal neural net of the form (essentially a single-layer perceptron):
With ReLU (AKA Ramp) as the activation function and the first set of weights all taken to be 1, the function computed by such a neural net has the form:
With enough weights and biases this form can represent any piecewise linear function—essentially just by moving around ramps using biases, and scaling them using weights. So for example consider the following function.
This is the function computed by the neural net above—and here’s how it’s built up by adding in successive ramps associated with the individual intermediate nodes (neurons):
(It’s similarly possible to get all smooth functions from activation functions like ELU, etc.)
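The “sum of ramps” construction is easy to make concrete. Here’s a sketch in which each hidden neuron contributes one shifted, scaled ReLU ramp; the particular (weight, shift) pairs below build a simple “tent” function purely as an illustration.

```python
def relu(v):
    return max(v, 0.0)

def net(x, ramps):
    """One-hidden-layer ReLU net: a sum of scaled, shifted ramps."""
    return sum(w * relu(x - s) for w, s in ramps)

# Rises with slope 1 on [0, 1], falls back to 0 on [1, 2], flat elsewhere
tent = [(1.0, 0.0), (-2.0, 1.0), (1.0, 2.0)]
```

Adding more (weight, shift) pairs adds more kinks, which is why any piecewise linear function of one variable is reachable.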
Things get slightly more complicated if we try to represent functions with more than one argument. With a single intermediate layer we can only get “piecewise (hyper)planar” functions (i.e. functions
that change direction only at linear “fault lines”):
But already with a total of two intermediate layers—and sufficiently many nodes in each of these layers—we can generate any piecewise linear function of any number of arguments.
If we limit the number of nodes, then roughly we limit the number of boundaries between different linear regions in the values of the functions. But as we increase the number of layers with a given
number of nodes, we basically increase the number of sides that polygonal regions within the function values can have:
So what happens with the mesh nets that we discussed earlier? Here are a few random examples, showing results very similar to shallow, fully connected networks with a comparable total number of nodes:
OK, so how about our fully discrete rule arrays? What functions can they represent? We already saw part of the answer earlier when we generated rule arrays to represent various Boolean functions. It
turns out that there is a fairly efficient procedure based on Boolean satisfiability for explicitly finding rule arrays that can represent a given function—or determine that no rule array (say of a
given size) can do this.
Using this procedure, we can find minimal And+Xor rule arrays that represent all (“even”) 3-input Boolean functions (i.e. r = 1 cellular automaton rules):
It’s always possible to specify any n-input Boolean function by an array of 2^n bits, as in:
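In code, this specification is just a lookup table indexed by the inputs read as a binary number—the same scheme as cellular automaton rule numbers. The helper below is illustrative:

```python
def from_rule_number(num, n):
    """Build an n-input Boolean function from its 2^n-bit rule number."""
    table = [(num >> i) & 1 for i in range(2 ** n)]
    def f(*bits):
        idx = 0
        for b in bits:              # read the inputs as a binary number, MSB first
            idx = (idx << 1) | b
        return table[idx]
    return f

rule30 = from_rule_number(30, 3)    # the 8-bit table for rule 30
```

Rule 30, for instance, unpacks to the function x XOR (y OR z).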
But we see from the pictures above that when we “compile” Boolean functions into And+Xor rule arrays, they can take different numbers of bits (i.e. different numbers of elements in the rule array).
(In effect, the “algorithmic information content” of the function varies with the “language” we’re using to represent them.) And, for example, in the n = 3 case shown here, the distribution of
minimal rule array sizes is:
There are some functions that are difficult to represent as And+Xor rule arrays (and seem to require 15 rule elements)—and others that are easier. And this is similar to what happens if we represent
Boolean functions as Boolean expressions (say in conjunctive normal form) and count the total number of (unary and binary) operations used:
OK, so we know that there is in principle an And+Xor rule array that will compute any (even) Boolean function. But now we can ask whether an adaptive evolution process can actually find such a rule
array—say with a sequence of single-point mutations. Well, if we do such adaptive evolution—with a loss that counts the number of “wrong outputs” for, say, rule 254—then here’s a sequence of
successive breakthrough configurations that can be produced:
The results aren’t as compact as the minimal solution above. But it seems to always be possible to find at least some And+Xor rule array that “solves the problem” just by using adaptive evolution
with single-point mutations.
Here are results for some other Boolean functions:
And so, yes, not only are all (even) Boolean functions representable in terms of And+Xor rule arrays, they’re also learnable in this form, just by adaptive evolution with single-point mutations.
In what we did above, we were looking at how machine learning works with our rule arrays in specific cases. But the results here suggest that essentially any function can be both represented and learned this way.
Of course, there can be specific restrictions. For example, the And+Xor rule arrays we’re using here can’t represent “odd” functions—though the Nand+First rule arrays we discussed above nevertheless can. But
in general it seems to be a reflection of the Principle of Computational Equivalence that pretty much any setup is capable of representing any function—and also adaptively “learning” it.
By the way, it’s a lot easier to discuss questions about representing or learning “any function” when one’s dealing with discrete (countable) functions—because one can expect to either be able to
“exactly get” a given function, or not. But for continuous functions, it’s more complicated, because one’s pretty much inevitably dealing with approximations (unless one can use symbolic forms, which
are basically discrete). So, for example, while we can say (as we did above) that (ReLU) neural nets can represent any piecewise-linear function, in general we’ll only be able to imagine successively
approaching an arbitrary function, much like when you progressively add more terms in a simple Fourier series:
Looking back at our results for discrete rule arrays, one notable observation is that while we can successfully reproduce all these different Boolean functions, the actual rule array
configurations that achieve this tend to look quite messy. And indeed it’s much the same as we’ve seen throughout: machine learning can find solutions, but they’re not “structured solutions”; they’re
in effect just solutions that “happen to work”.
Are there more structured ways of representing Boolean functions with rule arrays? Here are the two possible minimum-size And+Xor rule arrays that represent rule 30:
At the next-larger size there are more possibilities for rule 30:
And there are also rule arrays that can represent rule 110:
But in none of these cases is there obvious structure that allows us to immediately see how these computations work, or what function is being computed. But what if we try to explicitly
construct—effectively by standard engineering methods—a rule array that computes a particular function? We can start by taking something like the function for rule 30 and writing it in terms of And
and Xor (i.e. in ANF, or “algebraic normal form”):
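The ANF itself is easy to verify: since a | b = a ^ b ^ (a & b), rule 30’s x XOR (y OR z) expands to x ⊕ y ⊕ z ⊕ (y ∧ z). A quick check:

```python
def rule30(x, y, z):
    """Rule 30 in its usual form."""
    return x ^ (y | z)

def rule30_anf(x, y, z):
    """Rule 30 in algebraic normal form: only And and Xor."""
    return x ^ y ^ z ^ (y & z)
```

The two agree on all 8 possible inputs, so the ANF uses nothing but the And and Xor operations available in our rule arrays.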
We can imagine implementing this using an “evaluation graph”:
But now it’s easy to turn this into a rule array (and, yes, we haven’t gone all the way and arranged to copy inputs, etc.):
“Evaluating” this rule array for different inputs, we can see that it indeed gives rule 30:
Doing the same thing for rule 110, the And+Xor expression is
the evaluation graph is
and the rule array is:
And at least with the evaluation graph as a guide, we can readily “see what’s happening” here. But the rule array we’re using is considerably larger than our minimal solutions above—or even than the
solutions we found by adaptive evolution.
It’s a typical situation that one sees in many other kinds of systems (like for example sorting networks): it’s possible to have a “constructed solution” that has clear structure and regularity and
is “understandable”. But minimal solutions—or ones found by adaptive evolution—tend to be much smaller. And they almost always look in many ways random, and aren’t readily understandable or explainable.
So far, we’ve been looking at rule arrays that compute specific functions. But in getting a sense of what rule arrays can do, we can consider rule arrays that are “programmable”, in that their input
specifies what function they should compute. So here, for example, is an And+Xor rule array—found by adaptive evolution—that takes the “bit pattern” of any (even) Boolean function as input on the
left, then applies that Boolean function to the inputs on the right:
And with this same rule array we can now compute any possible (even) Boolean function. So here, for example, it’s evaluating Or:
Other Kinds of Models and Setups
Our general goal here has been to set up models that capture the most essential features of neural nets and machine learning—but that are simple enough in their structure that we can readily “look
inside” and get a sense of what they are doing. Mostly we’ve concentrated on rule arrays as a way to provide a minimal analog of standard “perceptron-style” feed-forward neural nets. But what about
other architectures and setups?
In effect, our rule arrays are “spacetime-inhomogeneous” generalizations of cellular automata—in which adaptive evolution determines which rule (say from a finite set) should be used at every
(spatial) position and every (time) step. A different idealization (that in fact we already used in one section above) is to have an ordinary homogeneous cellular automaton—but with a single “global
rule” determined by adaptive evolution. Rule arrays are the analog of feed-forward networks in which a given rule in the rule array is in effect used only once as data “flows through” the system.
Ordinary homogeneous cellular automata are like recurrent networks in which a single stream of data is in effect subjected over and over again to the same rule.
There are various interpolations between these cases. For example, we can imagine a “layered rule array” in which the rules at different steps can be different, but those on a given step are all the
same. Such a system can be viewed as an idealization of a convolutional neural net in which a given layer applies the same kernel to elements at all positions, but different layers can apply
different kernels.
A layered rule array can’t encode as much information as a general rule array. But it’s still able to show machine-learning-style phenomena. And here, for example, is adaptive evolution for a layered
And+Xor rule array progressively solving the problem of generating a pattern that lives for exactly 30 steps:
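A layered rule array is also easy to sketch: a single rule per time step, applied at every position—the discrete analog of a convolutional layer sliding one kernel across the whole row. (The cyclic-row setup here is again a simplifying assumption.)

```python
AND = lambda a, b: a & b
XOR = lambda a, b: a ^ b

def run_layered(layer_rules, row):
    """Apply one rule per layer, uniformly across the (cyclic) row."""
    for rule in layer_rules:
        n = len(row)
        row = [rule(row[i - 1], row[(i + 1) % n]) for i in range(n)]
    return row
```

Compared with a general rule array of the same size, only one rule choice per layer is learnable, not one per cell—which is exactly the parameter-sharing idea of convolutional nets.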
One could also imagine “vertically layered” rule arrays, in which different rules are used at different positions, but any given position keeps running the same rule forever. However, at least for
the kinds of problems we’ve considered here, it doesn’t seem sufficient to just be able to pick the positions at which different rules are run. One seems to either need to change rules at different
(time) steps, or one needs to be able to adaptively evolve the underlying rules themselves.
Rule arrays and ordinary cellular automata share the feature that the value of each cell depends only on the values of neighboring cells on the step before. But in neural nets it’s standard for the
value at a given node to depend on the values of lots of nodes on the layer before. And what makes this straightforward in neural nets is that (weighted, and perhaps otherwise transformed) values
from previous nodes are taken to be combined just by simple numerical addition—and addition (being n-ary and associative) can take any number of “inputs”. In a cellular automaton (or Boolean
function), however, there’s always a definite number of inputs, determined by the structure of the function. In the most straightforward case, the inputs come only from nearest-neighboring cells. But
there’s no requirement that this is how things need to work—and for example we can pick any “local template” to bring in the inputs for our function. This template could either be the same at every
position and every step, or it could be picked from a certain set differently at different positions—in effect giving us “template arrays” as well as rule arrays.
So what about having a fully connected network, as we did in our very first neural net examples above? To set up a discrete analog of this we first need some kind of discrete n-ary associative
“accumulator” function to fill the place of numerical addition. And for this we could pick a function like And, Or, Xor—or Majority. And if we’re not just going to end up with the same value at each
node on a given layer, we need to set up some analog of a weight associated with each connection—which we can achieve by applying either Identity or Not (i.e. flip or not) to the value flowing
through each connection.
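The kind of discrete fully connected network described above can be sketched in a few lines. Everything concrete here (the layer sizes, the random connection types) is an illustrative assumption, not something taken from the text:

```python
import random

# Minimal discrete fully connected network: each connection is either
# "identity" (pass the bit through) or "not" (flip it), and each node
# combines its inputs with a majority vote.

def majority(bits):
    # 1 if more than half of the inputs are 1, else 0
    return 1 if sum(bits) * 2 > len(bits) else 0

def layer(inputs, flips):
    # flips[j][i] == 1 means the connection from input i to node j flips the bit
    return [majority([b ^ f for b, f in zip(inputs, row)]) for row in flips]

def run(network, x):
    for flips in network:   # apply each layer in turn
        x = layer(x, flips)
    return x

# A random 3-layer network on 4-bit vectors
random.seed(0)
net = [[[random.randint(0, 1) for _ in range(4)] for _ in range(4)]
       for _ in range(3)]
print(run(net, [1, 0, 1, 1]))
```

Training would then consist of adjusting the flip/identity choices, with the network structure itself left fixed.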
Here’s an example of a network of this type, trained to compute the
There are just two kinds of connections here: flip and not. And at each node we’re computing the majority function—giving value 1 if the majority of its inputs are 1, and 0 otherwise. With the
“one-hot encoding” of input and output that we used before, here are a few examples of how this network evaluates our function:
This was trained just using 1000 steps of single-point mutation applied to the connection types. The loss systematically goes down—but the configuration of the connection types continues to look
quite random even as it achieves zero loss (i.e. even after the function has been completely learned):
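The training procedure just described (a sequence of single-point mutations, where a mutation is kept only if the loss does not increase) can be sketched as follows. The loss function here is a toy stand-in, not the actual network loss:

```python
import random

# Single-point-mutation training: flip one randomly chosen connection type,
# keep the change only if the loss does not increase.

def train(flat_connections, loss, steps=1000, seed=1):
    rng = random.Random(seed)
    conns = list(flat_connections)
    best = loss(conns)
    for _ in range(steps):
        i = rng.randrange(len(conns))
        conns[i] ^= 1                    # single-point mutation: flip one bit
        new = loss(conns)
        if new <= best:                  # keep mutations that don't increase loss
            best = new
        else:
            conns[i] ^= 1                # revert the mutation
    return conns, best

# Toy loss: number of bits that differ from a random target pattern
rng = random.Random(0)
target = [rng.randint(0, 1) for _ in range(32)]
loss = lambda c: sum(a != b for a, b in zip(c, target))
learned, final = train([0] * 32, loss)
print(final)
```

With this toy loss the process essentially always reaches zero loss within the allotted steps, even though it never does anything smarter than random single flips.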
In what we’ve just done we assume that all connections continue to be present, though their types (or effectively signs) can change. But we can also consider a network where connections can end up
being zeroed out during training—so that they are effectively no longer present.
Much of what we’ve done here with machine learning has centered around trying to learn transformations of the form x → f[x]. But another typical application of machine learning is autoencoding—or in effect learning how to compress data representing a certain set of examples. And once again it’s possible to do such a task using rule arrays, with learning achieved by a series of single-point mutations.
As a starting point, consider training a rule array (of cellular automaton rules 4 and 146) to reproduce unchanged a block of black cells of any width. One might have thought this would be trivial.
But it’s not, because in effect the initial data inevitably gets “ground up” inside the rule array, and has to be reconstituted at the end. But, yes, it’s nevertheless possible to train a rule array
to at least roughly do this—even though once again the rule arrays we find that manage to do this look quite random:
But to set up a nontrivial autoencoder let’s imagine that we progressively “squeeze” the array in the middle, creating an increasingly narrow “bottleneck” through which the data has to flow. At the
bottleneck we effectively have a compressed version of the original data. And we find that at least down to some width of bottleneck, it’s possible to create rule arrays that—with reasonable
probability—can act as successful autoencoders of the original data:
The success of LLMs has highlighted the use of machine learning for sequence continuation—and the effectiveness of transformers for this. But just as with other neural nets, the forms of transformers
that are used in practice are typically very complicated. But can one find a minimal model that nevertheless captures the “essence of transformers”?
Let’s say that we have a sequence that we want to continue, like:
We want to encode each possible value by a vector, as in
so that, for example, our original sequence is encoded as:
Then we have a “head” that reads a block of consecutive vectors, picking off certain values and feeding pairs of them into And and Xor functions, to get a vector of Boolean values:
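A minimal sketch of such a head, assuming an arbitrary choice of which bit positions are paired and which Boolean function combines each pair (those choices are exactly what training would adjust):

```python
# The head reads a window of one-hot vectors, picks fixed pairs of bits,
# and combines each pair with And or Xor to give a Boolean feature vector.

def head(window, pairs):
    flat = [b for vec in window for b in vec]            # flatten the window
    out = []
    for i, j, op in pairs:
        a, b = flat[i], flat[j]
        out.append(a & b if op == "and" else a ^ b)      # And or Xor
    return out

window = [[0, 1, 0], [1, 0, 0]]                          # two one-hot vectors
pairs = [(0, 3, "and"), (1, 4, "xor"), (2, 5, "xor")]    # placeholder pairings
print(head(window, pairs))
```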
Ultimately this head is going to “slide” along our sequence, “predicting” what the next element in the sequence will be. But somehow we have to go from our vector of Boolean values to (probabilities
of) sequence elements. Potentially we might be able to do this just with a rule array. But for our purposes here we’ll use a fully connected single-layer Identity+Not network in which at each output
node we just find the sum of the number of values that come to it—and treat this as determining (through a softmax) the probability of the corresponding element:
In this case, the element with the maximum value is 5, so at “zero temperature” this would be our “best prediction” for the next element.
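The read-out step just described (summing the Boolean values arriving at each output node, then applying a softmax) can be sketched like this, with made-up counts:

```python
import math

# Each output node gets a count (the sum of Boolean values reaching it);
# a softmax turns the counts into probabilities for the next element.

def softmax(counts):
    m = max(counts)
    exps = [math.exp(c - m) for c in counts]   # subtract max for stability
    total = sum(exps)
    return [e / total for e in exps]

counts = [1, 0, 2, 3, 5, 2]        # illustrative: one count per element
probs = softmax(counts)
best = max(range(len(probs)), key=probs.__getitem__)
print(best)   # index of the "zero temperature" best prediction
```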
To train this whole system we just make a sequence of random point mutations to everything, keeping mutations that don’t increase the loss (where the loss is basically the difference between
predicted next values and actual next values, or, more precisely, the “categorical cross-entropy”). Here’s how this loss progresses in a typical such training:
At the end of this training, here are the components of our minimal transformer:
First come the encodings of the different possible elements in the sequence. Then there’s the head, here shown applied to the encoding of the first elements of the original sequence. Finally there’s
a single-layer discrete network that takes the output from the head, and deduces relative probabilities for different elements to come next. In this case the highest-probability prediction for the
next element is that it should be element 6.
To do the analog of an LLM we start from some initial “prompt”, i.e. an initial sequence that fits within the width (“context window”) of the head. Then we progressively apply our minimal
transformer, for example at each step taking the next element to be the one with the highest predicted probability (i.e. operating “at zero temperature”). With this setup the collection of
“prediction strengths” is shown in gray, with the “best prediction” shown in red:
Running this even far beyond our original training data, we see that we get a “prediction” of a continued sine wave:
As we might expect, the fact that our minimal transformer can make such a plausible prediction relies on the simplicity of our sine curve. If we use “more complicated” training data, such as the
“mathematically defined” blue curve shown here, the result of training and running a minimal transformer is now:
And, not surprisingly, it can’t “figure out the computation” to correctly continue the curve. By the way, different training runs will involve different sequences of mutations, and will yield
different predictions (often with periodic “hallucinations”):
In looking at “perceptron-style” neural nets we wound up using rule arrays—or, in effect, spacetime-inhomogeneous cellular automata—as our minimal models. Here we’ve ended up with a slightly more
complicated minimal model for transformer neural nets. But if we were to simplify it further, we would end up not with something like a cellular automaton but instead with something like a tag system, in which one has a sequence of elements, and at each step removes a block from the beginning, and—depending on its form—adds a certain block at the end, as in:
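A tag system of the kind just described can be simulated in a few lines; the particular rule below is an arbitrary illustrative choice, not one from the text:

```python
# Tag system: remove a fixed-size block from the front of the sequence and,
# depending on that block's form, append a certain block at the end.

def tag_step(seq, rule, block=2):
    head, rest = tuple(seq[:block]), seq[block:]
    return rest + rule.get(head, [])

rule = {(0, 0): [1], (0, 1): [1, 0], (1, 0): [0, 0, 1], (1, 1): [0]}
seq = [1, 0, 1, 1, 0]
for _ in range(6):
    seq = tag_step(seq, rule)
    print(seq)
```

Even rules this simple can produce sequences whose growth and structure are very hard to predict, which is the point being made here.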
And, yes, such systems can generate extremely complex behavior—reinforcing the idea (that we have repeatedly seen here) that machine learning works by selecting complexity that aligns with goals that
have been set.
And along these lines, one can consider all sorts of different computational systems as foundations for machine learning. Here we’ve been looking at cellular-automaton-like and tag-system-like
examples. But for example our Physics Project has shown us the power and flexibility of systems based on hypergraph rewriting. And from what we’ve seen here, it seems very plausible that something
like hypergraph rewriting can serve as a yet more powerful and flexible substrate for machine learning.
So in the End, What’s Really Going On in Machine Learning?
There are, I think, several quite striking conclusions from what we’ve been able to do here. The first is just that models much simpler than traditional neural nets seem capable of capturing the
essential features of machine learning—and indeed these models may well be the basis for a new generation of practical machine learning.
But from a scientific point of view, one of the things that’s important about these models is that they are simple enough in structure that it’s immediately possible to produce visualizations of what
they’re doing inside. And studying these visualizations, the most immediately striking feature is how complicated they look.
It could have been that machine learning would somehow “crack systems”, and find simple representations for what they do. But that doesn’t seem to be what’s going on at all. Instead what seems to be
happening is that machine learning is in a sense just “hitching a ride” on the general richness of the computational universe. It’s not “specifically building up behavior one needs”; rather what it’s
doing is to harness behavior that’s “already out there” in the computational universe.
The fact that this could possibly work relies on the crucial—and at first unexpected—fact that in the computational universe even very simple programs can ubiquitously produce all sorts of complex
behavior. And the point then is that this behavior has enough richness and diversity that it’s possible to find instances of it that align with machine learning objectives one’s defined. In some
sense what machine learning is doing is to “mine” the computational universe for programs that do what one wants.
It’s not that machine learning nails a specific precise program. Rather, it’s that in typical successful applications of machine learning there are lots of programs that “do more or less the right
thing”. If what one’s trying to do involves something computationally irreducible, machine learning won’t typically be able to “get well enough aligned” to correctly “get through all the steps” of
the irreducible computation. But it seems that many “human-like tasks” that are the particular focus of modern machine learning can successfully be done.
And by the way, one can expect that with the minimal models explored here, it becomes more feasible to get a real characterization of what kinds of objectives can successfully be achieved by machine
learning, and what cannot. Critical to the operation of machine learning is not only that there exist programs that can do particular kinds of things, but also that they can realistically be found by
adaptive evolution processes.
In what we’ve done here we’ve often used what’s essentially the very simplest possible process for adaptive evolution: a sequence of point mutations. And what we’ve discovered is that even this is
usually sufficient to lead us to satisfactory machine learning solutions. It could be that our paths of adaptive evolution would always be getting stuck—and not reaching any solution. But the fact
that this doesn’t happen seems crucially connected to the computational irreducibility that’s ubiquitous in the systems we’re studying, and that leads to effective randomness that with overwhelming
probability will “give us a way out” of anywhere we got stuck.
In some sense computational irreducibility “levels the playing field” for different processes of adaptive evolution, and lets even simple ones be successful. Something similar seems to happen for the
whole framework we’re using. Any of a wide class of systems seem capable of successful machine learning, even if they don’t have the detailed structure of traditional neural nets. We can see this as
a typical reflection of the Principle of Computational Equivalence: that even though systems may differ in their details, they are ultimately all equivalent in the computations they can do.
The phenomenon of computational irreducibility leads to a fundamental tradeoff, of particular importance in thinking about things like AI. If we want to be able to know in advance—and broadly
guarantee—what a system is going to do or be able to do, we have to set the system up to be computationally reducible. But if we want the system to be able to make the richest use of computation,
it’ll inevitably be capable of computationally irreducible behavior. And it’s the same story with machine learning. If we want machine learning to be able to do the best it can, and perhaps give us
the impression of “achieving magic”, then we have to allow it to show computational irreducibility. And if we want machine learning to be “understandable” it has to be computationally reducible, and
not able to access the full power of computation.
At the outset, though, it’s not obvious whether machine learning actually has to access such power. It could be that there are computationally reducible ways to solve the kinds of problems we want to
use machine learning to solve. But what we’ve discovered here is that even in solving very simple problems, the adaptive evolution process that’s at the heart of machine learning will end up
sampling—and using—what we can expect to be computationally irreducible processes.
Like biological evolution, machine learning is fundamentally about finding things that work—without the constraint of “understandability” that’s forced on us when we as humans explicitly engineer
things step by step. Could one imagine constraining machine learning to make things understandable? To do so would effectively prevent machine learning from having access to the power of
computationally irreducible processes, and from the evidence here it seems unlikely that with this constraint the kind of successes we’ve seen in machine learning would be possible.
So what does this mean for the “science of machine learning”? One might have hoped that one would be able to “look inside” machine learning systems and get detailed narrative explanations for what’s
going on; that in effect one would be able to “explain the mechanism” for everything. But what we’ve seen here suggests that in general nothing like this will work. All one will be able to say is
that somewhere out there in the computational universe there’s some (typically computationally irreducible) process that “happens” to be aligned with what we want.
Yes, we can make general statements—strongly based on computational irreducibility—about things like the findability of such processes, say by adaptive evolution. But if we ask “How in detail does
the system work?”, there won’t be much of an answer to that. Of course we can trace all its computational steps and see that it behaves in a certain way. But we can’t expect what amounts to a “global
human-level explanation” of what it’s doing. Rather, we’ll basically just be reduced to looking at some computationally irreducible process and observing that it “happens to work”—and we won’t have a
high-level explanation of “why”.
But there is one important loophole to all this. Within any computationally irreducible system, there are always inevitably pockets of computational reducibility. And—as I’ve discussed at length
particularly in connection with our Physics Project—it’s these pockets of computational reducibility that allow computationally bounded observers like us to identify things like “laws of nature” from
which we can build “human-level narratives”.
So what about machine learning? What pockets of computational reducibility show up there, from which we might build “human-level scientific laws”? Much as with the emergence of “simple continuum
behavior” from computationally irreducible processes happening at the level of molecules in a gas or ultimate discrete elements of space, we can expect that at least certain computationally reducible
features will be more obvious when one’s dealing with larger numbers of components. And indeed in sufficiently large machine learning systems, it’s routine to see smooth curves and apparent
regularity when one’s looking at the kind of aggregated behavior that’s probed by things like training curves.
But the question about pockets of reducibility is always whether they end up being aligned with things we consider interesting or useful. Yes, it could be that machine learning systems would exhibit
some kind of collective (“EEG-like”) behavior. But what’s not clear is whether this behavior will tell us anything about the actual “information processing” (or whatever) that’s going on in the
system. And if there is to be a “science of machine learning” what we have to hope for is that we can find in machine learning systems pockets of computational reducibility that are aligned with
things we can measure, and care about.
So given what we’ve been able to explore here about the foundations of machine learning, what can we say about the ultimate power of machine learning systems? A key observation has been that machine
learning works by “piggybacking” on computational irreducibility—and in effect by finding “natural pieces of computational irreducibility” that happen to fit with the objectives one has. But what if
those objectives involve computational irreducibility—as they often do when one’s dealing with a process that’s been successfully formalized in computational terms (as in math, exact science,
computational X, etc.)? Well, it’s not enough that our machine learning system “uses some piece of computational irreducibility inside”. To achieve a particular computationally irreducible objective,
the system would have to do something closely aligned with that actual, specific objective.
It has to be said, however, that by laying bare more of the essence of machine learning here, it becomes easier to at least define the issues of merging typical “formal computation” with machine
learning. Traditionally there’s been a tradeoff between the computational power of a system and its trainability. And indeed in terms of what we’ve seen here this seems to reflect the sense that
“larger chunks of computational irreducibility” are more difficult to fit into something one’s incrementally building up by a process of adaptive evolution.
So how should we ultimately think of machine learning? In effect its power comes from leveraging the “natural resource” of computational irreducibility. But when it uses computational irreducibility
it does so by “foraging” pieces that happen to advance its objectives. Imagine one’s building a wall. One possibility is to fashion bricks of a particular shape that one knows will fit together. But
another is just to look at stones one sees lying around, then to build the wall by fitting these together as best one can.
And if one then asks “Why does the wall have such-and-such a pattern?” the answer will end up being basically “Because that’s what one gets from the stones that happened to be lying around”. There’s
no overarching theory to it in itself; it’s just a reflection of the resources that were out there. Or, in the case of machine learning, one can expect that what one sees will be to a large extent a
reflection of the raw characteristics of computational irreducibility. In other words, the foundations of machine learning are as much as anything rooted in the science of ruliology. And it’s in
large measure to that science we should look in our efforts to understand more about “what’s really going on” in machine learning, and quite possibly also in neuroscience.
Historical & Personal Notes
In some ways it seems like a quirk of intellectual history that the kinds of foundational questions I’ve been discussing here weren’t already addressed long ago—and in some ways it seems like an
inexorable consequence of the only rather recent development of certain intuitions and tools.
The idea that the brain is fundamentally made of connected nerve cells was considered in the latter part of the nineteenth century, and took hold in the first decades of the twentieth century—with
the formalized concept of a neural net that operates in a computational way emerging in full form in the work of Warren McCulloch and Walter Pitts in 1943. By the late 1950s there were hardware
implementations of neural nets (typically for image processing) in the form of “perceptrons”. But despite early enthusiasm, practical results were mixed, and at the end of the 1960s it was announced
that simple cases amenable to mathematical analysis had been “solved”—leading to a general belief that “neural nets couldn’t do anything interesting”.
Ever since the 1940s there had been a trickle of general analyses of neural nets, particularly using methods from physics. But typically these analyses ended up with things like continuum
approximations—that could say little about the information-processing aspects of neural nets. Meanwhile, there was an ongoing undercurrent of belief that somehow neural networks would both explain
and reproduce how the brain works—but no methods seemed to exist to say quite how. Then at the beginning of the 1980s there was a resurgence of interest in neural networks, coming from several
directions. Some of what was done concentrated on very practical efforts to get neural nets to do particular “human-like” tasks. But some was more theoretical, typically using methods from
statistical physics or dynamical systems.
Before long, however, the buzz died down, and for several decades only a few groups were left working with neural nets. Then in 2011 came a surprise breakthrough in using neural nets for image
analysis. It was an important practical advance. But it was driven by technological ideas and development—not any significant new theoretical analysis or framework.
And this was also the pattern for almost all of what followed. People spent great effort to come up with neural net systems that worked—and all sorts of folklore grew up about how this should best be
done. But there wasn’t really even an attempt at an underlying theory; this was a domain of engineering practice, not basic science.
And it was in this tradition that ChatGPT burst onto the scene in late 2022. Almost everything about LLMs seemed to be complicated. Yes, there were empirically some large-scale regularities (like
scaling laws). And I quickly suspected that the success of LLMs was a strong hint of general regularities in human language that hadn’t been clearly identified before. But beyond a few outlier
examples, almost nothing about “what’s going on inside LLMs” has seemed easy to decode. And efforts to put “strong guardrails” on the operation of the system—in effect so as to make it in some way
“predictable” or “understandable”—typically seem to substantially decrease its power (a point that now makes sense in the context of computational irreducibility).
My own interaction with machine learning and neural nets began in 1980 when I was developing my SMP symbolic computation system, and wondering whether it might be possible to generalize the symbolic
pattern-matching foundations of the system to some kind of “fuzzy pattern matching” that would be closer to human thinking. I was aware of neural nets but thought of them as semi-realistic models of
brains, not for example as potential sources of algorithms of the kind I imagined might “solve” fuzzy matching.
And it was partly as a result of trying to understand the essence of systems like neural nets that in 1981 I came up with what I later learned could be thought of as one-dimensional cellular
automata. Soon I was deeply involved in studying cellular automata and developing a new intuition about how complex behavior could arise even from simple rules. But when I learned about recent
efforts to make idealized models of neural nets using ideas from statistical mechanics, I was at least curious enough to set up simulations to try to understand more about these models.
But what I did wasn’t a success. I could neither get the models to do anything of significant practical interest—nor did I manage to derive any good theoretical understanding of them. I kept
wondering, though, what relationship there might be between cellular automata that “just run”, and systems like neural nets that can also “learn”. And in fact in 1985 I tried to make a minimal
cellular-automaton-based model to explore this. It was what I’m now calling a “vertically layered rule array”. And while in many ways I was already asking the right questions, this was an unfortunate
specific choice of system—and my experiments on it didn’t reveal the kinds of phenomena we’re now seeing.
Years went by. I wrote a section on “Human Thinking” in A New Kind of Science, that discussed the possibility of simple foundational rules for the essence of thinking, and even included a minimal
discrete analog of a neural net. At the time, though, I didn’t develop these ideas. By 2017, 15 years after the book was published—and knowing about the breakthroughs in deep learning—I had
begun to think more concretely about neural nets as getting their power by sampling programs from across the computational universe. But still I didn’t see quite how this would work.
Meanwhile, there was a new intuition emerging from practical experience with machine learning: that if you “bashed” almost any system “hard enough”, it would learn. Did that mean that perhaps one
didn’t need all the details of neural networks to successfully do machine learning? And could one perhaps make a system whose structure was simple enough that its operation would for example be
accessible to visualization? I particularly wondered about this when I was writing an exposition of ChatGPT and LLMs in early 2023. And I kept talking about “LLM science”, but didn’t have much of a
chance to work on it.
But then, a few months ago, as part of an effort to understand the relation between what science does and what AI does, I tried a kind of “throwaway experiment”—which, to my considerable surprise,
seemed to successfully capture some of the essence of what makes biological evolution possible. But what about other adaptive evolution—and in particular, machine learning? The models that seemed to
be needed were embarrassingly close to what I’d studied in 1985. But now I had a new intuition—and, thanks to Wolfram Language, vastly better tools. And the result has been my effort here.
Of course this is only a beginning. But I’m excited to be able to see what I consider to be the beginnings of foundational science around machine learning. Already there are clear directions for
practical applications (which, needless to say, I plan to explore). And there are signs that perhaps we may finally be able to understand just why—and when—the “magic” of machine learning works.
Thanks to Richard Assar of the Wolfram Institute for extensive help. Thanks also to Brad Klee, Tianyi Gu, Nik Murzin and Max Niederman for specific results, to George Morgan and others at Symbolica
for their early interest, and to Kovas Boguta for suggesting many years ago to link machine learning to the ideas in A New Kind of Science.
8 comments
1. Interesting, and great visualizations. For those of us working in machine learning, statistical inference, and such, it will be important to link the above discussion to Bayesian inference—the
provably optimal statistical inference under broadly relevant conditions.
2. I could barely understand even 1% of this post but I want you to know that humanity is thankful for you. True human treasure.
3. Absolutely glorious visualisations. Serves an aesthetic enjoyment value as much as an educational aid. Really like this framing of ML and emergent intelligence mining through Cellular Automata –
which squares with my own pattern thinking & rules based approach to complexity analysis.
4. For reasons I’m not quite sure of yet, I actually find it reassuring that we still can’t know for sure what exactly is happening in these systems/processes. As a career educator (in one form or
another) it is reassuring that this knotty experience of learning — and a computational representation of it — continues to hold mysteries.
5. Amazing article, asking fundamental questions about the limits of AI Alignment and if alignment will even turn out to be human comprehensible. Makes me more inclined to think that models are best
to align models, not humans aligning models. Maybe this was the big idea from Superalignment?
Also amazed by the insight that ML is actually tapping into computational irreducibility.
6. I hope others will join in the research. This sounds very promising.
But it’s not boding well for explainability and alignment. This is very interesting.
7. Mr. Wolfram is a scientist of the highest caliber, and his ongoing work is profoundly relevant to numerous fundamental and applied disciplines for years to come. In many ways, he can be seen as a
modern embodiment of Ibn al-Haytham’s “Seeker after Truth.” Thank you!
“The seeker after truth is not one who studies the writings of the ancients and, following his natural disposition, puts his trust in them, but rather the one who suspects his faith in them and
questions what he gathers from them, the one who submits to argument and demonstration and not the sayings of human beings whose nature is fraught with all kinds of imperfection and deficiency.
Thus, the duty of the man who investigates the writings of scientists, if learning the truth is his goal, is to make himself an enemy of all that he reads, and, applying his mind to the core and
margins of its content, attack it from every side. He should also suspect himself as he performs his critical examination of it, so that he may avoid falling into either prejudice or leniency.”
8. Excellent! I recall a philosopher comment “the thing gives thought” (Heidegger??). Stephen’s insights suggest we pay attention to “prompts” as AI is searching for acceptable “responses” to a
Filter by column, sort by row
Note: FILTER is a new dynamic array function in Excel 365. In other versions of Excel, there are alternatives, but they are more complex.
In this example, the goal is to filter the data shown in B5:G15 by year, then sort the results in descending order. In addition, the result should include the Group column, sorted in the same way.
The problem breaks down into two main steps:
1. Filter to select the Group and matching Year column
2. Sort the result in descending order by year values
Filter by column
To filter the data to select the Group column and data for the matching year, we use the FILTER function. Typically FILTER is used to filter data vertically, selecting rows that match provided
conditions. However, FILTER can also select data horizontally. The key is to provide logic for the include argument that will return a horizontal array with the same number of columns as the source
data. For example, to return data for the year 2017, we can use a formula like this:
The logical expression:
returns a one-row horizontal array with 6 columns:
When provided to FILTER as the include argument, FILTER returns the values for 2017 only:
FILTER(B5:G15,{FALSE,FALSE,TRUE,FALSE,FALSE,FALSE}) // 2017 only
To add in the Group column, we extend the logic using Boolean logic, a technique for working with TRUE and FALSE values as 1s and 0s. In Boolean algebra, multiplication corresponds to AND logic, and
addition corresponds to OR logic. In this case, we want FILTER to return the Group column and the matching year column. This means we need OR logic - i.e. column = "group" OR column = [year].
Using addition for OR logic, we can construct an expression like this:
This results in two arrays with TRUE and FALSE values, joined by addition:
{TRUE,FALSE,FALSE,FALSE,FALSE,FALSE} +
{FALSE,FALSE,TRUE,FALSE,FALSE,FALSE}
The math operation of addition coerces the TRUE and FALSE values to numbers, and the result is a single array of 1s and 0s:
{1,0,1,0,0,0}
Notice the first and third columns are 1, while the other columns are 0. When this array is provided to FILTER as the include argument, FILTER returns columns 1 and 3 from the data.
Sort by row
Because the FILTER function is nested inside the SORT function, FILTER returns the two matching columns explained above directly to SORT:
We want to sort these columns by values in the year column (2017) in descending order, so sort_index is provided as 2, and sort_order is given as -1. With these inputs, the SORT function returns the
sorted data as shown in the example. Notice that Group E appears first, since 27% is the highest value in 2017.
When the year in J4 is changed, FILTER selects new columns, and the SORT function sorts the new data in the same way.
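Outside of Excel, the same two-step logic is easy to emulate; the Python sketch below (using made-up numbers laid out like the example) reproduces the filter-then-sort behavior of the formula:

```python
# Emulate FILTER-by-column followed by SORT-by-row in plain Python.
# Headers mirror the example layout (Group + years); the percentage
# values are invented for illustration.
headers = ["Group", 2016, 2017, 2018, 2019, 2020]
rows = [
    ["A", 0.11, 0.21, 0.31, 0.15, 0.12],
    ["B", 0.30, 0.14, 0.22, 0.18, 0.25],
    ["E", 0.05, 0.27, 0.09, 0.33, 0.19],
]
year = 2017  # plays the role of cell J4

# FILTER step: build the include array with OR logic over the headers.
include = [(h == "Group") + (h == year) for h in headers]  # 1s and 0s
filtered = [[v for v, keep in zip(row, include) if keep] for row in rows]

# SORT step: sort_index=2 (the year column), sort_order=-1 (descending).
result = sorted(filtered, key=lambda r: r[1], reverse=True)
print(result)
```

As in the worksheet, Group E lands first because 0.27 is the highest 2017 value.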
Dropdown menu for year
To make the year dropdown menu, you can apply a simple data validation rule to cell J4. The allowed values are based on the existing years in C4:G4, with In-cell dropdown selected:
Once data validation is in place, a dropdown menu with the years 2016-2020 will appear. If you are new to data validation, see our Data Validation Guide.
flow coefficient
When is this calculator suitable?
To select a size of a gas control valve, you should calculate the required flow coefficient Cg or Kg for the given flow rate and pressure drop.
For a specific gas control valve, the flow coefficient Cg or Kg is determined experimentally by control valve manufacturers, and you can find it in manufacturers' technical specifications. They
express flow coefficient Cg or Kg as the flow rate of water in g.p.m. [m3/h] for a pressure drop of 1 psi [1 bar] across a flow passage [flow coefficient: Cg-imperial, Kg-metric].
You can calculate the gas control valve capacity for given upstream and downstream pressure and known flow coefficient Cg or Kg.
With maximum flow rate calculated, you can compare capacities of control valves from different manufacturers with the same nominal size.
Standard EN334 is covering gas control valves and pressure regulators for pressures up to 100 bar. The calculator is in line with the same standard.
When is this calculator not relevant?
You should use a flow coefficient calculator for an incompressible flow of liquids.
How is the calculation executed?
The calculator computes the flow coefficient Cg or Kg using the relationship between pressure drop and flow rate in the control valve, which for completely turbulent flow follows a power law in which the flow coefficient is the proportionality constant:
where:
q - flow rate
Cg - flow coefficient
Δp - pressure drop
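To make the power law concrete, here is a minimal sketch under the simplifying assumption that the valve obeys the incompressible-style relation q = K·√Δp (real gas-valve sizing adds density, temperature, and compressibility corrections that are omitted here):

```python
import math

def flow_coefficient(q, dp):
    """Required flow coefficient K for flow rate q [m3/h] at drop dp [bar]."""
    return q / math.sqrt(dp)

def flow_rate(K, dp):
    """Valve capacity q [m3/h] for a known coefficient K and drop dp [bar]."""
    return K * math.sqrt(dp)

# Sizing example (illustrative numbers): 25 m3/h across a 4 bar drop.
K = flow_coefficient(q=25.0, dp=4.0)
print(K)                     # by definition, K equals the flow at a 1 bar drop
print(flow_rate(K, dp=1.0))
```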
More on Fibonacci numbers, with equational reasoning
[ examples Fibonacci gcd Isar ]
The previous post introduced a definition of the Fibonacci function along with some simple proofs by induction. We continue our tour with examples of equational reasoning. Chains of equalities and
inequalities are common in proofs and a proof assistant should allow them to be written.
Today our objective is a result involving greatest common divisors. The lemma below is proved by cases on whether the natural number m equals zero or not. The former case is trivial and in the latter
case, m is written as Suc k:
lemma gcd_fib_add: "gcd (fib m) (fib (n + m)) = gcd (fib m) (fib n)"
proof (cases m)
case 0
then show ?thesis by simp
next
case (Suc k)
then have "gcd (fib m) (fib (n + m))
= gcd (fib k * fib n) (fib (Suc k))"
by (metis add_Suc_right fib_add gcd.commute gcd_add_mult mult.commute)
also have "… = gcd (fib n) (fib (Suc k))"
using coprime_commute coprime_fib_Suc gcd_mult_left_left_cancel by blast
also have "… = gcd (fib m) (fib n)"
using Suc by (simp add: ac_simps)
finally show ?thesis .
qed
Less usual but convenient is equational reasoning within another expression. In the little proof below, the left hand side of the desired identity is transformed using the addition law proved above.
Then a subexpression of the result is simplified to n. Chaining the two steps (which is the purpose of finally) yield the desired result.
lemma gcd_fib_diff: "gcd (fib m) (fib (n - m)) = gcd (fib m) (fib n)" if "m ≤ n"
proof -
have "gcd (fib m) (fib (n - m)) = gcd (fib m) (fib (n - m + m))"
by (simp add: gcd_fib_add)
also from ‹m ≤ n› have "n - m + m = n"
by simp
finally show ?thesis .
qed
Another example of chaining equalities involving subexpressions appears below. Operating on a subexpression eliminates the need to copy out the entire context. The ellipsis (…) refers to the previous
right-hand side. It works and is clear but I confess this is not my style.
lemma gcd_fib_mod: "gcd (fib m) (fib (n mod m)) = gcd (fib m) (fib n)" if "0 < m"
proof (induction n rule: less_induct)
case (less n)
show ?case
proof -
have "n mod m = (if n < m then n else (n - m) mod m)"
by (rule mod_if)
also have "gcd (fib m) (fib …) = gcd (fib m) (fib n)"
using gcd_fib_diff less.IH that by fastforce
finally show ?thesis .
qed
qed
The work that we have done here and in the previous post finally takes us to our conclusion, an amusing theorem relating Fibonacci numbers and greatest common divisors. A clever step below is the use
of gcd_nat_induct, which refers to an induction principle for the GCD function. In the induction step, in order to prove $P(m,n)$ for a given property $P$, we have the induction hypothesis $P(n, m \bmod n)$ for all $n>0$. Here it follows immediately with the help of the previous lemma and a fact from Isabelle's built-in GCD library.
theorem fib_gcd: "fib (gcd m n) = gcd (fib m) (fib n)"
proof (induction m n rule: gcd_nat_induct)
case (step m n)
then show ?case
by (metis gcd.commute gcd_fib_mod gcd_red_nat)
qed auto
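The formal proof aside, the identity is easy to spot-check numerically; a small Python sanity check (not a proof, and independent of Isabelle):

```python
from math import gcd

def fib(n):
    """Iterative Fibonacci with fib(0) = 0, fib(1) = 1."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# fib (gcd m n) = gcd (fib m) (fib n), checked on a small grid.
for m in range(1, 20):
    for n in range(1, 20):
        assert fib(gcd(m, n)) == gcd(fib(m), fib(n))
print("fib_gcd verified for 1 <= m, n < 20")
```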
The proofs presented in this post are due to Gertrud Bauer. [Post corrected 2024-01-08 as suggested by suzuyu1729.]
PhysicsGP: A Genetic Programming approach to event selection
Created by W.Langdon from gp-bibliography.bib Revision:1.8010
author = "Kyle Cranmer and R. Sean Bowman",
title = "{PhysicsGP}: A Genetic Programming approach to event selection",
journal = "Computer Physics Communications",
year = "2005",
volume = "167",
number = "3",
pages = "165--176",
month = "1 " # may,
keywords = "genetic algorithms, genetic programming, Triggering, Classification, VC dimension, Neural networks, ANN, Support vector machines, SVM",
ISSN = "0010-4655",
abstract = "We present a novel multivariate classification technique based on Genetic Programming. The technique is distinct from Genetic Algorithms and offers several advantages compared to Neural Networks and Support Vector Machines. The technique optimises a set of human-readable classifiers with respect to some user-defined performance measure. We calculate the Vapnik-Chervonenkis dimension of this class of learning machines and consider a practical example: the search for the Standard Model Higgs Boson at the LHC. The resulting classifier is very fast to evaluate, human-readable, and easily portable. The software may be downloaded at: http://cern.ch/~cranmer/PhysicsGP.html.",
notes = "replaces oai:arXiv.org:physics/0402030 http://www.elsevier.com/wps/find/journaldescription.cws_home/505710/description#description
p171 {"}It is meaningless to calculate the VCD (Vapnik-Chervonenkis dimension) for GP in general...{"} {"}by placing a bound on either size... or the degree of the polynomial, we can calculate a sensible VCD.{"}
GP compared with ANN (backprop + momentum) and SVM with RBF kernel (BSVM-2.0). Training data subsampled.
p174 {"}GP approach does not seem particularly sensitive to the size penalty of mutation rates{"}.",
The Enigmatic Mathematician Who Reshaped Mathematics
Written on
Chapter 1: The Birth of Bourbaki
Nicolas Bourbaki is a fascinating figure in the realm of mathematics—he never existed as an individual, yet his collective contributions have significantly impacted the field for more than eight
decades. Bourbaki is credited with a variety of achievements, including a mock wedding for his daughter and an obituary upon his supposed passing, all contributing to his legendary status.
The aftermath of World War I left European academia in turmoil. Many scholars had been drafted into service, enduring disease and conflict. France faced a particularly harsh reality, with Aubin and
Goldstein noting that "over 60% of science graduates from the 1910 class did not return from the front lines." This exodus left French academia in a state of disarray, affecting the growth and
development of mathematics for a generation.
Section 1.1: The Struggles of Post-War Mathematics
The mathematical community faced a daunting task in rebuilding after the war. By the 1930s, the absence of coordinated efforts among mathematicians in France and beyond resulted in fragmented methods
and terminologies, rendering the creation of new textbooks nearly impossible. Notably, no mathematics textbooks were published following Goursat's work in 1904. As Pieronkiewicz stated, there was a
"noticeable impasse and increasing uncertainty surrounding the study of mathematics in France."
Jean Dieudonné remarked on this situation, noting, "This illustrated a spirit of democracy and patriotism we can only admire, yet the outcome was a dreadful loss of young French scientists."
Subsection 1.1.1: Forming a Collective
In light of these challenges, five mathematicians—Henri Cartan, Claude Chevalley, Jean Delsarte, André Weil, and Jean Dieudonné—came together to not only mend the fractured community but also to
establish a curriculum for the next two decades. They unwittingly initiated a multi-generational effort to rethink the foundations of mathematics, both invigorating and irritating their peers.
They humorously named their collective after Charles-Denis Bourbaki, a French general known for his failures during the Franco-Prussian War. The name Nicolas is thought by some to reference St.
Nicolas, perhaps indicating the group's intention to provide valuable contributions to the struggling mathematical landscape.
Chapter 2: The First Bourbaki Conference
The founding members, along with René de Possel, convened at Café Grill-Room A. Capoulade, a quaint spot in Paris' Latin Quarter. Their initial task was to refine the application of Stokes' theorem,
a key component in higher-dimensional calculus that had fallen out of favor in France. Over a hearty meal of cabbage soup and grilled meats, they agreed to work collaboratively, hold regular
meetings, and include the work of mathematicians from other countries, particularly Germany.
In July 1935, the inaugural Bourbaki conference was held, attended by three prominent French mathematicians. It was at this gathering that they collectively decided to publish their findings under
the Bourbaki name and broaden their scope to include areas such as topology, set theory, abstract algebra, and Lie groups. Rather than inventing new mathematics, Bourbaki aimed to streamline,
organize, and refine existing knowledge, establishing a coherent set of axioms upon which all mathematics could rely.
The video title is "How Imaginary Numbers Were Invented." This video delves into the intriguing history of imaginary numbers, exploring their origin and significance in mathematics.
Section 2.1: The Impact of Bourbaki
Jean Dieudonné stated that "What Bourbaki has accomplished is to define and generalize a concept that had been prevalent for a long time." Following this meeting, Possel's wife humorously baptized
Nicolas Bourbaki, allowing the group to publish articles under this pseudonym. Their first submission, "Sur un théorème de Carathéodory et la mesure dans les espaces topologiques," was swiftly
accepted by the mathematical community.
Within a few years, Bourbaki conferences took place three times a year, attracting some of the continent's leading mathematicians. Attendees often left these spirited gatherings with the impression
of being part of a "gathering of madmen." Dieudonné noted that the meetings had no formal structure, and it was common for members to harshly critique one another's ideas, regardless of their ages or reputations.
The only formal regulation dictated that members retire by age 50, based on the belief that older mathematicians might struggle to adapt to new methodologies. This dynamic resulted in the creation of
a vast body of work, culminating in the 1939 publication of Éléments de mathématique. Over the ensuing decades, this collection became essential reading, expanding to twelve volumes and 6,000 pages,
with the latest edition released in 2016.
However, the German invasion of France in 1940 severely hindered their progress, with some members joining the military and others fleeing from Nazi persecution, once again placing French mathematics
on shaky ground.
Chapter 3: Resurgence and Lasting Influence
Fortunately, Bourbaki resumed its conferences shortly after France's liberation in August 1944, and over the next 75 years, the group transformed the landscape of mathematics. Their insistence on
rigor over speculation drew criticism, with some arguing that Bourbaki sterilized mathematical research. They prioritized logical consistency, establishing foundational principles and reformulating
various mathematical fields.
For instance, they redefined functions, a core concept in mathematics. Traditionally viewed as machines that produce outputs based on given inputs, Bourbaki conceptualized functions as connections
between two sets, enabling the identification of logical relationships. They categorized functions into three types: injective, surjective, and bijective. Through precise definitions like these, they
constructed a more robust mathematical foundation.
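On finite sets the three categories are simple to state computationally; the following Python sketch (an illustration added here, not from the article) checks each property:

```python
def is_injective(f, domain):
    """No two inputs share an output."""
    images = [f(x) for x in domain]
    return len(set(images)) == len(images)

def is_surjective(f, domain, codomain):
    """Every element of the codomain is hit."""
    return {f(x) for x in domain} == set(codomain)

def is_bijective(f, domain, codomain):
    return is_injective(f, domain) and is_surjective(f, domain, codomain)

dom = [0, 1, 2, 3]
print(is_injective(lambda x: 2 * x, dom))             # True
print(is_surjective(lambda x: x % 2, dom, [0, 1]))    # True
print(is_bijective(lambda x: (x + 1) % 4, dom, dom))  # True
```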
Bourbaki also sought to simplify mathematical language, opting for clear and accessible terminology. They replaced complex Latin and Greek terms with more straightforward alternatives, preferring
"paving stones" over "parallelotopes" and "balls" instead of "hyperspheroids." As Jean Dieudonné noted, "We believe that ink is inexpensive enough to warrant writing things out fully, using a
carefully chosen vocabulary."
Additionally, they introduced new symbols, such as the empty set, which remain widely used today. Overall, Bourbaki aimed to establish a logical foundation for all of mathematics, with Dieudonné
describing this foundation as "a center from which all the rest unfolds."
Bourbaki's legacy extends far beyond their initial intentions. Their methods continue to shape modern mathematics, while their terminology and symbols remain in use. Éléments de mathématique endures,
and their influence reaches into fields such as philosophy, psychology, and anthropology. Notably, Bourbaki's work laid the groundwork for structuralism, a movement advocating for the reduction of
disciplines to their fundamental elements.
Jean Dieudonné likened mathematics to "a ball of wool, a tangled hank," asserting that even a minor change could impact the entire structure. Little did he realize in 1968 that the "ball" was far
larger, with Bourbaki's influence permeating all realms of academia.
Quite impressive for a fictional mathematician.
Print Free Graph Paper
Save yourself money and a trip to the store! Print graph paper free from your computer. This site is perfect for science and math homework, craft projects and other graph paper needs. All graph paper
files are optimized PDF documents requiring Adobe Reader for viewing.
Take advantage of your printing flexibility; print on transparency film for sharp graph paper overheads, or waterproof paper for field data-collecting.
Cartesian graph paper is the most popular form of graph paper in use. This type of graph paper is identified by its two perpendicular sets of lines forming a square grid. This graph paper’s grid is
used when graphing two-dimensional equations, and this versatile graph paper is useful for sketches, craft projects, layouts and other non-math activities.
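If you would rather generate a custom grid than download one, a square Cartesian grid can be produced as a printable SVG in a few lines; the sketch below uses only the Python standard library, and the 5 mm spacing, page size, and styling are illustrative choices rather than this site's templates:

```python
# Generate Cartesian graph paper as an SVG string (stdlib only).
def graph_paper_svg(width=200, height=287, step=5):
    """Square grid: width/height/step in millimetres."""
    parts = []
    for x in range(0, width + 1, step):      # vertical lines
        parts.append(f'<line x1="{x}mm" y1="0" x2="{x}mm" y2="{height}mm" '
                     'stroke="#9cf" stroke-width="0.2mm"/>')
    for y in range(0, height + 1, step):     # horizontal lines
        parts.append(f'<line x1="0" y1="{y}mm" x2="{width}mm" y2="{y}mm" '
                     'stroke="#9cf" stroke-width="0.2mm"/>')
    return (f'<svg xmlns="http://www.w3.org/2000/svg" '
            f'width="{width}mm" height="{height}mm">' + "".join(parts) + "</svg>")

with open("cartesian_grid.svg", "w") as f:
    f.write(graph_paper_svg())
```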
Engineering graph paper lines are similar to Cartesian graph paper’s two perpendicular sets of lines forming a square grid. However, through the use of contrasting line-weights, engineering graph
paper groups the squares into clusters to show distance. Our imperial graph papers are in one square-inch clusters and our metric graph papers are in one square-centimeter clusters.
Polar graph paper is used when graphing polar coordinates. Our polar graph paper has lines radiating from a point to divide the field into 360 unmarked sections that can be labeled as degrees or
radians, while concentric circles intersect the lines by a selected interval.
Isometric graph paper is used when creating isometric images or when graphing three-dimensional functions. Isometric graph paper has three sets of parallel lines representing length, width and height
forming a grid of equilateral triangles.
Logarithmic graph paper is used for graphing rapidly increasing or decreasing quantities spread over a wide expanse. This type of graph paper uses a logarithmic scale that compresses certain sections
of the graph to accommodate a wide data set. Logarithmic graph paper is available in two classes. Semi-logarithmic graph paper, also called semi-log, uses a logarithmic and a linear scale to compose
its axes; full-logarithmic graph paper, also called log-log, uses logarithmic scales for both axes.
Hexagonal graph paper, also called hex paper, is a network of tiled hexagons that form a grid. This type of graph paper can be used when studying tessellations, but is more often used by quilt makers
and computer gamers. Quilt makers use the hexagonal graph paper to design intricate piece-work patterns; gamers use it in the creation of maps to aid game navigation. Our hexagonal graph paper is
measured by a regular hexagon inscribing a circle of a user specified diameter.
Probability graph paper is used when graphing variables along a normal distribution. This type of graph paper uses a probability scale along one axis and a linear scale along the other. This paper is
mostly used in Statistics.
Smith chart is a type of graph paper used in electrical engineering to plot variances of complex transmission impedance along its length. Smith charts also simplify the matching of the line to its
load. Smith charts are copyrighted by Analog Instruments Company and are offered on this site with permission.
convert an integer to binary KRC - Robotforum - Support and discussion community for industrial robots and cobots
Hi guys!
I need to convert an integer to binary. It should happen in the code itself: for example, with a DECL I=10, the corresponding binary representation of I is returned. I didn't find a solution in the
manual, and I would like to discuss possible solutions with you. Thanks!
DEF DECODE16(val1:in, val2[]:out)
  DECL INT val1, i, n
  DECL BOOL val2[]
  i=1
  FOR n=1 to 16
    val2[n]=(val1 B_AND i)<>0
    i=i+i
  ENDFOR
END

DEFFCT INT ENCODE16(val1[]:out)
  DECL BOOL val1[]
  DECL INT i, n, tmp
  tmp=0
  i=1
  FOR n=1 to 16
    IF val1[n]==TRUE THEN
      tmp=tmp+i
    ENDIF
    i=i+i
  ENDFOR
  RETURN tmp
ENDFCT
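If you want to verify the bit logic off the controller, here is an equivalent of the two routines in Python (an illustration only, not KRL):

```python
# DECODE16: expand an integer into 16 booleans, least significant bit first.
def decode16(value):
    return [(value & (1 << n)) != 0 for n in range(16)]

# ENCODE16: pack 16 booleans back into an integer.
def encode16(bits):
    return sum(1 << n for n, bit in enumerate(bits) if bit)

bits = decode16(10)     # 10 = 0b1010
print(bits[:4])         # [False, True, False, True]
print(encode16(bits))   # 10
```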
The Stacks project
Definition 42.68.2. Let $R$ be a local ring with maximal ideal $\mathfrak m$ and residue field $\kappa $. Let $M$ be a finite length $R$-module. Say $l = \text{length}_ R(M)$.
1. Given elements $x_1, \ldots , x_ r \in M$ we denote $\langle x_1, \ldots , x_ r \rangle = Rx_1 + \ldots + Rx_ r$ the $R$-submodule of $M$ generated by $x_1, \ldots , x_ r$.
2. We will say an $l$-tuple of elements $(e_1, \ldots , e_ l)$ of $M$ is admissible if $\mathfrak m e_ i \subset \langle e_1, \ldots , e_{i - 1} \rangle $ for $i = 1, \ldots , l$.
3. A symbol $[e_1, \ldots , e_ l]$ will mean $(e_1, \ldots , e_ l)$ is an admissible $l$-tuple.
4. An admissible relation between symbols is one of the following:
1. if $(e_1, \ldots , e_ l)$ is an admissible sequence and for some $1 \leq a \leq l$ we have $e_ a \in \langle e_1, \ldots , e_{a - 1}\rangle $, then $[e_1, \ldots , e_ l] = 0$,
2. if $(e_1, \ldots , e_ l)$ is an admissible sequence and for some $1 \leq a \leq l$ we have $e_ a = \lambda e'_ a + x$ with $\lambda \in R^*$, and $x \in \langle e_1, \ldots , e_{a - 1}\rangle
$, then
\[ [e_1, \ldots , e_ l] = \overline{\lambda } [e_1, \ldots , e_{a - 1}, e'_ a, e_{a + 1}, \ldots , e_ l] \]
where $\overline{\lambda } \in \kappa ^*$ is the image of $\lambda $ in the residue field, and
3. if $(e_1, \ldots , e_ l)$ is an admissible sequence and $\mathfrak m e_ a \subset \langle e_1, \ldots , e_{a - 2}\rangle $ then
\[ [e_1, \ldots , e_ l] = - [e_1, \ldots , e_{a - 2}, e_ a, e_{a - 1}, e_{a + 1}, \ldots , e_ l]. \]
5. We define the determinant of the finite length $R$-module $M$ to be
\[ \det \nolimits _\kappa (M) = \left\{ \frac{\kappa \text{-vector space generated by symbols}}{\kappa \text{-linear combinations of admissible relations}} \right\} \]
Comments (2)
Comment #2609 by Ko Aoki on
Typo in the definition of an admissible tuple: "$\mathfrak m e_i \in \langle e_1, \ldots, e_{i - 1} \rangle$" should be replaced by "$\mathfrak m e_i \subset \langle e_1, \ldots, e_{i - 1} \rangle$".
Comment #2632 by Johan on
Thanks, fixed here.
Quantized embedding approaches for collective strong coupling—Connecting ab initio and macroscopic QED to simple models in polaritonics
Collective light–matter interactions have been used to control chemistry and energy transfer, yet accessible approaches that combine ab initio methodology with large many-body quantum optical systems
are missing due to the fast increase in computational cost for explicit simulations. We introduce an accessible ab initio quantum embedding concept for many-body quantum optical systems that allows
us to treat the collective coupling of molecular many-body systems effectively in the spirit of macroscopic quantum electrodynamics while keeping the rigor of ab initio quantum chemistry for the
molecular structure. Our approach fully includes the quantum fluctuations of the polaritonic field and yet remains much simpler and more intuitive than complex embedding approaches such as dynamical
mean-field theory. We illustrate the underlying assumptions by comparison to the Tavis–Cummings model. The intuitive application of the quantized embedding approach and its transparent limitations
offer a practical framework for the field of ab initio polaritonic chemistry to describe collective effects in realistic molecular ensembles.
May it be for catalysis, solar energy harvesting, or superconductivity, controlling the state of a material on demand and the atomistic level defines a considerable area of modern science and
technology. In addition to the intrinsic electronic and phononic degrees of freedom of the material, their interplay with the electromagnetic environment can play a critical role. Prominent examples
include Floquet engineered materials^1–3 and mode-selective chemistry.^4 Coherently driving the material directly has two major downsides: (i) it requires energy, and (ii) it results in uncontrolled
dissipation, which tends to obscure the path toward the desired target state. Electromagnetic resonators can reshape the continuous optical free-space spectrum into distinct optical modes, thus
allowing a material inside to exchange energy only with specific modes. At sufficiently strong interaction, the resonant modes of matter and cavity hybridize and result in polaritonic quasiparticle
excitations. Over the recent years, a plethora of changes in energy transfer^5–16 and chemical reactivity^17–24 have been observed and established the field of polaritonic chemistry.^25–27
The strength of hybridization between light and matter scales approximately as $\sqrt{N_E/V}$, with the number of collectively coupled emitters $N_E$ and the effective quantization volume V of the confined field. Strong coupling is reached if the hybridization energy exceeds all losses and the two bright polaritonic excitations can be clearly separated from their background. Two prominent strategies exist to reach strong coupling: (i) minimizing V by using, e.g., subwavelength confined optical modes in (meta)plasmonic systems,^28–31 and (ii) increasing the number of emitters $N_E$ that interact coherently with the confined resonator mode.^5–7,18,20,32 Polaritonic chemistry has primarily utilized the latter, which imposes, unfortunately, a high burden on the theory as the description of
large ensembles becomes computationally overwhelming. As a consequence, the literature is dominated by Tavis–Cummings-like models, which replace the complex electronic structure with two levels and
the resonator structure with a single harmonic oscillator. Effective (embedding) extensions of such models have recently gained attention^33–35 as they promise to compress complexity into intuitive
expressions. While simple quantum optical models have demonstrated remarkable success in describing the collective interaction, the goal to simultaneously capture changes in the chemical structure^
24,26,27,36 calls for new approaches that are able to combine sophisticated cavities^37 with complex molecular restructuring during chemical reactions. Importantly, chemistry is foremost local while
collective interaction is intrinsically delocalized, i.e., a local reformulation would greatly ease the theoretical challenge.
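The square-root-of-N scaling behind this collective-coupling argument can be checked numerically by diagonalizing the single-excitation block of the Tavis–Cummings Hamiltonian. The sketch below (resonant case, arbitrary coupling value g) is an illustration added here, not part of the paper:

```python
import numpy as np

def polariton_energies(N, g=0.05):
    """Single-excitation Tavis-Cummings block for N identical two-level
    emitters resonant with one cavity mode (all bare energies set to 0).
    Basis: {one photon} + {emitter i excited, i = 1..N}."""
    H = np.zeros((N + 1, N + 1))
    H[0, 1:] = H[1:, 0] = g   # photon couples to every emitter
    return np.linalg.eigvalsh(H)

# Bright polaritons split by 2*g*sqrt(N); the N-1 dark states stay at 0.
for N in (1, 4, 100):
    E = polariton_energies(N)
    print(N, E[-1] - E[0])
```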
In this spirit, quantizing macroscopic Maxwell’s equations for all but a few molecules, or even a single molecule, is an attractive approach. Macroscopic quantum electrodynamics (QED) is capable of
describing a complex optical environment seen by a single molecule via the linear response functions of the surrounding molecules and the cavity structure. Screening effects are accounted for via
local field corrections, the impact of which on collective strong coupling has not yet been explored. Reference 38 introduced the Embedding radiation-reaction approach (Erra), which offers a
simplified workflow compared to macroscopic QED, and demonstrated that, within mean-field theory, the collective interaction can be approximately absorbed into an effective light–matter interaction
that is trivially compatible with local ab initio methods.
In this paper, we clarify the potential and limitations of quantized embedding approaches based on macroscopic QED for collective strong coupling by connecting them to standard quantum optical
models. To do so, we introduce in Sec. II quantized embedding approaches and, specifically, in Sec. II B 2, the Quantized embedding radiation-reaction approach (Qerra). Section III demonstrates how
Qerra recovers the Tavis–Cummings Hamiltonian in a suitable limit as well as collective effects (superradiance). For simple cavity structures, we find that Qerra offers a simple and straight-forward
workflow (Sec. II C) that provides the quantized description of a macroscopic number of realistic molecules interacting with the cavity in the low-excitation regime. Our results in Sec. IV
demonstrate how embedding approaches offer a computationally feasible way to go beyond common approximations employed in standard quantum optics approaches to describe the local dynamic of a single
molecule that is collectively strongly coupled to a realistic azopyrrole in chloroform solution. Section V concludes our discussion and provides an outlook over the remaining challenges and possible
solution strategies for polaritonic embedding approaches.
We first give an overview of field quantization in general absorbing and dispersive media via macroscopic quantum electrodynamics (Sec. II A). In Sec. II B, we then demonstrate how this framework can
be employed to examine the collective strong coupling of a macroscopic number of complex emitters through embedding approaches. Section II C and Fig. 1 present a summary of the underlying workflow of
these embedding approaches.
A. Field quantization in absorbing environments
To find the electromagnetic field in the presence of linear macroscopic media,^39,40 a fruitful approach is offered by macroscopic electrodynamics. Here, the macroscopic number of matter degrees of
freedom is encompassed into a dielectric function, namely, the permittivity ɛ(r, ω).^41^, ɛ relates the linear polarization field generated by the media to the external electric field. Canonically
quantizing classical macroscopic electrodynamics^42 leads to the theoretical framework of macroscopic quantum electrodynamics,^39,40 which is capable of finding the quantized electromagnetic field in
general absorbing and dispersive optical environments given by their permittivity.
The resulting quantized electric field operator reads
$$\hat{\mathbf E}(\mathbf r) = i\sqrt{\frac{\hbar}{\pi\varepsilon_0}}\int_0^\infty d\omega\,\frac{\omega^2}{c^2}\int d^3r'\,\sqrt{\operatorname{Im}\varepsilon(\mathbf r',\omega)}\;\mathbf G(\mathbf r,\mathbf r',\omega)\cdot\hat{\mathbf f}(\mathbf r',\omega) + \mathrm{H.c.}, \tag{1}$$
where $c$ is the speed of light in vacuum, $\varepsilon_0$ is the vacuum permittivity, and $\hat{\mathbf f}^{(\dagger)}(\mathbf r,\omega)$ are bosonic creation and annihilation operators satisfying $[\hat{\mathbf f}(\mathbf r,\omega),\hat{\mathbf f}^\dagger(\mathbf r',\omega')] = \boldsymbol\delta(\mathbf r-\mathbf r')\,\delta(\omega-\omega')$. $\mathbf G(\mathbf r,\mathbf r',\omega)$ is the dyadic Green tensor of the vector Helmholtz equation defined via
$$\left[\boldsymbol\nabla\times\boldsymbol\nabla\times{} - \frac{\omega^2}{c^2}\,\varepsilon(\mathbf r,\omega)\right]\mathbf G(\mathbf r,\mathbf r',\omega) = \boldsymbol\delta(\mathbf r-\mathbf r')\,\mathbb 1$$
and the boundary condition $\mathbf G(\mathbf r,\mathbf r',\omega)\to 0$ for $|\mathbf r-\mathbf r'| \to \infty$. The magnetic field operator can be found from Eq. (1) via $\hat{\mathbf B}(\mathbf r,\omega) = (i\omega)^{-1}\,\boldsymbol\nabla\times\hat{\mathbf E}(\mathbf r,\omega)$.
When the macroscopic QED field is coupled to a set of charged emitters at positions $\mathbf r_i$ and with dipole operators $\hat{\mathbf d}_i$, the corresponding light–matter Hamiltonian in the multipolar coupling scheme and the long-wavelength approximation reads
$$\hat H = \hat H_{\mathrm M} + \hat H_{\mathrm F} - \sum_i \hat{\mathbf d}_i\cdot\hat{\mathbf E}(\mathbf r_i).$$
Here, we have defined the matter Hamiltonian $\hat H_{\mathrm M}$, including the dipole self-energy term, and
$$\hat H_{\mathrm F} = \int d^3r\int_0^\infty d\omega\;\hbar\omega\,\hat{\mathbf f}^\dagger(\mathbf r,\omega)\cdot\hat{\mathbf f}(\mathbf r,\omega)$$
is the Hamiltonian of the medium-assisted field.
To use macroscopic QED, one has to determine ε(r, ω) for the given setup and, subsequently, determine the Green tensor $\mathbf G(\mathbf r,\mathbf r',\omega)$ by solving the Helmholtz equation above. These quantities then fully determine the quantized electromagnetic field. For example, the spectral density of the field reads
$$J(\omega) = \frac{\omega^2}{\pi\hbar\varepsilon_0 c^2}\,\mathbf d\cdot\operatorname{Im}\mathbf G(\mathbf r_{\mathrm A},\mathbf r_{\mathrm A},\omega)\cdot\mathbf d, \tag{7}$$
which is known to determine the dynamics of a single quantum emitter with dipole moment $\mathbf d$ coupled to the vacuum field at position $\mathbf r_{\mathrm A}$.
Beyond a single emitter, the dynamics of a few emitters (possibly strongly) coupled to the medium-assisted field can analogously be found using macroscopic QED. However, finding the dynamics of macroscopically many emitters coupled to the four-dimensional continuum of field operators $\hat{\mathbf f}(\mathbf r,\omega)$ poses great computational difficulties in general. Embedding approaches, as outlined in Sec. II B, suggest a suitable path to compress the complexity of many-body systems to an effective single- or few-particle problem.
B. Ab initio embedding approaches
We consider the following setup, which is relevant, e.g., for polaritonic chemistry: A cloud of N ≫ 1 emitters (e.g., molecules) is placed inside a cavity. The basic idea of the embedding approaches is now to separate the system of N emitters and the cavity into microscopic components, which are solved directly, and macroscopic components, which are described effectively. The microscopic part is the part of interest and in the following is considered to consist of just a single molecule (or a few) that can be described in detail by ab initio methods. The macroscopic part consists of the remaining N − 1 molecules and the cavity. Both the cavity mirrors and the remaining N − 1 molecules are assumed to dress the electromagnetic environment of the microscopic part via linear response theory, i.e., via an effective linear susceptibility χ(r, ω). Here, χ consists of a part describing the cavity mirrors (χ[cav]) and the molecules (χ[mol]); compare Fig. 1. χ[mol] can be obtained by calculating the polarizability α(ω) of a single molecule via ab initio methods and then employing a Clausius–Mossotti relation; e.g., in the dilute gas limit one finds
$$\chi^{[\mathrm{mol}]}(\mathbf r,\omega) = \frac{N\,\alpha(\omega)}{\varepsilon_0 V}\,\Theta_V(\mathbf r), \tag{8}$$
where $V$ is the volume in which the emitters are located, and $\Theta_V$ is the Heaviside theta function, i.e., $\Theta_V(\mathbf r) = 1$ if $\mathbf r \in V$ and $\Theta_V(\mathbf r) = 0$ otherwise.
Having absorbed most of the emitters into the environmental susceptibility χ[mol], we can use macroscopic QED as outlined in Sec. II A to quantize the effective electromagnetic environment $G$ for
the given χ, which now represents the cavity dressed by N emitters. What hampers the direct application of such an approach is the fact that the spectral density in Eq. (7) diverges in the
coincidence limit (for r = r′). Physically, this feature is caused by the coarse graining of the medium, which neglects that the field seen locally by the emitter differs from the macroscopic one due
to the discreteness of the individual emitters.^47 To account for these effects, local field corrections have to be employed, e.g., via the so-called real cavity model^48 as outlined in Sec. II B 1.
1. Full macroscopic QED embedding
To account for local field effects, the real cavity model assumes that the space in the immediate vicinity of the emitter is empty, i.e., that the emitter is placed in a small spherical cavity of radius $R_{\mathrm c}$ centered around the position of the emitter inside the surrounding medium; see Fig. 1 (left). The Green tensor of the new geometry, including the small cavity surrounding the emitter, can be determined in the limit $kR_{\mathrm c} \ll 1$, where $k = \omega/c$ is the wave-vector of the field. In this limit, one finds that the Green tensor inside the small cavity can be expressed in terms of $G^{(0)}$, the free-space Green tensor, and $G^{\mathrm{scat}}$, the scattering Green tensor of the surrounding medium without the small cavity around the emitter, multiplied by local-field correction factors. The real cavity model has been used to study, e.g., corrections to the Purcell effect, medium-assisted van der Waals interactions, and strong coupling of a single emitter to a cavity field with a background medium.
However, it is not clear how to connect the full macroscopic QED embedding approach based on the real cavity model outlined here to standard quantum optics frameworks, such as the Tavis–Cummings
Hamiltonian. Moreover, in the full macroscopic QED embedding approach, the scattering Green tensor of the cavity and the emitters must be calculated for each type of emitter and each emitter density separately, as each new configuration of emitters inside the cavity results in a slightly different mode structure. Both limitations can be lifted by introducing the streamlined embedding approach of Qerra.
2. Qerra: Good cavity, dilute emitter, and small volume approximation
Here we follow Ref. 38 and consider a scenario that closely resembles more idealized quantum optical setups. We assume that the molecules are confined to a region $V$ that is much smaller than the resonant wavelength of the cavity. We further consider the good cavity and dilute emitter limits in which one can safely ignore the diverging bulk contribution to the Green tensor as well as local field corrections and approximate the bare Green tensor of the empty cavity $G_{\mathrm{cav}}$ by the scattering contribution $G_{\mathrm{cav}}^{\mathrm{scat}}$. This neglects all direct interactions between the emitters that are not mediated by the cavity. In this limit, we can follow Ref. 38 to obtain the Green tensor inside the small volume $V$ accounting for the surrounding N emitters and the cavity via the following Lippmann–Schwinger equation:
$$G(\mathbf r,\mathbf r',\omega) = G_{\mathrm{cav}}(\mathbf r,\mathbf r',\omega) + \frac{\omega^2}{c^2}\int_V d^3s\; G_{\mathrm{cav}}(\mathbf r,\mathbf s,\omega)\,\chi^{[\mathrm{mol}]}(\mathbf s,\omega)\,G(\mathbf s,\mathbf r',\omega),$$
where $\mathbf r,\mathbf r' \in V$, and $G_{\mathrm{cav}}$ is the Green tensor of the empty cavity that we approximate by its scattering contribution, i.e., $G_{\mathrm{cav}} \approx G_{\mathrm{cav}}^{\mathrm{scat}}$. Assuming that $G_{\mathrm{cav}}$ is approximately constant over the spatial extension of the cloud of molecules $V$, this equation can be solved via the dipole approximation, i.e., assuming that $G(\mathbf r,\mathbf r',\omega) = G(\omega)$, such that
$$G(\omega) = G_{\mathrm{cav}}(\omega) + \frac{\omega^2}{c^2}\,\frac{N\alpha(\omega)}{\varepsilon_0}\,G_{\mathrm{cav}}(\omega)\,G(\omega).$$
This equation can be solved for $G(\omega)$ to yield
$$G(\omega) = \frac{G_{\mathrm{cav}}(\omega)}{1 - \dfrac{\omega^2}{c^2}\,\dfrac{N\alpha(\omega)}{\varepsilon_0}\,G_{\mathrm{cav}}(\omega)}.$$
As discussed in Ref. 38, this approach is sufficient to describe the classical linear response of the entire system and provides a simple way to push ab initio QED to macroscopically large values of N. This scheme can further be easily extended to not just a single type of molecule but, e.g., also the scenario where a molecule of interest is placed in a solvent. To this end, the polarizability α would consist of a sum of the polarizabilities of the molecules of interest and the solvent molecules weighted by their relative density. Note that Qerra provides solutions for any arbitrary number and mixture of emitters inside the resonator structure once the Green tensor of the bare resonator structure $G_{\mathrm{cav}}$ and the emitter polarizability α have been determined. In contrast, in the full embedding approach accounting for local field corrections outlined in Sec. II B 1, the Green tensor of the cavity and the emitters must be calculated for every new concentration of the emitters.
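The resulting dressed Green tensor can be evaluated numerically for a scalar toy model: a single-Lorentzian cavity Green function dressed by N two-level emitters via the algebraic Lippmann–Schwinger solution. The following is a minimal sketch in toy units, with all prefactors absorbed into an effective squared coupling; the function names and parameter values are illustrative, not the paper's:

```python
import numpy as np

def alpha_tls(w, wA, g2, gamma):
    """Two-level polarizability in the rotating-wave approximation;
    toy units in which the numerator is an effective squared coupling g^2."""
    return g2 / (wA - w - 1j * gamma / 2)

def g_cav(w, wc, kappa):
    """Scalar single-Lorentzian (single-mode) cavity Green function."""
    return 1.0 / (wc - w - 1j * kappa / 2)

def g_dressed(w, N, wc, kappa, wA, g2, gamma):
    """Dressed Green function from the algebraic Lippmann-Schwinger
    solution G = G_cav / (1 - N * alpha * G_cav)."""
    Gc = g_cav(w, wc, kappa)
    return Gc / (1.0 - N * alpha_tls(w, wA, g2, gamma) * Gc)

def polariton_peaks(N=100, wc=2.0, kappa=0.02, wA=2.0, g2=1e-4, gamma=0.01):
    """Locate maxima of the spectral density J ~ Im G on a frequency grid."""
    w = np.linspace(1.7, 2.3, 20001)
    J = np.imag(g_dressed(w, N, wc, kappa, wA, g2, gamma))
    mask = np.r_[False, (J[1:-1] > J[:-2]) & (J[1:-1] > J[2:]), False]
    return w[mask]

if __name__ == "__main__":
    # the bare cavity peak at wc splits into polaritons near wc +/- g*sqrt(N)
    print(polariton_peaks())
```

For the chosen parameters, the single cavity resonance at ω = 2 splits into two peaks separated by roughly 2g√N = 0.2, illustrating the collective Rabi splitting encoded in the dressed mode structure.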
So far, Qerra’s classical equivalent has been used to find the classical electromagnetic environment of a molecule inside a cloud of N ≫ 1 molecules that is placed inside a cavity, given by a bare
Green tensor consisting of just a single Lorentzian,^38 which physically corresponds to a perfectly single mode cavity. In the following, we use macroscopic QED to quantize the field in the embedding
ansatz and show how this connects Qerra to standard quantum optics approaches in Sec. III. Before doing so, we summarize the workflow of the macroscopic QED embedding approaches for collective strong
coupling in Sec. II C.
C. Workflow of quantized embedding approaches
The workflow of the embedding ansatz is summarized in Fig. 1: In the first step, the polarizability of the individual molecules and the solvent is calculated in free-space from first principles
using, e.g., time-dependent density-functional theory. The susceptibility χ[mol] of the ensemble of N molecules is then found, for example, via a Clausius–Mossotti type relation [compare Eq. (8)].
The Green tensor $G(rNE,rNE,ω)≡G(ω)$ of the optical environment of the singled-out emitter at position $rNE$ can be obtained either via the full macroscopic QED embedding approach (see Sec. II B 1)
or via Qerra (see Sec. II B 2). Once $G(ω)$ and the susceptibility of the environment are determined, the resulting quantized field is given by macroscopic QED via Eq. (1). We have thus reduced the
problem of a molecule interacting with a cavity of arbitrary geometry and material as well as a macroscopic number of other molecules to the interaction of a single molecule with a dressed multimode
quantized field.
The resulting multimode macroscopic QED field can be utilized directly in ab initio calculations.^55,56 Alternatively, one can further reduce the complexity of the four-dimensional continuum of field
modes given by f(r, ω) to only a few dominant modes of the field using a suitable few-mode model.^57–62
The embedding approaches outlined here allow one to include realistic cavity models (described through a general Green tensor), go beyond the rotating-wave approximation, and account for the full
molecular structure provided from first principles, yet the computational complexity remains constant for any number of molecules N. The approximations of these embedding approaches are summarized
and compared to those used to derive the Tavis–Cummings Hamiltonian (see Sec. III) in Table I.
TABLE I.
Framework | Approximations
Full MQED embedding | Linear response for embedded molecules, local field corrections (real cavity model)
Qerra | Linear response for embedded molecules, good cavity, dilute emitter, small volume approximation
Tavis–Cummings | Two-level approximation, rotating-wave approximation, single-mode approximation
Collective strong coupling is routinely studied via the Tavis–Cummings model and extensions thereof, such as the Holstein–Tavis–Cummings model.^63,64 In such approaches, the electronic degrees of
freedom of the emitters are approximated by two-level systems that are all located at the same position inside the cavity, and the cavity field is given by a lossless single harmonic mode.
How does the embedding approach introduced in Sec. II B compare to these standard quantum optics methods? To analyze this question, we consider N two-level systems resonantly coupled to a single-mode cavity and assume that the rotating-wave approximation applies. We consider only one polarization direction, such that we include only one diagonal element G(ω) of the Green tensor. For a single-mode field, it is given by a single Lorentzian line shape [Eq. (14)] of width κ centered at the cavity frequency ω_c, with a constant prefactor that determines the resulting light–matter coupling strength; see below.
We can now follow different approaches: (i) start with a single-mode approximation of the electric field resulting from Eq. (14) to find the dynamics via the resulting Tavis–Cummings Hamiltonian; (ii) use the Qerra embedding workflow from above, i.e., find the new, polaritonic multimode field for the cavity coupled to N molecules; and (iii) use the Tavis–Cummings Hamiltonian as in (i), but additionally apply the Holstein–Primakoff approximation, which is valid in the low-excitation limit. These three approaches are outlined and contrasted in the subsequent three sections.
A. Exact diagonalization of the Tavis–Cummings model
Coupling N two-level systems with transition frequency ω_A to the field given by the Green tensor in Eq. (14) leads to the following Tavis–Cummings model:
$$\hat H = \hbar\omega_{\mathrm c}\,\hat a^\dagger\hat a + \hbar\omega_{\mathrm A}\sum_{i=1}^N \hat\sigma_i^+\hat\sigma_i^- + \hbar g\sum_{i=1}^N\left(\hat a^\dagger\hat\sigma_i^- + \hat\sigma_i^+\hat a\right), \tag{15}$$
where the coupling strength g is proportional to the dipole moment d of one of the two-level systems. Here, we have neglected the dissipation of the cavity field by considering the limit κ → 0. The Hamiltonian in Eq. (15) preserves the total number of excitations, such that it can be decomposed into subspaces with fixed numbers of excitations. In the following, we focus on the single-excitation subspace, which consists of the single excited state $|1\rangle$ of the cavity; the bright state $|+\rangle \equiv N^{-1/2}\sum_i |1_i\rangle$, where $|1_i\rangle$ is the excited state of the i-th emitter; the dark-state manifold that spans the subspace of the Hilbert space of the N emitters with one excitation orthogonal to $|+\rangle$; and the excited state of the singled-out emitter. As the dark states are not coupled to the other degrees of freedom, we neglect them in the following. As outlined in the supplemental material of Ref. 38, the resulting single-excitation Hamiltonian can be expressed in a basis of two effective modes and the singled-out emitter: the problem reduces to two modes with frequencies $\omega_\pm$ coupled to the singled-out emitter with coupling strengths $g_\pm$. This is an exact solution of the model constrained to the single-excitation manifold.
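This reduction (two polariton modes plus decoupled dark states) can be checked by brute-force diagonalization of the single-excitation block. The following is a minimal numerical sketch in toy units (ℏ = 1) with illustrative parameter values, not a reproduction of the paper's calculation:

```python
import numpy as np

def tc_single_excitation(N, wc, wA, g):
    """Single-excitation block of the Tavis-Cummings Hamiltonian (hbar = 1)
    in the basis {|one photon>, |emitter i excited>, i = 1..N}."""
    H = np.diag([wc] + [wA] * N).astype(float)
    H[0, 1:] = g   # photon couples identically to each emitter
    H[1:, 0] = g
    return H

def spectrum(N=50, wc=2.0, wA=2.0, g=0.01):
    """Sorted eigenvalues of the single-excitation block."""
    return np.linalg.eigvalsh(tc_single_excitation(N, wc, wA, g))

if __name__ == "__main__":
    N, wc, g = 50, 2.0, 0.01
    ev = spectrum(N, wc, wc, g)
    dark = np.isclose(ev, wc).sum()   # N - 1 dark states remain at wA
    print(dark, ev.min(), ev.max())   # polaritons near wc -/+ g*sqrt(N)
```

At resonance, N − 1 eigenvalues stay pinned at ω_A (the dark manifold), while the two remaining eigenvalues are the polaritons at ω_c ± g√N, i.e., the photon hybridizes only with the bright state.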
B. Embedding approach
We next treat the same problem via the Qerra workflow (see the right-hand side of Fig. 1). The polarizability of a two-level system within the rotating-wave approximation is given by (we only consider one polarization direction)
$$\alpha(\omega) = \frac{d^2}{\hbar\,(\omega_{\mathrm A} - \omega)}.$$
Using the Clausius–Mossotti relation in the dilute gas limit (8), we find χ[mol] from this polarizability. Inserting χ[mol] and the single-mode Green tensor of Eq. (14) into the Qerra solution for the dressed Green tensor, we find G(ω). Introducing the collective Rabi frequency $\Omega_{\mathrm R} = 2g\sqrt N$, this expression can be rewritten as a sum of two Lorentzian line shapes. The resulting Green tensor thus exactly corresponds to two field modes with frequencies $\omega_\pm$. The coupling strengths $g_\pm$ of these two field modes to the singled-out emitter can be obtained by comparing the dressed Green tensor with the single-mode form in Eq. (14) and using the spectral density in Eq. (7). We thus find that the dynamics of the singled-out emitter coupled to the embedding environment, consisting of the cavity field mode and the N − 1 other emitters, are governed by an effective Hamiltonian in which the emitter couples to the two polaritonic modes with frequencies $\omega_\pm$ and coupling strengths $g_\pm$; its single-excitation manifold takes the same two-modes-plus-emitter form as found in Sec. III A. Far from the ultra-strong coupling regime, we have $\Omega_{\mathrm R}/\omega_{\mathrm c} \ll 1$, in which case the polariton frequencies reduce to $\omega_\pm \approx \omega_{\mathrm c} \pm g\sqrt N$ and the couplings to $g_\pm \approx g/\sqrt 2$ at resonance. This shows that for any number N of emitters, the dynamics restricted to the single-excitation manifold obtained via the embedding approach and via exact diagonalization of the Tavis–Cummings Hamiltonian are exactly the same, as long as one does not enter the ultra-strong coupling regime ($\Omega_{\mathrm R}/\omega_{\mathrm c} \ll 1$). Note that the remaining difference is a renormalization of the transition frequency, which is present in the embedding approach but missing in the Tavis–Cummings Hamiltonian.
C. Low-excitation approximation of the Tavis–Cummings model
We further simplify the Tavis–Cummings Hamiltonian in Eq. (15) using the low-excitation approximation. To this end, we employ the Holstein–Primakoff transformation.^67
We start by rewriting the Tavis–Cummings Hamiltonian in Eq. (15) using collective spin operators for the N two-level systems, i.e., $\hat S^\pm = \sum_{i=1}^N \hat\sigma_i^\pm$ and $\hat S_z = \frac12\sum_{i=1}^N \hat\sigma_i^z$, such that
$$\hat H = \hbar\omega_{\mathrm c}\,\hat a^\dagger\hat a + \hbar\omega_{\mathrm A}\left(\hat S_z + \frac N2\right) + \hbar g\left(\hat a^\dagger\hat S^- + \hat S^+\hat a\right).$$
Expanding the collective spin operators in terms of bosonic operators $\hat b^{(\dagger)}$ via $\hat S_z + N/2 \to \hat b^\dagger\hat b$ (in the last step we dropped a constant energy shift), $\hat S^- \to \sqrt{N - \hat b^\dagger\hat b}\,\hat b$, and $\hat S^+ \to \hat b^\dagger\sqrt{N - \hat b^\dagger\hat b}$, we find
$$\hat H = \hbar\omega_{\mathrm c}\,\hat a^\dagger\hat a + \hbar\omega_{\mathrm A}\,\hat b^\dagger\hat b + \hbar g\left(\hat a^\dagger\sqrt{N - \hat b^\dagger\hat b}\,\hat b + \hat b^\dagger\sqrt{N - \hat b^\dagger\hat b}\,\hat a\right).$$
Note that this is still an exact transformation of the Tavis–Cummings Hamiltonian. We next apply the low-excitation approximation, assuming that only a negligible fraction of the atoms is excited, such that $\langle\hat b^\dagger\hat b\rangle/N \ll 1$. We find
$$\hat H_{\mathrm{HP}} = \hbar\omega_{\mathrm c}\,\hat a^\dagger\hat a + \hbar\omega_{\mathrm A}\,\hat b^\dagger\hat b + \hbar g\sqrt N\left(\hat a^\dagger\hat b + \hat b^\dagger\hat a\right).$$
We can diagonalize the part of the Holstein–Primakoff Hamiltonian that describes the N two-level systems and the bare cavity mode by introducing new polaritonic, bosonic annihilation operators $\hat a_\pm = (\hat a \pm \hat b)/\sqrt 2$ (at resonance, $\omega_{\mathrm A} = \omega_{\mathrm c}$). We find
$$\hat H_{\mathrm{HP}} = \hbar\omega_+\,\hat a_+^\dagger\hat a_+ + \hbar\omega_-\,\hat a_-^\dagger\hat a_-,\qquad \omega_\pm = \omega_{\mathrm c} \pm g\sqrt N.$$
Note that the low-excitation approximation is exact for any number of atoms N if one is restricted to the single-excitation manifold considered in Sec. III A. Furthermore, we find that the equivalence between the Holstein–Primakoff Hamiltonian and the embedding Hamiltonian holds in general, not only for the first excitation manifold. The embedding ansatz, therefore, leads to exactly the same model as the Tavis–Cummings model in the low-excitation approximation for any N and any number of excitations, as long as $\Omega_{\mathrm R}/\omega_{\mathrm c} \ll 1$. This correspondence shows that the embedding ansatz is valid whenever the low-excitation approximation is valid, i.e., for $\langle\hat b^\dagger\hat b\rangle \ll N$. Most experiments in polaritonic chemistry are performed in this low-excitation limit, many of them even in the “dark” for macroscopically large N, such that Qerra promises reasonably accurate predictions for as long as direct intermolecular interactions are of minor relevance.
Collectively coupled polaritonic systems at low pumping, i.e., in the low-excitation regime, represent, therefore, harmonic, i.e., classical, environments. Qerra extends this statement from simple
models to the realms of ab initio QED by allowing for realistic response functions of the emitters and the cavity mirrors with only a little additional computational cost. The harmonic nature of the
environment also validates the usage of macroscopic QED, as the latter is based on the linear optical response.
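As a numerical sanity check of this equivalence, one can compare the two coupled bosonic modes of the low-excitation Holstein–Primakoff Hamiltonian with the exact Tavis–Cummings single-excitation spectrum. A sketch in toy units (ℏ = 1; parameter values are illustrative):

```python
import numpy as np

def polaritons_hp(N, wc, wA, g):
    """Eigenfrequencies of the quadratic Holstein-Primakoff Hamiltonian:
    photon and collective bright mode coupled with strength g*sqrt(N)."""
    h = np.array([[wc, g * np.sqrt(N)], [g * np.sqrt(N), wA]])
    return np.linalg.eigvalsh(h)

def polaritons_tc(N, wc, wA, g):
    """Lower/upper polaritons from the exact Tavis-Cummings
    single-excitation block; the N - 1 dark states sit at wA in between."""
    H = np.diag([wc] + [wA] * N).astype(float)
    H[0, 1:] = H[1:, 0] = g
    ev = np.linalg.eigvalsh(H)
    return np.array([ev.min(), ev.max()])

if __name__ == "__main__":
    N, wc, wA, g = 200, 2.0, 1.95, 0.005
    print(polaritons_hp(N, wc, wA, g))
    print(polaritons_tc(N, wc, wA, g))  # identical: exact in this manifold
```

The two results agree to machine precision even for finite detuning, reflecting that the low-excitation approximation is exact in the single-excitation manifold.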
D. Collectivity
We have used Qerra to single out one molecule acting as an “impurity,” which is embedded in the environment dressed by all other molecules via linear response theory. What is unclear so far is how
much collectivity remains by following this procedure.
In the case of N two-level systems, superradiance occurs when we prepare the two-level systems in the coherent superposition state $|+\rangle \equiv N^{-1/2}\sum_i |1_i\rangle$. When calculating the transition rates from $|+,0\rangle \to |0,1\rangle$ and from $|1_i,0\rangle \to |0,1\rangle$, where $|1_i\rangle$ is the state in which atom i is in the excited state and all other atoms are in the ground state and $|0\rangle$ and $|1\rangle$ correspond to the zero- and one-photon states of the cavity mode, we find the transition matrix elements
$$|\langle 0,1|\hat H|+,0\rangle| = \sqrt N\,\hbar g,\qquad |\langle 0,1|\hat H|1_i,0\rangle| = \hbar g.$$
The resulting N-fold enhancement of the collective decay rate compared to the single-atom decay rate is commonly known as superradiance. We immediately see from the Holstein–Primakoff Hamiltonian $\hat H_{\mathrm{HP}}$ that $|+\rangle$ couples with coupling strength $\sqrt N g$ to the single field mode. Since the Holstein–Primakoff Hamiltonian is equivalent to the one from our embedding approach, it immediately follows that collective effects such as superradiance are also included in the latter approach. To show this explicitly, we have to express $|+\rangle$ in terms of the new eigenmodes defined by $\hat a_\pm$. We find in the low-excitation limit,
$$|+\rangle = \hat b^\dagger|0\rangle = \frac{1}{\sqrt 2}\left(\hat a_+^\dagger - \hat a_-^\dagger\right)|0\rangle.$$
Calculating the transition rate using the embedding Hamiltonian, which can be connected directly to the Holstein–Primakoff Hamiltonian in its polaritonic form, we get the same N-fold enhancement: The single-photon state can be expressed as $|1\rangle = \hat a^\dagger|0\rangle = \frac{1}{\sqrt 2}(\hat a_+^\dagger + \hat a_-^\dagger)|0\rangle$, and we again find the matrix element $\sqrt N\,\hbar g$, where the last equality holds for $\Omega_{\mathrm R}/\omega_{\mathrm c} \ll 1$. This conclusion is in line with linear dispersion theory, which has been shown to cover strong coupling as well as collective effects.
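The N-fold enhancement can be reproduced explicitly within the single-excitation Tavis–Cummings block; a minimal sketch (toy units, ℏ = 1; function names and parameters are illustrative):

```python
import numpy as np

def photon_matrix_element(state, N, wc, wA, g):
    """<0,1| H |psi,0>: overlap of the one-photon basis state with H applied
    to an emitter-excitation state, inside the single-excitation TC block."""
    H = np.diag([wc] + [wA] * N).astype(float)
    H[0, 1:] = H[1:, 0] = g
    photon = np.zeros(N + 1)
    photon[0] = 1.0
    return photon @ H @ state

def enhancement(N=100, g=0.01, wc=2.0, wA=2.0):
    """Ratio of collective to single-emitter transition rates
    (matrix elements squared); equals N for the symmetric |+> state."""
    plus = np.r_[0.0, np.full(N, 1.0 / np.sqrt(N))]   # |+, 0 photons>
    single = np.zeros(N + 1)
    single[1] = 1.0                                    # |1_i, 0 photons>
    m_plus = photon_matrix_element(plus, N, wc, wA, g)
    m_single = photon_matrix_element(single, N, wc, wA, g)
    return (m_plus / m_single) ** 2

if __name__ == "__main__":
    print(enhancement(100))  # N-fold superradiant enhancement
```

The symmetric state picks up the coupling √N g, so the rate ratio is exactly N, independent of the chosen g.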
Having established a connection between the Qerra embedding approach and the Tavis–Cummings Hamiltonian, we next illustrate how Qerra can be used to go beyond the limitations of the Tavis–Cummings
model with only a little extra computational effort. First, in Sec. IV A, we show how Qerra can be used to account for detuning of the two-level emitters and emitter losses. Then, we use Qerra to
study the collective strong coupling of a more realistic molecular system in Sec. IV B. In Sec. IV C, we apply the full macroscopic QED embedding approach (Sec. II B 1) to a setup in which the
emitters are not localized in the center of the cavity but are evenly distributed between the mirrors of a planar Fabry–Pérot resonator. We account for dispersion and absorption in the gold mirrors
of the cavity and further include local field corrections. For all applications, we only consider a single polarization direction of the field and focus on analyzing the spectral density of the
optical environment that is given in terms of the Green tensor via Eq. (7).
A. Qerra: Two-level emitters
We use the Qerra workflow (see the right-hand side of Fig. 1) to study the collective strong coupling of two-level emitters to a single cavity mode, i.e., the bare Green tensor is given by the single-Lorentzian form of Eq. (14). To this end, we show in Figs. 2(a) and 2(b) the spectral density J(ω) to which the singled-out emitter couples, consisting of the single-mode cavity with frequency ω_c and decay rate κ, and the N two-level emitters with frequency ω_A, decay rate γ_A, and polarizability α(ω). Note that the decay rate (Purcell effect) and the frequency (environment-induced frequency shift) of the singled-out emitter are affected by the cavity and the N emitters via the embedding approach. We see in Fig. 2(a) that for zero detuning Δ ≡ ω_A − ω_c = 0, the single Lorentzian of the cavity (dashed black line) splits into two polaritonic peaks separated by the Rabi frequency $\Omega_{\mathrm R}$. The slight difference in the height of the upper and lower polariton indicates that, in general, the coupling strength of the singled-out emitter to the upper and lower polariton is not the same, $g_+ \neq g_-$. If the two-level systems are red or blue shifted from the cavity frequency, we find that the resonance frequencies of the polaritons are also red or blue shifted, respectively. Furthermore, the relative height of the two polaritonic peaks changes, meaning that a singled-out emitter in a blue shifted (red shifted) ensemble will couple more strongly to the lower polariton (upper polariton) [see the blue and green lines in Fig. 2(a)].
When considering the impact of the decay rate γ[A] of the two-level systems on the spectral density, we find that with a decreasing decay rate of the emitter, the two polaritonic resonances in the
spectral density become narrower. This shows that with decreasing decay rates of the emitters, also the decay rate of the polaritonic modes decreases, as expected.
B. Qerra: Realistic molecular example
Having established Qerra as a tool to study the collective strong coupling of two-level systems to a single-mode cavity, we next show how it can be used to analyze collective strong coupling of
realistic molecular ensembles. To this end, we have calculated the polarizability of ortho-azopyrrole in trans- and cis-configuration (15% and 5% of the total number of coupled molecules) in
chloroform solution (80% of coupled molecules) using time-dependent density-functional theory in combination with an implicit polarization model for the chloroform solution; see Subsection 2 of the
Appendix for details. The isotropic average Im[α[ave](ω)] is shown in Fig. 2(e).
The cavity again consists of a single linearly polarized cavity mode that is tuned in resonance with one of the resonances of the azopyrrole solution at 227.8 nm; compare the black dashed line in
Fig. 2(e). The resulting spectral density of the Qerra workflow is shown as the black line in the same figure. We can clearly see that the single resonance of the cavity splits into a lower and upper
polariton. As before, the lower polariton has a lower height, resulting in a lower coupling to the singled-out emitter. The line shape of these two polaritonic modes differs from simple Lorentzians,
indicating the necessity of multiple modes to fully describe them in an effective few-mode model.^57 On top of these two polaritonic resonances, we find that the dressed spectral density also
contains additional features corresponding to the other resonances of the azopyrrole solution in the vicinity of the cavity frequency. As all of these features only consist of a single peak per resonance, we find that none of the other resonances couple collectively strongly to the cavity mode (the decay rate is bigger than the collective coupling strength, resulting in no visible Rabi splitting).
C. Full macroscopic QED embedding with realistic cavity and molecular system
As the last example, we consider the setup illustrated in Fig. 2(f). Here, the azopyrrole solution, already considered in the last subsection, fills all of the space between the two mirrors (with permittivity $\varepsilon_{\mathrm{Au}}$) of a Fabry–Pérot cavity. The singled-out emitter is assumed to be located in the center between the two mirrors. To treat this problem, we make use of the full macroscopic QED embedding approach, including local field corrections; see Sec. II B 1. We show the resulting spectral density for the singled-out emitter in Figs. 2(g) and 2(h); see Subsection 1 of the Appendix for numerical details. Local field corrections dominate the spectral density in this system, suggesting that Coulombic polarization corrections routinely used in quantum chemistry for the description of solute–solvent dynamics (also used in Sec. IV B) provide a major contribution to the dynamics of the impurity molecule. In Fig. 2(g), we further distinguish the scattering part of the spectral density, given by
$$J^{[\mathrm{scat}]}(\omega) = \frac{\omega^2 d^2}{\pi\hbar\varepsilon_0 c^2}\,\operatorname{Im} G^{[\mathrm{scat}]}(\mathbf r_{\mathrm{NE}},\mathbf r_{\mathrm{NE}},\omega), \tag{38}$$
from the bulk part $J^{[\mathrm{bulk}]}(\omega) = J(\omega) - J^{[\mathrm{scat}]}(\omega)$; compare Eqs. (7) and (38). Note that the bulk part $J^{[\mathrm{bulk}]}$ gives the spectral density in the absence of the cavity, such that $J^{[\mathrm{scat}]}$ represents all changes in the optical environment of the singled-out molecule induced by the cavity. We see that $J^{[\mathrm{scat}]} \ll J^{[\mathrm{bulk}]}$, indicating that the single emitter “feels” the cavity only to a small extent, and its dynamics are instead dominated by its direct chemical and free-space environment. This is no surprise, as for Tavis–Cummings-like models of collectively coupled ensembles, each individual molecule contributes only to a minor extent and is also affected only to a minor extent. In such simple models, the effective coupling strengths acting on a single molecule have an upper bound defined by the bare cavity coupling, as is apparent from the polariton couplings derived in Sec. III. Under which conditions the cavity is able to induce local changes remains a matter of active debate.
Still, as shown in Fig. 2(h), the local-field corrected scattering part of the spectral density J[scat] (38) shows the splitting of the cavity resonance at 227.8 nm into a lower and upper polariton.
Furthermore, as the cavity resonances of the Fabry–Pérot cavity are very broad, the influence of other emitter resonances onto J[scat] is clearly visible.
A comprehensive description of collective strong coupling from first principles is a defining challenge in the theoretical description of polaritonics. Established models require a strong
simplification of the molecular structure and even then become prohibitively expensive to solve numerically for increasing numbers of emitters.
Here, we discussed quantized embedding approaches for collective strong coupling that are based on macroscopic QED and in which the ensemble of emitters located in the cavity is split into an “impurity” and an “environment” subsystem. All environmental degrees of freedom, including their (in)homogeneous broadening, are then used to dress the electromagnetic mode-structure encoded in the
dyadic Green tensor $G[χ]$. Usage of the entire dyadic results in an embedding version of full macroscopic QED but necessitates local field corrections and the consistent treatment of free-space,
direct longitudinal interactions between the emitters, and cavity induced interactions, which complicates its direct application. If the scattering contribution to $G$ dominates the mode-structure,
i.e., working with high-Q cavities, and direct (mostly longitudinal) interactions can be ignored in the electromagnetic treatment (dilute limit), we can derive the simplified description Qerra. Qerra
is easy to combine with ab initio quantum chemistry and offers, therefore, an intuitive approach to transition into the realm of ab initio QED. In particular, many of the recently developed quantum
chemistry^75–78 and non-adiabatic dynamic^12,79–82 methodologies could be pushed closer to experimental reality by (partially) invoking quantum embedding approaches.
Macroscopic QED then allows one to quantize the dressed environment and couple it to the remaining impurity subsystem, which is solved in full microscopic rigor. The impurity is coupled
effectively to a set of quantized effective oscillator modes that represent the polaritonic eigenstates of the environment. We further illustrated this by demonstrating that Qerra is identical to the
Tavis–Cummings model in the low-excitation limit and that collective (superradiant) effects are included in the single-excitation space. However, quantum embedding approaches for collective strong
coupling can be readily extended to more complex environments, such as a cavity filled with a solute–solvent mixture that overlaps spectrally. We illustrated how such a complex environment affects
the spectral density and how lossy cavities and emitters can be described with the full macroscopic QED embedding approach. Future work should focus on exploring possibilities to account for longitudinal inter-molecular interactions, thus moving Qerra closer to the full macroscopic QED embedding, as well as on anharmonic corrections to the dynamics of the embedded ensemble and full self-consistency between impurity and environment.
Quantized embedding approaches promise a theoretically rigorous description from first principles for large ensembles of molecules collectively coupled to electromagnetic resonators—paving a way to
interweave the intuitive description in quantum optical models with the complexity of ab initio QED in a local framework suitable for QED chemistry.
We thank Andreas Buchleitner and Edoardo Carnio for their insightful discussions. This work was funded by the Spanish Ministry for Science and Innovation—Agencia Estatal de Investigación (AEI)
through Grant No. EUR2023-143478. D.L. gratefully acknowledges the financial support from the Georg H. Endress Foundation and the DFG funded Research Training Group “Dynamics of Controlled Atomic and
Molecular Systems” (Grant No. RTG 2717). C.S. acknowledges the funding from the Horizon Europe research and innovation program of the European Union under the Marie Skłodowska-Curie Grant Agreement
No. 101065117.
This work was partially funded by the European Union. Views and opinions expressed are, however, those of the author(s) only and do not necessarily reflect those of the European Union or REA. Neither
the European Union nor the granting authority can be held responsible for them.
Conflict of Interest
The authors have no conflicts to disclose.
Author Contributions
Frieder Lindel: Investigation (equal); Methodology (equal); Writing – original draft (equal); Writing – review & editing (equal). Dominik Lentrodt: Investigation (equal); Methodology (equal); Writing
– original draft (equal); Writing – review & editing (equal). Stefan Yoshi Buhmann: Supervision (equal); Writing – review & editing (equal). Christian Schäfer: Conceptualization (lead); Investigation
(equal); Methodology (equal); Supervision (equal); Writing – original draft (equal); Writing – review & editing (equal).
The data that support the findings of this study are available from the corresponding author upon reasonable request.
1. Green tensor of a gold Fabry–Pérot cavity filled with azopyrrole solution
To apply the full macroscopic QED embedding workflow to the setup shown in Fig. 2(f), we use that the imaginary part of the free-space Green tensor in the coincidence limit is given by $\operatorname{Im} G^{(0)}(\mathbf r,\mathbf r,\omega) = \dfrac{\omega}{6\pi c}\,\mathbb 1$.
Moreover, we have to evaluate the scattering Green tensor of the Fabry–Pérot cavity completely filled with an azopyrrole solution with permittivity ε(ω) = 1 + χ[mol](ω); compare Eq. (8). We here only consider one polarization component of the field that is in-plane with the cavity mirrors. The scattering Green tensor in the coincidence limit in the center of the cavity is then given by an integral over the transverse wave-vector components, in which L is the distance between the two mirrors, $k_z = \sqrt{\varepsilon(\omega)\,\omega^2/c^2 - k_\parallel^2}$ with Im[$k_z$] > 0, exponential factors $e^{ik_z L}$ account for multiple reflections between the mirrors, and $r_s$ and $r_p$ are the Fresnel reflection coefficients of the gold mirrors.
The permittivity of the gold mirrors $\varepsilon_{\mathrm{Au}}(\omega)$ is given by a Drude model,
$$\varepsilon_{\mathrm{Au}}(\omega) = 1 - \frac{\omega_{\mathrm p}^2}{\omega^2 + i\gamma_{\mathrm D}\,\omega},$$
with $\omega_{\mathrm p} = 2.067 \times 2\pi$ PHz and $\gamma_{\mathrm D} = 4.4491 \times 2\pi$ THz.
We further introduce polar coordinates $(k_\parallel, \varphi)$ in the transverse wave-vector plane and perform the resulting angular ($\varphi$) integral analytically. The remaining $k_\parallel$ integral has been evaluated numerically.
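The basic ingredients of this calculation (Drude mirror permittivity and Fresnel reflection coefficients) can be sketched as follows. The interpretation of the damping constant as 4.4491 × 2π THz, as well as all function names, are our own assumptions:

```python
import numpy as np

C = 299792458.0  # speed of light (m/s)

def eps_drude(w, wp, gamma):
    """Drude permittivity of the gold mirrors: eps = 1 - wp^2/(w^2 + i*gamma*w)."""
    return 1.0 - wp**2 / (w**2 + 1j * gamma * w)

def kz(eps, w, kpar):
    """Out-of-plane wave number, choosing the branch with Im[kz] >= 0."""
    k = np.sqrt(eps * (w / C) ** 2 - kpar**2 + 0j)
    return np.where(k.imag < 0, -k, k)

def r_s(w, eps1, eps2, kpar):
    """Fresnel reflection coefficient (s polarization) from medium 1 onto 2."""
    k1, k2 = kz(eps1, w, kpar), kz(eps2, w, kpar)
    return (k1 - k2) / (k1 + k2)

if __name__ == "__main__":
    wp = 2.067e15 * 2 * np.pi       # plasma frequency, 2.067 x 2*pi PHz
    gd = 4.4491e12 * 2 * np.pi      # damping, assumed 4.4491 x 2*pi THz
    w = 2 * np.pi * C / 227.8e-9    # cavity resonance wavelength 227.8 nm
    eps = eps_drude(w, wp, gd)
    r = r_s(w, 1.0 + 0j, eps, 0.0)  # normal incidence from vacuum
    print(eps, abs(r))  # metallic mirror: Re eps < 0 and |r| close to 1
```

At the cavity wavelength, the Drude permittivity has a negative real part (metallic response), so the mirror is highly but not perfectly reflective, consistent with the broad cavity resonances discussed in Sec. IV C.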
2. Polarizability for azopyrrole solution
All polarizabilities have been calculated using Casida time-dependent density-functional theory with the ORCA5.0 code.^84 Structures for trans-azopyrrole, cis-azopyrrole, and chloroform have been
relaxed using the CAM-B3LYP functional^85 in a def2-TZVPD basis, including the implicit CPCM solvation model for chloroform.^86
The 20 first roots of the Casida equations under the Tamm–Dancoff approximation are then used to construct the polarizability according to
with the transition dipoles
and transition energy
between ground state and S-state
. We include an artificial broadening of
= 5⋅10
, “
Floquet engineering of quantum materials
Annu. Rev. Condens. Matter Phys.
, and
, “
Ab initio nonrelativistic quantum electrodynamics: Bridging quantum chemistry and quantum optics from weak to strong coupling
Phys. Rev. A
De Giovannini
, and
, “
Engineering quantum materials with chiral optical cavities
Nat. Mater.
A. H.
, “
Energy redistribution in isolated molecules and the question of mode-selective laser chemistry revisited
J. Phys. Chem.
D. M.
P. G.
P. G.
, and
D. G.
, “
Polariton-mediated energy transfer between organic dyes in a strongly coupled optical microcavity
Nat. Mater.
J. A.
J. F.
, and
T. W.
, “
Conductivity in organic semiconductors hybridized with the vacuum field
Nat. Mater.
J. A.
, and
T. W.
, “
Non-radiative energy transfer mediated by hybrid light-matter states
Angew. Chem., Int. Ed.
F. C.
, “
Cavity-controlled chemistry in molecular ensembles
Phys. Rev. Lett.
, and
, “
Inherent promotion of ionic conductivity via collective vibrational strong coupling of water with the vacuum electromagnetic field
J. Am. Chem. Soc.
, and
, “
Disorder enhanced vibrational entanglement and dynamics in polaritonic chemistry
Commun. Phys.
, and
, “
Modification of excitation and charge transfer in cavity quantum-electrodynamical chemistry
Proc. Natl. Acad. Sci. U. S. A.
, and
J. J.
, “
Tracking polariton relaxation with multiscale molecular dynamics simulations
J. Phys. Chem. Lett.
G. J.
, and
, “
Competition between collective and individual conical intersection dynamics in an optical cavity
New J. Phys.
K. S.
R. M. A.
, and
, “
Extraordinary electrical conductance through amorphous non-conducting polymers under vibrational strong coupling
J. Am. Chem. Soc.
M. F.
et al, “
Plasmon mediated coherent population oscillations in molecular aggregates
Nat. Commun.
, “
Polariton localization and dispersion properties of disordered quantum emitters in multimode microcavities
Phys. Rev. Lett.
D. G.
T. J.
, and
, “
Suppression of photo-oxidation of organic chromophores by strong coupling to plasmonic nanoantennas
Sci. Adv.
S. J.
J. A.
, and
T. W.
, “
Ground-state chemical reactivity under vibrational coupling to the vacuum electromagnetic field
Angew. Chem., Int. Ed.
R. M.
, and
T. W.
, “
Ground state chemistry under vibrational strong coupling: Dependence of thermodynamic parameters on the rabi splitting energy
J. F.
, and
B. S.
, “
Modification of ground-state chemical reactivity via light–matter coherence in infrared cavities
, and
, “
Cavity-enabled enhancement of ultrafast intramolecular vibrational redistribution over pseudorotation
F. J.
, and
, “
Suppressing photochemical reactions with quantized light fields
Nat. Commun.
, and
, “
Strong coupling with light enhances the photoisomerization quantum yield of azobenzene
, and
, “
Shining light on the microscopic resonant mechanism responsible for cavity-mediated chemical reactivity
Nat. Commun.
, and
T. W.
, “
Inducing new material properties with hybrid light–matter states
Phys. Today
F. J.
, and
T. W.
, “
Manipulating matter by strong coupling to vacuum fields
B. S.
A. D.
, and
J. C.
, “
Mode-specific chemistry through vibrational strong coupling (or a wish come true)
J. Phys. Chem. C
de Nijs
S. J.
O. A.
, and
J. J.
, “
Single-molecule strong coupling at room temperature in plasmonic nanocavities
, and
, “
Coherent coupling of a single molecule to a scanning fabry-perot microcavity
Phys. Rev. X
O. S.
W. D.
V. A.
U. F.
, and
J. J.
, “
Quantum electrodynamics at room temperature coupling a single vibrating molecule with a plasmonic nanocavity
Nat. Commun.
N. S.
T. G.
, and
J. J.
, “
Giant mid-ir resonant coupling to molecular vibrations in sub-nm gaps of plasmonic multilayer metafilms
Light: Sci. Appl.
V. M.
, and
D. G.
, “
Cavity polaritons in microcavities containing disordered organic semiconductors
Phys. Rev. B
F. C.
, “
Exciton–phonon polaritons in organic microcavities: Testing a simple ansatz for treating a large number of chromophores
J. Chem. Phys.
J. B.
N. P.
, and
, “
Simulating molecular polaritons in the collective regime using few-molecule models
Proc. Natl. Acad. Sci. U. S. A.
, “
Collective response in light–matter interactions: The interplay between strong coupling and local dynamics
J. Chem. Phys.
, and
, “
Machine learning for polaritonic chemistry: Accessing chemical kinetics
J. Am. Chem. Soc.
, and
, “
Light interaction with photonic and plasmonic resonances
Laser Photonics Rev.
, “
Polaritonic chemistry from first principles via embedding radiation reaction
J. Phys. Chem. Lett.
S. Y.
, “
Macroscopic quantum electrodynamics - concepts and applications
Acta Phys. Slovaca. Rev. Tutorials
S. Y.
Dispersion forces I: Macroscopic Quantum Electrodynamics and Ground-State Casimir, Casimir–Polder and Van Der Waals Forces
), Vol.
For simplicity, we assume throughout the manuscript that the permeability μ(r, ω) of the media is given by μ(r, ω) ≈ 1. The embedding approaches discussed here, however, can also be used to study
magnetically responding media.
T. G.
, “
Canonical quantization of macroscopic electromagnetism
New J. Phys.
A. I.
, and
F. J.
, “
Macroscopic qed for quantum nanophotonics: Emitter-centered modes as a minimal basis for multiemitter problems
, and
, “
Relevance of the quadratic diamagnetic and self-polarization terms in cavity quantum electrodynamics
ACS Photonics
The Theory of Open Quantum Systems
Oxford University Press
New York
, “
Electric moments of molecules in liquids
J. Am. Chem. Soc.
, and
, “
Spontaneous decay of an excited atom in an absorbing dielectric
Phys. Rev. A
H. T.
S. Y.
, and
, “
Local-field correction to the spontaneous decay rate of atoms embedded in bodies of finite size
Phys. Rev. A
, “
Local-field corrections to the decay rate of excited molecules in absorbing cavities: The onsager model
Phys. Rev. A
S. Y.
, and
, “
Local-field correction to one- and two-atom van der waals interactions
Phys. Rev. A
T. M.
H. T.
, and
, “
Local-field correction in the strong-coupling regime
Phys. Rev. A
S. Y.
, “
Born expansion of the Casimir–Polder interaction of a ground-state atom with dielectric bodies
Appl. Phys. B
Dispersion Forces II: Many-Body Effects, Excited Atoms, Finite Temperature and Quantum Friction
), Vol.
M. K.
, and
K. S.
, “
Combining density functional theory with macroscopic qed for quantum light-matter interactions in 2D materials
Nat. Commun.
M. K.
K. S.
, and
, “
Ab initio calculations of quantum light–matter interactions in general electromagnetic environments
J. Chem. Theory Comput.
F. J.
A. I.
, and
, “
Few-mode field quantization of arbitrary electromagnetic spectral densities
Phys. Rev. Lett.
, “
Ab initio few-mode theory for quantum potential scattering problems
Phys. Rev. X
, and
, “
Non-hermitian pseudomodes for strongly coupled open quantum systems: Unravelings, correlations and thermodynamics
[quant-ph] (
C. H.
, and
, “
Certifying multimode light-matter interaction in lossy resonators
Phys. Rev. Lett.
S. F.
, and
M. B.
, “
Nonperturbative treatment of non-markovian dynamics of open quantum systems
Phys. Rev. Lett.
, “
Quantized atom-field dynamics in unstable cavities
Phys. Rev. Lett.
R. F.
L. A.
, and
, “
Polariton chemistry: Controlling molecular dynamics with optical cavities
Chem. Sci.
F. J.
, and
, “
Theoretical challenges in polaritonic chemistry
ACS Photonics
P. T.
, and
, “
Modeling electromagnetic resonators using quasinormal modes
Adv. Opt. Photonics
Note, that in Eq.
the influence of the two-level systems (second term in the denominator) stays frequency dependent even for large detunings. This is an artifact of the rotating-wave approximation. The full
polarizability reads
such that
leading to a constant shift for the resonance frequency of the cavity for large detunings, i.e.,
. This has a simple interpretation: Far from any resonances the two-level systems lead to a constant refractive index in the cavity, which shifts its resonances.
, “
Field dependence of the intrinsic domain magnetization of a ferromagnet
Phys. Rev.
D. J.
S. E.
H. J.
, and
T. W.
, “
Vacuum rabi splitting as a feature of linear-dispersion theory: Analysis and experimental observations
Phys. Rev. Lett.
We only consider a single polarization direction in-plane with the cavity mirrors for simplicity.
T. S.
, and
, “
Collective strong coupling modifies aggregation and solvation
J. Phys. Chem. Lett.
, and
, “
A perspective on ab initio modeling of polaritonic chemistry: The role of non-equilibrium effects and quantum collectivity
J. Chem. Phys.
M. A.
B. M.
E. R.
, and
, “
Theoretical advances in polariton chemistry and molecular cavity quantum electrodynamics
Chem. Rev.
, and
, “
Cavity Born–Oppenheimer Hartree–Fock ansatz: Light–matter properties of strongly coupled molecular ensembles
J. Phys. Chem. Lett.
T. S.
E. F.
, and
, “
Coupled cluster theory for molecular polaritons: Changing ground and excited states
Phys. Rev. X
M. D.
A. E.
, and
, “
The orientation dependence of cavity-modified chemistry
J. Chem. Phys.
D. M.
, and
, “
Light–matter response in nonrelativistic quantum electrodynamics
ACS Photonics
, and
, “
Making ab initio QED functional(s): Nonperturbative and photon-free effective frameworks for strong light–matter coupling
Proc. Natl. Acad. Sci. U. S. A.
, “
Nonadiabatic wave packet dynamics with ab initio cavity-Born-Oppenheimer potential energy surfaces
J. Chem. Theory Comput.
N. M.
, and
, “
Benchmarking semiclassical and perturbative methods for real-time simulations of cavity-bound emission and interference
J. Chem. Phys.
, “
Coherent dynamics in cavity femtochemistry: Application of the multi-configuration time-dependent hartree method
Chem. Phys.
G. J.
, and
, “
Nonadiabatic phenomena in molecular vibrational polaritons
J. Chem. Phys.
M. G.
M. D.
, and
M. J.
, “
Search for the ideal plasmonic nanoshell: The effects of surface scattering and alternatives to gold and silver
J. Phys. Chem. C
, “
Software update: The orca program system—Version 5.0
Wiley Interdiscip. Rev.: Comput. Mol. Sci.
D. P.
, and
N. C.
, “
A new hybrid exchange–correlation functional using the coulomb-attenuating method (cam-b3lyp)
Chem. Phys. Lett.
, and
, “
Fast evaluation of geometries and properties of excited molecules in solution: A tamm-dancoff model with application to 4-dimethylaminobenzonitrile
J. Phys. Chem. A
C. A.
Time-dependent Density-Functional Theory: Concepts and Applications
OUP Oxford
© 2024 Author(s). All article content, except where otherwise noted, is licensed under a Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC) license (https://creativecommons.org/ | {"url":"https://pubs.aip.org/aip/jcp/article/161/15/154111/3317554/Quantized-embedding-approaches-for-collective?searchresult=1","timestamp":"2024-11-03T16:55:29Z","content_type":"text/html","content_length":"826585","record_id":"<urn:uuid:666c97ce-f59f-47bf-97ca-463de4f7133c>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00823.warc.gz"} |
Effective interventions may not be limited to changing means, but instead may also include changes to how variables affect each other over time. Continuous time models offer the opportunity to
specify differing underlying processes. A substantive example compares models that imply different underlying continuous time processes using panel data.
Empirical Bayes Derivative Estimates (2020)
This article proposes a new method for estimating derivatives on calculating the Empirical Bayes estimates of derivatives from a mixed model. Two simulations compare four derivative estimation
methods: Generalized Local Linear Approximation, Generalized Orthogonal Derivative Estimates, Functional Data Analysis, and the proposed Empirical Bayes Derivative Estimates.
Differing Perspectives on Time Alter Mediation Inferences (2018)
Time is unlike any other variable. This chapter considers the difference in perspectives offered by discrete-time and continuous-time approached to mediation. The differences in how one
conceptualizes time have the potential to alter core mediation concepts as direct and indirect effect, complete and partial mediation, and even what constitutes a “mediation” model.
Attachment Changes predicting Depression and Anxiety Changes (2017)
Two studies examined the role short-term changes in adult attachment and mindfulness play in depression and general anxiety.
Dynamical Systems Approaches (2016)
This is an introduction to dynamical systems ideas. Dynamical systems are mathematical models of one or more constructs that change over time. Approaches to dynamical systems are concerned with
describing the temporal evolution of constructs, with emphasis often placed on constructs that develop in a complex, nonlinear manner over time.
Integration of Stochastic Differential Equations (2016)
Stochastic differential equation (SDE) models are a promising method for modeling intraindividual change and variability. This method uses structural equation modeling (SEM) conventions to simplify
SDE specification, the flexibility of SEM to expand the range of SDEs that can be fit, and SEM diagram conventions to facilitate the teaching of SDE concepts.
The Modeling of Change and Variability in Nursing Research (2015)
Most methods in statistics are focused on the analysis of mean differences. Mean–difference questions, however, represent only a narrow range of the questions that can be posed. Focusing only on
these questions can overlook important questions concerning change and variability. This chapter considers methods to model change and variability.
Longitudinal Data Analysis (2015)
This essay reviews emerging trends in modeling repeated measures data. Three longitudinal models are discussed: panel model designs, growth curve models, and intensive within-person assessments.
Continuous time models for panel data are discussed. The analysis of intensive within-individual observations is also considered, including work that limits the generalizability of interindividual
studies to individual outcomes.
No Need to be Discrete: Continuous Time Mediation (2015)
Mediation models based on cross-sectional data can produce unexpected estimates, so much so that making longitudinal or causal inferences is inadvisable. Even longitudinal mediation models produce
estimates are specific to the lag between observations, leading to debate over lag selection. Using continuous time models, one can estimate lag-independent parameters.
Using Derivatives to Articulate Change Theories (2015)
A wide variety of models can be understood in terms of the level, velocity, and acceleration of constructs: the zeroth, first, and second derivatives, respectively. Conceptualizing change in terms of
derivatives allows precise translation of theory into method and highlights commonly overlooked models of change. We introduce the language of derivatives.
Stress dissipation effects on health and well-being (2014)
Daily data from the NDHWB (n = 783; age range 37–90) were analyzed to produce ‘dynamic characteristic’ estimates of stress input and dissipation. These were used in multi-level modeling (with age and
trait stress resistance) to predict depression and health trajectories.
Differential Equation Modeling Is the Language of Change (2013)
Many applied statistical problems address how change two variables are related. In this chapter derivatives are presented as a language framework for describing changes with respect to time.
Derivatives can be used to provide statisticians and substantive researchers a common language that can be used to create better matches between models and theory.
The Reservoir Model of Psychological Capacity (2013)
This article describes a model based on a reservoir. This model might be useful for constructs (e.g., stress), where events might “add up” (e.g., life stressors), but individuals simultaneously take
action to “blow off steam” (e.g., engage coping resources). The model is applied to daily self-reports of negative affect and stress from older adults.
Dynamical systems and models of continuous time (2013)
Historically, it has been easier to describe differences between groups of people, rather than describe the dynamic ways that people change. This chapter introduces dynamical systems and of
continuous time models. Two methods are introduced for the fitting of continuous time models to observed data: the approximate discrete model and latent differential equations.
Gauging Driver Performance (2011)
This paper examines the estimation of derivatives from in-vehicle measures of vehicle control such as steering wheel angle. At very short time scales many in-vehicle measures may indicate
characteristics such as fatigue and inattention. This paper models changes in momentary derivative estimates in a 90-minute simulation with 19 participants.
Maternal Depressive Symptomatology and Child Behavior (2011)
This study investigated reciprocal relationships between adolescent mothers and their children’s well-being through an analysis of the coupling relationship of mothers’ depressive symptomatology and
children’s internalizing and externalizing behaviors. The present study used dynamical systems to model time continuously, which allowed for the study of dynamic, transactional effects between
members of each dyad.
Perceived Emotion Control During Later Life (2011)
The relationship between global emotion control beliefs and daily affect reports across 56 days were assessed in a sample of 298 older adults. Variability analyses investigating multiple time scales
revealed global beliefs were related to lower variance in negative affect and less variable speeds of negative affect change across a range of time scales.
Modeling Non-Linear Dynamics (2011)
This chapter introduces one approach to describing constructs that consist of easily reversed, short–term changes — data that is often dismissed as “error.” This chapter begins with calculating rates
of change on many small segments of a time series, and how the current state of a variable can be related to how it is changing.
Resilience-As-Process: Negative Affect & Stress (2010)
Resilience is often considered both a trait and a process. The current study proposes a new way to conceptualize resilience-as-process based on dynamical systems modeling, which allows researchers to
capture the process of stress management in real time using coupled damped linear oscillator models.
Modeling Noisy Data with Observed and Expected Matrices (2010)
Using embedded and observed data matrices, a statistical approach to differential equation modeling is presented. This approach appears robust to short, poorly sampled time series with large
proportions of measurement and dynamic error, as is common in psychological research.
Generalized Orthogonal Derivative Estimates (GOLD) (2010)
The fitting of dynamical systems can be improved by estimating derivatives in a manner similar to that used to fit orthogonal polynomials. Two applications using simulated data compare the proposed
method and a generalized form of Local Linear Approximation when used to estimate derivatives and when used to estimate differential equation model parameters.
Derivative Variability Analysis (DVA) (2009)
Intraindividual measures, such as intraindividual standard deviation or coefficient of variation, are incomplete representations variability. Studying variability can be made more productive by
examining specific time scales. Furthermore, examination of variance in observed scores may not be sufficient. A method is introduced that uses estimated derivatives to examine variability at
multiple time scales.
Smoothing–Independent Estimation of Oscillators (2009)
This article considers the estimation of damped linear oscillators with psychological data. The formulas of the new method presented in this chapter, which related the τ-conditional and τ-corrected
estimates of model parameters, are described.
Generalized Local Linear Approximation (GLLA) (2009)
Brief summary of post contents Generalized Local Linear Approximation is a generalization of local linear approximation of derivatives (LLA, Boker & Graham, 1998). While LLA allows for estimation up
to second-order derivatives (acceleration) with three observations, GLLA allows for any order of derivative with additional flexibility as to the number of observations used.
Modeling Oscillators Using Surrogate Data Analysis (2008)
Methods for fitting the damped linear oscillator model using differential equation modeling can yield biased parameter estimates when applied to univariate time series. The bias depends on a
researcher-selected, smoothing-like parameter. This article explores a technique that uses surrogate data analysis to select such a parameter, thereby producing approximately unbiased parameter
Mood Oscillations Weather and RCBD (2008)
Rapid Cycling Bipolar Disorder outpatients completed twice-daily mood self-ratings for 3 consecutive months. These ratings were matched with local measurements of atmospheric pressure, cloud cover,
and temperature. Several alternative second order differential equation models were fit to the data in which mood oscillations in RCBD were allowed to be linearly coupled with daily weather patterns. | {"url":"http://intraindividual.com/author/chuckdeboeck/","timestamp":"2024-11-02T06:00:39Z","content_type":"text/html","content_length":"107025","record_id":"<urn:uuid:673f326b-c916-40c4-a3b9-c873787abaee>","cc-path":"CC-MAIN-2024-46/segments/1730477027677.11/warc/CC-MAIN-20241102040949-20241102070949-00026.warc.gz"} |
The Quadrilateral
is a polygon of four sides and four vertices. It is also called and . In the triangle, the sum of the interior angles is 180°; for quadrilaterals the sum of the interior angles is always equal to
$A + B + C + D = 360^\circ$
Classifications of Quadrilaterals
There are two broad classifications of quadrilaterals; simple and complex. The sides of simple quadrilaterals do not cross each other while two sides of complex quadrilaterals cross each other.
Simple quadrilaterals are further classified into two: convex and concave. Convex if none of the sides pass through the quadrilateral when prolonged while concave if the prolongation of any one side
will pass inside the quadrilateral.
The following formulas are applicable only to convex quadrilaterals.
General Quadrilateral
Any convex quadrilateral can use the following formulas:
Perimeter, P (applicable to all quadrilaterals, simple and complex)
$P = a + b + c + d$
Area, A
$A = \sqrt{(s - a)(s - b)(s - c)(s - d) - abcd \cos^2 \varphi \,}$
s = semi perimeter = ½P
$\varphi$ = ½ (A + C) or $\varphi$ = ½ (B + D)
The area can also be expressed in terms of diagonals d[1] and d[2]
$A = \frac{1}{2}d_1 d_2 \sin \theta$
Recent comments
• Hello po! Question lang po…
1 week 3 days ago
• 400000=120[14π(D2−10000)]
1 month 2 weeks ago
• Use integration by parts for…
2 months 1 week ago
• need answer
2 months 1 week ago
• Yes you are absolutely right…
2 months 2 weeks ago
• I think what is ask is the…
2 months 2 weeks ago
• $\cos \theta = \dfrac{2}{…
2 months 2 weeks ago
• Why did you use (1/SQ root 5…
2 months 2 weeks ago
• How did you get the 300 000pi
2 months 2 weeks ago
• It is not necessary to…
2 months 2 weeks ago | {"url":"https://mathalino.com/reviewer/plane-geometry/the-quadrilateral","timestamp":"2024-11-06T20:22:16Z","content_type":"text/html","content_length":"50895","record_id":"<urn:uuid:1ee29a5a-e9e2-484b-afef-55fbde2f9464>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00578.warc.gz"} |
Calculation of Interest and Cash Price in Installment System
Calculation of Interest and Cash Price in Installment System (With Formula and Illustration)!
Calculation of Interest:
The calculation of interest is the following two cases are given below:
(i) When Cash Price and Rate of Interest are Given:
It is simple if the cash price and the rate of interest are given; the only thing to remember is that since instalments are generally round sums, the interest in the final instalment will be the
difference between the instalment payable and the amount remaining unpaid by way of principal.
(1) On 1st April, 2009 machinery is purchased on the hire purchase system, the payment to be made Rs 10,000 down (i.e., on the signing of the contract) and Rs 10,000 annually for three years;
(2) The cash price or the machinery is Rs 34,860 and
(3) The rate of interest is 10% per annum.
The following statement can be prepared to calculate interest:
Another method of calculating interest is to prepare the account of either the hire purchaser or the hire vendor. In the above mentioned example, to calculate interest Hire Vendor’s Account may be
prepared as follows:
(ii) When the Rate of Interest is not Given:
The rate not being given. If the rate of interest is not given (the cash price and amount of each instalment being given), interest will be calculated on the basis that the interest for each year
will be in the ratio of amounts outstanding. Suppose, the rate of interest is not given in the example given above.
The amounts outstanding for the three years are as follows:
Suppose, a colour T.V. set whose cash price is Rs 14,100 is sold on hire purchase basis for 12 quarterly payments of Rs 1,500 each, the first payment being made at the end of the first quarter. The
total of all payments is Rs 18,000. The cash price is Rs 14,100. Hence Rs 3,900 is the interest for all the 12 quarters. To ascertain the interest for every quarter, Rs 3,900 will be allocated in the
ratio of amounts outstanding during each quarter. Thus
If all numbers from 1 to 20 are to be added, die sum will be 20 x 21 / 2 or 210,
Calculation of Cash Price:
In some cases, die cash price is not given. Since the assets purchased cannot be capitalized at more than the cash price, it will be necessary to find out what it is. The way to proceed is to take up
die final instalment first and to deduct interest from it. Interest for one year can be found out by multiplying the sum due at the end of the year by the formula Rate of Interest / 100 + Rate of
Suppose A owes B Rs 100 the interest being 15%. At the end of one year B will have to pay Rs 115 out of which Rs 15 is for interest. Hence, 15/115 of the sum due at the end of the year will be
interest. Deducting interest, the sum due in the beginning of the year can be ascertained. This will also be the amount due at the end of the last but one year after paying the annual instalment. The
total of these two will give the total sum due at the end of the last but one year.
That year’s interest can again be ascertained by multiplying the total amount due by the formula:
Rate of Interest/100 + Rate of Interest
The cash price can also be calculated, if the annual payments are uniform by the formula:
Where r is the rate of interest per cent per annum and n is the number of years over which payment is to be made. This really amounts to finding out the present value of the amount to be paid or
received, taking into account the concerned rate of interest. Tables are available for ready calculation.
Illustration 1:
On 1st April, 2008, Bihar Collieries obtained a machine on the hire purchase system, the total amount payable being Rs 2, 50,000. Payment was to be made Rs 50,000 down and the balance in four annual
instalments of Rs 50,000 each. Interest charged was at the rate of 15 per cent. At what value should the machine be capitalized?
If amount due in the beginning of a year is Rs 100, interest for the year will be Rs 15 and the amount of instalment due at the end of the year will be Rs 115. Thus, interest is 15/115 or 3/23 of the
amount due at the end of each year.
Keeping this in mind, the cash price of the machine can be calculated in the following manner:
Alternatively, the present value at 15% per annum of one rupee received annually at the end of four years is Rs 2-85498. Thus, the present value of Rs 50,000 is Rs 50,000 x 2.85498 = Rs 1, 42,749. To
this, we add down payment of Rs 50,000. Therefore, the cash price is Rs 1, 42,749 + Rs 50,000 = Rs 1, 92,749.
Illustration 2:
G acquired a plant delivered on April 1, 2010 on the following terms:
(i) Initial payment of Rs 40,000 immediately; and
(ii) 4 half-yearly instalments of Rs 30,000 each commencing September 30, 2010.
Interest is 10% with yearly rests. What is the cash price?
The present value of the instalments is Rs 1, 04,132; adding the cash down payment of Rs 40,000, the total cash price is Rs 1, 44,132. | {"url":"https://www.yourarticlelibrary.com/accounting/hire-purchase/calculation-of-interest-and-cash-price-in-installment-system/54850","timestamp":"2024-11-03T13:07:39Z","content_type":"text/html","content_length":"73453","record_id":"<urn:uuid:cf4b97a4-751c-4c2a-b730-a447e60b9050>","cc-path":"CC-MAIN-2024-46/segments/1730477027776.9/warc/CC-MAIN-20241103114942-20241103144942-00491.warc.gz"} |
13 Comparing qualitative data between individuals
So far, you have learnt to ask a RQ, design a study, collect the data, describe the data and summarise the data. In this chapter, you will learn to:
• compare qualitative data between groups of individuals using the appropriate graphs.
• compare qualitative data between groups of individuals using the difference in proportions, odds ratios and summary tables.
13.1 Introduction
Relational RQs compare groups. This chapter considers how to compare qualitative variables in different groups. Tables and graphs are useful for this purpose.
13.2 Two-way tables
When more than one qualitative variable is recorded for each individual, the data can be collated into a table. When two qualitative variables are cross-tabulated, the resulting table is called a
two-way table. As always, the categories for each variable should be exhaustive (cover all levels) and mutually exclusive (observations belong to one and only one level).
Example 13.1 (Two-way tables) Charig et al. (1986) compared two treatments for kidney stones to determine which had a higher success rate. Data were collected from \(700\) UK patients, on two
qualitative variables:
• the treatment method ('A' or 'B'): the explanatory variable.
• the result (procedure 'success' or 'failure'): the response variable.
Both variables are qualitative with two levels, and each treatment was used on \(350\) patients. Treatment A was used from 1972--1980, and Treatment B from 1980--1985; that is, treatments were not
randomly allocated, and so confounding may be present. For this reason, the researchers also recorded the size of the kidney stone ('small' or 'large') as one possible confounding variable. Firstly,
consider just the small stones (Julious and Mullee 1994), displayed in the two-way table in Table 13.1.
TABLE 13.1: Numbers for small kidney stones.

|          | Success | Failure | Total |
|----------|--------:|--------:|------:|
| Method A |      81 |       6 |    87 |
| Method B |     234 |      36 |   270 |
| Total    |     315 |      42 |   357 |
13.3 Summary tables by rows and columns
Each variable in a two-way table can be analysed separately, using percentages or proportions (Sect. 12.4) or odds (Sect. 12.5). For instance, for the two variables in Table 13.1 (Method; Result):
• the percentage of procedures that were successful is \(315/357\times 100 = 88.2\)%.
• the odds that a procedure was successful is \(315/42 = 7.5\); that is, there were \(7.5\) times as many successful procedures as unsuccessful procedures.
However, to compare Methods A and B, these odds and percentages (or proportions) can be computed for each row (or column) separately.
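As a quick check, these whole-table summaries can be computed with a short script. This is an illustrative sketch in Python (the data structure and variable names are my own, not from the text):

```python
# Two-way table for small kidney stones (Table 13.1):
# each method maps to (successes, failures).
table = {
    "Method A": (81, 6),
    "Method B": (234, 36),
}

successes = sum(s for s, f in table.values())  # 315
failures = sum(f for s, f in table.values())   # 42
total = successes + failures                   # 357

pct_success = successes / total * 100  # percentage of successful procedures
odds_success = successes / failures    # odds of a successful procedure

print(f"Success percentage: {pct_success:.1f}%")  # Success percentage: 88.2%
print(f"Odds of success: {odds_success:.1f}")     # Odds of success: 7.5
```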
Example 13.2 (Small kidney stones) The data in Table 13.1 can be summarised by computing proportions or percentages by row. The rows refer to the different Methods, so this will compare the success percentages for the two methods.
For the small kidney stones (Table 13.1), the row percentages (Table 13.2) give the proportion of successes for each Method, since the rows represent the counts for Methods A and B. Row proportions
allow the proportions within the rows (i.e., for each Method) to be compared:
• Method A: \(81 \div 87 = 0.931\) (or \(93.1\)%) of operations in the sample were successful.
• Method B: \(234\div 270 = 0.867\) (or \(86.7\)%) of operations in the sample were successful.
For small kidney stones, Method A is slightly more successful (\(93.1\)%) than Method B (\(86.7\)%) in the sample. These percentages are collated in Table 13.2.
Odds can also be computed:
• Method A: The odds of success is \(81/6 = 13.5\): there are \(13.5\) times as many successful procedures as failures for Method A.
• Method B: The odds of success is \(234/36 = 6.5\): there are \(6.5\) times as many successful procedures as failures for Method B.
The odds of a success is far greater for Method A than Method B in the sample.
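As a quick check, the row proportions and row odds can be computed directly from the counts in Table 13.1; a small Python sketch:

```python
# Row proportions and row odds for the small-kidney-stone data (Table 13.1)
table = {
    "Method A": {"Success": 81, "Failure": 6},
    "Method B": {"Success": 234, "Failure": 36},
}

for method, row in table.items():
    total = row["Success"] + row["Failure"]
    proportion = row["Success"] / total        # e.g., 81/87 for Method A
    odds = row["Success"] / row["Failure"]     # e.g., 81/6 for Method A
    print(f"{method}: proportion = {proportion:.3f}, odds = {odds:.1f}")
```

The output reproduces the values above: proportions \(0.931\) and \(0.867\), and odds \(13.5\) and \(6.5\).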
TABLE 13.2: Row percentages for small
kidney stones (from Table 13.1). Row
proportions could also be used.
Success Failure Total
Method A \(93.1\) \(6.9\) \(100\)
Method B \(86.7\) \(13.3\) \(100\)
TABLE 13.3: Column percentages
for small kidney stones (from
Table 13.1). Column
proportions could also be used.
Success Failure
Method A \(25.7\) \(14.3\)
Method B \(74.3\) \(85.7\)
Total \(100.0\) \(100.0\)
Rather than comparing methods (in the rows), the procedure results can be compared (i.e., the columns).
Example 13.3 (Comparing by column) For the small kidney stones (Table 13.1), the column proportions (Table 13.3) give the proportion of procedures from each method within each column (i.e., among
successes and among failures), since the columns contain the procedure results. Column proportions allow the proportions (or percentages) within columns to be compared:
• Successful procedures: \(81 \div 315 = 0.257\) (or \(25.7\)%) in the sample were with Method A.
• Unsuccessful procedures: \(6 \div 42 = 0.143\) (or \(14.3\)%) in the sample were with Method A.
Odds can also be computed:
• Successes: the odds of a success coming from Method A is \(81/234 = 0.346\): there are \(0.346\) times as many Method A procedures as Method B procedures among the successes.
• Failures: the odds of a failure coming from Method A is \(6/36 = 0.167\): there are \(0.167\) times as many Method A procedures as Method B procedures among the failures.
The odds of a success coming from Method A is quite different from the odds of a failure coming from Method A.
Comparing rows (i.e., using row percentages and row odds) seems more intuitive than comparing columns here: it compares the success percentage for each method.
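The column summaries can be verified the same way (a Python sketch using the Table 13.1 counts):

```python
# Column proportions and column odds for the small-kidney-stone data (Table 13.1)
successes = {"Method A": 81, "Method B": 234}
failures = {"Method A": 6, "Method B": 36}

# Proportion of each column's procedures that used Method A
prop_success_A = successes["Method A"] / sum(successes.values())  # 81/315
prop_failure_A = failures["Method A"] / sum(failures.values())    # 6/42

# Odds (within each column) of a procedure coming from Method A
odds_success_A = successes["Method A"] / successes["Method B"]    # 81/234
odds_failure_A = failures["Method A"] / failures["Method B"]      # 6/36

print(round(prop_success_A, 3), round(prop_failure_A, 3))  # 0.257 0.143
print(round(odds_success_A, 3), round(odds_failure_A, 3))  # 0.346 0.167
```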
13.4 Graphs
When a qualitative variable is compared across different groups (i.e., comparing between individuals), options for plotting include:
• Stacked bar charts (Sect. 13.4.1);
• Side-by-side bar charts (Sect. 13.4.2); or
• Dot charts (Sect. 13.4.3).
13.4.1 Stacked bar charts
The data can be graphed by using a bar for each level of one variable, and stacking the bars for the levels of the second variable. Bars indicate the counts (or percentages) in each category. The
levels can be on the horizontal or vertical axis, but placing the level names on the vertical axis often makes for easier reading, and room for long labels.
The axis displaying the counts (or percentages) should start from zero, since the height of the bars visually implies the frequency of those observations (see Example 17.3).
Example 13.4 (Stacked bar charts) For the kidney-stone data in Example 13.1, a stacked bar chart can be created by producing a bar for each method, and stacking the successes and failures for each
method (Fig. 13.1, top left panel).
Rather than using numbers, the percentages separately within each group can be used too (Fig. 13.1, bottom left panel).
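A percent-stacked chart of this kind can even be sketched in plain text; the snippet below is only a rough illustration (it is not how Fig. 13.1 was produced):

```python
# A rough text sketch of a percent-stacked bar chart for the
# small-kidney-stone data: '#' marks successes, '.' marks failures
counts = {"Method A": (81, 6), "Method B": (234, 36)}  # (successes, failures)

WIDTH = 40  # characters per bar
for method, (succ, fail) in counts.items():
    s_len = round(WIDTH * succ / (succ + fail))  # success share, scaled
    bar = "#" * s_len + "." * (WIDTH - s_len)
    print(f"{method}  |{bar}|")
```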
13.4.2 Side-by-side bar charts
Instead of stacking the success and failures bars on top of each other, these bars can be placed side-by-side for each method. Bars indicate the counts (or percentages) in each category. The levels
can be on the horizontal or vertical axis, but placing the level names on the vertical axis often makes for easier reading, and room for long labels.
The axis displaying the counts (or percentages) should start from zero, since the height of the bars visually implies the frequency of those observations (see Example 17.3).
Example 13.5 (Side-by-side bar charts) For the kidney-stone data in Example 13.1, a side-by-side bar chart can be created by producing two bars for each method (one for failures; one for successes),
and placing these side-by-side (Fig. 13.1, centre panels). Again, numbers or percentages within each method can be graphed.
13.4.3 Dot charts
Instead of bars, dots (or other symbols) can be used in place of the bars in a side-by-side bar chart.
The axis displaying the counts (or percentages) should start from zero, since the distance of the dots from the axis visually implies the frequency of those observations (see Example 17.3).
Example 13.6 (Dot charts) For the data in Example 13.1, a dot chart can be created by placing plotting symbols for each result (one for failures; one for successes) side-by-side for each
method (Fig. 13.1, right panels). Again, numbers or percentages can be used.
13.4.4 Other variations
Many variations of these charts are possible, by making certain choices:
• use a stacked bar chart, side-by-side bar chart, or dot chart.
• use percentages or counts on one of the axes. (The percentages can be percentages of the total, or percentages within each level of the variable, as in the centre plots in Fig. 13.1.)
• use the counts (or percentage) on either the horizontal or vertical axis.
• decide which variable can be used as the first division of the data.
The guiding principle remains: the purpose of a graph is to display the information in the clearest, simplest possible way, to facilitate understanding the message(s) in the data.
Using a computer to create graphs is recommended, and using a computer makes it easy to try different variations to find the graph that best displays the message in the data.
13.5 Summarising the comparison: difference between proportions
The small kidney stone data (Table 13.1) can be summarised using proportions (or percentages):
• Method A: the proportion of successful procedures is \(0.931\) (or, the percentages of successful procedures is \(93.1\)%).
• Method B: the proportion of successful procedures is \(0.867\) (or, the percentages of successful procedures is \(86.7\)%).
The difference between these proportions (or percentages) is \(0.064\) (or \(6.4\) percentage points). The difference between the sample proportions is a statistic, and the (unknown) difference between the
population proportions is a parameter.
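The difference between the proportions is straightforward to compute (a small sketch using the Table 13.1 counts):

```python
# Difference between the sample proportions of success (Table 13.1)
p_method_A = 81 / 87    # Method A: 0.931...
p_method_B = 234 / 270  # Method B: 0.867...
difference = p_method_A - p_method_B
print(round(difference, 3))  # 0.064
```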
13.6 Summarising the comparison: odds ratios
The small kidney stone data (Table 13.1) can be summarised using odds:
• Method A: the odds of success are \(13.5\) (\(13.5\) times as many successes as failures).
• Method B: the odds of success are \(6.5\) (\(6.5\) times as many successes as failures).
The odds of success for Method A and Method B are very different. In the sample, the odds of success for Method A is many times greater than for Method B. In fact, in the sample, the odds of success
for Method A is \(13.5\div 6.5 = 2.08\) times the odds of a success for Method B. This value is the odds ratio (OR). The sample odds ratio is a statistic, and the (unknown) population odds ratio is a parameter.
Definition 13.1 (Odds Ratio (OR)) The odds ratio (often written OR) is the ratio of the odds of a result of interest in one group, compared to the odds of the same result in a different group:
\[\text{Odds ratio} = \frac{\text{Odds of a result in Group A}}{\text{Odds of the same result in Group B}}.\]
Example 13.7 (Odds ratios) For the small kidney stone data, the odds of a success for Method A is \(81\div6 = 13.5\). The odds of a success for Method B is \(234\div36 = 6.5\). The odds ratio is then
computed as \(13.5\div 6.5 = 2.08\). The odds have been computed with the rows.
This means that the odds of a success for Method A is about \(2.08\) times the odds of a success for Method B.
Most software computes the odds ratio from a two-way table by using the values in the first row and first column on the top of the fractions when computing the odds and the odds ratio. In Example
13.7, for instance, the odds for both methods were computed with the Column 1 values on the top of the fraction (\(81\) and \(234\)), and the odds ratio comparing the rows was computed with the Row 1
odds (\(13.5\)) on top of the fraction.
However, the odds ratio could also be computed using the odds within the columns (i.e., comparing the columns), rather than within the rows (as in Example 13.8).
Example 13.8 (Odds ratios) For the small kidney stone data, the odds of a success coming from Method A (i.e., within Column 1) is \(81/234 = 0.3462\). Likewise, the odds of a failure coming
from Method A (i.e., within Column 2) is \(6\div36 = 0.1667\). The odds ratio is \(0.3462\div 0.1667 = 2.08\), as in Example 13.7. This means that the odds of a success coming from Method A is about \(2.08\) times the odds of
a failure coming from Method A.
The two odds ratio calculations produce the same value. The odds ratio can be interpreted in either way: as in this example or as in Example 13.7. Both interpretations are correct.
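That the two calculations agree is easy to confirm numerically: both reduce to the cross-product ratio of the table's cells. A quick check:

```python
# Odds ratio computed within rows and within columns (Table 13.1)
a, b = 81, 6    # Method A: successes, failures
c, d = 234, 36  # Method B: successes, failures

or_by_rows = (a / b) / (c / d)  # 13.5 / 6.5
or_by_cols = (a / c) / (b / d)  # 0.3462 / 0.1667

print(round(or_by_rows, 2), round(or_by_cols, 2))  # 2.08 2.08
```

Both expressions simplify algebraically to \(ad/(bc)\), which is why the two values must coincide.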
The odds ratio can be interpreted in either of these ways (i.e., both are correct):
• The odds compare Row 1 counts to Row 2 counts, for both columns. The odds ratio then compares the Column 1 odds to the Column 2 odds.
• The odds compare Column 1 counts to Column 2 counts. The odds ratio then compares the Row 1 odds to the Row 2 odds.
Odds and odds ratios are computed with the first row and first column values on the top of the fraction. While both are correct, one way usually makes more sense.
The OR compares the odds of the same result (e.g., success) in two different groups (e.g., Method A and Method B). This means that a \(2\times 2\) table can be summarised using one number: the odds
ratio (OR).
When interpreting odds ratios (or ORs):
• odds ratios greater than \(1\) mean the odds of the result is larger for the group on the top of the division than for the group on the bottom.
• odds ratios equal to \(1\) mean the odds of the result is the same for both groups (on the top and the bottom of the division).
• odds ratios less than \(1\) mean the odds of the result is smaller for the group on the top of the division than for the group on the bottom.
The numerical summary information for comparing qualitative variables can be collated in a table. The data should be summarised by one of the qualitative variables, producing percentages and odds for
the other.
Example 13.9 (Numerical summary table) For the small kidney-stone data, the summary of the data can be tabulated as in Table 13.4, using percentages and odds.
TABLE 13.4: Numerical summary of the small kidney-stone data: Odds
and percentage of a successful procedure.
Percentage success Odds of success Sample size
Method A \(93.1\) \(13.500\) \(\phantom{0}87\)
Method B \(86.7\) \(\phantom{-}6.500\) \(270\)
\(6.4\) \(2.08\)
13.7 Example: large kidney stones
The data in Table 13.1 are for small kidney stones. Data were also recorded for the large kidney stones (Table 13.5). As for small kidney stones, the success percentages can be computed for both methods:
• Method A: Success proportion for large kidney stones: \(192/263 = 0.730\), or \(73.0\)%.
• Method B: Success proportion for large kidney stones: \(55/80 = 0.688\), or \(68.8\)%.
For large kidney stones, then, Method A has a higher success proportion than Method B, just as with the small kidney stones.
TABLE 13.5: Numbers for large kidney stones.
Success Failure Total
Method A \(192\) \(71\) \(263\)
Method B \(\phantom{0}55\) \(25\) \(\phantom{0}80\)
So, could the data for small (Table 13.1) and large kidney stones (Table 13.5) be combined, to produce a single two-way table of just Method and Result (Table 13.6), without separating by size?
TABLE 13.6: Numbers for all
kidney stones combined, without
separating by the size of the
kidney stone.
Success Failure Total
Method A \(273\) \(77\) \(350\)
Method B \(289\) \(61\) \(350\)
To summarise:
• Method A is more successful for small stones (\(93.1\)% vs \(86.7\)%);
• Method A is more successful for large stones (\(73.0\)% vs \(68.8\)%); but
• Method B is more successful for all stones combined (\(78.0\)% vs \(82.6\)%).
That seems strange: Method A performs better for small and large kidney stones, but Method B performs better when ignoring size.
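The reversal can be verified directly from the tables (a Python sketch; counts from Tables 13.1, 13.5 and 13.6):

```python
# Success percentages by stone size, and for the combined data
small = {"A": (81, 87), "B": (234, 270)}   # (successes, total)
large = {"A": (192, 263), "B": (55, 80)}

def pct(successes, total):
    """Success percentage, rounded to one decimal place."""
    return round(100 * successes / total, 1)

for label, data in (("small", small), ("large", large)):
    print(label, {m: pct(*v) for m, v in data.items()})

combined = {m: (small[m][0] + large[m][0], small[m][1] + large[m][1])
            for m in ("A", "B")}
print("combined", {m: pct(*v) for m, v in combined.items()})
# Method A wins within each size, yet Method B wins overall: Simpson's paradox
```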
The size of the stone is a confounding variable (Fig. 13.2). Size is associated with the method (small stones are treated more often with Method B) and with the result (small stones have a higher
success proportion for both methods).
This confounding could have been avoided by randomly allocating a treatment method to patients. However, random allocation was not possible in this study, so the researchers used a different method
to manage confounding: recording the size of the kidney stones (see Sect. 7.2).
In this example, incorporating information about a potential confounder (the size of the kidney stone) is important, otherwise the wrong (opposite) conclusion is reached: Method B would be
incorrectly considered better if the size of the stones was ignored, when the better method really is Method A.
This is called Simpson's paradox. If the size of the kidney stone had not been recorded, size would be a lurking variable, and the incorrect conclusion would have been reached.
13.8 Example: water access
López-Serrano et al. (2022) recorded data about access to water for three rural communities in Cameroon (see Sects. 11.10 and 12.7). The study could be used to determine contributors to the incidence
of diarrhoea in young children (\(85\) households had children under \(5\)). A cross-tabulation (Table 13.7) shows the relationship with keeping livestock; the numerical summary table (Table 13.8)
may suggest a difference due to keeping livestock. The comparison in Fig. 13.3 includes some categories with small sample sizes, so the percentages shown may not be precise estimates of the
population values.
As usual, the data come from one of countless possible samples, but the RQ is about the population, so making a definitive decision is difficult.
TABLE 13.7: Cross-tabulation of having livestock in
the household, and children under \(5\) years of age
having diarrhoea in the household in the last two weeks.
No diarrhoea Diarrhoea
Does not have livestock \(17\) \(\phantom{0}3\)
Has livestock \(42\) \(23\)
TABLE 13.8: Numerical summary of the water-access data: odds and percentage
of children with diarrhoea in the last two weeks.
Percentage Odds Sample size
Household does not have livestock \(\phantom{-}15.0\) \(0.176\) \(20\)
Household has livestock \(\phantom{-}35.4\) \(0.548\) \(65\)
\(-20.4\) \(0.322\)
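The values in Table 13.8 follow directly from the counts in Table 13.7 (a Python sketch):

```python
# Percentage and odds of diarrhoea, by whether the household keeps livestock
# Counts from Table 13.7: (diarrhoea, no diarrhoea)
table = {"No livestock": (3, 17), "Has livestock": (23, 42)}

for group, (diarrhoea, no_diarrhoea) in table.items():
    percentage = round(100 * diarrhoea / (diarrhoea + no_diarrhoea), 1)
    odds = round(diarrhoea / no_diarrhoea, 3)
    print(f"{group}: {percentage}%, odds = {odds}")

odds_ratio = (3 / 17) / (23 / 42)
print(round(odds_ratio, 3))  # 0.322
```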
13.9 Chapter summary
Qualitative data can be compared between different groups (between individuals comparisons) using a stacked bar chart, side-by-side bar chart or a dot chart. The data can be displayed in a two-way
table, then summarised numerically by comparing proportions, percentages and odds. The odds ratio (OR) and the difference between the proportions can be used to compare the two different groups.
13.10 Quick revision questions
A study (Alley et al. 2017) examined social media use (Table 13.9), using a representative sample of Queenslanders at least \(18\) years of age (from the \(2013\) Queensland Social Survey).
1. Compute the sample proportion of urban residents who use social media.
2. Compute the sample proportion of rural residents who use social media.
3. Compute the sample odds of urban residents who use social media.
4. Compute the sample odds of rural residents who use social media.
5. Compute the sample odds ratio of using social media, comparing urban to rural residents.
6. Compute the sample difference between the proportions using social media, comparing urban to rural residents.
TABLE 13.9: The number of Queenslanders using and not using
social media (SM) in rural and urban locations in 2013 in a representative sample.
Doesn't use SM Uses SM Total
Rural residents \(\phantom{0}78\) \(\phantom{0}89\) \(167\)
Urban residents \(416\) \(568\) \(984\)
13.11 Exercises
Answers to odd-numbered exercises are available in App. E.
Exercise 13.1 Köchling et al. (2019) studied hangovers and recorded, among other information, when people vomited after consuming alcohol. Table 13.10 shows how many people vomited after consuming
beer followed by wine, and how many people vomited after consuming only wine.
1. Compute the row proportions. What do these mean?
2. Compute the column percentages. What do these mean?
3. Compute the overall percentage of drinkers who vomited.
4. Compute the sample odds that a wine-only drinker vomited.
5. Compute the sample odds that a beer-then-wine drinker vomited.
6. Compute the sample odds ratio, comparing the odds of vomiting for wine-only drinkers to beer-then-wine drinkers.
7. Compute the sample odds ratio, comparing the odds of vomiting for beer-then-wine drinkers to wine-only drinkers.
8. Compute the difference between the sample proportions of people vomiting, comparing beer-then-wine drinkers to wine-only drinkers.
9. What do the data suggest about the relationship?
TABLE 13.10: How many people vomited and did not
vomit, by type of alcohol consumed.
Beer then wine Wine only
Vomited \(\phantom{0}6\) \(\phantom{0}6\)
Didn't vomit \(62\) \(22\)
Exercise 13.2 Stirrat (2008) recorded the sex of adult and young wallabies at the East Point Reserve, Darwin. In December 1993, \(91\) males and \(188\) female adult wallabies were recorded, and \(13\) male and \(22\) female young wallabies were recorded.
1. Create the two-way table of counts.
2. For adult wallabies, what proportion of adult wallabies were males?
3. For adult wallabies, what are the odds that a female was observed?
4. For young wallabies, what percentage of wallabies were males?
5. For young wallabies, what are the odds that a female was observed?
6. What is the odds ratio of observing an adult wallaby, comparing females to males?
7. What is the difference between the sample proportions of female wallabies, comparing adults to young?
8. Create a summary table.
9. Sketch a graph to display the data.
10. What do the data suggest about the relationship?
Exercise 13.3 [Dataset: EmeraldAug] The Southern Oscillation Index (SOI) is a standardised measure of the air pressure difference between Tahiti and Darwin, shown to be related to rainfall in some
parts of the world (Stone, Hammer, and Marcussen 1996), and especially Queensland, Australia (Stone and Auliciems 1992; P. K. Dunn 2001).
The rainfall at Emerald (Queensland) was recorded for Augusts between 1889 and 2002 inclusive (P. K. Dunn and Smyth 2018), for months when the monthly average SOI was positive and non-positive (zero
or negative); see Table 13.11.
1. Compute the percentage of Augusts with no rainfall.
2. Compute the percentage of Augusts with no rainfall, in Augusts with a non-positive SOI.
3. Compute the percentage of Augusts with no rainfall, in Augusts with a positive SOI.
4. Compute the odds of no August rainfall.
5. Compute the odds of no August rainfall, in Augusts with a non-positive SOI.
6. Compute the odds of no August rainfall, in Augusts with a positive SOI.
7. Compute the odds ratio of no August rainfall, comparing Augusts with non-positive SOI to Augusts with a positive SOI.
8. Interpret this OR.
9. Create a summary table.
10. Sketch a graph to display the data.
TABLE 13.11: The SOI, and whether rainfall was recorded
in Augusts between 1889 and 2002 inclusive.
Non-positive SOI Positive SOI
No rainfall recorded \(14\) \(\phantom{0}7\)
Rainfall recorded \(40\) \(53\)
Exercise 13.4 Haselgrove et al. (2008) asked boys and girls in Western Australia about back pain from carrying school bags (Table 13.12).
1. Compute the percentage of boys reporting back pain from carrying school bags.
2. Compute the percentage of girls reporting back pain from carrying school bags.
3. Compute the odds of boys reporting back pain from carrying school bags.
4. Compute the odds of girls reporting back pain from carrying school bags.
5. Compute the odds of a child reporting back pain.
6. Compute the odds ratio of reporting back pain, comparing boys to girls.
7. Interpret this OR.
8. Create a summary table.
9. Sketch a graph to display the data.
TABLE 13.12: The number of
boys and girls reporting back
pain from carrying school bags.
Males Females
No back pain \(330\) \(226\)
Back pain \(280\) \(359\)
Exercise 13.5 Using the information in Table 12.2, create a stacked bar chart to compare the responses to the three questions.
Exercise 13.6 T. C. Russell, Herbert, and Kohen (2009) studied road-kill possums (Table 13.13).
1. Identify the two variables, and classify them as nominal or ordinal.
2. Sketch some graphs to display the data.
3. What is the main message in the data? What graph shows this best?
TABLE 13.13: The number of
possums found as road kill, by
sex and season.
Unknown sex Male Female
Autumn \(75\) \(25\) \(21\)
Winter \(74\) \(27\) \(22\)
Spring \(71\) \(10\) \(18\)
Summer \(58\) \(10\) \(12\)
Exercise 13.7 The data in Table 13.14 come from a study of Iranian children aged \(6\)--\(18\) years old (Kelishadi et al. 2017).
1. Compute the proportion of females who skipped breakfast.
2. Compute the proportion of males who skipped breakfast.
3. Compute the odds of a female skipping breakfast.
4. Compute the odds of a male skipping breakfast.
5. Compute the odds ratio comparing the odds of skipping breakfast, comparing females to males.
6. Interpret this OR.
7. Construct a summary table.
TABLE 13.14: The number of Iranian children aged \(6\)
to \(18\) who skip and do not skip breakfast.
Skips breakfast Doesn't skip breakfast Total
Females \(2383\) \(4257\) \(6640\)
Males \(1944\) \(4902\) \(6846\)
Exercise 13.8 Yonekura et al. (2020) studied Japanese women and their coffee drinking habits (Table 13.15).
1. Compute the proportion of coffee drinkers who are also smokers.
2. Compute the proportion of non-coffee drinkers who are also smokers.
3. Compute the odds of a coffee drinker being a smoker.
4. Compute the odds of a non-coffee drinker being a smoker.
5. Compute the odds ratio comparing the odds of being a smoker, comparing coffee drinkers to non-coffee drinkers.
6. Interpret this OR.
7. Construct a summary table.
TABLE 13.15: The number of Japanese women who
smoked, and drank at least one cup of coffee per day.
Smokers Non-smokers
Coffee drinkers \(10\) \(66\)
Non-coffee drinkers \(\phantom{0}2\) \(84\)
Exercise 13.9 In a study of how well emergency dispatchers recognised signs of stroke (Oostema, Chassee, and Reeves 2018), the data shown below were collected.
Sex of patients Dispatcher suspected stroke Dispatcher missed stroke
Male 67 43
Female 97 39
1. Sketch a side-by-side or stacked bar chart to display the data.
2. Of the male patients, what percentage had their stroke symptoms missed by the dispatcher?
3. Of the female patients, what percentage had their stroke symptoms missed by the dispatcher?
4. For the male patients, what are the odds that they had their stroke symptoms missed by the dispatcher?
5. For the female patients, what are the odds that they had their stroke symptoms missed by the dispatcher?
6. What is the odds ratio that a patient had their stroke symptoms missed by the dispatcher, comparing males to females?
7. Construct a numerical summary table.
Exercise 13.10 Soccer is unique in that one aspect is 'the purposeful use of the unprotected head for controlling and advancing the ball' (Kirkendall, Jordan, and Garrett 2001). Some researchers
suspect that repeatedly 'heading' the ball may impair brain function. A study (Kirkendall, Jordan, and Garrett 2001) was conducted to determine (p. 157)
...whether long-term or chronic neuropsychological dysfunction (i.e., concussion) was present in collegiate soccer players
Data were collected from \(240\) college students for two variables:
• The student type: One of 'soccer player' (\(63\) students), 'non-soccer athlete' (\(96\) students), or 'non-athlete' (\(81\) students).
• The number of head concussions: Each student was asked about the number of head concussions they had experienced; 'zero' (\(158\) students), 'one' (\(45\) students), or 'two or more' (\(37\)
students) concussions.
Use the study data (Table 13.16) to answer the following questions.
TABLE 13.16: Data on the number of
concussions experienced by college students.
0 1 2 or more Total
Soccer players 45 5 13 63
Non-soccer athletes 68 25 3 96
Non-athletes 45 15 21 81
Total 158 45 37 240
1. Classify the two variables.
2. Compute the percentage of college students in the sample overall that have received exactly one concussion.
3. Among the non-athletes, compute the odds of receiving two or more concussions. Interpret what this means.
4. Among the soccer players, compute the odds of receiving two or more concussions. Interpret what this means.
5. Compute the odds ratio comparing the odds of a non-athlete player receiving two or more concussions to the odds of a soccer player receiving two or more concussions.
6. Create a table of column percentages. What do these tell you?
7. Create a table of row percentages. What do these tell you?
8. Which one of these tables is probably more sensible, and why?
48 inches in cm
To convert 48 inches to centimeters, follow these steps:
Step 1: Write down the given value.
48 inches
Step 2: Identify the conversion factor.
1 inch = 2.54 centimeters
Step 3: Set up a proportion using the given value and the conversion factor.
1 inch / 2.54 centimeters = 48 inches / x centimeters
Step 4: Cross multiply and solve for x.
x = 48 × 2.54 centimeters
x = 121.92 centimeters
Step 5: Round the answer to the desired number of decimal places, if necessary.
Here the result is already exact to two decimal places, so no rounding is needed.
Thus, the final conversion is:
48 inches = 121.92 centimeters
Therefore, 48 inches is equivalent to 121.92 centimeters.
Note: To check your work, you can also use an online unit converter or a calculator to verify the conversion.
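For example, the check can be done with a tiny script (any calculator works just as well):

```python
# Inches to centimetres, using the exact definition 1 inch = 2.54 cm
def inches_to_cm(inches):
    return inches * 2.54

print(round(inches_to_cm(48), 2))  # 121.92
```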
Chapter 31. Introduction to Red Hat build of OptaPlanner
OptaPlanner is a lightweight, embeddable planning engine that optimizes planning problems. It helps normal Java programmers solve planning problems efficiently, and it combines optimization
heuristics and metaheuristics with very efficient score calculations.
For example, OptaPlanner helps solve various use cases:
● Employee/Patient Rosters: It helps create timetables for nurses and keeps track of patient bed management.
● Educational Timetables: It helps schedule lessons, courses, exams, and conference presentations.
● Shop Schedules: It tracks car assembly lines, machine queue planning, and workforce task planning.
● Cutting Stock: It minimizes waste by reducing the consumption of resources such as paper and steel.
Every organization faces planning problems; that is, they provide products and services with a limited set of constrained resources (employees, assets, time, and money).
OptaPlanner is open source software under the Apache Software License 2.0. It is 100% pure Java and runs on most Java virtual machines (JVMs).
A planning problem has an optimal goal, based on limited resources and under specific constraints. Optimal goals can be any number of things, such as:
● Maximized profits - the optimal goal results in the highest possible profit.
● Minimized ecological footprint - the optimal goal has the least amount of environmental impact.
● Maximized satisfaction for employees or customers - the optimal goal prioritizes the needs of employees or customers.
The ability to achieve these goals relies on the number of resources available. For example, the following resources might be limited:
● Number of people
● Amount of time
● Budget
● Physical assets, for example, machinery, vehicles, computers, buildings
You must also take into account the specific constraints related to these resources, such as the number of hours a person works, their ability to use certain machines, or compatibility between pieces
of equipment.
Red Hat build of OptaPlanner helps Java programmers solve constraint satisfaction problems efficiently. It combines optimization heuristics and metaheuristics with efficient score calculation.
31.2. NP-completeness in planning problems
The provided use cases are probably NP-complete or NP-hard, which means the following statements apply:
● It is easy to verify a specific solution to a problem in reasonable time.
● There is no simple way to find the optimal solution of a problem in reasonable time.
The implication is that solving your problem is probably harder than you anticipated, because the two common techniques do not suffice:
● A brute force algorithm (even a more advanced variant) takes too long.
• A quick algorithm (for example, in the bin packing problem, placing the largest items first) returns a solution that may be far from optimal.
By using advanced optimization algorithms, OptaPlanner finds a good solution in reasonable time for such planning problems.
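To illustrate the second point, here is a sketch (in Python, purely for illustration; OptaPlanner itself is a Java library) of the quick bin-packing heuristic mentioned above, first-fit decreasing. It is fast, but it offers no guarantee of optimality:

```python
# First-fit decreasing: place the largest items first, each into the
# first bin with enough remaining capacity. Fast, but not guaranteed
# to use the minimum number of bins.
def first_fit_decreasing(items, capacity):
    bins = []  # each bin is a list of item sizes
    for item in sorted(items, reverse=True):
        for b in bins:
            if sum(b) + item <= capacity:
                b.append(item)
                break
        else:  # no existing bin fits this item: open a new bin
            bins.append([item])
    return bins

print(first_fit_decreasing([4, 8, 1, 4, 2, 1], capacity=10))
# [[8, 2], [4, 4, 1, 1]]
```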
31.3. Solutions to planning problems
A planning problem has a number of solutions.
Several categories of solutions are:
Possible solution
A possible solution is any solution, whether or not it breaks any number of constraints. Planning problems often have an incredibly large number of possible solutions. Many of those solutions are
not useful.
Feasible solution
A feasible solution is a solution that does not break any (negative) hard constraints. The number of feasible solutions are relative to the number of possible solutions. Sometimes there are no
feasible solutions. Every feasible solution is a possible solution.
Optimal solution
Optimal solutions are the solutions with the highest scores. Planning problems usually have a few optimal solutions. They always have at least one optimal solution, even in the case that there
are no feasible solutions and the optimal solution is not feasible.
Best solution found
The best solution is the solution with the highest score found by an implementation in a specified amount of time. The best solution found is likely to be feasible and, given enough time, it will be an
optimal solution.
Counterintuitively, the number of possible solutions is huge (if calculated correctly), even with a small data set.
In the examples provided in the planner-engine distribution folder, most instances have a large number of possible solutions. As there is no guaranteed way to find the optimal solution, any
implementation is forced to evaluate at least a subset of all those possible solutions.
OptaPlanner supports several optimization algorithms to efficiently wade through that incredibly large number of possible solutions.
Depending on the use case, some optimization algorithms perform better than others, but it is impossible to know in advance. Using OptaPlanner, you can switch the optimization algorithm by changing
the solver configuration in a few lines of XML or code.
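For example, switching the local search algorithm is a small change to the solver configuration. A hedged sketch (the domain class names are placeholders for your own model; the element names follow OptaPlanner's solver configuration schema):

```xml
<solver>
  <!-- Placeholder domain classes -->
  <solutionClass>org.example.MySolution</solutionClass>
  <entityClass>org.example.MyEntity</entityClass>
  <scoreDirectorFactory>
    <constraintProviderClass>org.example.MyConstraintProvider</constraintProviderClass>
  </scoreDirectorFactory>
  <constructionHeuristic/>
  <!-- Swap this value (for example, to LATE_ACCEPTANCE) to try another algorithm -->
  <localSearch>
    <localSearchType>TABU_SEARCH</localSearchType>
  </localSearch>
</solver>
```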
31.4. Constraints on planning problems
Usually, a planning problem has at least two levels of constraints:
● A (negative) hard constraint must not be broken.
For example, one teacher cannot teach two different lessons at the same time.
● A (negative) soft constraint should not be broken if it can be avoided.
For example, Teacher A does not like to teach on Friday afternoons.
Some problems also have positive constraints:
● A positive soft constraint (or reward) should be fulfilled if possible.
For example, Teacher B likes to teach on Monday mornings.
Some basic problems only have hard constraints. Some problems have three or more levels of constraints, for example, hard, medium, and soft constraints.
These constraints define the score calculation (otherwise known as the fitness function) of a planning problem. Each solution of a planning problem is graded with a score. With OptaPlanner, score
constraints are written in an object-oriented language, such as Java, or in Drools rules.
This type of code is flexible and scalable.
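To illustrate how multiple constraint levels combine into one score, here is a minimal standalone sketch of a two-level score compared lexicographically, so that any hard-constraint violation outweighs any amount of soft-constraint penalty. OptaPlanner ships its own, richer HardSoftScore class; this version is only illustrative.

```java
// Minimal illustrative sketch of a two-level score. A solution is feasible
// when hardScore == 0; comparison is lexicographic: hard before soft.
record HardSoftScore(int hardScore, int softScore) implements Comparable<HardSoftScore> {
    @Override
    public int compareTo(HardSoftScore other) {
        int byHard = Integer.compare(hardScore, other.hardScore);
        return byHard != 0 ? byHard : Integer.compare(softScore, other.softScore);
    }

    boolean isFeasible() {
        return hardScore == 0;
    }
}
```

Breaking one hard constraint (hardScore -1) always scores worse than breaking many soft constraints (softScore -100), which matches the definition of a feasible solution given earlier.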
31.5. Examples provided with Red Hat build of OptaPlanner
Several Red Hat build of OptaPlanner examples are shipped with Red Hat Process Automation Manager. You can review the code for examples and modify it as necessary to suit your needs.
Red Hat does not provide support for the example code included in the Red Hat Process Automation Manager distribution.
Some of the OptaPlanner examples solve problems that are presented in academic contests. The Contest column in the following table lists the contests. It also identifies an example as being either
realistic or unrealistic for the purpose of a contest. A realistic contest is an official, independent contest that meets the following standards:
● Clearly defined real-world use cases
● Real-world constraints
● Multiple real-world datasets
● Reproducible results within a specific time limit on specific hardware
● Serious participation from the academic and/or enterprise Operations Research community.
Realistic contests provide an objective comparison of OptaPlanner with competitive software and academic research.
Table 31.1. Examples overview
Example Domain Size Contest Directory name
N queens 1 entity class Entity ⇐ 256 Pointless (cheatable) nqueens
(1 variable) Value ⇐ 256
Search space ⇐ 10^616
Cloud balancing 1 entity class Entity ⇐ 2400 No (Defined by us) cloudbalancing
(1 variable) Value ⇐ 800
Search space ⇐ 10^6967
Traveling salesman 1 entity class Entity ⇐ 980 Unrealistic TSP web tsp
(1 chained variable) Value ⇐ 980
Search space ⇐ 10^2504
Tennis club scheduling 1 entity class Entity ⇐ 72 No (Defined by us) tennis
(1 variable) Value ⇐ 7
Search space ⇐ 10^60
Meeting scheduling 1 entity class Entity ⇐ 10 No (Defined by us) meetingscheduling
(2 variables) Value ⇐ 320 and ⇐ 5
Search space ⇐ 10^320
Course timetabling 1 entity class Entity ⇐ 434 Realistic ITC 2007 track 3 curriculumCourse
(2 variables) Value ⇐ 25 and ⇐ 20
Search space ⇐ 10^1171
Machine reassignment 1 entity class Entity ⇐ 50000 Nearly realistic ROADEF 2012 machineReassignment
(1 variable) Value ⇐ 5000
Search space ⇐ 10^184948
Vehicle routing 1 entity class Entity ⇐ 2740 Unrealistic VRP web vehiclerouting
(1 chained variable) Value ⇐ 2795
1 shadow entity class Search space ⇐ 10^8380
(1 automatic shadow variable)
Vehicle routing with time windows All of Vehicle routing Entity ⇐ 2740 Unrealistic VRP web vehiclerouting
(1 shadow variable) Value ⇐ 2795
Search space ⇐ 10^8380
Project job scheduling 1 entity class Entity ⇐ 640 Nearly realistic MISTA 2013 projectjobscheduling
(2 variables) Value ⇐ ? and ⇐ ?
(1 shadow variable) Search space ⇐ ?
Task assigning 1 entity class Entity ⇐ 500 No (Defined by us) taskassigning
(1 chained variable) Value ⇐ 520
(1 shadow variable) Search space ⇐ 10^1168
1 shadow entity class
(1 automatic shadow variable)
Exam timetabling 2 entity classes (same hierarchy) Entity ⇐ 1096 Realistic ITC 2007 track 1 examination
(2 variables) Value ⇐ 80 and ⇐ 49
Search space ⇐ 10^3374
Nurse rostering 1 entity class Entity ⇐ 752 Realistic INRC 2010 nurserostering
(1 variable) Value ⇐ 50
Search space ⇐ 10^1277
Traveling tournament 1 entity class Entity ⇐ 1560 Unrealistic TTP travelingtournament
(1 variable) Value ⇐ 78
Search space ⇐ 10^2301
Cheap time scheduling 1 entity class Entity ⇐ 500 Nearly realistic ICON Energy cheaptimescheduling
(2 variables) Value ⇐ 100 and ⇐ 288
Search space ⇐ 10^20078
Investment 1 entity class Entity ⇐ 11 No (Defined by us) investment
(1 variable) Value = 1000
Search space ⇐ 10^4
Conference scheduling 1 entity class Entity ⇐ 216 No (Defined by us) conferencescheduling
(2 variables) Value ⇐ 18 and ⇐ 20
Search space ⇐ 10^552
Rock tour 1 entity class Entity ⇐ 47 No (Defined by us) rocktour
(1 chained variable) Value ⇐ 48
(4 shadow variables) Search space ⇐ 10^59
1 shadow entity class
(1 automatic shadow variable)
Flight crew scheduling 1 entity class Entity ⇐ 4375 No (Defined by us) flightcrewscheduling
(1 variable) Value ⇐ 750
1 shadow entity class Search space ⇐ 10^12578
(1 automatic shadow variable)
31.6. N queens
Place n queens on an n-sized chessboard so that no two queens can attack each other. The most common n queens puzzle is the eight queens puzzle, with n = 8:
● Use a chessboard of n columns and n rows.
● Place n queens on the chessboard.
● No two queens can attack each other. A queen can attack any other queen on the same horizontal, vertical, or diagonal line.
This documentation heavily uses the four queens puzzle as the primary example.
A proposed solution could be:
Figure 31.1. A wrong solution for the four queens puzzle
The above solution is wrong because queens A1 and B0 can attack each other (as can queens B0 and D0). Removing queen B0 would respect the "no two queens can attack each other" constraint, but would
break the "place n queens" constraint.
Below is a correct solution:
Figure 31.2. A correct solution for the Four queens puzzle
All the constraints have been met, so the solution is correct.
Note that most n queens puzzles have multiple correct solutions. We will focus on finding a single correct solution for a specific n, not on finding the number of possible correct solutions for a
specific n.
Problem size
4queens has 4 queens with a search space of 256.
8queens has 8 queens with a search space of 10^7.
16queens has 16 queens with a search space of 10^19.
32queens has 32 queens with a search space of 10^48.
64queens has 64 queens with a search space of 10^115.
256queens has 256 queens with a search space of 10^616.
The implementation of the n queens example has not been optimized because it functions as a beginner example. Nevertheless, it can easily handle 64 queens, and with a few changes it has been shown
to easily handle 5000 queens and more.
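The sizes listed above follow from placing exactly one queen per column, which leaves n independent row choices per queen, for n^n possible solutions (4^4 = 256 for 4queens). A quick sketch (not part of the example code) to check the exponents:

```java
// Search space of n queens with one queen per column: n^n possible
// solutions, so the base-10 exponent is floor(n * log10(n)).
class NQueensSearchSpace {
    static long exponent(int n) {
        return (long) Math.floor(n * Math.log10(n));
    }
}
```

For example, exponent(256) is 616, matching the 256queens entry above.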
31.6.1. Domain model for N queens
This example uses the domain model to solve the four queens problem.
● Creating a Domain Model
A good domain model will make it easier to understand and solve your planning problem.
This is the domain model for the n queens example:
public class Column {
    private int index;
    // ... getters and setters
}

public class Row {
    private int index;
    // ... getters and setters
}

public class Queen {
    private Column column;
    private Row row;

    public int getAscendingDiagonalIndex() {...}
    public int getDescendingDiagonalIndex() {...}
    // ... getters and setters
}
● Calculating the Search Space.
A Queen instance has a Column (for example: 0 is column A, 1 is column B, …) and a Row (its row, for example: 0 is row 0, 1 is row 1, …).
The ascending diagonal line and the descending diagonal line can be calculated based on the column and the row.
The column and row indexes start from the upper left corner of the chessboard.
public class NQueens {
    private int n;
    private List<Column> columnList;
    private List<Row> rowList;
    private List<Queen> queenList;
    private SimpleScore score;
    // ... getters and setters
}
● Finding the Solution
A single NQueens instance contains a list of all Queen instances. It is the Solution implementation which will be supplied to, solved by, and retrieved from the Solver.
Notice that in the four queens example, the NQueens getN() method will always return four.
Figure 31.3. A solution for Four Queens
Table 31.2. Details of the solution in the domain model
Queen columnIndex rowIndex ascendingDiagonalIndex (columnIndex + rowIndex) descendingDiagonalIndex (columnIndex - rowIndex)
A1 0 1 1 (**) -1
B0 1 0 (*) 1 (**) 1
C2 2 2 4 0
D0 3 0 (*) 3 3
When two queens share the same column, row or diagonal line, such as (*) and (**), they can attack each other.
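The attack rules in Table 31.2 translate directly into a score calculation: count the attacking pairs, and a solution is correct when the count is zero. A standalone sketch (not the shipped example code; the Queen record here is a simplified stand-in for the domain model above):

```java
import java.util.List;

// Simplified stand-in for the domain model: a queen at (column, row).
record Queen(int column, int row) {
    int ascendingDiagonalIndex()  { return column + row; }
    int descendingDiagonalIndex() { return column - row; }
}

class NQueensScore {
    // Count attacking pairs. Columns are distinct by construction (one queen
    // per column), so only rows and diagonals need to be checked.
    static int countAttackingPairs(List<Queen> queens) {
        int conflicts = 0;
        for (int i = 0; i < queens.size(); i++) {
            for (int j = i + 1; j < queens.size(); j++) {
                Queen a = queens.get(i);
                Queen b = queens.get(j);
                if (a.row() == b.row()
                        || a.ascendingDiagonalIndex() == b.ascendingDiagonalIndex()
                        || a.descendingDiagonalIndex() == b.descendingDiagonalIndex()) {
                    conflicts++;
                }
            }
        }
        return conflicts;
    }
}
```

The wrong solution from Figure 31.1 (A1, B0, C2, D0) yields two attacking pairs; a valid four queens placement yields zero.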
31.8. Traveling salesman (TSP - Traveling Salesman Problem)
Given a list of cities, find the shortest tour for a salesman that visits each city exactly once.
The problem is defined by Wikipedia. It is one of the most intensively studied problems in computational mathematics. Yet, in the real world, it is often only part of a planning problem, along with
other constraints, such as employee shift rostering constraints.
Problem size
dj38 has 38 cities with a search space of 10^43.
europe40 has 40 cities with a search space of 10^46.
st70 has 70 cities with a search space of 10^98.
pcb442 has 442 cities with a search space of 10^976.
lu980 has 980 cities with a search space of 10^2504.
Problem difficulty
Despite TSP’s simple definition, the problem is surprisingly hard to solve. Because it is an NP-hard problem (like most planning problems), the optimal solution for a specific problem dataset can
change significantly when that dataset is slightly altered.
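The search space sizes listed above correspond to the (n − 1)! distinct tours that remain once a starting city is fixed. A quick sketch (not part of the example code) to check the exponents:

```java
// Number of TSP tours with a fixed starting city: (n - 1)!, so the base-10
// exponent is floor(log10((n - 1)!)) = floor(sum of log10(k) for k = 2..n-1).
class TspSearchSpace {
    static long exponent(int cities) {
        double log10Factorial = 0.0;
        for (int k = 2; k <= cities - 1; k++) {
            log10Factorial += Math.log10(k);
        }
        return (long) Math.floor(log10Factorial);
    }
}
```

For example, exponent(980) is 2504, matching the lu980 entry above.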
31.9. Tennis club scheduling
Every week the tennis club has four teams playing round robin against each other. Assign those four spots to the teams fairly.
Hard constraints:
● Conflict: A team can only play once per day.
● Unavailability: Some teams are unavailable on some dates.
Medium constraints:
● Fair assignment: All teams should play an (almost) equal number of times.
Soft constraints:
● Even confrontation: Each team should play against every other team an equal number of times.
Problem size
munich-7teams has 7 teams, 18 days, 12 unavailabilityPenalties and 72 teamAssignments with a search space of 10^60.
Figure 31.4. Domain model
31.10. Meeting scheduling
Assign each meeting to a starting time and a room. Meetings have different durations.
Hard constraints:
● Room conflict: Two meetings must not use the same room at the same time.
● Required attendance: A person cannot have two required meetings at the same time.
● Required room capacity: A meeting must not be in a room that does not fit all of the meeting’s attendees.
● Start and end on same day: A meeting should not be scheduled over multiple days.
Medium constraints:
● Preferred attendance: A person cannot have two preferred meetings at the same time, nor a preferred and a required meeting at the same time.
Soft constraints:
● Sooner rather than later: Schedule all meetings as soon as possible.
● A break between meetings: Any two meetings should have at least one time grain break between them.
● Overlapping meetings: Minimize the number of meetings running in parallel, so that people do not have to choose one meeting over another.
● Assign larger rooms first: If a larger room is available, any meeting should be assigned to that room in order to accommodate as many people as possible, even if they have not signed up to that
meeting.
● Room stability: If a person has two consecutive meetings with two or fewer time grains between them, those meetings should be in the same room.
Problem size
50meetings-160timegrains-5rooms has 50 meetings, 160 timeGrains and 5 rooms with a search space of 10^145.
100meetings-320timegrains-5rooms has 100 meetings, 320 timeGrains and 5 rooms with a search space of 10^320.
200meetings-640timegrains-5rooms has 200 meetings, 640 timeGrains and 5 rooms with a search space of 10^701.
400meetings-1280timegrains-5rooms has 400 meetings, 1280 timeGrains and 5 rooms with a search space of 10^1522.
800meetings-2560timegrains-5rooms has 800 meetings, 2560 timeGrains and 5 rooms with a search space of 10^3285.
31.11. Course timetabling (ITC 2007 Track 3 - Curriculum Course Scheduling)
Schedule each lecture into a timeslot and into a room.
Hard constraints:
● Teacher conflict: A teacher must not have two lectures in the same period.
● Curriculum conflict: A curriculum must not have two lectures in the same period.
● Room occupancy: Two lectures must not be in the same room in the same period.
● Unavailable period (specified per dataset): A specific lecture must not be assigned to a specific period.
Soft constraints:
● Room capacity: A room’s capacity should not be less than the number of students in its lecture.
● Minimum working days: Lectures of the same course should be spread out into a minimum number of days.
● Curriculum compactness: Lectures belonging to the same curriculum should be adjacent to each other (so in consecutive periods).
● Room stability: Lectures of the same course should be assigned to the same room.
The problem is defined by the International Timetabling Competition 2007 track 3.
Problem size
comp01 has 24 teachers, 14 curricula, 30 courses, 160 lectures, 30 periods, 6 rooms and 53 unavailable period constraints with a search space of 10^360.
comp02 has 71 teachers, 70 curricula, 82 courses, 283 lectures, 25 periods, 16 rooms and 513 unavailable period constraints with a search space of 10^736.
comp03 has 61 teachers, 68 curricula, 72 courses, 251 lectures, 25 periods, 16 rooms and 382 unavailable period constraints with a search space of 10^653.
comp04 has 70 teachers, 57 curricula, 79 courses, 286 lectures, 25 periods, 18 rooms and 396 unavailable period constraints with a search space of 10^758.
comp05 has 47 teachers, 139 curricula, 54 courses, 152 lectures, 36 periods, 9 rooms and 771 unavailable period constraints with a search space of 10^381.
comp06 has 87 teachers, 70 curricula, 108 courses, 361 lectures, 25 periods, 18 rooms and 632 unavailable period constraints with a search space of 10^957.
comp07 has 99 teachers, 77 curricula, 131 courses, 434 lectures, 25 periods, 20 rooms and 667 unavailable period constraints with a search space of 10^1171.
comp08 has 76 teachers, 61 curricula, 86 courses, 324 lectures, 25 periods, 18 rooms and 478 unavailable period constraints with a search space of 10^859.
comp09 has 68 teachers, 75 curricula, 76 courses, 279 lectures, 25 periods, 18 rooms and 405 unavailable period constraints with a search space of 10^740.
comp10 has 88 teachers, 67 curricula, 115 courses, 370 lectures, 25 periods, 18 rooms and 694 unavailable period constraints with a search space of 10^981.
comp11 has 24 teachers, 13 curricula, 30 courses, 162 lectures, 45 periods, 5 rooms and 94 unavailable period constraints with a search space of 10^381.
comp12 has 74 teachers, 150 curricula, 88 courses, 218 lectures, 36 periods, 11 rooms and 1368 unavailable period constraints with a search space of 10^566.
comp13 has 77 teachers, 66 curricula, 82 courses, 308 lectures, 25 periods, 19 rooms and 468 unavailable period constraints with a search space of 10^824.
comp14 has 68 teachers, 60 curricula, 85 courses, 275 lectures, 25 periods, 17 rooms and 486 unavailable period constraints with a search space of 10^722.
Figure 31.5. Domain model
31.12. Machine reassignment (Google ROADEF 2012)
Assign each process to a machine. All processes already have an original (unoptimized) assignment. Each process requires an amount of each resource (such as CPU or RAM). This is a more complex
version of the Cloud Balancing example.
Hard constraints:
● Maximum capacity: The maximum capacity for each resource for each machine must not be exceeded.
● Conflict: Processes of the same service must run on distinct machines.
● Spread: Processes of the same service must be spread out across locations.
● Dependency: The processes of a service depending on another service must run in the neighborhood of a process of the other service.
● Transient usage: Some resources are transient and count towards the maximum capacity of both the original machine and the newly assigned machine.
Soft constraints:
● Load: The safety capacity for each resource for each machine should not be exceeded.
● Balance: Leave room for future assignments by balancing the available resources on each machine.
● Process move cost: A process has a move cost.
● Service move cost: A service has a move cost.
● Machine move cost: Moving a process from machine A to machine B has another A-B specific move cost.
The problem is defined by the Google ROADEF/EURO Challenge 2012.
Figure 31.6. Value proposition
Problem size
model_a1_1 has 2 resources, 1 neighborhoods, 4 locations, 4 machines, 79 services, 100 processes and 1 balancePenalties with a search space of 10^60.
model_a1_2 has 4 resources, 2 neighborhoods, 4 locations, 100 machines, 980 services, 1000 processes and 0 balancePenalties with a search space of 10^2000.
model_a1_3 has 3 resources, 5 neighborhoods, 25 locations, 100 machines, 216 services, 1000 processes and 0 balancePenalties with a search space of 10^2000.
model_a1_4 has 3 resources, 50 neighborhoods, 50 locations, 50 machines, 142 services, 1000 processes and 1 balancePenalties with a search space of 10^1698.
model_a1_5 has 4 resources, 2 neighborhoods, 4 locations, 12 machines, 981 services, 1000 processes and 1 balancePenalties with a search space of 10^1079.
model_a2_1 has 3 resources, 1 neighborhoods, 1 locations, 100 machines, 1000 services, 1000 processes and 0 balancePenalties with a search space of 10^2000.
model_a2_2 has 12 resources, 5 neighborhoods, 25 locations, 100 machines, 170 services, 1000 processes and 0 balancePenalties with a search space of 10^2000.
model_a2_3 has 12 resources, 5 neighborhoods, 25 locations, 100 machines, 129 services, 1000 processes and 0 balancePenalties with a search space of 10^2000.
model_a2_4 has 12 resources, 5 neighborhoods, 25 locations, 50 machines, 180 services, 1000 processes and 1 balancePenalties with a search space of 10^1698.
model_a2_5 has 12 resources, 5 neighborhoods, 25 locations, 50 machines, 153 services, 1000 processes and 0 balancePenalties with a search space of 10^1698.
model_b_1 has 12 resources, 5 neighborhoods, 10 locations, 100 machines, 2512 services, 5000 processes and 0 balancePenalties with a search space of 10^10000.
model_b_2 has 12 resources, 5 neighborhoods, 10 locations, 100 machines, 2462 services, 5000 processes and 1 balancePenalties with a search space of 10^10000.
model_b_3 has 6 resources, 5 neighborhoods, 10 locations, 100 machines, 15025 services, 20000 processes and 0 balancePenalties with a search space of 10^40000.
model_b_4 has 6 resources, 5 neighborhoods, 50 locations, 500 machines, 1732 services, 20000 processes and 1 balancePenalties with a search space of 10^53979.
model_b_5 has 6 resources, 5 neighborhoods, 10 locations, 100 machines, 35082 services, 40000 processes and 0 balancePenalties with a search space of 10^80000.
model_b_6 has 6 resources, 5 neighborhoods, 50 locations, 200 machines, 14680 services, 40000 processes and 1 balancePenalties with a search space of 10^92041.
model_b_7 has 6 resources, 5 neighborhoods, 50 locations, 4000 machines, 15050 services, 40000 processes and 1 balancePenalties with a search space of 10^144082.
model_b_8 has 3 resources, 5 neighborhoods, 10 locations, 100 machines, 45030 services, 50000 processes and 0 balancePenalties with a search space of 10^100000.
model_b_9 has 3 resources, 5 neighborhoods, 100 locations, 1000 machines, 4609 services, 50000 processes and 1 balancePenalties with a search space of 10^150000.
model_b_10 has 3 resources, 5 neighborhoods, 100 locations, 5000 machines, 4896 services, 50000 processes and 1 balancePenalties with a search space of 10^184948.
Figure 31.7. Domain model
31.13. Vehicle routing
Using a fleet of vehicles, pick up the objects of each customer and bring them to the depot. Each vehicle can service multiple customers, but it has a limited capacity.
Besides the basic case (CVRP), there is also a variant with time windows (CVRPTW).
Hard constraints:
● Vehicle capacity: A vehicle cannot carry more items than its capacity.
● Time windows (only in CVRPTW):
○ Travel time: Traveling from one location to another takes time.
○ Customer service duration: A vehicle must stay at the customer for the length of the service duration.
○ Customer ready time: A vehicle may arrive before the customer’s ready time, but it must wait until the ready time before servicing.
○ Customer due time: A vehicle must arrive on time, before the customer’s due time.
Soft constraints:
● Total distance: Minimize the total distance driven (fuel consumption) of all vehicles.
The capacitated vehicle routing problem (CVRP) and its time-windowed variant (CVRPTW) are defined by the VRP web.
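The time-window rules above chain together per vehicle: the arrival at each customer follows from the previous departure plus travel time, waiting applies when the vehicle is early, and the due time is checked against the arrival. A minimal sketch (method and parameter names are hypothetical, not the shipped example code):

```java
// Illustrative sketch of the CVRPTW timing rules (names are hypothetical).
class TimeWindowArithmetic {
    // Departure time from a customer, given the previous stop's departure.
    static long departureTime(long previousDeparture, long travelTime,
                              long readyTime, long serviceDuration) {
        long arrival = previousDeparture + travelTime;
        // Arriving early is allowed, but service waits until the ready time.
        long serviceStart = Math.max(arrival, readyTime);
        return serviceStart + serviceDuration;
    }

    // Hard-constraint check: the vehicle must arrive before the due time.
    static boolean arrivesOnTime(long previousDeparture, long travelTime, long dueTime) {
        return previousDeparture + travelTime <= dueTime;
    }
}
```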
Figure 31.8. Value proposition
Problem size
CVRP instances (without time windows):
belgium-n50-k10 has 1 depots, 10 vehicles and 49 customers with a search space of 10^74.
belgium-n100-k10 has 1 depots, 10 vehicles and 99 customers with a search space of 10^170.
belgium-n500-k20 has 1 depots, 20 vehicles and 499 customers with a search space of 10^1168.
belgium-n1000-k20 has 1 depots, 20 vehicles and 999 customers with a search space of 10^2607.
belgium-n2750-k55 has 1 depots, 55 vehicles and 2749 customers with a search space of 10^8380.
belgium-road-km-n50-k10 has 1 depots, 10 vehicles and 49 customers with a search space of 10^74.
belgium-road-km-n100-k10 has 1 depots, 10 vehicles and 99 customers with a search space of 10^170.
belgium-road-km-n500-k20 has 1 depots, 20 vehicles and 499 customers with a search space of 10^1168.
belgium-road-km-n1000-k20 has 1 depots, 20 vehicles and 999 customers with a search space of 10^2607.
belgium-road-km-n2750-k55 has 1 depots, 55 vehicles and 2749 customers with a search space of 10^8380.
belgium-road-time-n50-k10 has 1 depots, 10 vehicles and 49 customers with a search space of 10^74.
belgium-road-time-n100-k10 has 1 depots, 10 vehicles and 99 customers with a search space of 10^170.
belgium-road-time-n500-k20 has 1 depots, 20 vehicles and 499 customers with a search space of 10^1168.
belgium-road-time-n1000-k20 has 1 depots, 20 vehicles and 999 customers with a search space of 10^2607.
belgium-road-time-n2750-k55 has 1 depots, 55 vehicles and 2749 customers with a search space of 10^8380.
belgium-d2-n50-k10 has 2 depots, 10 vehicles and 48 customers with a search space of 10^74.
belgium-d3-n100-k10 has 3 depots, 10 vehicles and 97 customers with a search space of 10^170.
belgium-d5-n500-k20 has 5 depots, 20 vehicles and 495 customers with a search space of 10^1168.
belgium-d8-n1000-k20 has 8 depots, 20 vehicles and 992 customers with a search space of 10^2607.
belgium-d10-n2750-k55 has 10 depots, 55 vehicles and 2740 customers with a search space of 10^8380.
A-n32-k5 has 1 depots, 5 vehicles and 31 customers with a search space of 10^40.
A-n33-k5 has 1 depots, 5 vehicles and 32 customers with a search space of 10^41.
A-n33-k6 has 1 depots, 6 vehicles and 32 customers with a search space of 10^42.
A-n34-k5 has 1 depots, 5 vehicles and 33 customers with a search space of 10^43.
A-n36-k5 has 1 depots, 5 vehicles and 35 customers with a search space of 10^46.
A-n37-k5 has 1 depots, 5 vehicles and 36 customers with a search space of 10^48.
A-n37-k6 has 1 depots, 6 vehicles and 36 customers with a search space of 10^49.
A-n38-k5 has 1 depots, 5 vehicles and 37 customers with a search space of 10^49.
A-n39-k5 has 1 depots, 5 vehicles and 38 customers with a search space of 10^51.
A-n39-k6 has 1 depots, 6 vehicles and 38 customers with a search space of 10^52.
A-n44-k7 has 1 depots, 7 vehicles and 43 customers with a search space of 10^61.
A-n45-k6 has 1 depots, 6 vehicles and 44 customers with a search space of 10^62.
A-n45-k7 has 1 depots, 7 vehicles and 44 customers with a search space of 10^63.
A-n46-k7 has 1 depots, 7 vehicles and 45 customers with a search space of 10^65.
A-n48-k7 has 1 depots, 7 vehicles and 47 customers with a search space of 10^68.
A-n53-k7 has 1 depots, 7 vehicles and 52 customers with a search space of 10^77.
A-n54-k7 has 1 depots, 7 vehicles and 53 customers with a search space of 10^79.
A-n55-k9 has 1 depots, 9 vehicles and 54 customers with a search space of 10^82.
A-n60-k9 has 1 depots, 9 vehicles and 59 customers with a search space of 10^91.
A-n61-k9 has 1 depots, 9 vehicles and 60 customers with a search space of 10^93.
A-n62-k8 has 1 depots, 8 vehicles and 61 customers with a search space of 10^94.
A-n63-k9 has 1 depots, 9 vehicles and 62 customers with a search space of 10^97.
A-n63-k10 has 1 depots, 10 vehicles and 62 customers with a search space of 10^98.
A-n64-k9 has 1 depots, 9 vehicles and 63 customers with a search space of 10^99.
A-n65-k9 has 1 depots, 9 vehicles and 64 customers with a search space of 10^101.
A-n69-k9 has 1 depots, 9 vehicles and 68 customers with a search space of 10^108.
A-n80-k10 has 1 depots, 10 vehicles and 79 customers with a search space of 10^130.
F-n45-k4 has 1 depots, 4 vehicles and 44 customers with a search space of 10^60.
F-n72-k4 has 1 depots, 4 vehicles and 71 customers with a search space of 10^108.
F-n135-k7 has 1 depots, 7 vehicles and 134 customers with a search space of 10^240.
CVRPTW instances (with time windows):
belgium-tw-d2-n50-k10 has 2 depots, 10 vehicles and 48 customers with a search space of 10^74.
belgium-tw-d3-n100-k10 has 3 depots, 10 vehicles and 97 customers with a search space of 10^170.
belgium-tw-d5-n500-k20 has 5 depots, 20 vehicles and 495 customers with a search space of 10^1168.
belgium-tw-d8-n1000-k20 has 8 depots, 20 vehicles and 992 customers with a search space of 10^2607.
belgium-tw-d10-n2750-k55 has 10 depots, 55 vehicles and 2740 customers with a search space of 10^8380.
belgium-tw-n50-k10 has 1 depots, 10 vehicles and 49 customers with a search space of 10^74.
belgium-tw-n100-k10 has 1 depots, 10 vehicles and 99 customers with a search space of 10^170.
belgium-tw-n500-k20 has 1 depots, 20 vehicles and 499 customers with a search space of 10^1168.
belgium-tw-n1000-k20 has 1 depots, 20 vehicles and 999 customers with a search space of 10^2607.
belgium-tw-n2750-k55 has 1 depots, 55 vehicles and 2749 customers with a search space of 10^8380.
Solomon_025_C101 has 1 depots, 25 vehicles and 25 customers with a search space of 10^40.
Solomon_025_C201 has 1 depots, 25 vehicles and 25 customers with a search space of 10^40.
Solomon_025_R101 has 1 depots, 25 vehicles and 25 customers with a search space of 10^40.
Solomon_025_R201 has 1 depots, 25 vehicles and 25 customers with a search space of 10^40.
Solomon_025_RC101 has 1 depots, 25 vehicles and 25 customers with a search space of 10^40.
Solomon_025_RC201 has 1 depots, 25 vehicles and 25 customers with a search space of 10^40.
Solomon_100_C101 has 1 depots, 25 vehicles and 100 customers with a search space of 10^185.
Solomon_100_C201 has 1 depots, 25 vehicles and 100 customers with a search space of 10^185.
Solomon_100_R101 has 1 depots, 25 vehicles and 100 customers with a search space of 10^185.
Solomon_100_R201 has 1 depots, 25 vehicles and 100 customers with a search space of 10^185.
Solomon_100_RC101 has 1 depots, 25 vehicles and 100 customers with a search space of 10^185.
Solomon_100_RC201 has 1 depots, 25 vehicles and 100 customers with a search space of 10^185.
Homberger_0200_C1_2_1 has 1 depots, 50 vehicles and 200 customers with a search space of 10^429.
Homberger_0200_C2_2_1 has 1 depots, 50 vehicles and 200 customers with a search space of 10^429.
Homberger_0200_R1_2_1 has 1 depots, 50 vehicles and 200 customers with a search space of 10^429.
Homberger_0200_R2_2_1 has 1 depots, 50 vehicles and 200 customers with a search space of 10^429.
Homberger_0200_RC1_2_1 has 1 depots, 50 vehicles and 200 customers with a search space of 10^429.
Homberger_0200_RC2_2_1 has 1 depots, 50 vehicles and 200 customers with a search space of 10^429.
Homberger_0400_C1_4_1 has 1 depots, 100 vehicles and 400 customers with a search space of 10^978.
Homberger_0400_C2_4_1 has 1 depots, 100 vehicles and 400 customers with a search space of 10^978.
Homberger_0400_R1_4_1 has 1 depots, 100 vehicles and 400 customers with a search space of 10^978.
Homberger_0400_R2_4_1 has 1 depots, 100 vehicles and 400 customers with a search space of 10^978.
Homberger_0400_RC1_4_1 has 1 depots, 100 vehicles and 400 customers with a search space of 10^978.
Homberger_0400_RC2_4_1 has 1 depots, 100 vehicles and 400 customers with a search space of 10^978.
Homberger_0600_C1_6_1 has 1 depots, 150 vehicles and 600 customers with a search space of 10^1571.
Homberger_0600_C2_6_1 has 1 depots, 150 vehicles and 600 customers with a search space of 10^1571.
Homberger_0600_R1_6_1 has 1 depots, 150 vehicles and 600 customers with a search space of 10^1571.
Homberger_0600_R2_6_1 has 1 depots, 150 vehicles and 600 customers with a search space of 10^1571.
Homberger_0600_RC1_6_1 has 1 depots, 150 vehicles and 600 customers with a search space of 10^1571.
Homberger_0600_RC2_6_1 has 1 depots, 150 vehicles and 600 customers with a search space of 10^1571.
Homberger_0800_C1_8_1 has 1 depots, 200 vehicles and 800 customers with a search space of 10^2195.
Homberger_0800_C2_8_1 has 1 depots, 200 vehicles and 800 customers with a search space of 10^2195.
Homberger_0800_R1_8_1 has 1 depots, 200 vehicles and 800 customers with a search space of 10^2195.
Homberger_0800_R2_8_1 has 1 depots, 200 vehicles and 800 customers with a search space of 10^2195.
Homberger_0800_RC1_8_1 has 1 depots, 200 vehicles and 800 customers with a search space of 10^2195.
Homberger_0800_RC2_8_1 has 1 depots, 200 vehicles and 800 customers with a search space of 10^2195.
Homberger_1000_C110_1 has 1 depots, 250 vehicles and 1000 customers with a search space of 10^2840.
Homberger_1000_C210_1 has 1 depots, 250 vehicles and 1000 customers with a search space of 10^2840.
Homberger_1000_R110_1 has 1 depots, 250 vehicles and 1000 customers with a search space of 10^2840.
Homberger_1000_R210_1 has 1 depots, 250 vehicles and 1000 customers with a search space of 10^2840.
Homberger_1000_RC110_1 has 1 depots, 250 vehicles and 1000 customers with a search space of 10^2840.
Homberger_1000_RC210_1 has 1 depots, 250 vehicles and 1000 customers with a search space of 10^2840.
31.13.1. Domain model for Vehicle routing
The vehicle routing with time windows domain model makes heavy use of the shadow variable feature. This allows it to express its constraints more naturally, because properties such as arrivalTime
and departureTime are directly available on the domain model.
Road Distances Instead of Air Distances
In the real world, vehicles cannot follow a straight line from location to location: they have to use roads and highways. From a business point of view, this matters a lot.
For the optimization algorithm, this does not matter much, as long as the distance between two points can be looked up (and is preferably precalculated). The road cost does not even need to be a
distance. It can also be travel time, fuel cost, or a weighted function of those. There are several technologies available to precalculate road costs, such as GraphHopper (an embeddable, offline
Java engine), Open MapQuest (a web service), and Google Maps Client API (a web service).
There are also several technologies to render it, such as Leaflet and Google Maps for developers.
It is even possible to render the actual road routes with GraphHopper or Google Map Directions, but because of route overlaps on highways, it can become harder to see the standstill order.
Take special care that the road costs between two points use the same optimization criteria as the ones used in OptaPlanner. For example, GraphHopper returns the fastest route by default, not the
shortest route. Do not use the km (or miles) distances of the fastest GPS routes to optimize the shortest trip in OptaPlanner: this leads to a suboptimal solution.
Contrary to popular belief, most users do not want the shortest route: they want the fastest route instead. They prefer highways over normal roads. They prefer normal roads over dirt roads. In the
real world, the fastest and shortest route are rarely the same.
31.14. Project job scheduling
Schedule all jobs in time and execution mode to minimize project delays. Each job is part of a project. A job can be executed in different ways: each way is an execution mode that implies a different
duration but also different resource usages. This is a form of flexible job shop scheduling.
Hard constraints:
● Job precedence: a job can only start when all its predecessor jobs are finished.
● Resource capacity: do not use more resources than available.
○ Resources are local (shared between jobs of the same project) or global (shared between all jobs)
○ Resources are renewable (capacity available per day) or nonrenewable (capacity available for all days)
Medium constraints:
● Total project delay: minimize the duration (makespan) of each project.
Soft constraints:
● Total makespan: minimize the duration of the whole multi-project schedule.
The problem is defined by the MISTA 2013 challenge.
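The job precedence hard constraint can be sketched as a plain feasibility check (hypothetical names; the real example scores this incrementally rather than with a full scan):

```java
// Sketch (hypothetical names): a job may only start when all of its
// predecessor jobs are finished.
public class Precedence {
    /** true iff startTime[j] >= endTime[p] for every predecessor p of job j. */
    public static boolean isFeasible(int[] startTime, int[] endTime, int[][] predecessors) {
        for (int j = 0; j < startTime.length; j++) {
            for (int p : predecessors[j]) {
                if (startTime[j] < endTime[p]) {
                    return false; // job j starts before predecessor p finishes
                }
            }
        }
        return true;
    }
}
```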
Problem size
Schedule A-1 has 2 projects, 24 jobs, 64 execution modes, 7 resources and 150 resource requirements.
Schedule A-2 has 2 projects, 44 jobs, 124 execution modes, 7 resources and 420 resource requirements.
Schedule A-3 has 2 projects, 64 jobs, 184 execution modes, 7 resources and 630 resource requirements.
Schedule A-4 has 5 projects, 60 jobs, 160 execution modes, 16 resources and 390 resource requirements.
Schedule A-5 has 5 projects, 110 jobs, 310 execution modes, 16 resources and 900 resource requirements.
Schedule A-6 has 5 projects, 160 jobs, 460 execution modes, 16 resources and 1440 resource requirements.
Schedule A-7 has 10 projects, 120 jobs, 320 execution modes, 22 resources and 900 resource requirements.
Schedule A-8 has 10 projects, 220 jobs, 620 execution modes, 22 resources and 1860 resource requirements.
Schedule A-9 has 10 projects, 320 jobs, 920 execution modes, 31 resources and 2880 resource requirements.
Schedule A-10 has 10 projects, 320 jobs, 920 execution modes, 31 resources and 2970 resource requirements.
Schedule B-1 has 10 projects, 120 jobs, 320 execution modes, 31 resources and 900 resource requirements.
Schedule B-2 has 10 projects, 220 jobs, 620 execution modes, 22 resources and 1740 resource requirements.
Schedule B-3 has 10 projects, 320 jobs, 920 execution modes, 31 resources and 3060 resource requirements.
Schedule B-4 has 15 projects, 180 jobs, 480 execution modes, 46 resources and 1530 resource requirements.
Schedule B-5 has 15 projects, 330 jobs, 930 execution modes, 46 resources and 2760 resource requirements.
Schedule B-6 has 15 projects, 480 jobs, 1380 execution modes, 46 resources and 4500 resource requirements.
Schedule B-7 has 20 projects, 240 jobs, 640 execution modes, 61 resources and 1710 resource requirements.
Schedule B-8 has 20 projects, 440 jobs, 1240 execution modes, 42 resources and 3180 resource requirements.
Schedule B-9 has 20 projects, 640 jobs, 1840 execution modes, 61 resources and 5940 resource requirements.
Schedule B-10 has 20 projects, 460 jobs, 1300 execution modes, 42 resources and 4260 resource requirements.
31.15. Task assigning
Assign each task to a spot in an employee’s queue. Each task has a duration which is affected by the employee’s affinity level with the task’s customer.
Hard constraints:
● Skill: Each task requires one or more skills. The employee must possess all these skills.
Soft level 0 constraints:
● Critical tasks: Complete critical tasks first, sooner than major and minor tasks.
Soft level 1 constraints:
● Minimize makespan: Reduce the time to complete all tasks.
○ Start with the longest working employee first, then the second longest working employee and so forth, to create fairness and load balancing.
Soft level 2 constraints:
● Major tasks: Complete major tasks as soon as possible, sooner than minor tasks.
Soft level 3 constraints:
● Minor tasks: Complete minor tasks as soon as possible.
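The hard skill constraint above boils down to a subset test (a sketch with hypothetical names; the real example expresses this as a score rule):

```java
import java.util.Set;

// Sketch (hypothetical names): the hard skill constraint from task assigning.
public class SkillCheck {
    /** An assignment is feasible only if the employee possesses
     *  every skill the task requires. */
    public static boolean canAssign(Set<String> employeeSkills, Set<String> requiredSkills) {
        return employeeSkills.containsAll(requiredSkills);
    }
}
```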
Figure 31.9. Value proposition
Problem size
24tasks-8employees has 24 tasks, 6 skills, 8 employees, 4 task types and 4 customers with a search space of 10^30.
50tasks-5employees has 50 tasks, 5 skills, 5 employees, 10 task types and 10 customers with a search space of 10^69.
100tasks-5employees has 100 tasks, 5 skills, 5 employees, 20 task types and 15 customers with a search space of 10^164.
500tasks-20employees has 500 tasks, 6 skills, 20 employees, 100 task types and 60 customers with a search space of 10^1168.
Figure 31.10. Domain model
31.16. Exam timetabling (ITC 2007 track 1 - Examination)
Schedule each exam into a period and into a room. Multiple exams can share the same room during the same period.
Hard constraints:
● Exam conflict: Two exams that share students must not occur in the same period.
● Room capacity: A room’s seating capacity must suffice at all times.
● Period duration: A period’s duration must suffice for all of its exams.
● Period related hard constraints (specified per dataset):
○ Coincidence: Two specified exams must use the same period (but possibly another room).
○ Exclusion: Two specified exams must not use the same period.
○ After: A specified exam must occur in a period after another specified exam’s period.
● Room related hard constraints (specified per dataset):
○ Exclusive: One specified exam should not have to share its room with any other exam.
Soft constraints (each of which has a parametrized penalty):
● The same student should not have two exams in a row.
● The same student should not have two exams on the same day.
● Period spread: Two exams that share students should be a number of periods apart.
● Mixed durations: Two exams that share a room should not have different durations.
● Front load: Large exams should be scheduled earlier in the schedule.
● Period penalty (specified per dataset): Some periods have a penalty when used.
● Room penalty (specified per dataset): Some rooms have a penalty when used.
It uses large test data sets of real-life universities.
The problem is defined by the International Timetabling Competition 2007 track 1. Geoffrey De Smet finished 4th in that competition with a very early version of OptaPlanner. Many improvements have
been made since then.
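The exam conflict hard constraint can be sketched in plain Java (hypothetical names; the real example detects shared students through precomputed conflict data rather than set intersection per move):

```java
import java.util.HashSet;
import java.util.Set;

// Sketch (hypothetical names): two exams that share students
// must not occur in the same period.
public class ExamConflict {
    public static boolean conflicts(Set<Integer> studentsA, int periodA,
                                    Set<Integer> studentsB, int periodB) {
        if (periodA != periodB) {
            return false; // different periods never conflict
        }
        Set<Integer> shared = new HashSet<>(studentsA);
        shared.retainAll(studentsB); // students taking both exams
        return !shared.isEmpty();
    }
}
```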
Problem Size
exam_comp_set1 has 7883 students, 607 exams, 54 periods, 7 rooms, 12 period constraints and 0 room constraints with a search space of 10^1564.
exam_comp_set2 has 12484 students, 870 exams, 40 periods, 49 rooms, 12 period constraints and 2 room constraints with a search space of 10^2864.
exam_comp_set3 has 16365 students, 934 exams, 36 periods, 48 rooms, 168 period constraints and 15 room constraints with a search space of 10^3023.
exam_comp_set4 has 4421 students, 273 exams, 21 periods, 1 rooms, 40 period constraints and 0 room constraints with a search space of 10^360.
exam_comp_set5 has 8719 students, 1018 exams, 42 periods, 3 rooms, 27 period constraints and 0 room constraints with a search space of 10^2138.
exam_comp_set6 has 7909 students, 242 exams, 16 periods, 8 rooms, 22 period constraints and 0 room constraints with a search space of 10^509.
exam_comp_set7 has 13795 students, 1096 exams, 80 periods, 15 rooms, 28 period constraints and 0 room constraints with a search space of 10^3374.
exam_comp_set8 has 7718 students, 598 exams, 80 periods, 8 rooms, 20 period constraints and 1 room constraints with a search space of 10^1678.
31.16.1. Domain model for exam timetabling
The following diagram shows the main examination domain classes:
Figure 31.11. Examination domain class diagram
Notice that we’ve split up the exam concept into an Exam class and a Topic class. The Exam instances change during solving (this is the planning entity class), when their period or room property
changes. The Topic, Period and Room instances never change during solving (these are problem facts, just like some other classes).
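The split can be sketched in plain Java (annotations omitted for brevity; in the real example, Exam carries the `@PlanningEntity`/`@PlanningVariable` annotations while Topic, Period and Room are problem facts):

```java
// Plain-Java sketch of the Exam/Topic split described above.
public class Examination {
    static final class Topic  { final int id;       Topic(int id) { this.id = id; } }
    static final class Period { final int index;    Period(int index) { this.index = index; } }
    static final class Room   { final String name;  Room(String name) { this.name = name; } }

    static final class Exam {
        final Topic topic; // problem fact: never changes during solving
        Period period;     // planning variable: changes during solving
        Room room;         // planning variable: changes during solving
        Exam(Topic topic) { this.topic = topic; }
    }
}
```

Keeping the immutable exam data in Topic means the solver only mutates the small Exam objects, which also keeps solution cloning cheap.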
31.17. Nurse rostering (INRC 2010)
For each shift, assign a nurse to work that shift.
Hard constraints:
● No unassigned shifts (built-in): Every shift needs to be assigned to an employee.
● Shift conflict: An employee can have only one shift per day.
Soft constraints:
● Contract obligations. The business frequently violates these, so they decided to define these as soft constraints instead of hard constraints.
○ Minimum and maximum assignments: Each employee needs to work more than x shifts and less than y shifts (depending on their contract).
○ Minimum and maximum consecutive working days: Each employee needs to work between x and y days in a row (depending on their contract).
○ Minimum and maximum consecutive free days: Each employee needs to be free between x and y days in a row (depending on their contract).
○ Minimum and maximum consecutive working weekends: Each employee needs to work between x and y weekends in a row (depending on their contract).
○ Complete weekends: Each employee needs to work every day in a weekend or not at all.
○ Identical shift types during weekend: Each weekend shift for the same weekend of the same employee must be the same shift type.
○ Unwanted patterns: A combination of unwanted shift types in a row, for example a late shift followed by an early shift followed by a late shift.
● Employee wishes:
○ Day on request: An employee wants to work on a specific day.
○ Day off request: An employee does not want to work on a specific day.
○ Shift on request: An employee wants to be assigned to a specific shift.
○ Shift off request: An employee does not want to be assigned to a specific shift.
● Alternative skill: An employee assigned to a shift should have a proficiency in every skill required by that shift.
The problem is defined by the International Nurse Rostering Competition 2010.
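The shift conflict hard constraint can be sketched as a duplicate check over (employee, day) pairs (hypothetical encoding; the real example scores this with rules over ShiftAssignment instances):

```java
import java.util.HashSet;
import java.util.Set;

// Sketch (hypothetical encoding): an employee can have only one shift per day.
// Shift i is worked by employeeOfShift[i] on dayOfShift[i].
public class ShiftConflict {
    /** Counts shift assignments that put an employee on a day they already work. */
    public static int countConflicts(int[] employeeOfShift, int[] dayOfShift) {
        Set<Long> seen = new HashSet<>();
        int conflicts = 0;
        for (int i = 0; i < employeeOfShift.length; i++) {
            long key = employeeOfShift[i] * 100_000L + dayOfShift[i]; // pack the pair
            if (!seen.add(key)) {
                conflicts++; // same employee already works that day
            }
        }
        return conflicts;
    }
}
```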
Figure 31.12. Value proposition
Problem size
There are three dataset types:
● Sprint: must be solved in seconds.
● Medium: must be solved in minutes.
● Long: must be solved in hours.
toy1 has 1 skills, 3 shiftTypes, 2 patterns, 1 contracts, 6 employees, 7 shiftDates, 35 shiftAssignments and 0 requests with a search space of 10^27.
toy2 has 1 skills, 3 shiftTypes, 3 patterns, 2 contracts, 20 employees, 28 shiftDates, 180 shiftAssignments and 140 requests with a search space of 10^234.
sprint01 has 1 skills, 4 shiftTypes, 3 patterns, 4 contracts, 10 employees, 28 shiftDates, 152 shiftAssignments and 150 requests with a search space of 10^152.
sprint02 has 1 skills, 4 shiftTypes, 3 patterns, 4 contracts, 10 employees, 28 shiftDates, 152 shiftAssignments and 150 requests with a search space of 10^152.
sprint03 has 1 skills, 4 shiftTypes, 3 patterns, 4 contracts, 10 employees, 28 shiftDates, 152 shiftAssignments and 150 requests with a search space of 10^152.
sprint04 has 1 skills, 4 shiftTypes, 3 patterns, 4 contracts, 10 employees, 28 shiftDates, 152 shiftAssignments and 150 requests with a search space of 10^152.
sprint05 has 1 skills, 4 shiftTypes, 3 patterns, 4 contracts, 10 employees, 28 shiftDates, 152 shiftAssignments and 150 requests with a search space of 10^152.
sprint06 has 1 skills, 4 shiftTypes, 3 patterns, 4 contracts, 10 employees, 28 shiftDates, 152 shiftAssignments and 150 requests with a search space of 10^152.
sprint07 has 1 skills, 4 shiftTypes, 3 patterns, 4 contracts, 10 employees, 28 shiftDates, 152 shiftAssignments and 150 requests with a search space of 10^152.
sprint08 has 1 skills, 4 shiftTypes, 3 patterns, 4 contracts, 10 employees, 28 shiftDates, 152 shiftAssignments and 150 requests with a search space of 10^152.
sprint09 has 1 skills, 4 shiftTypes, 3 patterns, 4 contracts, 10 employees, 28 shiftDates, 152 shiftAssignments and 150 requests with a search space of 10^152.
sprint10 has 1 skills, 4 shiftTypes, 3 patterns, 4 contracts, 10 employees, 28 shiftDates, 152 shiftAssignments and 150 requests with a search space of 10^152.
sprint_hint01 has 1 skills, 4 shiftTypes, 8 patterns, 3 contracts, 10 employees, 28 shiftDates, 152 shiftAssignments and 150 requests with a search space of 10^152.
sprint_hint02 has 1 skills, 4 shiftTypes, 0 patterns, 3 contracts, 10 employees, 28 shiftDates, 152 shiftAssignments and 150 requests with a search space of 10^152.
sprint_hint03 has 1 skills, 4 shiftTypes, 8 patterns, 3 contracts, 10 employees, 28 shiftDates, 152 shiftAssignments and 150 requests with a search space of 10^152.
sprint_late01 has 1 skills, 4 shiftTypes, 8 patterns, 3 contracts, 10 employees, 28 shiftDates, 152 shiftAssignments and 150 requests with a search space of 10^152.
sprint_late02 has 1 skills, 3 shiftTypes, 4 patterns, 3 contracts, 10 employees, 28 shiftDates, 144 shiftAssignments and 139 requests with a search space of 10^144.
sprint_late03 has 1 skills, 4 shiftTypes, 8 patterns, 3 contracts, 10 employees, 28 shiftDates, 160 shiftAssignments and 150 requests with a search space of 10^160.
sprint_late04 has 1 skills, 4 shiftTypes, 8 patterns, 3 contracts, 10 employees, 28 shiftDates, 160 shiftAssignments and 150 requests with a search space of 10^160.
sprint_late05 has 1 skills, 4 shiftTypes, 8 patterns, 3 contracts, 10 employees, 28 shiftDates, 152 shiftAssignments and 150 requests with a search space of 10^152.
sprint_late06 has 1 skills, 4 shiftTypes, 0 patterns, 3 contracts, 10 employees, 28 shiftDates, 152 shiftAssignments and 150 requests with a search space of 10^152.
sprint_late07 has 1 skills, 4 shiftTypes, 0 patterns, 3 contracts, 10 employees, 28 shiftDates, 152 shiftAssignments and 150 requests with a search space of 10^152.
sprint_late08 has 1 skills, 4 shiftTypes, 0 patterns, 3 contracts, 10 employees, 28 shiftDates, 152 shiftAssignments and 0 requests with a search space of 10^152.
sprint_late09 has 1 skills, 4 shiftTypes, 0 patterns, 3 contracts, 10 employees, 28 shiftDates, 152 shiftAssignments and 0 requests with a search space of 10^152.
sprint_late10 has 1 skills, 4 shiftTypes, 0 patterns, 3 contracts, 10 employees, 28 shiftDates, 152 shiftAssignments and 150 requests with a search space of 10^152.
medium01 has 1 skills, 4 shiftTypes, 0 patterns, 4 contracts, 31 employees, 28 shiftDates, 608 shiftAssignments and 403 requests with a search space of 10^906.
medium02 has 1 skills, 4 shiftTypes, 0 patterns, 4 contracts, 31 employees, 28 shiftDates, 608 shiftAssignments and 403 requests with a search space of 10^906.
medium03 has 1 skills, 4 shiftTypes, 0 patterns, 4 contracts, 31 employees, 28 shiftDates, 608 shiftAssignments and 403 requests with a search space of 10^906.
medium04 has 1 skills, 4 shiftTypes, 0 patterns, 4 contracts, 31 employees, 28 shiftDates, 608 shiftAssignments and 403 requests with a search space of 10^906.
medium05 has 1 skills, 4 shiftTypes, 0 patterns, 4 contracts, 31 employees, 28 shiftDates, 608 shiftAssignments and 403 requests with a search space of 10^906.
medium_hint01 has 1 skills, 4 shiftTypes, 7 patterns, 4 contracts, 30 employees, 28 shiftDates, 428 shiftAssignments and 390 requests with a search space of 10^632.
medium_hint02 has 1 skills, 4 shiftTypes, 7 patterns, 3 contracts, 30 employees, 28 shiftDates, 428 shiftAssignments and 390 requests with a search space of 10^632.
medium_hint03 has 1 skills, 4 shiftTypes, 7 patterns, 4 contracts, 30 employees, 28 shiftDates, 428 shiftAssignments and 390 requests with a search space of 10^632.
medium_late01 has 1 skills, 4 shiftTypes, 7 patterns, 4 contracts, 30 employees, 28 shiftDates, 424 shiftAssignments and 390 requests with a search space of 10^626.
medium_late02 has 1 skills, 4 shiftTypes, 7 patterns, 3 contracts, 30 employees, 28 shiftDates, 428 shiftAssignments and 390 requests with a search space of 10^632.
medium_late03 has 1 skills, 4 shiftTypes, 0 patterns, 4 contracts, 30 employees, 28 shiftDates, 428 shiftAssignments and 390 requests with a search space of 10^632.
medium_late04 has 1 skills, 4 shiftTypes, 7 patterns, 3 contracts, 30 employees, 28 shiftDates, 416 shiftAssignments and 390 requests with a search space of 10^614.
medium_late05 has 2 skills, 5 shiftTypes, 7 patterns, 4 contracts, 30 employees, 28 shiftDates, 452 shiftAssignments and 390 requests with a search space of 10^667.
long01 has 2 skills, 5 shiftTypes, 3 patterns, 3 contracts, 49 employees, 28 shiftDates, 740 shiftAssignments and 735 requests with a search space of 10^1250.
long02 has 2 skills, 5 shiftTypes, 3 patterns, 3 contracts, 49 employees, 28 shiftDates, 740 shiftAssignments and 735 requests with a search space of 10^1250.
long03 has 2 skills, 5 shiftTypes, 3 patterns, 3 contracts, 49 employees, 28 shiftDates, 740 shiftAssignments and 735 requests with a search space of 10^1250.
long04 has 2 skills, 5 shiftTypes, 3 patterns, 3 contracts, 49 employees, 28 shiftDates, 740 shiftAssignments and 735 requests with a search space of 10^1250.
long05 has 2 skills, 5 shiftTypes, 3 patterns, 3 contracts, 49 employees, 28 shiftDates, 740 shiftAssignments and 735 requests with a search space of 10^1250.
long_hint01 has 2 skills, 5 shiftTypes, 9 patterns, 3 contracts, 50 employees, 28 shiftDates, 740 shiftAssignments and 0 requests with a search space of 10^1257.
long_hint02 has 2 skills, 5 shiftTypes, 7 patterns, 3 contracts, 50 employees, 28 shiftDates, 740 shiftAssignments and 0 requests with a search space of 10^1257.
long_hint03 has 2 skills, 5 shiftTypes, 7 patterns, 3 contracts, 50 employees, 28 shiftDates, 740 shiftAssignments and 0 requests with a search space of 10^1257.
long_late01 has 2 skills, 5 shiftTypes, 9 patterns, 3 contracts, 50 employees, 28 shiftDates, 752 shiftAssignments and 0 requests with a search space of 10^1277.
long_late02 has 2 skills, 5 shiftTypes, 9 patterns, 4 contracts, 50 employees, 28 shiftDates, 752 shiftAssignments and 0 requests with a search space of 10^1277.
long_late03 has 2 skills, 5 shiftTypes, 9 patterns, 3 contracts, 50 employees, 28 shiftDates, 752 shiftAssignments and 0 requests with a search space of 10^1277.
long_late04 has 2 skills, 5 shiftTypes, 9 patterns, 4 contracts, 50 employees, 28 shiftDates, 752 shiftAssignments and 0 requests with a search space of 10^1277.
long_late05 has 2 skills, 5 shiftTypes, 9 patterns, 3 contracts, 50 employees, 28 shiftDates, 740 shiftAssignments and 0 requests with a search space of 10^1257.
Figure 31.13. Domain model
31.18. Traveling tournament problem (TTP)
Schedule matches between n teams.
Hard constraints:
● Each team plays twice against every other team: once home and once away.
● Each team has exactly one match on each timeslot.
● No team must have more than three consecutive home or three consecutive away matches.
● No repeaters: no two consecutive matches of the same two opposing teams.
Soft constraints:
● Minimize the total distance traveled by all teams.
The problem is defined on Michael Trick’s website (which contains the world records too).
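The consecutive home/away hard constraint can be sketched per team (hypothetical encoding: a boolean per timeslot, true for a home match):

```java
// Sketch (hypothetical encoding): no team may play more than maxRun
// consecutive home matches or consecutive away matches.
public class ConsecutiveCheck {
    /** homeSchedule[i] is true for a home match on timeslot i, false for away. */
    public static boolean isFeasible(boolean[] homeSchedule, int maxRun) {
        int run = 1;
        for (int i = 1; i < homeSchedule.length; i++) {
            run = (homeSchedule[i] == homeSchedule[i - 1]) ? run + 1 : 1;
            if (run > maxRun) {
                return false; // run of identical home/away status is too long
            }
        }
        return true;
    }
}
```

For the TTP, `maxRun` is 3; the no-repeaters rule would need the opponent per timeslot as well.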
Problem size
1-nl04 has 6 days, 4 teams and 12 matches with a search space of 10^5.
1-nl06 has 10 days, 6 teams and 30 matches with a search space of 10^19.
1-nl08 has 14 days, 8 teams and 56 matches with a search space of 10^43.
1-nl10 has 18 days, 10 teams and 90 matches with a search space of 10^79.
1-nl12 has 22 days, 12 teams and 132 matches with a search space of 10^126.
1-nl14 has 26 days, 14 teams and 182 matches with a search space of 10^186.
1-nl16 has 30 days, 16 teams and 240 matches with a search space of 10^259.
2-bra24 has 46 days, 24 teams and 552 matches with a search space of 10^692.
3-nfl16 has 30 days, 16 teams and 240 matches with a search space of 10^259.
3-nfl18 has 34 days, 18 teams and 306 matches with a search space of 10^346.
3-nfl20 has 38 days, 20 teams and 380 matches with a search space of 10^447.
3-nfl22 has 42 days, 22 teams and 462 matches with a search space of 10^562.
3-nfl24 has 46 days, 24 teams and 552 matches with a search space of 10^692.
3-nfl26 has 50 days, 26 teams and 650 matches with a search space of 10^838.
3-nfl28 has 54 days, 28 teams and 756 matches with a search space of 10^999.
3-nfl30 has 58 days, 30 teams and 870 matches with a search space of 10^1175.
3-nfl32 has 62 days, 32 teams and 992 matches with a search space of 10^1367.
4-super04 has 6 days, 4 teams and 12 matches with a search space of 10^5.
4-super06 has 10 days, 6 teams and 30 matches with a search space of 10^19.
4-super08 has 14 days, 8 teams and 56 matches with a search space of 10^43.
4-super10 has 18 days, 10 teams and 90 matches with a search space of 10^79.
4-super12 has 22 days, 12 teams and 132 matches with a search space of 10^126.
4-super14 has 26 days, 14 teams and 182 matches with a search space of 10^186.
5-galaxy04 has 6 days, 4 teams and 12 matches with a search space of 10^5.
5-galaxy06 has 10 days, 6 teams and 30 matches with a search space of 10^19.
5-galaxy08 has 14 days, 8 teams and 56 matches with a search space of 10^43.
5-galaxy10 has 18 days, 10 teams and 90 matches with a search space of 10^79.
5-galaxy12 has 22 days, 12 teams and 132 matches with a search space of 10^126.
5-galaxy14 has 26 days, 14 teams and 182 matches with a search space of 10^186.
5-galaxy16 has 30 days, 16 teams and 240 matches with a search space of 10^259.
5-galaxy18 has 34 days, 18 teams and 306 matches with a search space of 10^346.
5-galaxy20 has 38 days, 20 teams and 380 matches with a search space of 10^447.
5-galaxy22 has 42 days, 22 teams and 462 matches with a search space of 10^562.
5-galaxy24 has 46 days, 24 teams and 552 matches with a search space of 10^692.
5-galaxy26 has 50 days, 26 teams and 650 matches with a search space of 10^838.
5-galaxy28 has 54 days, 28 teams and 756 matches with a search space of 10^999.
5-galaxy30 has 58 days, 30 teams and 870 matches with a search space of 10^1175.
5-galaxy32 has 62 days, 32 teams and 992 matches with a search space of 10^1367.
5-galaxy34 has 66 days, 34 teams and 1122 matches with a search space of 10^1576.
5-galaxy36 has 70 days, 36 teams and 1260 matches with a search space of 10^1801.
5-galaxy38 has 74 days, 38 teams and 1406 matches with a search space of 10^2042.
5-galaxy40 has 78 days, 40 teams and 1560 matches with a search space of 10^2301.
31.19. Cheap time scheduling
Schedule all tasks in time and on a machine to minimize power cost. Power prices differ over time. This is a form of job shop scheduling.
Hard constraints:
● Start time limits: Each task must start between its earliest start and latest start limit.
● Maximum capacity: The maximum capacity for each resource for each machine must not be exceeded.
● Startup and shutdown: Each machine must be active in the periods during which it has assigned tasks. Between tasks it is allowed to be idle to avoid startup and shutdown costs.
Medium constraints:
● Power cost: Minimize the total power cost of the whole schedule.
○ Machine power cost: Each active or idle machine consumes power, which incurs a power cost (depending on the power price during that time).
○ Task power cost: Each task consumes power too, which incurs a power cost (depending on the power price during its time).
○ Machine startup and shutdown cost: Every time a machine starts up or shuts down, an extra cost is incurred.
Soft constraints (addendum to the original problem definition):
● Start early: Prefer starting a task sooner rather than later.
The problem is defined by the ICON challenge.
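The machine power cost above can be sketched as a sum over periods (hypothetical names; off periods, where the machine is shut down, are assumed to be excluded by the caller):

```java
// Sketch (hypothetical names): power cost of one machine over the periods
// in which it is on, given a per-period power price.
public class PowerCost {
    /** active[p] is true while the machine runs a task in period p,
     *  false while it idles between tasks. */
    public static long machineCost(long[] pricePerPeriod, boolean[] active,
                                   long activePowerPerPeriod, long idlePowerPerPeriod) {
        long total = 0;
        for (int p = 0; p < pricePerPeriod.length; p++) {
            long power = active[p] ? activePowerPerPeriod : idlePowerPerPeriod;
            total += power * pricePerPeriod[p]; // power consumed times the price then
        }
        return total;
    }
}
```

The trade-off the solver faces is visible here: idling through a cheap period can beat paying a startup and shutdown cost.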
Problem size
sample01 has 3 resources, 2 machines, 288 periods and 25 tasks with a search space of 10^53.
sample02 has 3 resources, 2 machines, 288 periods and 50 tasks with a search space of 10^114.
sample03 has 3 resources, 2 machines, 288 periods and 100 tasks with a search space of 10^226.
sample04 has 3 resources, 5 machines, 288 periods and 100 tasks with a search space of 10^266.
sample05 has 3 resources, 2 machines, 288 periods and 250 tasks with a search space of 10^584.
sample06 has 3 resources, 5 machines, 288 periods and 250 tasks with a search space of 10^673.
sample07 has 3 resources, 2 machines, 288 periods and 1000 tasks with a search space of 10^2388.
sample08 has 3 resources, 5 machines, 288 periods and 1000 tasks with a search space of 10^2748.
sample09 has 4 resources, 20 machines, 288 periods and 2000 tasks with a search space of 10^6668.
instance00 has 1 resources, 10 machines, 288 periods and 200 tasks with a search space of 10^595.
instance01 has 1 resources, 10 machines, 288 periods and 200 tasks with a search space of 10^599.
instance02 has 1 resources, 10 machines, 288 periods and 200 tasks with a search space of 10^599.
instance03 has 1 resources, 10 machines, 288 periods and 200 tasks with a search space of 10^591.
instance04 has 1 resources, 10 machines, 288 periods and 200 tasks with a search space of 10^590.
instance05 has 2 resources, 25 machines, 288 periods and 200 tasks with a search space of 10^667.
instance06 has 2 resources, 25 machines, 288 periods and 200 tasks with a search space of 10^660.
instance07 has 2 resources, 25 machines, 288 periods and 200 tasks with a search space of 10^662.
instance08 has 2 resources, 25 machines, 288 periods and 200 tasks with a search space of 10^651.
instance09 has 2 resources, 25 machines, 288 periods and 200 tasks with a search space of 10^659.
instance10 has 2 resources, 20 machines, 288 periods and 500 tasks with a search space of 10^1657.
instance11 has 2 resources, 20 machines, 288 periods and 500 tasks with a search space of 10^1644.
instance12 has 2 resources, 20 machines, 288 periods and 500 tasks with a search space of 10^1637.
instance13 has 2 resources, 20 machines, 288 periods and 500 tasks with a search space of 10^1659.
instance14 has 2 resources, 20 machines, 288 periods and 500 tasks with a search space of 10^1643.
instance15 has 3 resources, 40 machines, 288 periods and 500 tasks with a search space of 10^1782.
instance16 has 3 resources, 40 machines, 288 periods and 500 tasks with a search space of 10^1778.
instance17 has 3 resources, 40 machines, 288 periods and 500 tasks with a search space of 10^1764.
instance18 has 3 resources, 40 machines, 288 periods and 500 tasks with a search space of 10^1769.
instance19 has 3 resources, 40 machines, 288 periods and 500 tasks with a search space of 10^1778.
instance20 has 3 resources, 50 machines, 288 periods and 1000 tasks with a search space of 10^3689.
instance21 has 3 resources, 50 machines, 288 periods and 1000 tasks with a search space of 10^3678.
instance22 has 3 resources, 50 machines, 288 periods and 1000 tasks with a search space of 10^3706.
instance23 has 3 resources, 50 machines, 288 periods and 1000 tasks with a search space of 10^3676.
instance24 has 3 resources, 50 machines, 288 periods and 1000 tasks with a search space of 10^3681.
instance25 has 3 resources, 60 machines, 288 periods and 1000 tasks with a search space of 10^3774.
instance26 has 3 resources, 60 machines, 288 periods and 1000 tasks with a search space of 10^3737.
instance27 has 3 resources, 60 machines, 288 periods and 1000 tasks with a search space of 10^3744.
instance28 has 3 resources, 60 machines, 288 periods and 1000 tasks with a search space of 10^3731.
instance29 has 3 resources, 60 machines, 288 periods and 1000 tasks with a search space of 10^3746.
instance30 has 4 resources, 70 machines, 288 periods and 2000 tasks with a search space of 10^7718.
instance31 has 4 resources, 70 machines, 288 periods and 2000 tasks with a search space of 10^7740.
instance32 has 4 resources, 70 machines, 288 periods and 2000 tasks with a search space of 10^7686.
instance33 has 4 resources, 70 machines, 288 periods and 2000 tasks with a search space of 10^7672.
instance34 has 4 resources, 70 machines, 288 periods and 2000 tasks with a search space of 10^7695.
instance35 has 4 resources, 80 machines, 288 periods and 2000 tasks with a search space of 10^7807.
instance36 has 4 resources, 80 machines, 288 periods and 2000 tasks with a search space of 10^7814.
instance37 has 4 resources, 80 machines, 288 periods and 2000 tasks with a search space of 10^7764.
instance38 has 4 resources, 80 machines, 288 periods and 2000 tasks with a search space of 10^7736.
instance39 has 4 resources, 80 machines, 288 periods and 2000 tasks with a search space of 10^7783.
instance40 has 4 resources, 90 machines, 288 periods and 4000 tasks with a search space of 10^15976.
instance41 has 4 resources, 90 machines, 288 periods and 4000 tasks with a search space of 10^15935.
instance42 has 4 resources, 90 machines, 288 periods and 4000 tasks with a search space of 10^15887.
instance43 has 4 resources, 90 machines, 288 periods and 4000 tasks with a search space of 10^15896.
instance44 has 4 resources, 90 machines, 288 periods and 4000 tasks with a search space of 10^15885.
instance45 has 4 resources, 100 machines, 288 periods and 5000 tasks with a search space of 10^20173.
instance46 has 4 resources, 100 machines, 288 periods and 5000 tasks with a search space of 10^20132.
instance47 has 4 resources, 100 machines, 288 periods and 5000 tasks with a search space of 10^20126.
instance48 has 4 resources, 100 machines, 288 periods and 5000 tasks with a search space of 10^20110.
instance49 has 4 resources, 100 machines, 288 periods and 5000 tasks with a search space of 10^20078.
31.20. Investment asset class allocation (Portfolio Optimization)
Decide the relative quantity to invest in each asset class.
Hard constraints:
● Risk maximum: the total standard deviation must not be higher than the standard deviation maximum.
● Region maximum: Each region has a quantity maximum.
● Sector maximum: Each sector has a quantity maximum.
Soft constraints:
● Maximize expected return.
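The sector maximum check and the expected return can be sketched in plain Java (hypothetical representation: quantities are fractions of the portfolio; the risk constraint is omitted because it needs the full correlation matrix):

```java
// Sketch (hypothetical representation): asset quantities are fractions
// of the total investment.
public class Portfolio {
    /** true iff no sector's total quantity exceeds its maximum. */
    public static boolean sectorFeasible(double[] quantity, int[] sectorOfAsset,
                                         double[] sectorMaximum) {
        double[] sectorTotal = new double[sectorMaximum.length];
        for (int i = 0; i < quantity.length; i++) {
            sectorTotal[sectorOfAsset[i]] += quantity[i];
        }
        for (int s = 0; s < sectorMaximum.length; s++) {
            if (sectorTotal[s] > sectorMaximum[s]) {
                return false;
            }
        }
        return true;
    }

    /** The soft-constraint objective: the quantity-weighted expected return. */
    public static double expectedReturn(double[] quantity, double[] assetReturn) {
        double total = 0;
        for (int i = 0; i < quantity.length; i++) {
            total += quantity[i] * assetReturn[i];
        }
        return total;
    }
}
```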
Problem size
de_smet_1 has 1 regions, 3 sectors and 11 asset classes with a search space of 10^4.
irrinki_1 has 2 regions, 3 sectors and 6 asset classes with a search space of 10^3.
Larger datasets have not been created or tested yet, but should not pose a problem. A good source of data is this Asset Correlation website.
31.21. Conference scheduling
Assign each conference talk to a timeslot and a room. Timeslots can overlap. The dataset is read from and written to an *.xlsx file that can be edited with LibreOffice or Excel.
Hard constraints:
● Talk type of timeslot: The type of a talk must match the timeslot’s talk type.
● Room unavailable timeslots: A talk’s room must be available during the talk’s timeslot.
● Room conflict: Two talks can’t use the same room during overlapping timeslots.
● Speaker unavailable timeslots: Every talk’s speaker must be available during the talk’s timeslot.
● Speaker conflict: Two talks can’t share a speaker during overlapping timeslots.
● Generic purpose timeslot and room tags:
○ Speaker required timeslot tag: If a speaker has a required timeslot tag, then all of his or her talks must be assigned to a timeslot with that tag.
○ Speaker prohibited timeslot tag: If a speaker has a prohibited timeslot tag, then all of his or her talks cannot be assigned to a timeslot with that tag.
○ Talk required timeslot tag: If a talk has a required timeslot tag, then it must be assigned to a timeslot with that tag.
○ Talk prohibited timeslot tag: If a talk has a prohibited timeslot tag, then it cannot be assigned to a timeslot with that tag.
○ Speaker required room tag: If a speaker has a required room tag, then all of his or her talks must be assigned to a room with that tag.
○ Speaker prohibited room tag: If a speaker has a prohibited room tag, then all of his or her talks cannot be assigned to a room with that tag.
○ Talk required room tag: If a talk has a required room tag, then it must be assigned to a room with that tag.
○ Talk prohibited room tag: If a talk has a prohibited room tag, then it cannot be assigned to a room with that tag.
● Talk mutually-exclusive-talks tag: Talks that share such a tag must not be scheduled in overlapping timeslots.
● Talk prerequisite talks: A talk must be scheduled after all its prerequisite talks.
Soft constraints:
● Theme track conflict: Minimize the number of talks that share a theme tag during overlapping timeslots.
● Sector conflict: Minimize the number of talks that share the same sector tag during overlapping timeslots.
● Content audience level flow violation: For every content tag, schedule the introductory talks before the advanced talks.
● Audience level diversity: For every timeslot, maximize the number of talks with a different audience level.
● Language diversity: For every timeslot, maximize the number of talks with a different language.
● Generic purpose timeslot and room tags:
○ Speaker preferred timeslot tag: If a speaker has a preferred timeslot tag, then all of his or her talks should be assigned to a timeslot with that tag.
○ Speaker undesired timeslot tag: If a speaker has an undesired timeslot tag, then none of his or her talks should be assigned to a timeslot with that tag.
○ Talk preferred timeslot tag: If a talk has a preferred timeslot tag, then it should be assigned to a timeslot with that tag.
○ Talk undesired timeslot tag: If a talk has an undesired timeslot tag, then it should not be assigned to a timeslot with that tag.
○ Speaker preferred room tag: If a speaker has a preferred room tag, then all of his or her talks should be assigned to a room with that tag.
○ Speaker undesired room tag: If a speaker has an undesired room tag, then none of his or her talks should be assigned to a room with that tag.
○ Talk preferred room tag: If a talk has a preferred room tag, then it should be assigned to a room with that tag.
○ Talk undesired room tag: If a talk has an undesired room tag, then it should not be assigned to a room with that tag.
● Same day talks: All talks that share a theme tag or content tag should be scheduled in the minimum number of days (ideally in the same day).
Figure 31.14. Value proposition
Problem size
18talks-6timeslots-5rooms has 18 talks, 6 timeslots and 5 rooms with a search space of 10^26.
36talks-12timeslots-5rooms has 36 talks, 12 timeslots and 5 rooms with a search space of 10^64.
72talks-12timeslots-10rooms has 72 talks, 12 timeslots and 10 rooms with a search space of 10^149.
108talks-18timeslots-10rooms has 108 talks, 18 timeslots and 10 rooms with a search space of 10^243.
216talks-18timeslots-20rooms has 216 talks, 18 timeslots and 20 rooms with a search space of 10^552.
Drive the rock band bus from show to show, but schedule shows only on available days.
Hard constraints:
● Schedule every required show.
● Schedule as many shows as possible.
Medium constraints:
● Maximize revenue opportunity.
● Minimize driving time.
● Visit sooner than later.
Soft constraints:
● Avoid long driving times.
Problem size
47shows has 47 shows with a search space of 10^59.
31.23. Flight crew scheduling
Assign flights to pilots and flight attendants.
Hard constraints:
● Required skill: each flight assignment has a required skill. For example, flight AB0001 requires 2 pilots and 3 flight attendants.
● Flight conflict: each employee can only attend one flight at the same time.
● Transfer between two flights: between two flights, an employee must be able to transfer from the arrival airport to the departure airport. For example, Ann arrives in Brussels at 10:00 and departs from Amsterdam at 15:00.
● Employee unavailability: the employee must be available on the day of the flight. For example, Ann is on PTO on 1-Feb.
Soft constraints:
● First assignment departing from home
● Last assignment arriving at home
● Load balance flight duration total per employee
Problem size
175flights-7days-Europe has 2 skills, 50 airports, 150 employees, 175 flights and 875 flight assignments with a search space of 10^1904.
700flights-28days-Europe has 2 skills, 50 airports, 150 employees, 700 flights and 3500 flight assignments with a search space of 10^7616.
875flights-7days-Europe has 2 skills, 50 airports, 750 employees, 875 flights and 4375 flight assignments with a search space of 10^12578.
175flights-7days-US has 2 skills, 48 airports, 150 employees, 175 flights and 875 flight assignments with a search space of 10^1904.
Prediction of Uranium Adsorption Capacity in Radioactive Wastewater Treatment with Biochar
College of Mechanical and Electrical Engineering, Northeast Forestry University, Harbin 150040, China
Author to whom correspondence should be addressed.
Submission received: 9 January 2024 / Revised: 21 January 2024 / Accepted: 29 January 2024 / Published: 30 January 2024
Recently, Japan’s discharge of wastewater from the Fukushima nuclear disaster into the ocean has attracted widespread attention. To effectively address the challenge of separating uranium, the focus
is on finding a healthy and environmentally friendly way to adsorb uranium using biochar. In this paper, a BP neural network is combined with each of the four meta-heuristic algorithms, namely
Particle Swarm Optimization (PSO), Differential Evolution (DE), Cheetah Optimization (CO) and Fick’s Law Algorithm (FLA), to construct four prediction models for the uranium adsorption capacity in
the treatment of radioactive wastewater with biochar: PSO-BP, DE-BP, CO-BP, FLA-BP. The coefficient of determination (R^2), error rate and CEC test set are used to judge the accuracy of the model based
on the BP neural network. The results show that the Fick’s Law Algorithm (FLA) has a better search ability and convergence speed than the other algorithms. The importance of the input parameters is
quantitatively assessed and ranked using XGBoost in order to analyze which parameters have a greater impact on the predictions of the model, which indicates that the parameters with the greatest
impact are the initial concentration of uranium (C₀, mg/L) and the mass percentage of total carbon (C, %). To sum up, four prediction models can be applied to study the adsorption of uranium by
biochar materials during actual experiments, and the advantage of Fick’s Law Algorithm (FLA) is more obvious. The method of model prediction can significantly reduce the radiation risk caused by
uranium to human health during the actual experiment and provide some reference for the efficient treatment of uranium wastewater by biochar.
1. Introduction
On 24 August 2023, Japan’s Fukushima Daiichi nuclear power plant initiated the discharge of nuclear-contaminated water into the sea. The wastewater, which originated from the operation and
maintenance processes of the plant, contained more than 60 substances, such as radioactive uranium, tritium and plutonium. Uranium is the most essential fuel for nuclear reactors. Uranium mining,
processing, reactor operation and maintenance inevitably produce large amounts of radioactive uranium-containing wastewater. Uranium is a substance with significant radiological, biological and chemical toxicity,
and has been confirmed as a carcinogen. Its radioactive characteristics can destroy human DNA, meaning that the production of blood cells is affected and nerve cells are abnormal or dead, and then
cause serious damage to the human hematopoietic organs, nervous system and reproductive system. Long-term exposure to uranium may increase the risk of malignant diseases such as lung cancer and blood
cancer [
]. The World Health Organization (WHO) guidelines for drinking water quality specify a guideline value of 30 μg/L for uranium in drinking water [
], and the concentration of uranium in Fukushima nuclear wastewater can reach 5 mg/L, more than 160 times the World Health Organization’s guideline value for uranium in drinking water. The impact of
radioactive wastewater on the ecosystem and human health is self-evident [
Therefore, it has become urgent to remove uranium from radioactive wastewater effectively. Various techniques such as adsorption [
], chemical precipitation [
], membrane separation [
] and ion exchange [
] have been used to treat uranium in aqueous solutions. Among all of these technologies, adsorption technologies tend to overcome several positive and negative aspects, especially when aligned with
the waste material and the high-efficiency nature of the process. Among them, the adsorption approach is extensively utilized in the treatment of uranium-containing wastewater because it has the
benefits of a low cost, easy operation and comprehensive source of materials. Mellah et al. [
] showed that activated carbon, a high-temperature biochar, is the oldest and most widely used adsorbent. In the last three years, researchers have been working on the invention of adsorbents for
uranium. Cui et al. [
] synthesized a semiconductor covalent organic framework (NDA-TN-AO) characterized by outstanding photocatalytic and photovoltaic activity, enhancing uranium extraction by producing biotoxic reactive
oxygen species. However, the preparation of this new material is both complex and expensive. In contrast, materials such as natural minerals and biochar are less costly and have a strong adsorption
Biochar is rich in carbon, and its surface pores and active functional groups can adsorb pollutants, playing a significant role in wastewater treatment [
]. The use of biochar for the efficient and practicable removal of uranium from radioactive wastewater has attracted a great deal of attention from scholars. The cost of biochar preparation is low,
and its raw materials come from various sources, such as plant roots and stems, municipal and industrial organic wastes, etc. By converting these wastes into biochar, the effective utilization and
recycling of resources can be realized, which has a positive significance for environmental protection. The different raw materials are composed of elements with varying ratios, resulting in
differences in the physical and chemical features of biochar generated under the same experimental conditions, which can lead to different adsorption effects [
]. For example, wood biomass is high in carbon and hydrogen, while crop residues may be high in oxygen. Biochar generated from certain oxygen-rich biomasses may have a smaller specific surface area
and less developed pore structure, resulting in weaker adsorption properties. Moreover, the adsorption effect of biochar is also affected by physical properties, chemical properties and experimental
conditions. Parab et al. [
] adsorbed uranium in water with waste coconut shell fibers and studied the effect of parameters such as solution pH, adsorption time and temperature on the adsorption effect to find the optimum
conditions for uranium adsorption. Guilhen et al. [
] investigated the impact of pyrolysis temperature on removing U(VI) from biochar. The results revealed that the carbon content grew with increasing pyrolysis temperature and the biochar produced at
350 °C reached 80% of the optimum removal efficiency of uranium from wastewater.
Improving the uranium adsorption capacity of biochar and its relationship with the performance parameters of biochar production are hot research topics nowadays. Traditional experiments on uranium
adsorption in wastewater are complicated, time-consuming, and costly. However, researchers have accumulated a large amount of experimental data, enabling us to reveal these relationships through big
data, artificial intelligence and algorithmic improvements, thereby decreasing the depletion of uranium and the radiation damage to the human body during the actual experiment. Generally speaking,
many scholars use a single prediction model to predict adsorption capacity. This paper tries to predict the adsorption performance of biochar with multiple algorithmic models, and four algorithms
with a BP neural network are selected to construct a model to predict the capacity of biochar to adsorb uranium.
Numerous scholars have chosen to use meta-heuristic algorithms (MHAs) in the study of adsorption capacity because this type of algorithm is a refinement of the heuristic algorithm and a product of
integrating randomized algorithms with local search algorithms [
]. This type of algorithm is highly efficient and flexible, and is able to find the solution near the optimal answer in a shorter period, so in this paper, we also chose meta-heuristic algorithms.
After checking many studies, it was found that the BP neural network is a frequently used artificial neural network model, which can adapt to different problems and data, and the neurons in it can be
computed in parallel, so it has a better processing ability when dealing with large-scale data and tasks [
]. Compared with other methods, the combination of a meta-heuristic algorithm and the BP neural network can not only deal with complex problems better, but also further improve the reliability and
practicability of the model. In this paper, we attempt to combine metaheuristic algorithms with the BP neural network, which can fully use their advantages, improve the performance of optimization
algorithms, and be better applied to neural network training and optimization problems. Moreover, classical meta-heuristic algorithms are compared with new meta-heuristic algorithms, and so two
classical algorithms with global search ability, namely Particle Swarm Optimization (PSO) and Differential Evolution (DE), and two new algorithms with global search ability, namely Cheetah
Optimization (CO) and Fick’s Law Algorithm (FLA), are selected. Among them, Particle Swarm Optimization (PSO) is simple and easy to implement, Differential Evolution (DE) has strong robustness,
Cheetah Optimization (CO) has a wide range of applications, and Fick’s Law Algorithm (FLA) has strong interpretable ability. In this paper, the uranium adsorption capacities of the BP neural network
and the different meta-heuristic algorithms are combined to forecast the uranium adsorption capacity of different biochar materials with numerous experimental data, and we quantitatively assess and
rank the characteristic importance of the input parameters to provide some reference for the development of more efficient and environmentally friendly biochar treatments of uranium wastewater
technology and products.
2. Materials and Methods
2.1. Data Collection and Preprocessing
In this paper, 777 sets of experimental data on uranium adsorption by biochar were collected from numerous studies [
], with 70% of the data being used as the training set and 30% of the data being used as the test set. Firstly, the data were preprocessed, including the identification and replacement of outliers,
the inspection and supplementation of missing values, and data normalization to improve the quality of the dataset. The outliers were detected using the box-graph method, which distributed the data
in boxes with different quantiles to show the distribution characteristics of the data. Outliers were considered as data points that fell outside or away from the box. Once an outlier was detected,
we treated those outliers as missing values. To deal with outliers, we used the Newton interpolation method to construct a local interpolation polynomial, and replaced the outliers with the estimate
of the polynomial at that point. Secondly, the data were normalized, and the min–max normalization method was adopted. Formula (1) transforms the original data linearly to map the result value to the
specified range [
$X^{*} = \frac{X_i - X_{min}}{X_{max} - X_{min}}$
In Formula (1), $X_i$ represents the original data to be normalized, $X^{*}$ is the normalized value, $X_{max}$ represents the maximum value of the initial data, and $X_{min}$ is the minimum value. The normalization process scales the data so that the data are uniformly mapped to the [0, 1] interval and the original data distribution pattern can be maintained [
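The preprocessing steps described above can be sketched as follows: the box-plot (IQR) rule for flagging outliers and the min–max normalization of Formula (1). The sample values are hypothetical, and the Newton-interpolation replacement step is omitted for brevity.

```python
import numpy as np

def iqr_outliers(x, k=1.5):
    """Box-plot rule: flag points outside [Q1 - k*IQR, Q3 + k*IQR]."""
    x = np.asarray(x, dtype=float)
    q1, q3 = np.percentile(x, [25, 75])
    iqr = q3 - q1
    return (x < q1 - k * iqr) | (x > q3 + k * iqr)

def min_max_normalize(x):
    """Formula (1): linearly map values onto the [0, 1] interval."""
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min())

readings = np.array([5.0, 50.0, 100.0, 200.0])  # hypothetical C0 values, mg/L
print(iqr_outliers(np.array([1.0, 2.0, 3.0, 4.0, 100.0])))  # the 100.0 is flagged
print(min_max_normalize(readings))
```

In a real pipeline, the flagged points would be treated as missing and filled by interpolation before normalization, as the text describes.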
According to the experiments on uranium adsorption with biochar in many articles, the physical and chemical properties of biochar and the experimental conditions were taken as input parameters, and
the uranium adsorption capacity of biochar was considered as the output parameter. The physical properties in the input parameters included the specific surface area (SA, m
/g), average pore size (Dav, nm), and total pore volume (VTot, cm
/g) of biochar. Chemical properties included the mass percentage of total carbon (C, %), the molar ratio of oxygen to carbon (O/C), and the molar ratio of oxygen to nitrogen ((O+N)/C). Experimental
conditions included pH, temperature (T, K), initial concentration of uranium (C
, mg/L) and solid–liquid ratio (SLR, g/L) [
2.2. Meta-Heuristic Algorithms
2.2.1. Particle Swarm Optimization Algorithm (PSO)
The concept of the Particle Swarm Optimization Algorithm originates from birds’ swarm foraging behavior. The solution of the optimization problem is abstracted into particles, and all the particles
follow the current optimal particles, beginning from a series of initial solutions to seek the best solution, which is similar to the bird flocks constantly adjusting their direction to find food [
]. Assuming that the problem’s solution to be optimized is N-dimensional, i.e., the solution is related to all N elements, each particle i has a position vector $X_i = (X_{i1}, X_{i2}, \ldots, X_{iN})$ that changes as the number of iterations grows, corresponding to a possible solution to the question. At the same time, particle i also has a velocity vector $V_i = (V_{i1}, V_{i2}, \ldots, V_{iN})$ of the same dimension as $X_i$ that changes continuously as the iteration number becomes higher, which is used to determine in which direction particle i moves from the current position and how far it moves; the speed in every dimension has an upper limit $V_{max}$ [
]. The particle’s velocity and position update formulas are as in (2) and (3):
$V_i^{k+1} = w V_i^k + c_1 r_1 \left( pbest_i^k - X_i^k \right) + c_2 r_2 \left( gbest_i^k - X_i^k \right)$
$X_i^{k+1} = X_i^k + V_i^{k+1}$
where w denotes the inertia weight, which regulates the search of the solution space. $c_1$ and $c_2$ are learning factors that regulate the maximum step length of learning. $r_1$ and $r_2$ are random functions taking values in the range [0, 1] to increase the randomness. $pbest_i^k$ represents the individual optimal position of the i-th particle, while $gbest_i^k$ represents the optimal position found by the population. The inertia weight w indicates the effect of the previous velocity vector on the new one. Referring to the parameter settings in the literature, the parameter settings in this paper are as follows: inertia weight w = 0.8, learning factor $c_1$ = 0 for the self-knowledge part, and learning factor $c_2$ = 0 for the social experience part. The maximum number of iterations is 100 and the algorithm terminates when it reaches the preset maximum number of iterations.
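Equations (2) and (3) can be sketched as a vectorized swarm update. This is an illustrative implementation: the inertia weight 0.8 follows the text, but the learning factors c1 = c2 = 1.5 are illustrative placeholders (the paper's exact values are unclear in the extracted text), the velocity clamp $V_{max}$ is omitted, and the sphere function is a stand-in objective rather than the paper's neural-network fitness.

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_step(x, v, pbest, gbest, w=0.8, c1=1.5, c2=1.5):
    """One iteration of Equations (2) and (3) for the whole swarm.

    x, v, pbest have shape (n_particles, n_dims); gbest has shape (n_dims,).
    """
    r1 = rng.random(x.shape)
    r2 = rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)  # Eq. (2)
    return x + v, v                                            # Eq. (3)

# Minimise the sphere function f(x) = sum(x^2) purely to exercise the update.
x = rng.uniform(-5, 5, (20, 3))
v = np.zeros_like(x)
pbest = x.copy()
gbest = x[np.argmin((x ** 2).sum(axis=1))].copy()
for _ in range(100):
    x, v = pso_step(x, v, pbest, gbest)
    better = (x ** 2).sum(axis=1) < (pbest ** 2).sum(axis=1)
    pbest[better] = x[better]
    gbest = pbest[np.argmin((pbest ** 2).sum(axis=1))]
print(float((gbest ** 2).sum()))  # best objective value found
```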
2.2.2. Differential Evolutionary Algorithm (DE)
Differential Evolutionary Algorithm (DE) is an evolutionary algorithm similar to all other evolutionary algorithms; DE achieves the global search by means of a process of mutation, crossover and
selection at each generation [
At the beginning of the optimization, a randomly initialized population of NP D-dimensional parameter vectors is used, where $X_i$ denotes the i-th solution, $X_i = (X_{i1}, X_{i2}, \ldots, X_{iD})$. After initialization, the individuals of each parent obtain their offspring through a mutation strategy, and for each solution vector $X_i$, the corresponding mutation vector $V_i$ is denoted as Equation (4):
$V_i = X_{r_0} + F \left( X_{r_1} - X_{r_2} \right)$
In this formula, $r_0$, $r_1$ and $r_2$ are three distinct random indices belonging to [1, …, NP]. F is the mutation operator, which takes a value in the range [0, 2]. If F is too small, the search might be trapped in a local optimum,
while if F is too big, it is not prone to convergence. After the mutation is completed, the crossover operation is performed on the generated individuals as follows (5):
$u_{i,j} = \begin{cases} v_{i,j}, & \text{if } rand \le Cr \\ x_{i,j}, & \text{if } rand > Cr \end{cases}$
In this formula, Cr is the crossover operator and rand is a random number within [0, 1], which is used to control the selection of the mutant vector value or the original vector value. After the crossover
operation, according to the fitness of the individual, the better individual is selected to enter the next generation, so that the better adapted offspring or parent individuals form a new population
and continue to loop iterations until the stopping requirement is met. Referring to the parameter settings in many studies, the parameters are established as follows in this paper: variation operator
F = 0.5, crossover operator Cr = 0.7. The maximum number of iterations is 100, and when the algorithm reaches the maximum number of iterations, the algorithm is terminated.
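One DE generation, covering mutation (Equation (4)), crossover (Equation (5)) and greedy selection, can be sketched as below with the paper's F = 0.5 and Cr = 0.7. The sphere objective is a stand-in for the paper's neural-network fitness.

```python
import numpy as np

rng = np.random.default_rng(1)

def de_generation(pop, fitness, f_obj, F=0.5, Cr=0.7):
    """One DE generation: mutation (Eq. 4), crossover (Eq. 5), selection."""
    NP, D = pop.shape
    new_pop, new_fit = pop.copy(), fitness.copy()
    for i in range(NP):
        # Three distinct random indices, all different from i.
        r0, r1, r2 = rng.choice([j for j in range(NP) if j != i], 3, replace=False)
        v = pop[r0] + F * (pop[r1] - pop[r2])      # mutation, Eq. (4)
        cross = rng.random(D) <= Cr
        cross[rng.integers(D)] = True              # keep at least one mutant gene
        u = np.where(cross, v, pop[i])             # crossover, Eq. (5)
        fu = f_obj(u)
        if fu <= fitness[i]:                       # greedy selection
            new_pop[i], new_fit[i] = u, fu
    return new_pop, new_fit

sphere = lambda x: float((x ** 2).sum())
pop = rng.uniform(-5, 5, (30, 3))
fit = np.array([sphere(ind) for ind in pop])
for _ in range(100):
    pop, fit = de_generation(pop, fit, sphere)
print(fit.min())  # best fitness after 100 generations
```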
2.2.3. Cheetah Optimization Algorithm (CO)
The concept of the Cheetah Optimization Algorithm comes from the hunting behavior of cheetahs, which can detect prey while patrolling or scanning their surroundings. Upon seeing its prey, the cheetah
may stay in its original position and begin to attack when the prey approaches it. In short, it is divided into three strategies: searching, sitting and waiting, and attacking [
]. During hunting, the search or attack strategy is deployed randomly. Assume that the search probability is $r_1$, the vector update probability is $r_2$, and the position update probability is $r_3$; $r_1$, $r_2$ and $r_3$ are uniform random numbers from [0, 1]. If $r_2 \ge r_3$, the cheetah chooses the motionless waiting strategy; otherwise, it chooses the search or attack strategy.
The cheetah searches according to the location of the prey and the surrounding environment, assuming that the problem’s solution to be optimized is D-dimensional and the number of cheetahs is n. The
location of the cheetah is updated according to Equation (6).
$X_{i,j}^{t+1} = X_{i,j}^{t} + \hat{r}_{i,j}^{-1} \cdot \alpha_{i,j}^{t}$
In this equation, t indicates the hunting time, $X_{i,j}^{t}$ denotes the current location of cheetah i in arrangement j (j = 1, 2, …, D), $\hat{r}_{i,j}^{-1}$ denotes the randomized parameter of cheetah i in arrangement j, and $\alpha_{i,j}^{t}$ denotes the step size of cheetah i in arrangement j. In order to prevent the prey from noticing its presence, the cheetah then adopts a sit-and-wait strategy, i.e., it keeps its position unchanged and waits until the prey approaches. When the time is right for hunting, the cheetah will adjust its moving direction in time according to the position of the prey and then capture the prey; the cheetah’s position in the attack is updated according to Formula (7):
$X_{i,j}^{t+1} = X_{B,j}^{t} + \check{r}_{i,j} \cdot \beta_{i,j}^{t}$
In this equation, t indicates the hunting time, $X_{B,j}^{t}$ denotes the current position of the leading cheetah B in arrangement j (j = 1, 2, …, D), $\check{r}_{i,j}$ denotes the turning factor related to cheetah i in arrangement j, and $\beta_{i,j}^{t}$ denotes the correlation interaction factor of cheetah i in arrangement j. Referring to the parameter settings in many studies, the parameter settings in this paper are $r_1$ = 0.8, $r_2$ = 0.5, $r_3$ = 0.9. The maximum number of iterations is 100. When the algorithm reaches the preset maximum number of iterations, the algorithm terminates.
2.2.4. Fick’s Law Algorithm
Fick’s Law Algorithm is a meta-heuristic algorithm motivated by Fick’s Law. Fick’s law is a fundamental law describing the diffusion process of substances, i.e., molecules tend to spread from areas
of high density to areas of low density. FLA takes advantage of the diffusion property to optimize the search. The fundamental idea of the algorithm is to consider the problem space as a
finite-dimensional space in which the optimal solution is searched by modeling the process of substance diffusion [
Assuming that the problem’s solution to be optimized is D-dimensional and the number of solutions is N, the randomly generated initial population is first divided into two equal groups. According to
Fick’s Law, the diffusion operator updates the individuals’ positions, transferring between three phases: the exploration phase, the transition phase from exploration to exploitation, and the
exploitation phase. The updated fitness value is obtained by calculating from the new position. Then, the global optimal solution is updated according to the fitness value until the iteration is
stopped when the termination condition is reached. The maximum number of iterations is 100 and the algorithm terminates when it reaches the preset maximum number of iterations.
2.3. BP Neural Network
The BP neural network, also called the back propagation neural network, is a typical neural network training algorithm. It consists of forward propagation and back propagation: in forward propagation, input samples pass through the transfer functions of the hidden layer and the output layer to produce the output result and error; in back propagation, the error is assigned to each neuron, and each neuron adjusts its weights and thresholds so that the input–output error reaches the target value [
]. The BP neural network model is highly error-tolerant, self-learning and self-adapting, which overcomes the difficulties of model building and parameter estimation in traditional prediction
The BP neural network mainly consists of three layers: the input layer, the hidden layer, and the output layer. The propagation principle is shown in Figure 1. In Figure 1, $X_1, X_2, \ldots, X_n$ are the input samples of the BP neural network and $y_1, y_2, \ldots, y_m$ are the output variables of the BP neural network [
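The forward and backward passes described above can be sketched with NumPy. This is a simplified illustration: one hidden layer instead of the paper's three, a toy dataset whose 10 input features and target are hypothetical stand-ins for the biochar properties and adsorption capacity, and sigmoid hidden units with a linear regression output.

```python
import numpy as np

rng = np.random.default_rng(3)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Toy data: 10 input features (as in Section 2.1); the target is a simple
# function of the inputs standing in for the adsorption capacity.
X = rng.random((50, 10))
y = X.mean(axis=1, keepdims=True)

W1, b1 = rng.standard_normal((10, 8)) * 0.1, np.zeros(8)
W2, b2 = rng.standard_normal((8, 1)) * 0.1, np.zeros(1)

lr = 0.3
for _ in range(4000):
    h = sigmoid(X @ W1 + b1)        # forward propagation through the hidden layer
    y_hat = h @ W2 + b2             # linear output layer for regression
    err = y_hat - y                 # output error
    # Back propagation: assign the error to each layer, update weights/biases.
    gW2, gb2 = h.T @ err / len(X), err.mean(axis=0)
    dh = err @ W2.T * h * (1 - h)   # error propagated through the sigmoid
    gW1, gb1 = X.T @ dh / len(X), dh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

print(float((err ** 2).mean()))     # training MSE
```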
2.4. Model Construction
Four prediction models are constructed in Python 3.10: PSO-BP, DE-BP, CO-BP and FLA-BP. Firstly, the construction of the BP neural network needs to define the structure of the network, including the
number of neurons in the input layer, the hidden layer and the output layer, the number of iterations and other parameters, and establish the initial weight and bias. The BP neural network uses a
three-hidden-layer structure to capture the important features of the input data without excessive complexity. It has an input dimension of 10, an output dimension of 1, a learning rate of 0.01, and a
number of iterations of 1000. Forward propagation is carried out according to the training data, the output value is calculated, and then the error is calculated according to the output value and the
label of the training data. Based on the error value, the error is backpropagated from the output layer, and the weights and biases are updated continuously until the network reaches an acceptable
error range or the training reaches a predetermined number of rounds. Second, in order to make the BP neural network converge faster and achieve higher accuracy, this paper used the Particle Swarm
Optimization algorithm (PSO), the Differential Evolution algorithm (DE), the Cheetah Optimization algorithm (CO) and Fick’s Law Algorithm (FLA) to optimize the BP neural network, respectively, to
replace the original gradient descent method, and constructed four prediction models, such as PSO-BP. According to the search results of the four algorithms, the weights and biases of the BP neural
network were updated, the parameters of the BP neural network were adjusted, and several iterations were performed to improve the model’s performance.
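The glue between a metaheuristic and the BP neural network can be sketched as follows: the network's weights and biases are flattened into one search vector, and the optimizer's fitness function is the training MSE of the decoded network. The network shape and toy target here are hypothetical, and a simple random-perturbation hill climb stands in for the search loop that the paper fills with PSO, DE, CO or FLA.

```python
import numpy as np

rng = np.random.default_rng(4)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Tiny network: 10 -> 4 -> 1, all parameters flattened into one vector.
SHAPES = [(10, 4), (4,), (4, 1), (1,)]
SIZES = [int(np.prod(s)) for s in SHAPES]

def unflatten(theta):
    """Decode a flat search vector back into weight/bias arrays."""
    parts, i = [], 0
    for size, shape in zip(SIZES, SHAPES):
        parts.append(theta[i:i + size].reshape(shape))
        i += size
    return parts

def fitness(theta, X, y):
    """Fitness used by the metaheuristic: training MSE of the decoded network."""
    W1, b1, W2, b2 = unflatten(theta)
    y_hat = sigmoid(X @ W1 + b1) @ W2 + b2
    return float(((y_hat - y) ** 2).mean())

X = rng.random((40, 10))
y = X[:, :1]                         # toy target
dim = sum(SIZES)

# Stand-in optimizer (random-perturbation hill climb); in the paper this loop
# is replaced by PSO, DE, CO or FLA searching the same fitness landscape.
best = rng.standard_normal(dim) * 0.1
best_fit = fitness(best, X, y)
for _ in range(3000):
    cand = best + 0.05 * rng.standard_normal(dim)
    fit = fitness(cand, X, y)
    if fit < best_fit:
        best, best_fit = cand, fit
print(best_fit)
```

The key design point is that the optimizer never sees gradients: it only evaluates `fitness`, which is why gradient-free metaheuristics can replace back propagation's weight update here.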
2.5. Performance Assessment Measures
In this paper, the accuracy of the BP neural network-based model was determined by the coefficient of determination (R²) and the error rate. The error rate includes the mean square error (MSE), mean absolute error (MAE) and mean bias error (MBE) [
]. The specific formulas are shown in (8)–(11):
$R^2 = \frac{SSR}{SST} = 1 - \frac{SSE}{SST} = 1 - \frac{\sum_{i=1}^{N} \left( y_i - \hat{y}_i \right)^2}{\sum_{i=1}^{N} \left( y_i - \bar{y} \right)^2}$
$MSE = \frac{1}{N} \sum_{i=1}^{N} \left( y_i - \hat{y}_i \right)^2$
$MAE = \frac{1}{N} \sum_{i=1}^{N} \left| y_i - \hat{y}_i \right|$
$MBE = \frac{1}{N} \sum_{i=1}^{N} \left( y_i - \hat{y}_i \right)$
In Equations (8)–(11), SSE is the sum of squares of residuals, SST is the total sum of squares, $y_1, y_2, \ldots, y_N$ are the true values, $\bar{y}$ is the average of all the true values, $\hat{y}_1, \hat{y}_2, \ldots, \hat{y}_N$ are the predicted values, and $y_i - \hat{y}_i$ is the residual of the i-th sample, which indicates the difference between the predicted and actual values and reflects the real error of the prediction.
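Equations (8)–(11) translate directly into a small metric helper. The sample values below are hypothetical, purely to show the computation.

```python
import numpy as np

def metrics(y_true, y_pred):
    """R^2, MSE, MAE and MBE as defined in Equations (8)-(11)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    resid = y_true - y_pred
    sse = (resid ** 2).sum()
    sst = ((y_true - y_true.mean()) ** 2).sum()
    return {
        "R2": 1.0 - sse / sst,     # Eq. (8)
        "MSE": (resid ** 2).mean(),  # Eq. (9)
        "MAE": np.abs(resid).mean(),  # Eq. (10)
        "MBE": resid.mean(),       # Eq. (11): sign shows over-/under-estimation
    }

m = metrics([10.0, 20.0, 30.0, 40.0], [12.0, 18.0, 33.0, 39.0])
print(m)
```

Note that MBE can be near zero even when MSE is large, because positive and negative residuals cancel; this is why the paper reports all four measures together.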
3. Results and Discussion
3.1. Model Prediction Results
Figure 2 shows the fitting results of the training set data after training the BP neural network with the four optimization algorithms. The number of iterations is 1000 and the learning rate is 0.01.
As illustrated in Figure 2, the Particle Swarm Optimization and Differential Evolution algorithms cannot fully fit data with significant mutation and variance. The main reason for this is that these two algorithms tend to fall into local optimal solutions when facing high-dimensional optimization problems, which also leads to instability of the optimization results. The Cheetah Optimization Algorithm and the Fick’s Law Optimization Algorithm, both proposed in the past two years, show strong performance in jumping out of local optimal solutions. As can be seen from Figure 2, both algorithms exhibit extremely high fitting ability in the face of extreme data, mainly due to their intelligent search strategies, their ability to avoid falling into local optima, their ability to handle extreme data, and their adaptive parameter adjustment mechanisms. The convex optimization ability of the Fick’s Law Optimization Algorithm can reach the same level as that of the Adam optimizer, showing obvious superiority.
Figure 3 shows the fitting results of the trained BP neural network on the test set. Compared with the training set, the Fick’s Law Optimization Algorithm has more obvious advantages, showing strong generalization performance and adaptability to various complex scenarios. After analyzing the experimental results, it is concluded that the Fick’s Law Algorithm is stronger than the other three algorithms in terms of robustness, generalization and capacity to escape local optima; it can match the optimization performance of mainstream optimizers and can be used as an alternative to them. Its randomness can provide more feasible solutions for BP neural networks. At present, its only shortcoming is that the Fick’s Law Optimization Algorithm requires a longer optimization time than mainstream optimizers. Future studies will concentrate on solving the problem of insufficient convergence speed. A leading solution could be to improve its initialization method to obtain a lower initial fitness value, accelerating its convergence.
3.2. Comparative Analysis of Model Performance
The optimization behavior of the algorithms is quantitatively analyzed through the evaluation indicators. The changes in each indicator during the training process are shown in Figure 4. The mean square error (MSE) represents the mean of the squared differences between the original and predicted values in the dataset and measures the variance of the residuals. The mean absolute error (MAE) represents the mean of the absolute differences between the actual and predicted values in the dataset, i.e., the mean of the residuals. The mean bias error (MBE) indicates the size of the deviation between the predicted result and the actual value, with a positive deviation indicating that the error is overestimated and a negative deviation indicating that the error is underestimated.
From the MSE index in Figure 4, we can establish that the minimum value of the Fick’s Law Optimization Algorithm (FLA) is 0.00424, indicating the best fitting effect, followed by the Cheetah Optimization Algorithm (CO), whose value is 0.00555. Since MSE is used as the loss function for optimizing the BP neural network in this experiment, all indexes except MSE exhibit the characteristics of oscillating convergence. As can be seen from Figure 4, the Fick’s Law Optimization Algorithm (FLA) and the Cheetah Optimization Algorithm (CO) have better convergence accuracy in terms of the MAE and MBE indexes. The Fick’s Law Optimization Algorithm (FLA) is the best, followed by the Cheetah Optimization Algorithm (CO).
R^2 explains the variance score of the regression model. According to Table 1, the R^2 after optimization with the Fick's Law Optimization Algorithm (FLA) is 0.90, and the R^2 after the Cheetah Optimization Algorithm (CO) is 0.85. An R^2 closer to 1 indicates that the independent variable explains the variance change in the dependent variable better. It can be concluded from Figure 4 that the models of the Fick's Law Optimization Algorithm (FLA) and the Cheetah Optimization Algorithm (CO) outperform the Particle Swarm Optimization Algorithm (PSO) model and the Differential
Evolutionary Algorithm (DE) model. But when predictive models are in pursuit of higher accuracy, they often encounter some tradeoffs or limitations. For example, overfitting can reduce a model’s
ability to generalize because it makes the model too dependent on specific details in the training data.
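For reference, the R^2 values quoted above are the coefficient of determination, 1 − SS_res / SS_tot, which can be sketched in plain Python (illustrative, not the paper's code):

```python
def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

# Perfect predictions give R^2 = 1; always predicting the mean gives R^2 = 0.
print(r_squared([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))  # → 1.0
print(r_squared([1.0, 2.0, 3.0], [2.0, 2.0, 2.0]))  # → 0.0
```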
3.3. Performance Analysis in CEC Tests
The test results of CEC2005 are widely recognized and can be used as a standard to assess the performance of optimization algorithms with high levels of authority. The CEC2005 test set contains 25
test problems, i.e., $f_1(x)$–$f_{25}(x)$: single-peak test functions $f_1(x)$–$f_5(x)$, basic multi-peak test functions $f_6(x)$–$f_{12}(x)$, extended multi-peak test functions $f_{13}(x)$–$f_{14}(x)$, and hybrid composite test functions $f_{15}(x)$–$f_{25}(x)$. The CEC2005 test can provide some reference value, but it cannot completely replace the need for performance evaluation in real applications. On the one hand, the set of functions in the CEC2005
test is designed for evaluating the performance of optimization algorithms, not for simulating problems in real applications. On the other hand, the constraints, boundary conditions and objective
test is designed for evaluating the performance of optimization algorithms, not for simulating problems in real applications. On the other hand, the constraints, boundary conditions and objective
functions of real problems may be more complex and may involve more decision variables and influencing factors.
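For intuition, the single-peak (unimodal) functions in this family have exactly one optimum; the classic example is a shifted sphere function, sketched below (the shift vector here is illustrative, not the official CEC2005 shift data):

```python
def shifted_sphere(x, shift):
    """Unimodal sphere-type benchmark: minimum value 0, attained at x == shift."""
    return sum((xi - si) ** 2 for xi, si in zip(x, shift))

shift = [0.5, -1.0, 2.0]
print(shifted_sphere(shift, shift))             # 0 at the optimum
print(shifted_sphere([0.0, 0.0, 0.0], shift))   # 0.25 + 1.0 + 4.0 = 5.25
```

Because such a function has a single basin, it isolates an algorithm's exploitation (local refinement) ability, whereas the multi-peak functions below additionally test escape from local optima.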
The single-peak test functions, $f_1(x)$–$f_5(x)$, are utilized to examine further the algorithm's ability to develop on the single-peak test function and the convergence precision of the algorithm. Unlike the single-peak test function, the multi-peak test function has more than one local optimal solution, and the function values corresponding to each local optimal solution may differ significantly. The prediction model constructed in this study is assigned 10 input parameters, so the test functions $f_1(x)$–$f_{14}(x)$ applicable to low-dimensional optimization problems are selected for analysis, and some of the analysis results are as follows:
As shown in Figure 5, the convergence speed of the Fick's Law Algorithm (FLA) is substantially better than that of the Particle Swarm Optimization Algorithm (PSO), Differential Evolutionary Algorithm
(DE) and Cheetah Optimization Algorithm (CO). Moreover, from the trend of curve fitting, FLA has a stronger local development ability than other models and a better search ability than other
algorithms when dealing with high-dimensional complex problems. Through comprehensive comparative analysis and verification, it is concluded that the FLA solution’s accuracy and stability are high,
it has a faster convergence rate, and its advantages are significant.
3.4. Important Feature Visualization
Feature importance assessment quantifies feature importance during training by documenting the total number of splits and the average message yield of the characteristics. This critical task in
machine learning helps to analyze which input parameters significantly impact the model's prediction results. This study uses feature importance assessment to investigate which features have a greater impact on the model's predictive performance, thereby optimizing data collection and analysis strategies.
In this study, the input feature X and the target variable Y are subjected to data normalization, i.e., the data are transformed into a regular normal distribution with mean 0 and variance 1. Then,
the importance scores of the features are calculated using XGBoost for the trained model, and the score range is chosen from 0 to 2000 to show the degree of importance clearly. Finally, the feature
scores are plotted from the most significant to the most minor using Matplotlib, as shown in Figure 6.
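The preprocessing step described above — transforming each feature to mean 0 and variance 1 — can be sketched in plain Python (the paper uses XGBoost for the importance scores themselves, which is not reproduced here; names are illustrative):

```python
def standardize(values):
    """Z-score normalization: rescale to mean 0 and (population) variance 1."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n
    std = var ** 0.5
    return [(v - mean) / std for v in values]

z = standardize([10.0, 20.0, 30.0, 40.0])
# z now has mean 0 and variance 1 (up to floating-point error)
```

Standardizing the inputs matters here because otherwise features measured on large scales (e.g. concentration in mg/L) would dominate features on small scales when importance is compared.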
Among the experimental conditions, the most influential on the ability of biochar to adsorb uranium is the initial concentration of uranium (C[0], mg/L), which has a value of 1968, and the adsorption
rises with the preliminary concentration of uranium. It begins to slow down when the initial concentration of uranium is too large. The second most influential is pH, with a value of 424. pH
indicates the acidity or alkalinity of the solution in the environment. Under different pH conditions, the charge state of the biochar surface changes. When the pH is low (acidic environment), the
surface of biochar is positively charged and attracted to the negative charge of uranium. In contrast, at a higher pH (alkaline environment), the surface of the biochar is negatively charged and
repelled by the negative charge of uranium, resulting in lower adsorption capacity. In practical applications, selecting appropriate pH conditions can enhance the adsorption effect of biochar on uranium.
The most influential chemical property is the mass percentage of total carbon (C, %), which has a value of 439, and the mass percentage of carbon in biochar indicates the degree of carbonization of
the biomass. A higher mass percentage of carbon in biochar indicates a higher number of adsorbable groups, thus increasing the adsorption capacity. Next is the molar ratio of oxygen to carbon (O/C),
which has a value of 416. When the molar ratio of oxygen to carbon is high, it indicates that the oxygen content in the biochar is relatively high, which may compete with uranium adsorption and
reduce the adsorption effect. Therefore, lower molar ratios of oxygen to carbon and higher mass percentages of carbon are usually favorable for increasing the adsorption capacity of biochar for uranium.
The most influential of the physical properties is the specific surface area (SA, m^2/g) of the biochar, with a value of 299, whose magnitude visually indicates the magnitude of the adsorption
capacity of the adsorbent, and a greater specific surface area implies more reactive adsorption sites, which can increase the contact area between the biochar and the uranium, and thus enhance the
adsorption effect. Next is the total pore volume (VTot), which has a value of 80. A larger total pore volume means more adsorption space, providing more sites to accommodate uranium ions. This makes
the biochar more effective in adsorbing uranium. Therefore, a larger specific surface area and total pore volume can improve the adsorption ability of biochar for uranium when manufacturing
adsorbents. These properties make biochar an effective adsorbent material with many applications in water treatment and environmental remediation.
4. Conclusions
In this paper, four meta-heuristic optimization algorithm models based on the BP neural network are constructed to predict the uranium adsorption capacity of biochar, providing some lessons for the
efficient management of uranium wastewater.
• A prediction model of the uranium adsorption capacity is constructed using Python 3.10. Four meta-heuristic optimization algorithms for model searching based on the BP neural network, namely
Particle Swarm Optimization (PSO), Differential Evolution (DE), Cheetah Optimization (CO) and Fick’s Law Algorithm (FLA), are used to establish four prediction models: PSO-BP, DE-BP, CO-BP and
FLA-BP. Predictive models are available to foresee the uranium adsorption capacity of biochar in actual situations, significantly reducing the experimental effort and safety risks associated with uranium adsorption experiments;
• The accuracy of the four models is verified by the coefficient of determination (R^2) and error rate. After training and validation, the Fick's Law Algorithm (FLA) is optimized with R^2 of 0.90,
showing more obvious advantages and a strong generalization performance. This algorithm surpasses the other three in robustness, generalization, and ability to go beyond the local optimum. When predictive models are in pursuit of higher accuracy, there are often trade-offs or limitations that require continued research;
• The influence of the input performance parameters in the prediction model on the adsorption capacity is analyzed using XGBoost to search for the optimal performance parameters. The analysis
showed that the most influential experimental conditions on the ability of biochar to adsorb uranium are the initial concentration of uranium (C[0], mg/L) and pH; the most influential chemical
properties are the mass percentage of total carbon (C, %) and the molar ratio of oxygen to carbon (O/C); and the most influential physical properties are the specific surface area of biochar (SA,
m^2/g) and the total pore volume (VTot), providing some important lessons for the study of the uranium adsorption capacity of biochar.
Author Contributions
Conceptualization, Z.Q.; methodology, Z.Q.; software, Z.Q.; validation, Z.Q., W.W. and Y.H.; formal analysis, Z.Q.; investigation, Y.H.; resources, Z.Q.; data curation, Z.Q.; writing—original draft
preparation, Z.Q.; writing—review and editing, Z.Q.; visualization, Z.Q.; supervision, Y.H.; project administration, W.W.; funding acquisition, W.W. All authors have read and agreed to the published
version of the manuscript.
This research was supported by the Natural Scientific Foundation of Heilongjiang Province, grant number LC201407.
Data Availability Statement
Data are available upon request from the corresponding author.
Conflicts of Interest
The authors declare no conflicts of interest.
Figure 2. Training set fitting results of biochar for predicting the uranium adsorption capacity: (a) PSO-BP; (b) DE-BP; (c) CO-BP; (d) FLA-BP.
Figure 3. Test set fitting results for predicting uranium adsorption capacity of biochar: (a) PSO-BP; (b) DE-BP; (c) CO-BP; (d) FLA-BP.
| Model  | MSE     | MBE     | MAE     | R^2     |
|--------|---------|---------|---------|---------|
| PSO-BP | 0.01106 | 0.00097 | 0.07655 | 0.59230 |
| DE-BP  | 0.01348 | 0.00172 | 0.08292 | 0.50359 |
| CO-BP  | 0.00555 | 0.00098 | 0.05355 | 0.79556 |
| FLA-BP | 0.00424 | 0.00082 | 0.04816 | 0.90125 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s).
MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:
Share and Cite
MDPI and ACS Style
Qu, Z.; Wang, W.; He, Y. Prediction of Uranium Adsorption Capacity in Radioactive Wastewater Treatment with Biochar. Toxics 2024, 12, 118. https://doi.org/10.3390/toxics12020118
AMA Style
Qu Z, Wang W, He Y. Prediction of Uranium Adsorption Capacity in Radioactive Wastewater Treatment with Biochar. Toxics. 2024; 12(2):118. https://doi.org/10.3390/toxics12020118
Chicago/Turabian Style
Qu, Zening, Wei Wang, and Yan He. 2024. "Prediction of Uranium Adsorption Capacity in Radioactive Wastewater Treatment with Biochar" Toxics 12, no. 2: 118. https://doi.org/10.3390/toxics12020118
Note that from the first issue of 2016, this journal uses article numbers instead of page numbers. See further details
W.S. Gosset (aka Student) and t-distribution
Whilst attending a Eurachem Scientific Workshop on June 14-15, 2018 in Dublin, Ireland, the workshop organizer arranged the Workshop Banquet at the renowned Guinness St. James's Gate Brewery, one of whose employees was William Sealy Gosset, a chemist cum statistician.
Gosset was interested in analyzing quality data obtained from small sample sizes in his routine work on quality control of raw materials, as he noticed that it was neither practical nor economical to analyze hundreds of data points.
At that time, making statistical inferences from small sample-sized data to their population was unthinkable. The generally accepted idea was that if you had a large sample size, say well over 30 observations, you could use the Gaussian normal distribution to describe your data.
In 1906, Gosset was sent to Karl Pearson's laboratory at University College London on sabbatical. Pearson was then one of the best-known scientific figures of his time, and was later credited with establishing the field of statistics.
At the laboratory, Gosset discovered the "Student's t-distribution", an important pillar of modern statistics that uses small sample-sized data to infer what we can expect of the population at large. It is the origin of the concept of "statistical significance testing".
Why didn’t Gosset name the distribution as Gosset’s instead of Student’s?
It is interesting to note that it was because his employer, Guinness, objected to his proposal to publish the findings: it did not want competitors to know the advantage it had gained in using this unique procedure to select the best varieties of barley and hops for its popular beer, in a way that no other business could.
So finally Gosset published his article on Pearson’s journal Biometrika in 1908 under the pseudonym “Student”, leading to the famous “Student’s t-distribution”.
In statistics and probability studies, the t-distribution is a probability distribution for dealing with a normally distributed population when the sample size is not large. It uses the sample standard deviation (s) to estimate the population standard deviation (σ), which is unknown. For small samples, the confidence limits of the population mean are given by x̄ ± t·s/√n, where n is the sample size and t is the tabulated critical value for the chosen confidence level.
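The small-sample confidence limits above can be sketched in plain Python. The critical value is passed in explicitly here; t = 2.262 is the well-known tabulated two-sided 95% value for 9 degrees of freedom (i.e., n = 10 observations):

```python
import math

def t_confidence_interval(sample, t_crit):
    """Confidence limits for the population mean: xbar ± t * s / sqrt(n)."""
    n = len(sample)
    mean = sum(sample) / n
    # sample standard deviation s, with the n - 1 (Bessel) divisor
    s = math.sqrt(sum((x - mean) ** 2 for x in sample) / (n - 1))
    half_width = t_crit * s / math.sqrt(n)
    return mean - half_width, mean + half_width

# Ten replicate measurements; 95% confidence, 9 degrees of freedom.
sample = [9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 9.7, 10.3, 10.1, 9.9]
lo, hi = t_confidence_interval(sample, 2.262)
print(lo, hi)  # an interval centred on the sample mean of 10.0
```

With larger n, the tabulated t value shrinks toward the normal-distribution value of 1.96, which is exactly the "over 30 observations" rule of thumb mentioned earlier.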
As the story goes, Gosset's published paper was then mostly ignored by statistical researchers until a young mathematician called R.A. Fisher discovered its importance and popularized it, particularly in estimating the random chance for considering a result "significant".
Today, the t-distribution is routinely used in t-statistic tests for checking results for significant bias from a true value, or for comparing two sets of measurement results and their means, and is also important for calculating confidence intervals.
This t-distribution is symmetric and resembles the normal distribution except for heavier "tails", which are more spread out because of the extra variability in smaller sample sizes.
Mittens Beamer Template
\documentclass[aspectratio=169,usenames,dvipsnames]{beamer}
\usepackage[T1]{fontenc}
\usetheme{mittens}

% Possible color schemes: \colorschemered, \colorschemeblue, \colorschemegreen,
% \colorschemebrown, \colorschemepurple
\colorschemered

% Comment this line out to disable section & subsection names at the top of slides
\sectionnamestrue
% Comment this line out to remove new section slides
\sectionslidestrue

\title{Title of Talk}
\author[M. Torrence]{Matthew Torrence}
\date[2021]{Week $n$, February 2021}

\begin{document}
\titleslide

% You typically want to include "fragile" on every slide
\begin{frame}[fragile]
  \frametitle{An Example Slide}
  \framesubtitle{optionally with a subtitle}
  \begin{block}{Definition}
    This slide is showing off various elements of a slide you might want to use.
    Do delete this slide before presenting this template.
  \end{block}
  $$x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}$$
  \begin{enumerate}
    \item This is an enumerated list
    \item This is a second item
  \end{enumerate}
  \begin{itemize}
    \item This is an itemized list
  \end{itemize}
\end{frame}

\section{Definitions and Examples}
\begin{frame}[fragile]{What are you talking about?}
  This slide is a great spot to introduce the definitions of what you will be talking about.
  % The \s here is short for \vspace{1em}, which I use super often. It puts space
  % before this paragraph, which looks very readable!
  \s Let $G$ be a finite abelian group, $h$ be a non-negative integer, and $m$ be a positive integer.
  $\nu(G, h, m)$ is defined to be the \textit{largest size} of a $h$-fold sumset of an $m$-subset of $G$.
  \s Here (or on the next slide) is a great time to go over an example of how to use this definition.
\end{frame}

\begin{frame}{Problem Statement}
  Here's a good place to write down the specific problems that use the definitions you've given in the previous slide.
  \s It's possible to include an example for the problem as well!
\end{frame}

\section{Results Known}
\begin{frame}[fragile]{Results known about my problem}
  Here's where you can talk about what Theorems exist about whatever you've defined in the previous slides.
  \s $\nu(G, h, m)$ is trivially bounded above by $|G|$... Other bounds include...
  \s Be sure to reference the authors of these results, and feel free to spend more time
  on this part catching others up on what might be obvious to you!
\end{frame}

\section{My Results}
\begin{frame}[fragile]{My Goals}
  Here it might be pertinent to remind / introduce others to what you think \textit{you} might be able to prove about your problem.
  \s What assumptions do the previous results make, and do you think you can extend their techniques to lessen these assumptions?
\end{frame}

\begin{frame}[fragile]{New This Week}
  Here's where you can finally talk and summarize what you did in the past week.
  Have you gotten closer to proving your goals?
\end{frame}

\section{Next Steps}
\begin{frame}[fragile]{Considerations and next steps}
  What will you be working on in the next week? Do you have a good idea of what you will be doing, or are you stuck and looking for help?
  \s Might you be able to relate what you're doing to ideas of someone else? Do you want to review another proof from the book before continuing?
\end{frame}

\section{References}
\begin{frame}{Slide Template Credit}
  I (Matt Torrence) who has been speaking to you through these slides designed this template myself.
  Feel free to delete this slide in your presentation.
  \s I was inspired by the following stack overflow response and used it as a starting point for this template:
  \texttt{tex.stackexchange.com/a/146682/188835}
  \s Typography used includes 4 typefaces:
  \begin{itemize}
    \item Charis SIL, for the default serif type
    \item \texttt{IBM Plex Mono} for monospace text
    \item {\sansstyle Carlito} for the default sans-serif type
    \item TeX Gyre Termes Math for all math mode text
  \end{itemize}
\end{frame}
\end{document}
Realizing Flexible Broadcast Encryption: How to Broadcast to a Public-Key Directory
Paper 2023/1583
Suppose a user wants to broadcast an encrypted message to $K$ recipients. With public-key encryption, the sender would construct $K$ different ciphertexts, one for each recipient. The size of the broadcasted message then scales linearly with $K$. A natural question is whether the sender can encrypt the message with a ciphertext whose size scales sublinearly with the number of recipients.

Broadcast encryption offers one solution to this problem, but at the cost of introducing a central trusted party who issues keys to different users (and correspondingly, has the ability to decrypt all ciphertexts). Recently, several works have introduced notions like distributed broadcast encryption and flexible broadcast encryption, which combine the decentralized, trustless model of traditional public-key encryption with the efficiency guarantees of broadcast encryption. In the specific case of a flexible broadcast encryption scheme, users generate their own public/private keys and can then post their public key in any public-key directory. Subsequently, a user can encrypt to an arbitrary set of user public keys with a ciphertext whose size scales polylogarithmically with the number of public keys in the broadcast set. A distributed broadcast encryption scheme is a more restrictive primitive where each public key is also associated with an index, and one can only encrypt to a set of public keys corresponding to different indices.

In this work, we introduce a generic compiler that takes any distributed broadcast encryption scheme and produces a flexible broadcast encryption scheme. Moreover, whereas existing concretely-efficient constructions of distributed broadcast encryption have public keys whose size scales with the maximum number of users in the system, our resulting flexible broadcast encryption scheme has the appealing property that the size of each public key scales with the size of the maximum broadcast set.

We provide an implementation of the flexible broadcast encryption scheme obtained by applying our compiler to the distributed broadcast encryption scheme of Kolonelos, Malavolta, and Wee (ASIACRYPT 2023). With our scheme, a sender can encrypt a 128-bit symmetric key to a set of over 1000 recipients (from a directory with a million users) with a 2 KB ciphertext. This is 16$\times$ smaller than separately encrypting to each user using standard ElGamal encryption. The cost is that the user public keys in flexible broadcast encryption are much larger (50 KB) compared to standard ElGamal public keys (32 bytes). Compared to the similarly-instantiated distributed broadcast encryption scheme, we achieve a 32$\times$ reduction in the user's public key size (50 KB vs. 1.6 MB) without changing the ciphertext size. Thus, flexible broadcast encryption provides an efficient way to encrypt messages to large groups of users at the cost of larger individual public keys (relative to vanilla public-key encryption).
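As a back-of-envelope check of the trade-off, using only the figures quoted in the abstract (all sizes approximate and illustrative):

```python
K = 1000                         # recipients, from a million-user directory
flexible_ct = 2 * 1024           # ~2 KB flexible broadcast ciphertext
naive_total = 16 * flexible_ct   # abstract: 16x larger if encrypting per user
flexible_pk = 50 * 1024          # ~50 KB flexible-broadcast public key
elgamal_pk = 32                  # standard ElGamal public key, 32 bytes
dbe_pk = 1600 * 1024             # ~1.6 MB distributed-BE public key

print(naive_total // 1024)       # ~32 KB to encrypt separately to 1000 users
print(flexible_pk // elgamal_pk) # each flexible key is ~1600x an ElGamal key
print(dbe_pk // flexible_pk)     # the claimed 32x public-key reduction
```

So the scheme shifts cost from per-message bandwidth (paid on every broadcast) to a one-time, larger per-user public key in the directory.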
Available format(s)
Publication info
Published elsewhere. Major revision. ACM CCS
broadcast encryption, flexible broadcast encryption, distributed broadcast encryption, registration-based cryptography
Contact author(s)
rachg96 @ cs utexas edu
gclu @ cs utexas edu
bwaters @ cs utexas edu
dwu4 @ cs utexas edu
2023-10-13: approved
2023-10-13: received
@misc{cryptoeprint:2023/1583,
      author = {Rachit Garg and George Lu and Brent Waters and David J. Wu},
      title = {Realizing Flexible Broadcast Encryption: How to Broadcast to a Public-Key Directory},
      howpublished = {Cryptology {ePrint} Archive, Paper 2023/1583},
      year = {2023},
      url = {https://eprint.iacr.org/2023/1583}
}
Why vf squared = vi^2 + 2ad: Final Velocity Calculation
23 Mar 2024
Final Velocity Calculation
This calculator provides the calculation of final velocity (vf) using the kinematic relation vf^2 = vi^2 + 2 * a * d.
Calculation Example: The final velocity (vf) of an object can be calculated from vf^2 = vi^2 + 2 * a * d, i.e. vf = sqrt(vi^2 + 2 * a * d), where vi is the initial velocity, a is the acceleration, and d is the distance
travelled. This formula is derived from the equations of motion in one dimension.
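The relation above can be sketched as a small Python function (the function name is illustrative):

```python
import math

def final_velocity(vi, a, d):
    """Final speed from vf^2 = vi^2 + 2*a*d (1-D motion, constant acceleration)."""
    vf_squared = vi ** 2 + 2 * a * d
    if vf_squared < 0:
        # a deceleration strong enough that the object stops before covering d
        raise ValueError("object stops before covering distance d")
    return math.sqrt(vf_squared)

# vi = 0 m/s, a = 2 m/s^2, d = 100 m  ->  vf = sqrt(400) = 20 m/s
print(final_velocity(0.0, 2.0, 100.0))  # → 20.0
```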
Related Questions
Q: What is the significance of final velocity in physics?
A: Final velocity is an important quantity in physics as it provides information about the motion of an object. It can be used to calculate the distance travelled, the acceleration of the object, and
the time taken to travel a certain distance.
Q: How is final velocity used in real-world applications?
A: Final velocity is used in a variety of real-world applications, such as calculating the speed of a car, determining the trajectory of a projectile, and designing roller coasters.
Calculation Expression
vf2 Function: The formula for the squared final velocity is vf2 = vi^2 + 2 * a * d; the final velocity itself is vf = sqrt(vf2).
Calculated values
Considering these as variable values: vf=10.0, a=2.0, vi=0.0, d=100.0, the calculated value is vf2 = 0.0^2 + 2 * 2.0 * 100.0 = 400.0 (so vf = 20.0).
What do we mean by credit data? This post is a discussion of the mathematical terminology and concepts that are useful in the context of working with credit data, taking us from network graph representations of credit systems to commonly used reference data sets.
Course Objective
Digging into the meaning of credit data collections, the logic that binds them together towards understanding what they can be used for and what limitations and issues they may be affected by, this
new course in the Credit Portfolio Management category explores a new angle to look at an old practice.
The course is now live at the Academy.
Familiarity with credit provision in general (lending products, banking processes and credit risk) is required for getting the most out of the course. Affinity with mathematical notation and language
is also important.
Business Jargons
Definition: The Population Distribution is a form of probability distribution that measures the frequency with which the items or variables that make up the population are drawn or expected to be
drawn for a given research study. The characteristics or attributes of the population, i.e. the value of each variable in the population can be determined only when the investigator … [Read more...]
about Population Distribution | {"url":"https://businessjargons.com/page/57","timestamp":"2024-11-04T04:32:16Z","content_type":"text/html","content_length":"45469","record_id":"<urn:uuid:79b7ba83-b4c2-4a01-8752-93b12560b2e7>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00650.warc.gz"} |
Arkadiusz Jadczyk
Curriculum vitae
Education and scientific titles
1987 - Professor ordinary, scientific title.
1982 - Professor extraordinary, scientific title.
1977 - Habilitation, theoretical physics, University of Wroclaw.
1970 - Ph.D., theoretical physics, University of Wroclaw.
1966 - B. Sc, M.A., with High Distinction, theoretical physics, University of Wroclaw.
2007- 2012 : guest professor, Center C.A.I.R.O.S, Institute of Mathematics of Toulouse, University Paul Sabatier, Toulouse
1998 - 2001 : contractor scientist, Constellation Technology Corporation, Largo, Florida
1990 - 2004 : full professor, Institute of Theoretical Physics, University of Wroclaw.
1986 - 1990 : associate professor, Institute of Theoretical Physics, University of Wroclaw.
1970 - 1986 : assistant professor, Institute of Theoretical Physics, University of Wroclaw.
1966 - 1970 : PhD fellowship, Institute of Theoretical Physics, University of Wroclaw
1966 - 1966 : assistant lecturer, Department of Mathematics, Higher Pedagogical School, Opole.
International research experience
10/1997 - 6/1998 Department of Mathematics, University of Florida
2-4/1997 Department of Mathematics, University of Florida
6/1995-2/1996 BiBoS Forschung Zentrum, University of Bielefeld (Humboldt Award)
3-6/1995 Research Institute for Mathematical Sciences, Kyoto University
7-9/1994 BiBoS Forschung Zentrum, University of Bielefeld (DFG grant)
6-7/1994 Schrodinger Institute, Wien (visiting professor)
3-5/1994 Research Institute for Mathematical Sciences, Kyoto (JSPS fellowship)
3-5/1993 Department of Applied Mathematics, University of Florence (CNR vis. prof.)
10-12/1993 Laboratory of Theoretical Physics, University of Marseille (EEG grant)
6-8/1993 BiBoS Forschung Zentrum, University of Bielefeld (Humboldt grant)
3-4/1992 Department of Applied Mathematics, University of Florence (CNR vis. prof.)
5-6/1992 BiBoS Forschung Zentrum, University of Bielefeld (Humboldt grant)
3-6/1991 Department of Applied Mathematics, University of Florence (CNR vis. prof.)
6-10/1991 BiBoS Forschung Zentrum, University of Bielefeld (DFG and Humboldt grant)
5-7/1990 Department of Applied Mathematics, University of Florence (CNR vis, prof.)
2-5/1990 Center de Physique Theorique, CNRS Marseille (CNRS vis. prof.)
3-10/1989 Laboratory of Theoretical Physics, University of Marseille (CNRS vis. prof.)
2/1989 Arnold Sommerfeld Institute, Technical University of Clausthal (Humboldt grant)
9-12/1988 Laboratory of Theoretical Physics, University of Marseille (vis. prof.)
5-7/1988 BiBoS Forschung Zentrum, University of Bielefeld (DFG grant)
3-4/1988 Department of Applied Mathematics, University of Florence (CNR vis. prof.)
9/1987 Arnold Sommerfeld Institute, University of Clausthal (Humboldt grant)
9/1987 II Institute of Theoretical Physics, University of Hamburg (vis. prof.)
5-7/1987 Department of Mathematics, University of Rome (vis. prof.)
8-10/1986 Deutsche Elektronen Synchrotron (DESY), University of Hamburg (vis. prof.)
6-7/1985 II Institute of Theoretical Physics, Hamburg (vis. prof.)
9-10/1985 CERN Geneve (guest scientist)
5-8/1984 Center de Physique Theorique, CNRS Marseille (vis. prof.)
10-11/1984 CERN Geneve (guest scientist)
10-12/1982 CERN Geneve (guest scientist)
1-10/1981 Institute of Theoretical Physics, University of Goettingen (vis. prof.)
3-6/1980 II Institute of Theoretical Physics, University of Hamburg (vis. prof.)
1-12/1976 II Institute of Theoretical Physics, University of Hamburg (Humboldt grant)
6-9/1975 Institute of Theoretical Physics, SUNY, Stony Brook, LI (NSF grant)
9-12/1970 CNRS, Marseille (guest researcher)
Member of the Editorial Board of Reports in Mathematical Physics
Member of Advisory Board of Prespace Journal
Member of the Advisory Board of Journal of Consciousness Exploration & Research
Member of the Advisory Board of Scientific God
Member of the Editorial Board of ISRN Mathematical Physics
Editor and co-editor of three conference proceedings, and co-author of one monograph.
Over 80 papers published, in particular in professional journals such as: Acta Applicandae Mathematicae, Annalen der Physik (Leipzig), Annals of Physics (NY), Annales de l'Institut Henri Poincare,
Bulletin de l'Academie Polonaise des Sciences, Central European Journal of Physics, Chinese Journal of Physics, Classical and Quantum Gravity, Communications in Mathematical Physics, Foundations
of Physics, Helvetica Physica Acta, International Journal of Theoretical Physics, Journal of Geometry and Physics, Journal of Physics A, Letters in Mathematical Physics, Journal of Statistical
Physics, Nuclear Physics B, Physica D - Nonlinear Phenomena, Nuovo Cimento B, Physics Letters A, Progress in Theoretical Physics, Reports on Mathematical Physics, Reviews in Mathematical Physics.
Conference proceedings on: Advances in Dynamical Systems and Quantum Physics, Biolelectronics, Are There Quantum Jumps? - and On the Present Status of Quantum Mechanics, Chaos - the Interplay
Between Stochastic and Deterministic Behavior, Differential Geometric Methods in Theoretical Physics, Foundations of Modern Physics, Group Theoretical Methods in Physics, General Relativity,
Infinite Dimensional Geometry, Non Commutative Geometry, Operator Algebras, Fundamental Interactions, Mysteries, Puzzles and Paradoxes in Quantum Mechanics, Nonlinear, Deformed and Irreversible
Quantum Systems, Open Systems and Measurement in Relativistic Quantum Theory, Quantum Communications and Measurement, Quantum Future, Quantum Groups, Quantum Theory of Particles and Fields,
Spinors, twistors, Clifford algebras and quantum deformations, Stochastic Processes, Physics and Geometry, Stochasticity and Quantum Chaos, Supersymmetry and Supergravity, Superunification and
Extra Dimensions.
For a complete list, see the List of Publications.
``On Conformal Infinity and Compactifications of the Minkowski Space'', Advances in Applied Clifford Algebras, DOI: 10.1007/s00006-011-0285-5, 2011.
``The Theory of Kairons'', Advances in Applied Clifford Algebra, vol 19 (2009), p. 63-82
``Quantum Fractals on n-spheres,'' A. Jadczyk, Advances in Applied Clifford Algebra, vol 17 no 2, December 2006.
``Piecewise Deterministic Quantum Dynamics and Quantum Fractals on the Poincare Disk," A. Jadczyk, Reports on Mathematical Physics, Vol 54 No 1 (2004).
``Fundamental Geometric Structures for the Dirac Equation In General Relativity," D. Canarutto, A. Jadczyk, Acta Appl. Math. 51 No 1 (1998) 59-92.
``Time of Events in Quantum Theory," Ph. Blanchard, A. Jadczyk, Helv. Phys. Acta 69 (1996) 613-635.
``Particle Tracks, Events and Quantum Theory," A. Jadczyk, Progr.Theor.Phys. 93 (1995), 631-646.
``Topics In Quantum Dynamics," A. Jadczyk, in Infinite Dimensional Geometry, Non Commutative Geometry, Operator Algebras, Fundamental Interactions, Ed. R. Coquereaux Et Al, World Scientific,
Singapore 1995, p. 57-91.
``Born's Reciprocity in the Conformal Domain," A. Jadczyk, in Wroclaw 1992, Proceedings, Spinors, twistors, Clifford algebras and quantum deformations, p. 129-140.
``Differential and Integral Geometry of Grassmann Algebras,'' R. Coquereaux, A. Jadczyk, D. Kastler, Rev. Math. Phys. 3 (1991), p. 63-100.
``Conformal Theories, Curved Phase Spaces, Relativistic Wavelets and the Geometry of Complex Domains," R. Coquereaux, A. Jadczyk, Rev.Math.Phys. 2, (1990), p. 1-44.
``Graded Lie--Cartan Pairs. 2. The Fermionic Differential Calculus," A. Jadczyk, D. Kastler, Ann. Phys. 179 (1987) p. 169-200.
``Geometry of Multidimensional Universes," R. Coquereaux, A. Jadczyk, Commun. Math. Phys. 90 (1983) 79-100.
``Conservation Laws and Stringlike Matter Distributions,'' A. Jadczyk, Ann. Inst. H. Poincaré 28 (1983) p. 99-111.
``Superspaces and Supersymmetries," A. Jadczyk, K. Pilch, Commun. Math. Phys. 78 (1981) p. 373-390.
``On Some Groups of Automorphisms of Von Neumann Algebras with Cyclic and Separating Vector,'' A. Jadczyk, Commun. Math. Phys. 13 (1969) p. 142-153.
``Quantum Fractals: From Heisenberg's Uncertainty to Barnsley's Fractality'', World Scientific, 2014, ISBN-10: 9814569860
``Riemannian Geometry, Fiber Bundles, Kaluza-Klein Theories And All That,'' R. Coquereaux, A. Jadczyk, World Scientific, Singapore (1988) 345 p. (World Scientific Lecture Notes in Physics, 16)
Courses on: Algebra and Geometry (including computer assisted linear algebra), Mathematical Analysis (including computer assisted Calculus), Differential equations (ordinary and partial),
Riemannian Geometry, Relativity (special and general), Quantum Mechanics, Methods of Mathematical Physics: (including: measure and probability, Fourier, Laplace and other integral transforms,
functional analysis, differential equations, generalized functions, analytic functions, etc.; also: computer assisted), Mathematical Foundations of Quantum Theory
Humboldt Foundation Award for research: 1995
Polish Ministry of Science and Education for scientific achievements: 1970, 1976, 1985, 1990, 1996
Polish Academy of Sciences for scientific achievements: 1972,1973
Polish Ministry of Science and Education for excellent teaching: 1980
First Award of the Student Association for excellent teaching: 1983, 1984, 1985.
Fig.1 The computation proceeds according to this scheme (Cooley-Tukey).
Fig.2 These twiddle factors are used. Fig.3 Time-domain data is the input.
How is FFT computed in the butterflies
• I wrote some PDF documents to explain how the computation progresses in an eight-point FFT.
Why is FFT constructed like butterflies
• I wrote some PDFs to show how the FFT takes the patterns of angles into account.
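To make the butterfly progression concrete, here is a small Python sketch (written for this page, not taken from the PDFs) of a radix-2 decimation-in-time FFT. With eight input points the recursion unfolds into exactly the three butterfly stages shown in the documents, and it is checked against a direct DFT:

```python
import cmath

def fft(x):
    """Radix-2 decimation-in-time FFT (Cooley-Tukey) for power-of-two lengths.

    Each stage combines pairs of partial results in 2-point "butterflies"
    using twiddle factors W_N^k = exp(-2j*pi*k/N).
    """
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])           # recurse on even-indexed samples
    odd = fft(x[1::2])            # recurse on odd-indexed samples
    out = [0j] * n
    for k in range(n // 2):
        w = cmath.exp(-2j * cmath.pi * k / n)   # twiddle factor
        out[k] = even[k] + w * odd[k]           # butterfly, top output
        out[k + n // 2] = even[k] - w * odd[k]  # butterfly, bottom output
    return out

def dft(x):
    """Direct O(N^2) DFT, used only to verify the FFT result."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]
```

For an eight-point input, three levels of recursion correspond to the three columns of butterflies in the usual diagram.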
e-mail : iwata@digitalfilter.com
PHP Math: Essential Functions with Examples - FlatCoding
PHP math may sound like a pretty simple thing, but I assure you it's a set of tools that will totally change how you handle numbers in your projects.
You don't have to be the next Einstein in math, and these functions go far beyond "add" and "subtract." They are robust, flexible, and handle just about any calculation you throw at them. Let's start with some
basics, then get into some functions that will have you thinking, "Whoa, PHP can do that?"
Basic Arithmetic
Okay, let's start with the easy stuff. When you think of fundamental operations in arithmetic, I'm sure you're thinking of addition, subtraction, multiplication, and division. These are the things
you've been doing since you first learned numbers, but now you're doing it with code in PHP.
That's the beauty of it: in PHP, whether you're replacing a calculator or writing a simple script that does the work for you, basic arithmetic can be done quite easily.
Want to add two numbers? Here's what that looks like:
$total = 5 + 10;
echo $total; // Outputs 15
Easy, right? Just replace the symbol (+, -, *, or /) based on what you're trying to achieve. Yet still, there's one key notation most programmers keep in mind: PHP does multiplication and division
before addition and subtraction unless you've used parentheses.
It's just like PEMDAS all over again. But if you want to make your answer absolute, use parentheses to make your math super clear:
$result = (5 + 10) * 2;
echo $result; // Outputs 30
Now you're not just adding and multiplying; you're dictating how PHP sees the math. It's a minor detail, but it’s one that will save you a headache down the line.
Modulus - Finding What's Left Over
Here's a fun one you might not use every day, but when you need it, it’s a lifesaver: modulus (%). Modulus tells you the remainder when one number is divided by another. Think about it—this is
perfect for checking if numbers are even or odd.
$number = 15;
echo $number % 2; // Outputs 1, meaning it's odd
If $number % 2 is 0, your number's even. If it's 1, it's odd. Why does that matter? Say you're coding something that needs to act on every other item. Modulus makes that kind of task much
easier and tidier.
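For instance, striping alternate table rows is a classic modulus trick. The snippet below is a made-up illustration (including the class names), not part of any library:

```php
<?php
// Give every other table row a different CSS class using the index
$rows = ["Alice", "Bob", "Carol", "Dave"];
foreach ($rows as $i => $name) {
    $class = ($i % 2 == 0) ? "even-row" : "odd-row";
    echo "<tr class=\"$class\"><td>$name</td></tr>\n";
}
```

Rows 0 and 2 get `even-row`, rows 1 and 3 get `odd-row`, with no special-casing needed.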
Exponents with pow() - Beyond Basic Math
If you want to raise a number to a particular power, PHP has just the thing for you: pow(). Suppose you want to square something or cube it. No problem.
Here's how you would raise 2 to the power of 3—so, 2^3:
echo pow(2, 3); // Prints out 8
Need to square something? You could use pow(), but seriously, if you're squaring, just multiply the number by itself. Sometimes, it's just easier:
$square = 4 * 4;
echo $square; // Outputs 16
Yep, that's all you need to know for exponents. Easy, right?
Square Roots - Going Back to Basics
Another super common calculation is square roots—especially if you're working with distances or areas. PHP's got a function, called sqrt(), for just this thing:
echo sqrt(16); // Outputs 4
But suppose you want a cube root or some other root. You can do this with pow() by just using a fraction:
echo pow(27, 1/3); // Outputs 3, which works out to be the cube root of 27
This trick keeps your code flexible because you can calculate all kinds of roots without having a special function for each one.
Rounding - Keeping Your Numbers Clean
Now, what if you've got a number like 4.6 and you need to clean it up? PHP gives you a few options here: round(), ceil(), and floor(). Here's what each does:
• round() gets you the closest whole number.
• ceil() always rounds up.
• floor() always rounds down.
Let’s look at round():
echo round(4.6); // Outputs 5
If you want to make certain your number goes up no matter what, use ceil():
echo ceil(4.2); // Outputs 5
And for floor, i.e., rounding down, here's floor():
echo floor(4.9); // Outputs 4
That's ideal for cleaning up numbers, such as prices or scores, so they look nice.
Random Numbers
Random numbers are a bit exciting, aren't they? PHP's rand() function lets you create randomness in your code: perfect for games, lotteries, and any feature in your
application where you want an element of surprise. Let's make it spit out a number between 1 and 10:
echo rand(1, 10); // Outputs a random number between 1 and 10
Just set the minimum and maximum, and PHP will choose a random number in that range.
Absolute Values - Going Positive
Sometimes you just want the positive version of a number, right? Perhaps you are dealing with distances, or you simply want to make a negative sign go away. That's what abs() is for:
echo abs(-10); // Outputs 10
Simple yet a lifesaver when you need only positive values.
Getting into Trigonometry
If you're working on a project that involves angles—say graphics or some kind of rotation—you’re going to be using PHP's sine, cosine, and tangent functions: sin(), cos(), and tan(). Fair warning:
all of those take their input in radians, so you may want to convert from degrees using deg2rad():
echo sin(deg2rad(30)); // Outputs 0.5
This setup is pretty flexible for everything, from building something visual to getting into more complex calculations.
Min and Max - Finding Extremes
Need to find the highest or lowest number in a set? PHP's min() and max() functions make this easy:
$values = [3, 7, 10, 2];
echo min($values); // Outputs 2
echo max($values); // Outputs 10
Such functions come in handy when you want to do quick comparisons of numbers.
Constants - Your Built-in Math Helpers
PHP already has some constants ready for use. Some of them are M_PI for pi (3.14159…) and M_E for Euler's number (2.718…). Rather than writing out the full value of pi every time, you can use M_PI,
which keeps things accurate and easy to read:
echo M_PI; // Outputs 3.1415926535898
These constants are helpful for any scientific or geometric calculation where precision matters.
Logarithms and Exponentials - Going a Little Further
To do calculations that involve exponential growth or decay, you have log() for natural logarithms and exp() for exponentials. Want the natural log of 100? Here's how:
echo log(100); // Outputs about 4.605
Want to raise e to a power? Try exp():
echo exp(1); // Outputs e, approximately 2.718
Ideal for financial or scientific calculations.
Hypotenuse - The Pythagorean Shortcut
Need to calculate the hypotenuse? PHP's hypot() makes it easy as pie:
echo hypot(3, 4); // Outputs 5
This will be a perfect shortcut for distance calculations, especially when working with grids or maps.
Wrapping Up
PHP features a great number of math functions beyond basic addition, subtraction, multiplication, and division—it's more like a toolkit for computation. From simple things like addition to somewhat
more advanced functions, such as exponentials, square roots, and trigonometric functions, PHP offers you full power over everything that involves numbers.
Whether you need to round numbers to clean up prices, generate random values for a game, or use constants for precision, these functions will help you save time and strengthen your code. Once you get
used to them, it’s like second nature—even the most complex math seems trivial.
So dive in, experiment a bit, and let PHP's math functions do the heavy lifting. You might be surprised at what you are able to do with just a few lines of code.
Frequently Asked Questions (FAQs)
• What are the basic arithmetic operators in PHP?
• How do I calculate exponents in PHP?
• How can I find the square root of a number?
• What’s the modulus operator, and when should I use it?
• How do I round numbers in PHP?
• How do I generate a random number?
• What are some common math constants in PHP?
• How do I find the absolute value of a number?
• Can I perform trigonometric calculations in PHP?
• How do I calculate the natural logarithm and exponentials?
• How can I calculate the hypotenuse of a right triangle?
Prove qualitatively that the compression reinforcement will increase the deformation capacity (i.e. ductility) of any RC section?
1. When compression reinforcement is provided, it increases the bond strength between the concrete and the reinforcement.
This increase in bond strength prevents movement of the concrete.
If the reinforcement provided on the compression side and the tension side is equal, shrinkage deflection is prevented because the concrete shrinks equally on the top and bottom faces.
Adding 2% compression reinforcement reduces the deflection by roughly 45-50%.
2. Compression reinforcement also helps increase the shear strength of the beam, and the bars act as hanger bars for the stirrups.
3. A doubly reinforced beam will have a lower overall depth, which also reduces deflection since it reduces the dead weight of the beam.
Power system harmonics are a distortion of the normal electrical current waveform, generally caused by non-linear loads. Switch-mode power supplies (SMPS), Variable Frequency Drives (VFD),
electrical motors, photocopiers, personal computers, laser printers, fax machines, battery chargers and UPSs are examples of non-linear loads. Single-phase non-linear loads are prevalent in modern
office buildings and offshore projects, while three-phase non-linear loads are widespread in factories and industrial plants.
A large portion of the non-linear electrical load on most electrical distribution systems comes from SMPS equipment. For example, all computer systems use an SMPS that converts utility AC voltage to
regulated low-voltage DC for internal electronics. These non-linear power supplies draw current in high-amplitude short pulses that create significant distortion in the electrical current and voltage
wave shape: harmonic distortion, measured as Total Harmonic Distortion (THD). The distortion travels back into the power source and can affect other equipment connected to the same source (Bose, 1997).
Most power systems can accommodate a certain level of harmonic currents, but will experience problems when harmonic currents flow through the power system; they can cause communication errors,
overheating and hardware damage such as:
• Overheating of electrical distribution equipment, cables, transformers, standby generators, etc.
• High voltages and circulating currents caused by harmonic resonance
• Equipment malfunctions due to excessive voltage distortion
• Increased internal energy losses in connected equipment, causing component failure and shortened life span
• Generator failures
2. Method for the Harmonic Analysis
The currents absorbed by power electronics loads, which are not perfectly sinusoidal, are decomposed into their harmonic components by means of Fourier analysis. Once the absorbed power at the base
frequency (60 Hz in this floating offshore unit) is given, and consequently the 1st harmonic (60 Hz) current is known, the currents at the higher frequencies can be given as a percentage of the 1st
harmonic current. The phase shift between each higher harmonic and the 1st one is also needed for the studies (it is also obtained by Fourier analysis) in case the superposition of multiple harmonic
sources is to be represented (Arrillage et al., 1997).
The harmonic content of a waveform is typically called the harmonic spectrum and consists of a table with the values of the amplitude and phase angle for each single harmonic.
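As a numerical sketch, such a spectrum can be computed directly from the discrete Fourier coefficients of one period of the sampled waveform. The waveform below is synthetic, with made-up harmonic amplitudes, chosen only to show that the amplitudes and phases are recovered:

```python
import math, cmath

def harmonic_spectrum(samples, max_rank):
    """Return {rank: (amplitude, phase_deg)} for one period of a waveform.

    'samples' holds equally spaced points over exactly one fundamental
    period, so rank 1 corresponds to the base frequency (60 Hz here).
    """
    n = len(samples)
    spec = {}
    for h in range(1, max_rank + 1):
        # Discrete Fourier coefficient for rank h
        c = sum(samples[t] * cmath.exp(-2j * math.pi * h * t / n)
                for t in range(n)) * 2 / n
        spec[h] = (abs(c), math.degrees(cmath.phase(c)))
    return spec

# Synthetic 6-pulse-like current: fundamental plus 5th and 7th harmonics
n = 256
wave = [math.cos(2 * math.pi * t / n)
        + 0.20 * math.cos(2 * math.pi * 5 * t / n)
        + 0.14 * math.cos(2 * math.pi * 7 * t / n) for t in range(n)]
spec = harmonic_spectrum(wave, 10)   # recovers amplitudes 1.0, 0.20 and 0.14
```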
For typical and balanced loads, the only significant harmonic levels (harmonic orders) are the ones with h = 1, 5, 7, 11, ... or h = 6·m ± 1, where h is the ratio between the frequency of each
harmonic and the base frequency (i.e. h = 5 means F = 300 Hz, when the base frequency is 60 Hz). Even harmonics are normally not produced by power electronic loads; neither are the harmonics that are
multiples of 3 (3, 6, 9, ...), since they correspond to zero-sequence current systems.
The harmonics h = 1, 7, 13, ..., 6·m+1, ... correspond to positive-sequence current systems, while the harmonics with h = 5, 11, 17, ..., 6·m−1, ... correspond to negative-sequence current systems.
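The rank-to-sequence rule above can be stated compactly in code (a small helper written for illustration, valid for balanced three-phase systems):

```python
def sequence_of(h):
    """Sequence system of harmonic order h in a balanced three-phase network.

    h % 3 == 0 -> zero sequence      (3, 9, 15, ...)
    h % 3 == 1 -> positive sequence  (1, 7, 13, ... = 6*m + 1)
    h % 3 == 2 -> negative sequence  (5, 11, 17, ... = 6*m - 1)
    """
    return ("zero", "positive", "negative")[h % 3]

# Characteristic harmonics of a 6-pulse converter: h = 6*m +/- 1
characteristic = [6 * m + s for m in range(1, 4) for s in (-1, 1)]
# i.e. [5, 7, 11, 13, 17, 19]
```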
The 1st harmonic voltages and currents in the system are obtained by means of a normal 60 Hz load-flow calculation. The higher-order harmonic currents of the harmonic loads are then considered as
current sources and are injected, for each harmonic order, into a network model that represents the system response at the corresponding frequency. A linear and direct calculation at each harmonic level
(independently) is then performed, allowing one to obtain the current distribution for that harmonic in all the branches and the voltage amplitude for that harmonic at all the busbars.
The system representation can be refined by including the skin effect in the resistance and reactance of cables and rotating machines. This allows much more realistic results to be obtained.
Typically, the considered harmonic level ranges from h = 1 to h = 49. Higher harmonic orders are usually neglected, both because they are usually very small and because the system
response at such high frequencies depends upon non-linear effects (skin effect, etc.) that are to some extent unpredictable; in any case the resistive effects are usually very high and prevent dangerous amplifications.
The injected current amplitude in general decreases for higher values of h, but the possible presence of resonance frequencies in the system must be carefully considered. In fact, at some frequency
the system response (that is, finally, the system impedance seen from the injection point) could have a peak with a very high value (IEC 61000-3-2, 2014).
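The danger can be illustrated with a toy frequency scan (all component values below are hypothetical): a source inductance in parallel with a power-factor-correction capacitance produces an impedance peak near the parallel-resonance rank, so even a small injected current at that rank causes a large harmonic voltage:

```python
import math

def z_seen(h, l_src, c_pfc, r_damp, f1=60.0):
    """Impedance seen from the injection point at harmonic order h.

    Model: source inductance (with a small series damping resistance)
    in parallel with a power-factor-correction capacitance.
    """
    w = 2 * math.pi * f1 * h
    z_l = complex(r_damp, w * l_src)          # inductive branch
    z_c = complex(0.0, -1.0 / (w * c_pfc))    # capacitive branch
    return z_l * z_c / (z_l + z_c)            # parallel combination

# Hypothetical values placing the parallel resonance near rank 7
l_src, c_pfc, r_damp = 1e-3, 1.5e-4, 0.05
mags = {h: abs(z_seen(h, l_src, c_pfc, r_damp)) for h in range(1, 25)}
peak_rank = max(mags, key=mags.get)           # impedance peak at resonance
```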
3. Model for Harmonic Studies
The model for these harmonic studies uses the electrical configuration of the Offloading scenario of the offshore unit. This configuration of the electrical network is shown in the electrical
network load-flow study.
The Booster Compressor and one (1) Thruster are simultaneously supplied from three (3) Gas Generators.
The only difference comes from the fact that all loads (damping terms) were removed. Only the involved power supplies and harmonic sources were left active, i.e. the VFD of the booster and the VFD of
the Thruster.
All energy of Case 1 comes from the 3 Gas Generators coupled at 13.8 kV level.
The examination of the voltage THD at total no load is intended to obtain the ultimate highest prospective values of the voltage THD itself and its harmonic ranks. Incidentally, this also makes the study
almost entirely 'universal' regarding situations / scenarios where variable speed might be used. This pessimistic assumption is also intended to cover the synchronous generators supplying these
harmonic loads: the ratio "Number of Winding Turns of the main Field Winding" / "Number of Winding Turns of the Stator Winding" is generally in a range between 20 and 30.
Consequently, a 1 Volt pulse applied to the stator may become a 20 to 30 Volt pulse at the field (rotor) winding level, resulting in a di-electrical stress.
The voltage THD that is obtained is maximized because of the total absence of damping terms coming from linear loads, which are inhibited here (Elgerd et al., 1998).
3.1. Case 1: Harmonic Study for Low Load Condition
The considered technology for this VFD is AFE (Active Front End) with a switched input bridge.
Harmonic cancellation by mitigation is more easily obtained by multiplexed energy consumption on the three phases than by subtraction of harmonic consumption between regularly phased
secondary windings.
The considered technology for this VFD is a common 24 pulse without a switched input bridge (Diode Front End). This choice is guided by the two following reasons:
• The variable speed of thrusters is generally obtained with a long time experienced technology.
• The driven power is relatively weak and so have a low impact in terms of distortion.
Consequently, the 24 pulse choice is the best compromise:
A low requirement in terms of harmonic content on any source, even a sensitive synchronous generator, combined with a long-experienced technology. Figure 1.
The booster VFD is modelled as follow;
The following spectrum has been used (obtained from a worldwide supplier catalogue). It has to be noticed that ranks over 25 (i.e. ranks 29 to 41) were artificially emphasized with a 0.11 % value in
order to bring the global THD up to rank 41 equal to 5 %, 5 % being the ultimate THD value in current absorption of the AFE technology. Figure 2.
Power factor 0.96 is the intrinsic power factor of the electronic chopping unit, while 0.92 in above VFD dialog box is there to take into account the presence of 4 % line reactor. The presence of
padding ranks 29 to 41 is visible in above screen shot for voltages close to +1 and -1.
The Thruster VFD is modelled in Figure 3.
The following IEEE spectrum has been used (obtained in with embedded library of Power Tools). It has to be noticed that ranks over 25 (i.e. ranks 29 to 41) were artificially emphasized with
decreasing values in the range of 0.9 % to 0.4 %. This is destined to bring the global THD up to rank 41 equal to 4 %. 4 % being the ultimate THD value in current absorption of the 24 pulse
technology. Figure 4.
Power factor 0.8 is the original power factor of the electronic chopping unit as originally librarized, while 0.92 (in place of usual 0.96) in above VFD dialog box is there to take into account the
presence of 4 % line reactor. The efficiency is lowered to take into account the presence of special transformers with multiple secondary windings. The presence of ranks 29 - 41 is visible in the above screenshot.
3.2. Case 2: Harmonic Study for High Load Condition
The model for this harmonic study uses a fictitious, stringent configuration where one thruster is run from a single essential diesel generator. This could be the case during
commissioning / troubleshooting or startup. Figure 5.
In this case, only 1 Essential Diesel Generator located at 00-9200-EH-221 Switchboard level (6.6 kV) will supply a 2.5 MW thruster (2.8 MW) motor located at 00-9200-EH-223 Switchboard level (6.6 kV).
This case being also studied at total no load (for previously listed reasons) can be made more interesting and can deliver more information.
This is the reason why the 3 Gas Generators are left active and isolated on their own 13.8 kV level, still supplying the 17.3 MW Booster Compressor.
Finally this case will embed two studies:
1. Main Study : 1 Thruster supplied from 1 Essential Diesel Generator at 6.6 kV level
2. Auxiliary Study : 1 Booster supplied from 3 Gas Turbine Generators at 13.8 kV level.
The voltage THD that is obtained is maximized because of the total absence of damping terms coming from linear loads which are inhibited here. Figure 6.
4. Simulation Results
The harmonic simulation study has been performed with the maximum number of VFDs connected and addresses the specific harmonic distortion specification to consider in the Variable Frequency Drive
selection with regard to electrical power quality. The calculation results provide the maximum acceptable spectra for each harmonic rank according to the system frequency response.
The objective of minimizing the weight and space associated with standard harmonic filters led to the selection of high-frequency bridges and at least 12-pulse input transformers (Kim et al., 2014). The
calculation results of this study open a large possible choice of VFD technologies and vendors. However, the simulation is to be repeated with the selected and guaranteed spectra during the execution phase
with the vendor, to validate the selected technology in association with the driven machine and its speed range of operation.
The simulation results for each case are shown in Figures 7 and 8.
All IEC 61000-3-6 limits are greater than 1.1 %. Figures 7 (b) and 8 (b) are close-ups on the IEC 61000-3-6 harmonic ranks (IEC 61000-3-6, 2008).
It gives the recommended practice of electric power system engineering for offshore units to control the harmonic distortion which might otherwise degrade electric power quality and safety. Each
load condition is to be used as a guideline in the design of power systems with non-linear loads. The limits set are for steady-state operation and are recommended for "worst-case" conditions (Gönen).
By comparison between the low load power condition (Case 1) and the high load power condition (Case 2), one can see that the VFD of one thruster (6.6 kV level) has a negligible influence at the 13.8 kV level. This
is due to the 20 MVA transformer reactance, whose value is 0.14 pu.
The 13.8 kV voltage THD remains unchanged: 1.6 %
The subtransient short circuit power is slightly lower at the 6.6 kV level when the comparison is made between one 20 MVA transformer and one Essential Diesel Generator.
When supplied from the transformer, the voltage THD is 2 %.
When supplied from the diesel generator the voltage THD is 2.6 %.
All voltage THD are remaining frankly below the authorized 8 % long term limit of the IEC 61000-3-6.
All prospective voltage harmonic ranks are also below their own individual limit recommended by IEC 61000-3-6.
This is remarkable keeping in mind these harmonic studies were achieved without damping terms that would come from supplied motors in parallel with harmonic loads.
The choice of an AFE technology for the booster is the best technical choice aboard a ship in terms of harmonic flow, global weight, surface and heat dissipation.
The choice of the 24-pulse technology has been proven to keep the harmonic ranks below their authorized limits and to reduce the "voltage spikes" stress applied to the essential diesel generator.
The simulation results for each load condition can be applied to the design of the electric power system of an offshore unit.