Velocity and Acceleration | JustToThePoint
Doing your best means never stop trying.
The best place to find a helping hand is at the end of your own arm, Swedish Proverb.
Definition. A vector $\vec{AB}$ is a geometric object or quantity that has both magnitude (or length) and direction. Vectors in an n-dimensional Euclidean space can be represented as coordinate vectors in a Cartesian coordinate system.
Definition. The magnitude of the vector $\vec{A}$, also known as its length or norm, is the square root of the sum of its squared components, $|\vec{A}|$ or $||\vec{A}|| = \sqrt{a_1^2+a_2^2+a_3^2}$, e.g., $||⟨3, 2, 1⟩|| = \sqrt{3^2+2^2+1^2}=\sqrt{14}$, $||⟨3, -4, 5⟩|| = \sqrt{3^2+(-4)^2+5^2}=\sqrt{50}=5\sqrt{2}$, and $||⟨1, 0, 0⟩|| = \sqrt{1^2+0^2+0^2}=1$.
• The sum of two vectors is computed component-wise: $\vec{A}+\vec{B} = (a_1+b_1)\vec{i}+(a_2+b_2)\vec{j}+(a_3+b_3)\vec{k} = ⟨a_1+b_1, a_2+b_2, a_3+b_3⟩$.
• The subtraction of two vectors is similar to addition and is also done component-wise, by subtracting their corresponding components (x, y, and z in 3D): $\vec{A} - \vec{B} = (a_1-b_1)\vec{i}+(a_2-b_2)\vec{j}+(a_3-b_3)\vec{k} = ⟨a_1-b_1, a_2-b_2, a_3-b_3⟩$.
• Scalar multiplication is the multiplication of a vector by a scalar, a real number, changing its magnitude without altering its direction. It is effectively multiplying each component of the
vector by the scalar value, $c\vec{A} = (ca_1)\vec{i}+(ca_2)\vec{j}+(ca_3)\vec{k} = < ca_1, ca_2, ca_3>$.
• The dot or scalar product is a fundamental operation between two vectors. It produces a scalar quantity related to the projection of one vector onto another. The dot product is the sum of the products of their corresponding components: $\vec{A}·\vec{B} = \sum a_ib_i = a_1b_1 + a_2b_2 + a_3b_3$, e.g., $⟨2, 2, -1⟩·⟨5, -3, 2⟩ = 2·5+2·(-3)+(-1)·2 = 10-6-2 = 2$.
It is the product of their magnitudes multiplied by the cosine of the angle between them, $\vec{A}·\vec{B}=||\vec{A}||·||\vec{B}||·cos(θ).$
• The cross product, denoted by $\vec{A}\times\vec{B}$, is a binary operation on two vectors in three-dimensional space. It results in a vector that is perpendicular to both input vectors and has a magnitude equal to the area of the parallelogram formed by them.
The cross product $\vec{A}\times\vec{B}$ can be computed using the following formula: $\vec{A}\times\vec{B} = det(\begin{smallmatrix}i & j & k\\ a_1 & a_2 & a_3\\ b_1 & b_2 & b_3\end{smallmatrix}) =|\begin{smallmatrix}a_2 & a_3\\ b_2 & b_3\end{smallmatrix}|\vec{i}-|\begin{smallmatrix}a_1 & a_3\\ b_1 & b_3\end{smallmatrix}|\vec{j}+|\begin{smallmatrix}a_1 & a_2\\ b_1 & b_2\end{smallmatrix}|\vec{k}$
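As a quick check of the cofactor formula above, the following Python sketch (helper names are ours, not from the original text) computes the cross product of the two vectors used in the dot-product example and verifies that the result is perpendicular to both inputs:

```python
def cross(a, b):
    """Cross product of two 3-vectors, via the cofactor expansion."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x*y for x, y in zip(a, b))

A = (2, 2, -1)
B = (5, -3, 2)
C = cross(A, B)
# C is perpendicular to both inputs, so both dot products vanish.
print(C, dot(C, A), dot(C, B))  # (1, -9, -16) 0 0
```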
• Matrices offer an efficient way to solve systems of linear equations, letting us represent them more concisely and conveniently.
For a system like
$(\begin{smallmatrix}2 & 3 & 3\\ 2 & 4 & 4\\ 1 & 1 & 2\end{smallmatrix}) (\begin{smallmatrix}x_1\\ x_2\\x_3\end{smallmatrix}) = (\begin{smallmatrix}u_1\\ u_2\\u_3\end{smallmatrix})$, the notation A · X = U is more convenient and concise: A is the 3 × 3 matrix of coefficients, X is the 3 × 1 column vector of unknowns, and U is the 3 × 1 column vector of results. The product AX is formed by taking the dot product of each row of A (a 3 × 3 matrix) with the column vector X (a 3 × 1 matrix).
• Given a square matrix A (a matrix with an equal number of rows and columns), an inverse matrix A^-1 exists if and only if A is non-singular, meaning its determinant is non-zero (det(A) ≠ 0). The inverse matrix of A, denoted as A^-1, has the property that when multiplied by A, it results in the identity matrix, i.e., $A \times A^{-1} = A^{-1} \times A = I$. Essentially, multiplying a matrix by its inverse reverses (“undoes”) the effect of the original matrix.
• Consider a system of linear equations represented in matrix form as: AX = B, where A is a n×n matrix (coefficient matrix), X is an n×1 matrix (column vector of variables), and B is an n×1 matrix
(column vector of constants).
Solving a system of linear equations expressed as AX = B involves isolating the matrix X when we are given both matrices A and B. AX = B ⇒ [provided A is non-singular, meaning its determinant is non-zero, det(A) ≠ 0, we can multiply both sides by A^-1] ⇒ X = A^-1B, where $A^{-1} = \frac{1}{\text{det}(A)} \times \text{adj}(A)$, adj(A) = C^T with $C_{ij} = (-1)^{i+j} \times \text{minor}(A_{ij})$, and, for a 3×3 matrix, det(A) = a(ei − fh) − b(di − fg) + c(dh − eg).
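The adjugate formula can be turned into a small worked example. The following Python sketch (pure standard library; the helper names and the right-hand side U = (1, 2, 1) are our own choices for illustration) inverts the 3 × 3 matrix from the system shown earlier and solves AX = U:

```python
def det3(M):
    a, b, c = M[0]; d, e, f = M[1]; g, h, i = M[2]
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

def inv3(M):
    """Inverse of a 3x3 matrix via A^-1 = adj(A)/det(A); fails if singular."""
    D = det3(M)
    if D == 0:
        raise ValueError("matrix is singular")
    a, b, c = M[0]; d, e, f = M[1]; g, h, i = M[2]
    # adj(A) = C^T: the transposed cofactor matrix, written out directly.
    return [[(e*i - f*h)/D, (c*h - b*i)/D, (b*f - c*e)/D],
            [(f*g - d*i)/D, (a*i - c*g)/D, (c*d - a*f)/D],
            [(d*h - e*g)/D, (b*g - a*h)/D, (a*e - b*d)/D]]

def matvec(M, v):
    return [sum(M[r][k]*v[k] for k in range(3)) for r in range(3)]

A = [[2, 3, 3], [2, 4, 4], [1, 1, 2]]
U = [1, 2, 1]               # assumed right-hand side, for illustration
X = matvec(inv3(A), U)      # X = A^-1 U
print(det3(A), X)           # 2 [-1.0, 0.0, 1.0]
```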
Definition and Properties of Vectors in Motion
Definition. A position vector indicates the position or location of a point relative to an arbitrary reference point. For a point P in space, this is typically written as:
$\vec{OP} =\vec{r}(t) = ⟨x(t), y(t), z(t)⟩$.
For example, consider a cycloid traced by a point on a wheel rolling along the x-axis at unit speed. If the wheel’s radius is 1 and the time t corresponds to the angle the wheel has rotated, then $\vec{r}(t) = ⟨t - sin(t), 1 - cos(t)⟩$.
Velocity is a vector quantity that measures the rate of change of position. It points in the direction of motion, has both magnitude and direction, and is given by the derivative of the position vector:
$\vec{v}(t) = \frac{d\vec{r}}{dt} = ⟨\frac{dx}{dt}, \frac{dy}{dt}⟩$, e.g., for our cycloid example, $\vec{v}(t) = ⟨1-cos(t), sin(t)⟩$.
The magnitude of the velocity vector, or speed, is: $|\vec{v}| = \sqrt{(1-cos(t))^2 + sin^2(t)} = \sqrt{1 -2cos(t) + cos^2(t)+ sin^2(t)} = \sqrt{2 -2cos(t)} $
Acceleration is the derivative of the velocity vector with respect to time: $\vec{a}(t) = \frac{d\vec{v}}{dt}$. For our cycloid example, $\vec{a}(t) = ⟨sin(t), cos(t)⟩$. At t = 0, we have $\vec{v}(0) = \vec{0}$ and $\vec{a}(0) = ⟨0, 1⟩$. This indicates that the point is momentarily at rest and then accelerates upwards.
The arc length is the distance traveled by an object from one point to another along a curve. Since the speed of a moving object is the length of its velocity vector, the distance the object travels from t = a to t = b is the integral of $|\vec{r}\,'(t)|$ over the interval [a, b]; for one arch of our cycloid, $\int_{a}^{b} |\frac{d\vec{r}}{dt}|dt = \int_{a}^{b} |\vec{v}(t)|dt = \int_{0}^{2π} \sqrt{2 - 2cos(t)}\,dt$.
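Using the half-angle identity, $\sqrt{2 - 2\cos t} = 2|\sin(t/2)|$, this integral evaluates to 8 — the classical length of one cycloid arch. A simple midpoint-rule approximation in Python confirms it (the step count n is an arbitrary accuracy choice):

```python
import math

def speed(t):
    # |v(t)| = sqrt(2 - 2 cos t) for the unit cycloid
    return math.sqrt(2 - 2*math.cos(t))

def arc_length(a, b, n=100000):
    """Midpoint-rule approximation of the arc-length integral."""
    h = (b - a) / n
    return h * sum(speed(a + (k + 0.5)*h) for k in range(n))

L = arc_length(0, 2*math.pi)
print(L)  # ≈ 8, the exact length of one cycloid arch
```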
Definition. The unit tangent vector is a vector that points in the direction of the tangent to a curve at a given point and has a magnitude or length of 1. The unit tangent vector is used to describe
the direction of motion along a curve and can be used to calculate the derivative of a curve at a given point, $\vec{T} = \frac{\vec{v}}{|\vec{v}|}$.
Therefore, the velocity vector can be expressed as $\vec{v}=\frac{d\vec{r}}{dt}$ =[The Chain Rule] $\frac{d\vec{r}}{ds}·\frac{ds}{dt}$
Since s is the arc length, $|\vec{v}|=\frac{ds}{dt}⇒ \vec{v}= \vec{T}·\frac{ds}{dt}$. The velocity vector is the product of the unit tangent and the speed, and $\vec{T}(s)=\frac{d\vec{r}}{ds}$, that
is, the derivative of the position vector with respect to the arc length.
To sum up, velocity, being a vector, has both a magnitude and a direction. Its direction is always tangent to the object’s motion or trajectory. Its length is the speed, $|\vec{v}|=\frac{ds}{dt}$. In words, the speed is the magnitude of the velocity vector and is also the derivative of the arc length with respect to time.
When we approximate the change in position $\Delta \vec{r}$ by $\vec{T}·\Delta s$ (Figures i and ii), the approximation improves as $\Delta t → 0$. Dividing by $\Delta t$ gives $\frac{\Delta \vec{r}}{\Delta t} ≈ \vec{T}·\frac{\Delta s}{\Delta t}$, and taking the limit as $\Delta t → 0$ yields $\frac{d\vec{r}}{dt}=\vec{T}·\frac{ds}{dt}$ — the previous formula, where the approximation becomes an equality.
Kepler’s Laws and Planetary Motion
Kepler's first law states that all planets orbit the Sun in an elliptical path with the Sun located at one of the two foci of the ellipse. This means that the distance between a planet and the Sun
changes as the planet moves along its orbit.
Kepler’s second law states that planets move in planes, and the imaginary line joining a planet and the Sun (or the radius vector) sweeps out equal areas of space in equal intervals of time as the
planet orbits the sun.
This means that planets do not move with constant speed along their orbits. Rather, their speed varies so that the line joining the centers of the Sun and the planet sweeps out equal areas in equal
intervals of times, (Figure iii):
1. When a planet is closer to the Sun, it moves faster.
2. When a planet is farther from the Sun, it moves slower.
Newton’s Explanation Using Vector Calculus
Isaac Newton later developed a theory, using the principles of Calculus and his laws of motion and universal gravitation, to explain why planets follow Kepler’s laws.
The cross product of two vectors $\vec{u}$ and $\vec{v}$ results in a vector whose magnitude is equal to the area of the parallelogram formed by $\vec{u}$ and $\vec{v}$. Hence, we can use the cross product to compute the area of a triangle formed by three points A, B and C in space: Area of △ABC = $\frac{1}{2}|\vec{AB}\times\vec{AC}|$ (Figure iv).
To apply this to planetary motion, consider the area swept by the radius vector $\vec{r}$ (from the Sun to the planet) in a small time interval Δt. This area can be approximated by half the area of the parallelogram formed by $\vec{r}$ and the change in the position vector $Δ\vec{r}$: Area ≈ $\frac{1}{2}|\vec{r} \times \Delta\vec{r}|$ (area swept by the planet in a small interval of time $\Delta t$) ≈ [for small Δt, $Δ\vec{r} ≈ \vec{v}Δt$] $\frac{1}{2}|\vec{r} \times \vec{v}\Delta t| = \frac{1}{2}|\vec{r} \times \vec{v}|\Delta t$. Kepler’s second law implies that $|\vec{r} \times \vec{v}|$ must be constant, because the area swept out in equal time intervals is constant.
Furthermore, since planets move in planes, the plane of motion contains both $\vec{r}$ and $\vec{v}$. The cross product $\vec{r}\times\vec{v}$ is perpendicular to this plane. Given that $|\vec{r} \times \vec{v}|$ is constant, $\vec{r} \times \vec{v}$ is a constant vector. This can be expressed mathematically as:
$\frac{d}{dt}(\vec{r} \times \vec{v}) = 0$.
Using the product rule for differentiation: $\frac{d\vec{r}}{dt}\times\vec{v} + \vec{r} \times \frac{d\vec{v}}{dt} = 0$.
Since $\frac{d\vec{r}}{dt} = \vec{v}$ and $\frac{d\vec{v}}{dt} = \vec{a}$: $\vec{v}\times\vec{v}+\vec{r}\times\vec{a} = 0$.
Note that $∀\vec{v}, \vec{v}\times\vec{v}=\vec{0}$, because the cross product of any vector with itself is zero (geometrically, its magnitude is the area of the parallelogram formed by the two vectors, and a vector spans no area with itself).
Therefore, $\vec{r}\times\vec{a} = \vec{0}$.
The cross product of two vectors is perpendicular to both of them, and it is zero precisely when the two vectors are parallel.
Since $\vec{r}\times\vec{a} = \vec{0}$, the acceleration vector is parallel to $\vec{r}$.
$\vec{a} = \frac{d\vec{v}}{dt} || \vec{r}$. Gravitational acceleration is the acceleration an object experiences when gravity is the only force acting on it. Gravitational acceleration is directed
towards the Sun (along $\vec{r}$).
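Newton’s conclusion can also be checked numerically: integrate a toy inverse-square orbit and watch the z-component of $\vec{r} \times \vec{v}$ stay constant. The Python sketch below uses made-up units with GM = 1 and an arbitrary initial state; a kick-drift-kick (leapfrog) scheme is chosen because, for a central force, each kick changes $\vec{v}$ parallel to $\vec{r}$ and each drift changes $\vec{r}$ parallel to $\vec{v}$, so angular momentum is preserved up to rounding:

```python
import math

def accel(r):
    # Inverse-square gravitational acceleration toward the origin, GM = 1.
    d = math.hypot(r[0], r[1])
    return (-r[0]/d**3, -r[1]/d**3)

def leapfrog(r, v, dt, steps):
    """Kick-drift-kick integration; returns the angular momentum history."""
    hist = []
    for _ in range(steps):
        a = accel(r)
        v = (v[0] + 0.5*dt*a[0], v[1] + 0.5*dt*a[1])   # half kick
        r = (r[0] + dt*v[0],     r[1] + dt*v[1])       # drift
        a = accel(r)
        v = (v[0] + 0.5*dt*a[0], v[1] + 0.5*dt*a[1])   # half kick
        hist.append(r[0]*v[1] - r[1]*v[0])             # z-component of r x v
    return hist

# Elliptical orbit: r = (1, 0), v = (0, 1.2)  =>  |r x v| = 1.2 throughout.
L = leapfrog((1.0, 0.0), (0.0, 1.2), 1e-3, 20000)
print(min(L), max(L))  # both ≈ 1.2: equal areas in equal times
```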
Solved exercises
• The position of a particle is given by x(t) = t^3 − 12t^2 + 36t + 18, for t > 0. Determine the times at which the particle changes direction, and the intervals of time during which the particle is slowing down.
To analyze the motion of the particle, we first need to find its velocity v(t) and acceleration a(t).
Velocity v(t) is the derivative of the position function x(t) with respect to time t: $v(t) = \frac{dx}{dt} = 3t^2 -24t + 36.$
Acceleration a(t) is the derivative of the velocity function v(t) with respect to time t: $a(t) = \frac{dv}{dt} = 6t -24.$
A particle changes direction when its velocity v(t) -it describes the rate of change of the particle’s position with respect to time- is zero (this implies that the particle momentarily stops before
reversing its direction), and its acceleration a(t) is non-zero at that point.
If the acceleration is non-zero when the velocity is zero, it means that there is a force acting on the particle, causing its velocity to change. This non-zero acceleration ensures that the particle
doesn’t remain stationary but instead starts moving in the opposite direction.
Set the velocity equal to zero and solve for t: $3t^2 -24t + 36 = 0 ⇒[\text{Divide the equation by 3 to simplify}] 0 = t^2 -8t + 12⇒[\text{Factor the quadratic equation:}] 0 = (t -6)(t-2)$. So, the
solutions are t = 6 or t = 2.
Next, check the acceleration at these points to ensure it is non-zero: a(t) = 0 ↭ 6t = 24 ↭ t = 4. Since the acceleration is non-zero at both t = 2 and t = 6, the particle changes direction at these times.
A particle is slowing down when its velocity and acceleration have opposite signs.
A particle is slowing down if its speed (the magnitude of its velocity) is decreasing over time. This occurs when the direction of acceleration is opposite to the direction of velocity. If velocity
and acceleration are in the same direction, the particle speeds up because the acceleration adds to the velocity. If velocity and acceleration are in opposite directions, the acceleration subtracts
from the velocity, reducing its magnitude (speed).
To determine these intervals, we need to analyze the signs of v(t) and a(t) in different intervals. v(t) = (t -2)(t -6), so velocity changes sign at t = 2 and t = 6. a(t) = 0 ↭ t = 4, so acceleration
changes sign at t = 4.
Now, let’s summarize the signs of velocity and acceleration in different intervals:
Time        Velocity   Acceleration
0 < t < 2      +           −
2 < t < 4      −           −
4 < t < 6      −           +
t > 6          +           +
From the table, we see that the particle is slowing down when 0 < t < 2 and 4 < t < 6, because in these intervals, the velocity and acceleration have opposite signs (Figure B).
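The sign table can be reproduced programmatically. A small Python check (the sample point inside each interval is our own choice):

```python
def v(t): return 3*t**2 - 24*t + 36   # velocity
def a(t): return 6*t - 24             # acceleration

# One sample point inside each interval; record the sign of v and a there.
samples = {"0<t<2": 1, "2<t<4": 3, "4<t<6": 5, "t>6": 7}
signs = {name: (v(t) > 0, a(t) > 0) for name, t in samples.items()}

# The particle slows down exactly where the two signs differ.
slowing = [name for name, (sv, sa) in signs.items() if sv != sa]
print(slowing)  # ['0<t<2', '4<t<6']
```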
• Calculate the velocity and acceleration vectors at t = -1 of the position function $\vec{r}(t)=(2t-2)\vec{i}+(t^2+t-1)\vec{j}$.
The position vector $\vec{r}(t)$ represents the location of a point in space at time t: $\vec{r}(t)=(2t-2)\vec{i}+(t^2+t-1)\vec{j}$. Here, $\vec{i}$ and $\vec{j}$ are the unit vectors in the x and y directions respectively.
For t = -1: $\vec{r}(-1)=(2·(-1)-2)\vec{i}+((-1)^2+(-1)-1)\vec{j} = -4\vec{i}-\vec{j} = ⟨-4, -1⟩.$
The velocity vector $\vec{v}(t)$ is the derivative of the position vector $\vec{r}(t)$ with respect to time t: $\vec{v}(t)=\frac{d\vec{r}}{dt} = \frac{d}{dt}[(2t-2)\vec{i}+(t^2+t-1)\vec{j}] = 2\vec{i} + (2t+1)\vec{j} = ⟨2, 2t+1⟩$. It describes how fast the position is changing and in which direction.
For t = -1; $\vec{v}(-1) = ⟨2, 2·(-1)+1⟩ = ⟨2, -1⟩.$
The acceleration vector $\vec{a}(t)$ is the derivative of the velocity vector $\vec{v}(t)$ with respect to time: $\vec{a}(t) = \frac{d\vec{v}}{dt} = \frac{d}{dt}[2\vec{i} + (2t+1)\vec{j}] = 2\vec{j}
= ⟨0, 2⟩$. It describes how fast the velocity is changing and in which direction.
Notice that the acceleration vector is constant and does not depend on t: $\vec{a}(-1) = ⟨0, 2⟩$.
• Calculate the velocity and acceleration vectors at t = 0 and t = 2 of the position function $\vec{r}(t)=(t^2-4)\vec{i}+t\vec{j}$ (Illustration A)
Eliminate the parameter (t) to express the equation in terms of x and y: x = t^2-4, y = t ⇒ x = y^2 -4 is a parabola with its vertex at (0, -4).
The position vector $\vec{r}(t)$ indicates the position of a point in space at time t: $\vec{r}(t)=(t^2-4)\vec{i}+t\vec{j}$. Here, $\vec{i}$ and $\vec{j}$ are the unit vectors in the x and y directions respectively.
For t = 0: $\vec{r}(0)=(0^2-4)\vec{i}+0\vec{j} = ⟨-4, 0⟩.$
For t = 2: $\vec{r}(2)=(2^2-4)\vec{i}+2\vec{j} = 0\vec{i}+2\vec{j} = ⟨0, 2⟩.$
The velocity vector $\vec{v}(t)$ is the derivative of the position vector $\vec{r}(t)$ with respect to time t: $\vec{v}(t)=\frac{d\vec{r}}{dt} = \frac{d}{dt}[(t^2-4)\vec{i}+t\vec{j}] = (2t)\vec{i} +
\vec{j} = ⟨2t, 1⟩$. It describes how fast the position is changing and in which direction.
For t = 0; $\vec{v}(0) = ⟨2·0, 1⟩ = ⟨0, 1⟩.$
For t = 2; $\vec{v}(2) = ⟨2·2, 1⟩ = ⟨4, 1⟩.$
The acceleration vector $\vec{a}(t)$ is the derivative of the velocity vector $\vec{v}(t)$ with respect to time: $\vec{a}(t) = \frac{d\vec{v}}{dt} = \frac{d}{dt}[(2t)\vec{i} + \vec{j}] = 2\vec{i} =
⟨2, 0⟩$. It describes how fast the velocity is changing and in which direction.
Notice that the acceleration vector is constant and does not depend on t. $\vec{a}(0) = \vec{a}(2) = ⟨2, 0⟩$.
For the given position vector, at t = 0, the object is at position ⟨−4, 0⟩, moving vertically upwards with a velocity ⟨0, 1⟩, and has a constant acceleration ⟨2, 0⟩ in the horizontal direction.
At t = 2, the object is at position ⟨0, 2⟩, moving in the direction ⟨4, 1⟩ (mostly horizontally), and still has the same constant horizontal acceleration ⟨2,0⟩.
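The analytic derivatives in this exercise can be cross-checked against a central-difference approximation. A Python sketch (the step size h and helper names are our own choices):

```python
def r(t):
    # Position r(t) = <t^2 - 4, t> from the exercise
    return (t**2 - 4, t)

def num_derivative(f, t, h=1e-6):
    """Central-difference derivative of a vector-valued function."""
    p, m = f(t + h), f(t - h)
    return tuple((a - b) / (2*h) for a, b in zip(p, m))

# Velocity at t = 2 should match the analytic <2t, 1> = <4, 1>.
vx, vy = num_derivative(r, 2.0)
print(round(vx, 6), round(vy, 6))  # 4.0 1.0
```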
• Given the acceleration of an object $\vec{a}(t)=\vec{i}+2\vec{j}+6t\vec{k}$, calculate the object’s velocity and position functions given that the initial velocity is $\vec{v}(0)=\vec{j}-\vec{k}$
and the initial position is $\vec{r}(0)=\vec{i}-2\vec{j}+3\vec{k}$
Finding the Velocity Function
To find the velocity function $\vec{v}(t)$, we need to integrate the acceleration function $\vec{a}(t)$:
$\vec{v}(t) = \int \vec{a}(t)dt =\int (\vec{i}+2\vec{j}+6t\vec{k})dt = t\vec{i}+2t\vec{j}+3t^2\vec{k} + C,$ where C is the constant of integration.
Using the initial conditions $\vec{v}(0)=\vec{j}-\vec{k}$: $\vec{v}(0) = 0·\vec{i}+2·0·\vec{j}+3·0^2\vec{k} + C$. Thus, $C = \vec{j}-\vec{k}$
Therefore, the velocity function is: $\vec{v}(t) = t\vec{i}+2t\vec{j}+3t^2\vec{k} + \vec{j}-\vec{k} = t\vec{i}+(2t+1)\vec{j}+(3t^2-1)\vec{k}$
Finding the Position Function
Analogously, we can calculate the position function by integrating the velocity function: $\vec{r}(t) = \int \vec{v}(t)dt = \int (t\vec{i}+(2t+1)\vec{j}+(3t^2-1)\vec{k})dt = \frac{t^2}{2}\vec{i}+(t^2+t)\vec{j} + (t^3-t)\vec{k} + C,$ where C is the constant of integration.
Using the initial condition $\vec{r}(0)=\vec{i}-2\vec{j}+3\vec{k}: \vec{r}(0) = \frac{0^2}{2}\vec{i}+(0^2+0)\vec{j} + (0^3-0)\vec{k} + C = C$. Thus, C = $\vec{i}-2\vec{j}+3\vec{k}$.
Therefore, the position function is $\vec{r}(t)=(\frac{t^2}{2}+1)\vec{i}+(t^2+t-2)\vec{j} + (t^3-t+3)\vec{k}$
This process illustrates how to integrate acceleration to find velocity, and then integrate velocity to find position, using the problem’s given initial conditions to determine the constants of integration.
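As a sanity check, the functions obtained above can be evaluated against the given initial conditions, and the velocity can be compared with a numerical derivative of the position. A Python sketch (the spot-check time t = 1.3 is an arbitrary choice):

```python
def r(t):
    # Position found by integrating twice: <t^2/2 + 1, t^2 + t - 2, t^3 - t + 3>
    return (t**2/2 + 1, t**2 + t - 2, t**3 - t + 3)

def v(t):
    # Velocity found by integrating acceleration once
    return (t, 2*t + 1, 3*t**2 - 1)

def a(t):
    # Given acceleration
    return (1, 2, 6*t)

# Initial conditions from the problem statement are recovered:
print(r(0))  # (1.0, -2, 3)
print(v(0))  # (0, 1, -1)

# Spot-check that v is the derivative of r with a central difference.
h, t = 1e-6, 1.3
approx = tuple((p - m) / (2*h) for p, m in zip(r(t + h), r(t - h)))
```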
• Suppose the acceleration of an object is given by: $\vec{a}(t) = 3\hat{\mathbf{i}} + 3\hat{\mathbf{k}}$ with the initial conditions $\vec{v}(0)=8\hat{\mathbf{j}}, \vec{r}(0) = \vec{0}$. We need
to find the velocity and the position vector.
The velocity vector is obtained by integrating the acceleration vector with respect to time:
$\vec{v}(t) = 3t\hat{\mathbf{i}} + 3t\hat{\mathbf{k}} + \vec{C}$. Here, $\vec{C}$ is the constant of integration. To determine $\vec{C}$, use the initial condition for the velocity:
$\vec{v}(0) = 8\hat{\mathbf{j}}⇒ \vec{C} = 8\hat{\mathbf{j}}⇒ \vec{v}(t) = 3t\hat{\mathbf{i}} + 8\hat{\mathbf{j}} + 3t\hat{\mathbf{k}}.$
The position vector is obtained by integrating the velocity vector with respect to time t.
$\vec{r}(t) = \frac{3t^2}{2}\hat{\mathbf{i}} + 8t\hat{\mathbf{j}} + \frac{3t^2}{2}\hat{\mathbf{k}} + \vec{C}$. Here, $\vec{C}$ is the constant of integration. To determine $\vec{C}$, use the initial
condition for the position:
$\vec{r}(0) = \vec{0}⇒ \vec{C} = \vec{0}⇒ \vec{r}(t) = \frac{3t^2}{2}\hat{\mathbf{i}} + 8t\hat{\mathbf{j}} + \frac{3t^2}{2}\hat{\mathbf{k}}.$
• Consider a position vector $\vec{r}(t) = ⟨x(t), y(t), 0⟩$ that has constant length, and suppose $\vec{a}(t) = c\vec{r}(t)$ where c is a non-zero constant. Demonstrate that $\vec{r}·\vec{v} = 0$ and that $\vec{r}\times\vec{v}$ is constant. Find an example of such a position vector.
Since the position vector $\vec{r}(t)$, which describes the position of a point in space as a function of time t, has constant length, the dot product $\vec{r}·\vec{r}$ remains constant. Let’s denote this constant by $c_1$: $\vec{r}·\vec{r} = c_1$.
$\frac{d}{dt}(\vec{r}·\vec{r}) =$ [using the product rule for differentiation] $\frac{d\vec{r}}{dt}·\vec{r} + \vec{r}·\frac{d\vec{r}}{dt} = \vec{v}·\vec{r}+\vec{r}·\vec{v} =$ [the dot product is commutative] $2\vec{r}·\vec{v} =$ [$\vec{r}·\vec{r} = c_1$ is constant, so $\frac{d}{dt}(\vec{r}·\vec{r}) = 0$] $0 ⇒ \vec{r}·\vec{v} = 0$. This implies that $\vec{r}$ and $\vec{v}$ are orthogonal (perpendicular).
To demonstrate that $\vec{r}\times\vec{v}$ is constant, we will prove that $\frac{d}{dt}(\vec{r}\times\vec{v}) = 0$:
$\frac{d}{dt}(\vec{r}\times\vec{v}) = \frac{d\vec{r}}{dt}\times\vec{v} + \vec{r}\times\frac{d\vec{v}}{dt} = \vec{v}\times\vec{v} + \vec{r}\times\vec{a}$
Note that $∀\vec{v}, \vec{v}\times\vec{v}=\vec{0}$, because the cross product of any vector with itself is zero (geometrically, the magnitude of the cross product is the area of the parallelogram formed by the two vectors, and a vector spans no area with itself).
$\frac{d}{dt}(\vec{r}\times\vec{v}) = \vec{r}\times\vec{a} =$ [by assumption, $\vec{a}(t) = c\vec{r}(t)$] $\vec{r}\times c\vec{r} = c(\vec{r}\times\vec{r}) = \vec{0}$. This shows that $\vec{r}\times\vec{v}$ is constant over time.
An example of a position vector that satisfies these conditions is: $\vec{r}(t) = ⟨cos(t), sin(t), 0⟩$ (Figure A).
Let’s check this example:
1. $|\vec{r}(t)| = \sqrt{cos^2(t) + sin^2(t)} = 1.$ The position vector has constant length.
2. $\vec{v}(t) = \frac{d}{dt}\vec{r}(t)= \frac{d}{dt}⟨cos(t), sin(t), 0⟩ = ⟨-sin(t), cos(t), 0⟩.$
3. The dot product $\vec{r}(t)·\vec{v}(t) = ⟨cos(t), sin(t), 0⟩·⟨-sin(t), cos(t), 0⟩ = -cos(t)sin(t)+sin(t)cos(t)+ 0 = 0.$
4. The cross product $\vec{r}(t)\times\vec{v}(t) = \Bigl \vert\begin{smallmatrix}\hat{\mathbf{i}} & \hat{\mathbf{j}} & \hat{\mathbf{k}}\\ cos(t) & sin(t) & 0\\ -sin(t) & cos(t) & 0\end{smallmatrix} \Bigr \vert = (cos^2(t)+sin^2(t))\hat{\mathbf{k}} = \hat{\mathbf{k}}$, a constant vector.
Therefore, the example $\vec{r}(t) = ⟨cos(t), sin(t), 0⟩$ satisfies both conditions: $\vec{r}·\vec{v} = 0$ and $\vec{r}\times\vec{v}$ is constant.
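The four checks above translate directly into code. A Python sketch that samples a few times t (the sample values are arbitrary):

```python
import math

def r(t): return (math.cos(t), math.sin(t), 0.0)
def v(t): return (-math.sin(t), math.cos(t), 0.0)

def dot(a, b):
    return sum(x*y for x, y in zip(a, b))

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

# At every sampled time: r . v = 0 and r x v = <0, 0, 1>.
for t in (0.0, 0.7, 2.1, 4.5):
    rt, vt = r(t), v(t)
    assert abs(dot(rt, vt)) < 1e-12
    cx, cy, cz = cross(rt, vt)
    assert abs(cx) < 1e-12 and abs(cy) < 1e-12 and abs(cz - 1.0) < 1e-12
```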
This content is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License and is based on MIT OpenCourseWare.
1. NPTEL-NOC IITM, Introduction to Galois Theory.
2. Algebra, Second Edition, by Michael Artin.
3. LibreTexts, Calculus. Abstract and Geometric Algebra, Abstract Algebra: Theory and Applications (Judson).
4. Field and Galois Theory, by Patrick Morandi. Springer.
5. Michael Penn, Andrew Misseldine, blackpenredpen, and MathMajor, YouTube’s channels.
6. Contemporary Abstract Algebra, Joseph, A. Gallian.
7. MIT OpenCourseWare, 18.01 Single Variable Calculus, Fall 2007 and 18.02 Multivariable Calculus, Fall 2007, YouTube.
8. Calculus Early Transcendentals: Differential & Multi-Variable Calculus for Social Sciences.
4th Grade Measurement and Data
4.MD.A.1 Know relative sizes of measurement units within one system of units including km, m, cm; kg, g; lb, oz; l, ml; hr, min, sec. Within a single system of measurement, express measurements in a
larger unit in terms of a smaller unit. Record measurement equivalents in a two-column table. For example, know that 1 ft. is 12 times as long as 1 in. Express the length of a 4 ft. snake as 48 in.
Generate a conversion table for feet and inches listing the number pairs (1, 12), (2, 24), (3, 36)...
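The two-column conversion table described in this standard can be generated mechanically; a small Python sketch (the function name is our own):

```python
def feet_to_inches_table(n):
    """Two-column conversion table (feet, inches) for 1..n feet."""
    return [(ft, ft * 12) for ft in range(1, n + 1)]

# 1 ft is 12 times as long as 1 in., so a 4 ft snake is 48 in. long.
print(feet_to_inches_table(4))  # [(1, 12), (2, 24), (3, 36), (4, 48)]
```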
What is the vertex of the parabola whose equation is y² − 36x − 22y … ? | Filo
Question asked by a Filo student: What is the vertex of the parabola whose equation is y² − 36x − 22y … ?
(Answered via a 3-minute video solution uploaded on 12/18/2022; topic: Calculus, subject: Mathematics, class: Class 12.)
eigrpd.conf — EIGRP routing daemon configuration file
The eigrpd(8) daemon implements the Enhanced Interior Gateway Routing Protocol.
The eigrpd.conf config file is divided into the following main sections:
User-defined variables may be defined and used later, simplifying the configuration file.
Global settings for eigrpd(8).
Multiple routing instances can be defined. Routing instances are defined hierarchically by address-family and then autonomous-system.
Interface-specific parameters.
Argument names not beginning with a letter, digit, or underscore must be quoted.
Additional configuration files can be included with the include keyword, for example:
include "/etc/eigrpd.sub.conf"
Macros can be defined that will later be expanded in context. Macro names must start with a letter, digit, or underscore, and may contain any of those characters. Macro names may not be reserved
words (for example, bandwidth, interface, or hello-interval). Macros are not expanded inside quotes.
For example:
address-family ipv4 {
	autonomous-system 1 {
		interface em1 {
			bandwidth $fastethernet
		}
	}
}
The same can be accomplished by specifying the bandwidth globally or within the address-family or autonomous-system declaration.
Several settings can be configured globally, per address-family, per autonomous-system and per interface. The only settings that can be set globally and not overruled are listed below.
Set the routing priority of EIGRP internal routes to prio. The default is 28.
Set the routing priority of EIGRP external routes to prio. This option may be used as a simple loop-prevention mechanism when another routing protocol is being redistributed into EIGRP. The
default is 28.
Set the routing priority of EIGRP summary routes to prio. The default is 28.
If set to no, do not update the Forwarding Information Base, a.k.a. the kernel routing table. The default is yes.
Specifies the routing table eigrpd(8) should modify. Table 0 is the default table.
Set the router ID; if not specified, the numerically lowest IP address of the router will be used.
Multiple routing instances can be defined. Routing instances are defined hierarchically by address-family and then autonomous-system.
address-family ipv4 {
	autonomous-system 1 {
		interface em0 {
		}
	}
}
Routing-instance specific parameters are listed below.
active-timeout minutes
Set the maximum time to wait before declaring a route to be in the stuck in active state. If 0 is given, the active timeout is disabled. The default value is 3; valid range is 0-65535.
address-family (ipv4|ipv6)
Specify an address-family section, grouping one or more autonomous-systems.
autonomous-system number
Specify the autonomous-system, grouping one or more interfaces. Valid range is 1-65535.
default-metric bandwidth delay reliability load mtu
Specify a default metric for all routes redistributed into EIGRP. Valid ranges are: 1-10000000 for the bandwidth, 1-16777215 for the delay, 1-255 for the reliability, 1-255 for the load and
1-65535 for the mtu.
k-values K1 K2 K3 K4 K5 K6
Set the coefficients used by the composite metric calculation. Two routers become neighbors only if their K-values are the same. For K1 and K3, the default value is 1; for K2, K4, K5 and K6 the default value is 0; valid range is 1-254.
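For illustration only, the classic composite-metric formula that these K-values feed into can be sketched as follows. This mirrors the commonly documented textbook EIGRP formula; eigrpd's internal computation, its exact scaling, and the wide-metric K6 term are not shown, so treat this as an assumption-laden sketch rather than the daemon's actual code:

```python
def classic_metric(bandwidth, delay, load=1, reliability=255,
                   k=(1, 0, 1, 0, 0)):
    """Classic EIGRP composite metric (textbook form, K1..K5 only).

    bandwidth:  lowest-link bandwidth along the path, in kbit/s
    delay:      total path delay, in tens of microseconds
    """
    k1, k2, k3, k4, k5 = k
    bw = 10**7 // bandwidth                      # inverse-scaled bandwidth
    m = k1*bw + (k2*bw) // (256 - load) + k3*delay
    if k5 != 0:                                  # K5 term only when non-zero
        m = m * k5 // (reliability + k4)
    return 256 * m

# With the default K-values this reduces to 256 * (10^7/bw + delay):
print(classic_metric(100000, 10))  # 256 * (100 + 10) = 28160
```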
maximum-hops number
Advertise as unreachable the routes with a hop count higher than specified. The default value is 100; valid range is 1-255.
maximum-paths number
Specify the maximum number of ECMP paths to be installed in the FIB for each route. The default value is 4; valid range is 1-32.
[no] redistribute (static|connected|ospf|rip|default) [metric bandwidth delay reliability load mtu]
[no] redistribute prefix [metric bandwidth delay reliability load mtu]
If set to connected, routes to directly attached networks will be announced over EIGRP. If set to static, static routes will be announced over EIGRP. If set to ospf, OSPF routes will be announced
over EIGRP. If set to rip, RIP routes will be announced over EIGRP. If set to default, a default route pointing to this router will be announced over EIGRP. It is possible to specify a network
range with prefix; networks need to be part of that range to be redistributed. By default no additional routes will be announced over EIGRP.
redistribute statements are evaluated in sequential order, from first to last. The first matching rule decides if a route should be redistributed or not. Matching rules starting with no will
force the route to be not announced. The only exception is default, which will be set no matter what, and additionally no cannot be used together with it.
It is possible to set the route metric for each redistribute rule.
variance multiplier
Set the variance used to permit the installation of feasible successors in the FIB if their metric is lower than the metric of the successor multiplied by the specified multiplier. The default
value is 1; valid range is 1-128.
Each interface can have several parameters configured individually, otherwise they are inherited. Interfaces can pertain to multiple routing instances. An interface is specified by its name.
Interface-specific parameters are listed below.
bandwidth bandwidth
Set the interface bandwidth in kilobits per second. The bandwidth is used as part of the EIGRP composite metric. The default value is 100000; valid range is 1-10000000.
delay delay
Set the interface delay in tens of microseconds. The delay is used as part of the EIGRP composite metric. The default value is 10; valid range is 1-16777215.
hello-interval seconds
Set the hello interval. The default value is 5; valid range is 1-65535 seconds.
holdtime seconds
Set the hello holdtime. The default value is 15; valid range is 1-65535 seconds.
Prevent transmission and reception of EIGRP packets on this interface.
split-horizon (yes|no)
If set to no, the split horizon rule will be disabled on this interface. This option should be used with caution since it can introduce routing loops in point-to-point or broadcast networks. The
default is yes.
summary-address address/len
Configure a summary aggregate address for this interface. Multiple summary addresses can be configured.
eigrpd(8) configuration file.
Example configuration file.
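As a sketch of how the parameters described above combine, consider the following hypothetical fragment. The interface name, addresses, and the exact nesting grammar (address-family and autonomous-system blocks) are illustrative assumptions; consult the full manual page for the authoritative syntax:

```
router-id 203.0.113.1

address-family ipv4 {
	autonomous-system 65001 {
		maximum-paths 4
		variance 2
		redistribute connected

		interface em0 {
			hello-interval 5
			holdtime 15
			split-horizon yes
			summary-address 172.16.0.0/16
		}
	}
}
```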
The eigrpd.conf file format first appeared in OpenBSD 5.9.
Simple Interest Rate in context of percentage rate
31 Aug 2024
Title: Understanding Simple Interest Rates: A Conceptual Analysis
Abstract: This article provides an in-depth examination of simple interest rates, focusing on their conceptual framework and mathematical representation. We explore the concept of percentage rate,
its relationship with simple interest, and the formula that governs this relationship.
Introduction: Simple interest is a fundamental concept in finance, referring to the interest earned on a principal amount over a specified period. In this article, we delve into the world of simple
interest rates, exploring their mathematical representation and conceptual significance.
Simple Interest Rate Formula:
I = P * r * t
• I is the simple interest
• P is the principal amount
• r is the simple interest rate (expressed as a decimal)
• t is the time period (in years)
Percentage Rate and Simple Interest: The percentage rate, often denoted by %, expresses the simple interest as a fraction of the principal per unit of time. Solving the simple interest formula for the rate yields:
r = I / (P * t)
Multiplying r by 100 gives the rate as a percentage. This result indicates that, for a given amount of interest I, the rate is inversely proportional to both the principal and the time period: the same interest earned over a longer period, or on a larger principal, implies a lower rate.
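The simple interest formula and the solved-for rate can be checked numerically. The function names below are illustrative, not part of any standard library:

```python
def simple_interest(principal, rate, years):
    """I = P * r * t, with the rate r as a decimal (5% -> 0.05)."""
    return principal * rate * years

def implied_rate(interest, principal, years):
    """Solve I = P * r * t for the rate: r = I / (P * t)."""
    return interest / (principal * years)

interest = simple_interest(1000.0, 0.05, 3)   # $1,000 at 5% for 3 years
rate = implied_rate(interest, 1000.0, 3)      # recovers the 5% rate
```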
Conclusion: In conclusion, this article has provided an analysis of simple interest rates in the context of percentage rates. The mathematical formula governing this relationship has been presented, highlighting that the rate measures the interest earned per unit of principal per unit of time. This conceptual understanding is essential for grasping the fundamental principles of finance and economics.
Cluster Analysis (3)
Advanced Clustering Methods
Luc Anselin^1
07/30/2020 (latest update)
In this chapter, we consider some more advanced partitioning methods. First, we cover two variants of K-means, i.e., K-medians and K-medoids. These operate in the same manner as K-means, but differ
in the way the central point of each cluster is defined and the manner in which the nearest points are assigned. In addition, we discuss spectral clustering, a graph partitioning method that can be
interpreted as simultaneously implementing dimension reduction with cluster identification.
As implemented in GeoDa, these methods share almost all the same options with the partitioning and hierarchical clustering methods discussed in the previous chapters. These common aspects will not be
considered again. We refer to the previous chapters for details on the common options and sensitivity analyses.
We continue to use the Guerry data set to illustrate k-medians and k-medoids, but introduce a new sample data set, spirals.csv, for the spectral clustering examples.
• Understand the difference between k-median and k-medoid clustering
• Carry out and interpret the results of k-median clustering
• Gain insight into the logic behind the PAM, CLARA and CLARANS algorithms
• Carry out and interpret the results of k-medoid clustering
• Understand the graph-theoretic principles underlying spectral clustering
• Carry out and interpret the results of spectral clustering
GeoDa functions covered
• Clusters > K Medians
□ select variables
□ MAD standardization
• Clusters > K Medoids
• Clusters > Spectral
Getting started
The Guerry data set can be loaded in the same way as before.
The spirals data set is specifically designed to illustrate some of the special characteristics of spectral clustering. It is one of the GeoDaCenter sample data sets.
To activate this data set, you load the file spirals.csv and select x and y as the coordinates (the data set only has two variables), as in Figure 1. This will ensure that the resulting layer is
represented as a point map.
The result shows the 300 points, consisting of two distinct but interwoven spirals, as in Figure 2.
We will not be needing this data set until we cover spectral clustering. For k-medians and k-medoids, we use the Guerry data set.
K Medians
K-medians is a variant of k-means clustering. As a partitioning method, it starts by randomly picking k starting points and assigning observations to the nearest initial point. After the assignment,
the center for each cluster is re-calculated and the assignment process repeats itself. In this way, k-medians proceeds in exactly the same manner as k-means. It is in fact also an EM algorithm.
In contrast to k-means, the central point is not the average (in multiattribute space), but instead the median of the cluster observations. The median center is computed separately for each
dimension, so it is not necessarily an actual observation (similar to what is the case for the cluster average in k-means).
The objective function for k-medians is to find the allocation \(C(i)\) of observations \(i\) to clusters \(h = 1, \dots k\), such that the sum of the Manhattan distances between the members of each
cluster and the cluster median is minimized: \[\mbox{argmin}_{C(i)} \sum_{h=1}^k \sum_{i \in h} || x_i - x_{h_{med}} ||_{L_1},\] where the distance metric follows the \(L_1\) norm, i.e., the
Manhattan block distance.
K-medians is often confused with k-medoids. However, there is an important difference in that in k-medoids, the central point has to be one of the observations (Kaufman and Rousseeuw 2005). We
consider k-medoids in the next section.
The Manhattan distance metric is used to assign observations to the nearest center. From a theoretical perspective, this is superior to using Euclidean distance since it is consistent with the notion
of a median as the center (Hoon, Imoto, and Miyano 2017, 16).
In all other respects, the implementation and interpretation is the same as for k-means. To illustrate the logic, a simple worked example is provided in the Appendix.
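The assignment/update iteration described above can be sketched in a few lines of plain Python. This is a toy illustration of the k-medians logic on two well-separated groups of points, not GeoDa's implementation:

```python
import statistics

def manhattan(a, b):
    # L1 (city-block) distance between two points
    return sum(abs(x - y) for x, y in zip(a, b))

def k_medians(points, centers, iters=100):
    """Alternate assignment and update: assign each point to the nearest
    center under Manhattan distance, then recompute each center as the
    coordinate-wise median of its members."""
    labels = [0] * len(points)
    dim = len(points[0])
    for _ in range(iters):
        labels = [min(range(len(centers)),
                      key=lambda h: manhattan(p, centers[h])) for p in points]
        new_centers = []
        for h in range(len(centers)):
            members = [p for p, lab in zip(points, labels) if lab == h]
            if members:
                new_centers.append(tuple(statistics.median(m[d] for m in members)
                                         for d in range(dim)))
            else:
                new_centers.append(centers[h])  # keep an empty cluster's center
        if new_centers == list(centers):
            break  # converged: medians no longer move
        centers = new_centers
    return centers, labels

# Two well-separated groups of three points each
pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
centers, labels = k_medians(pts, [(0, 0), (10, 10)])
```

Note that, as in the text, each median is computed separately per coordinate, so a center need not coincide with an actual observation.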
GeoDa employs the k-medians implementation that is part of the C clustering library of Hoon, Imoto, and Miyano (2017).
Just as the previous clustering techniques, k-medians is invoked from the Clusters toolbar. From the menu, it is selected as Clusters > K Medians, the second item in the classic clustering subset, as
shown in Figure 3.
This brings up the K Medians Clustering Settings dialog, with the Input options in the left-hand side panel, shown in Figure 4.
Variable Settings Panel
The user interface is identical to that for k-means, to which we refer for details. The main difference is that the Distance Function is Manhattan distance. In the example in Figure 4, we again
select the same six variables as before, with the Number of Clusters set to 5 and all other options left to the default settings.
Selecting Run brings up the cluster map and fills out the right-hand panel with some cluster characteristics, listed under Summary. The cluster categories are added to the Table using the variable
name specified in the dialog (default is CL, in our example we use CLme1).
Cluster results
The cluster map is shown in Figure 5. The three largest clusters (labeled in order of their size) are well-balanced, with 24, 20 and 20 observations. The two others are much smaller, at 12 and 9. Interestingly, the clusters are also geographically quite compact, except for cluster 4, which consists of four separate spatial subgroups. Cluster 2, in the south of the country, is actually fully contiguous (without imposing any spatial constraints). This is not the case for k-means.
While the grouping may seem similar to what we obtained with other methods, this is in fact not the case. In Figure 6, the cluster maps for k-means and k-medians are shown next to each other, with the labels for k-medians adjusted so as to get similar colors for each category. This highlights some of the important differences between the two methods. First of all, the size of the different “matching” clusters is not the same, nor is their geographic configuration. Considering the clusters for k-medians (with their new labels), we see that the largest cluster, with 24 observations, corresponds most closely with cluster 3 for k-means, which had 18 observations.
The closest match between the two results is for cluster 2, with only one mismatch out of 9 observations, although that cluster is much larger for k-means, with 19 observations. The worst match is
for cluster 5, where only three observations are shared by the two methods for that cluster (out of 12). For the others, there is about a 3/4 match. In other words, the two methods pick out different
patterns of similarity in the data. There is no “best” method, since each uses a different objective function. It is up to the analyst to decide which of the objectives makes most sense, in light of
the goals of a particular study.
Further insight into the characteristics of the clusters obtained by the k-medians algorithm is found in the Summary panel on the right side of the settings dialog, shown in Figure 7.
The first set of items summarizes the settings for the analysis, such as the method used, the number of clusters and the various options for initialization, standardization, etc. Next follow the
values for each of the variables associated with the median center of each cluster. These results are given in the original scale for the variables, whereas the other summary measures depend on the
standardization used. Typically, the median center values are used to interpret the type of grouping that is obtained. This is not always easy, since one has to look for systematic combinations of
variables with high or low values for the median so as to characterize the cluster.
The third set of items contains the summary statistics, using the squared difference and mean as the criterion, similar to what is used for k-means. Note that this is only for a general comparison,
since this is not the criterion used in the objective function. So, in a sense, it gives a general impression of how the k-medians results compare using the standard used for k-means. In our example,
we obtain a ratio of between to total sum of squares of 0.447, compared to 0.497 for k-means (with the default settings). This does not mean that the k-medians result is worse than that for k-means,
but it gives a sense of how it performs under a different criterion than the one it is optimized for.
The final set of summary characteristics are the proper ones for the objective of minimizing the within-cluster Manhattan distance relative to the cluster median. The total sum of the distances is
372.318. This is the sum of the distances between all observations and the overall median (using the z-standardized values for the variables). For k-medians, the objective is to decrease this value
by grouping the observations into clusters with their own medians. The within-cluster total distance is listed for each cluster. In our results, there is quite a range in these values, going from
15.97 in the smallest cluster (with only 9 observations) to 70.49 in cluster 3 (with 20 observations). Clusters 1 and 2, which are at least as large as cluster 3, have a much better fit.
This is also reflected in the average within-cluster distance results, with the smallest value of 1.77 for C5, followed by 2.61 for C1. Interestingly, the latter has about double the total distance
compared to C4, but its average is better (2.61 compared to 2.84). The averages correct for the size of the cluster and are thus a good comparative measure of fit.
The total of the within-cluster distances is 250.399, a decrease of 121.9 from the original total. As a fraction of the original total, the final result is 0.673. When comparing results for different
values of k, we would look for a bend in the elbow plot as this ratio decreases with increasing values of k.
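The summary quantities discussed above (total L1 distance to the overall median, within-cluster totals around each cluster median, and their ratio) can be sketched on toy data. This mirrors the reported measures, not GeoDa's internals:

```python
import statistics

def manhattan(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

def l1_fit(points, labels, k):
    """Return the total L1 distance to the overall median, the
    within-cluster totals around each cluster median, and the
    within/total ratio (smaller means a tighter clustering)."""
    dim = len(points[0])
    overall = tuple(statistics.median(p[d] for p in points) for d in range(dim))
    total = sum(manhattan(p, overall) for p in points)
    within = []
    for h in range(k):
        members = [p for p, lab in zip(points, labels) if lab == h]
        med = tuple(statistics.median(m[d] for m in members) for d in range(dim))
        within.append(sum(manhattan(m, med) for m in members))
    return total, within, sum(within) / total

pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
total, within, ratio = l1_fit(pts, [0, 0, 0, 1, 1, 1], 2)
```

Computing this ratio for several values of k and plotting it against k gives the elbow plot mentioned in the text.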
Options and sensitivity analysis
The variables settings panel contains all the same options as for k-means, except that initialization is always by randomization, since there is no k-means++ method for k-medians. One option that is
particularly useful in the context of k-medians (and k-medoids) is the use of a different standardization.
MAD standardization
The default z-standardization uses the mean and the variance of the original variables. Both of these are sensitive to the influence of outliers. Since the use of Manhattan distance and the median
center for clusters in k-medians already reduces the effect of such outliers, it makes sense to also use a standardization that is less sensitive to those. We considered range standardization in the
discussion of k-means. Here, we look at the mean absolute deviation, or MAD. As usual, this is selected as one of the Transformation options, as shown in Figure 8.
The resulting cluster map and summary characteristics are shown in Figures 9 and 10.
The main effect seems to be on the largest cluster, which grows from 24 to 27 observations, mostly at the expense of what was the second largest cluster (which goes from 20 to 18 observations). As a
result, none of the clusters are fully contiguous any more.
The distance measures listed in the summary show a different starting point, with a total distance sum of 490.478, compared to 372.318 for z-standardization (recall that these measures are expressed
in whatever units were used for the standardization). Therefore, the values for the within-cluster distance and their averages are not directly comparable to those using z-standardization. Only
relative comparisons are warranted.
In the end, the total within-cluster distance is reduced to 0.677 of the original total, a slightly worse result than for z-standardization. However, this does not necessarily mean that z-standardization is
superior. The choice of a particular transformation should be made within the context of the substantive research question. When no strong guidelines exist, a sensitivity analysis comparing, for
example, z-standardization, range standardization and MAD may be the best strategy.
K Medoids
The objective of the k-medoids algorithm is to minimize the sum of the distances from the observations in each cluster to a representative center for that cluster. In contrast to k-means and
k-medians, those centers do not need to be computed, since they are actual observations. As a consequence, k-medoids works with any dissimilarity matrix. If actual observations are available (as in
the implementation in GeoDa), the Manhattan distance is the preferred metric, since it is less affected by outliers. In addition, since the objective function is based on the sum of distances instead
of their squares, the influence of outliers is even smaller.^2
The objective function can thus be expressed as finding the cluster assignments \(C(i)\) such that: \[\mbox{argmin}_{C(i)} \sum_{h=1}^k \sum_{i \in h} d_{i,h_c},\] where \(h_c\) is a representative
center for cluster \(h\) and \(d\) is the distance metric used (from a dissimilarity matrix). As was the case for k-means (and k-medians), the problem is NP hard and an exact solution does not exist.
The main approach to the k-medoids problem is the so-called partitioning around medoids (PAM) algorithm of Kaufman and Rousseeuw (2005). The logic underlying the PAM algorithm consists of two stages,
BUILD and SWAP. In the first, a set of \(k\) starting centers are selected from the \(n\) observations. In some implementations, this is a random selection, but Kaufman and Rousseeuw (2005), and,
more recently Schubert and Rousseeuw (2019) prefer a step-wise procedure that optimizes the initial set. The main part of the algorithm proceeds in a greedy iterative manner by swapping a current
center with a candidate from the remaining non-centers, as long as the objective function can be improved. Detailed descriptions are given in Kaufman and Rousseeuw (2005), Chapters 2 and 3, as well
as in Hastie, Tibshirani, and Friedman (2009), pp. 515-520, and Han, Kamber, and Pei (2012), pp. 454-457. A brief outline is presented next.
The PAM algorithm for k-medoids
The BUILD phase of the algorithm consists of identifying \(k\) observations out of the \(n\) and assigning them to be cluster centers \(h\), with \(h = 1, \dots, k\). This can be accomplished by
randomly selecting the starting points a number of times and picking the one with the best (lowest) value for the objective function, i.e., the lowest sum of distances from observations to their
cluster centers.
As a preferred option, Kaufman and Rousseeuw (2005) outline a step-wise approach that starts by picking the center, say \(h_1\), that minimizes the overall sum. This is readily accomplished by taking
the observation that corresponds with the smallest row or column sum of the dissimilarity matrix. Next, each additional center (for \(h = 2, \dots, k\)) is selected that maximizes the difference
between the closest distance to existing centers and the new potential center for all points (in practice, for the points that are closer to the new center than to existing centers).
For example, given the distance to \(h_1\), we compute for each of the remaining \(n - 1\) points \(j\), its distance to all candidate centers \(i\) (i.e., the same \(n-1\) points). Consider a \
((n-1) \times (n-1)\) matrix where each observation is both row (\(i\)) and column (\(j\)). For each column \(j\), we compare the distance to the row-element \(i\) with the distance to the nearest
current center for \(j\). In the second iteration, this is simply the distance to \(h_1\), but at later iterations different \(j\) will have different centers closest to them. If \(i\) is closer to \
(j\) than its current closest center, then \(d_{j,h_1} - d_{ji} > 0\). The maximum of this difference and 0 is entered in position \(i,j\) of the matrix (in other words, negatives are not counted).
The row \(i\) with the largest row sum (i.e., the largest improvement in the objective function) is selected as the next center.
At this point, the distance for each \(j\) to its nearest center is updated and the process starts anew for \(n-2\) observations. This continues until \(k\) centers have been picked.
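The step-wise BUILD procedure just described can be sketched as follows; this is a toy illustration of the selection rule, not the optimized implementation in GeoDa:

```python
def pam_build(dist, k):
    """Step-wise BUILD initialization: the first medoid minimizes the
    total distance to all points; each further medoid i maximizes the
    summed improvement max(d(j, nearest current medoid) - d(j, i), 0)
    over all points j."""
    n = len(dist)
    medoids = [min(range(n), key=lambda i: sum(dist[i]))]
    while len(medoids) < k:
        nearest = [min(dist[j][m] for m in medoids) for j in range(n)]
        def gain(i):
            return sum(max(nearest[j] - dist[j][i], 0) for j in range(n))
        medoids.append(max((i for i in range(n) if i not in medoids), key=gain))
    return medoids

# Manhattan distance matrix for two groups of three points
pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
dist = [[abs(a[0] - b[0]) + abs(a[1] - b[1]) for b in pts] for a in pts]
medoids = pam_build(dist, 2)
```

On this toy data the first medoid is the point with the smallest row sum in the dissimilarity matrix (index 3), and the second is the point whose addition most reduces the distances of the remaining points (index 0).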
In the SWAP phase, we consider all possible pairs that consist of a cluster center \(i\) and one of the \(n - k\) non-centers, \(r\), for a possible swap, for a total of \(k \times (n-k)\) pairs \
((i, r)\).
We proceed by evaluating the change to the objective function that would follow from removing center \(i\) and replacing it with \(r\), as it affects the allocation of each other point (non-center
and non-candidate) to either the new center \(r\) or one of the current centers \(g\) (but not \(i\), since that is no longer a center). This contribution follows from the change in distance that may
occur between \(j\) and its new center. Those values are summed over all \(n - k - 1\) points \(j\). We compute this sum for each pair \(i, r\) and find the minimum over all pairs. If this minimum is
negative (i.e., it decreases the total sum of distances), then \(i\) and \(r\) are swapped. This continues until there are no more improvements, i.e., the minimum is positive.
We label the change in the objective from \(j\) from a swap between \(i\) and \(r\) as \(C_{jir}\). The total improvement for a given pair \(i, r\) is the sum over all \(j\), \(T_{ir} = \sum_j C_
{jir}\). The pair \(i, r\) is selected for a swap for which the minimum over all pairs of \(T_{ir}\) is negative, i.e., \(\mbox{argmin}_{i,r} T_{ir} < 0\).
The computational burden associated with this algorithm is quite high, since at each iteration \(k \times (n - k)\) pairs need to be evaluated. On the other hand, no calculations other than
comparison and addition/subtraction are involved, and all the information is in the (constant) dissimilarity matrix.
To compute the net change in the objective function due to \(j\) that follows from a swap between \(i\) and \(r\), we distinguish between two cases. In one, \(j\) belongs to the cluster \(i\), such
that \(d_{ji} < d_{jg}\) for all other centers \(g\). In the other case, \(j\) belongs to a different cluster, say \(g\), and \(d_{jg} < d_{ji}\). In both instances, we have to compare the distances
from the nearest current center (\(i\) or \(g\)) to the distance to the candidate point, \(r\). Note that we don’t actually have to carry out the cluster assignments, since we compare the distance
for all \(n - k - 1\) points \(j\) to the closest center (\(i\) or \(g\)) and the candidate center \(r\). All this information is contained in the elements of the dissimilarity matrix.
Consider the first case, where \(j\) is not part of the cluster \(i\), as in Figure 11. We see two scenarios for the configuration of the point \(j\), labeled \(j1\) and \(j2\). These points are
closer to \(g\) than to \(i\), since they are not part of the cluster around \(i\). We now need to check whether \(j\) is closer to \(r\) than to its current cluster center \(g\). If \(d_{jg} \leq d_
{jr}\), then nothing changes and \(C_{jir} = 0\). This is the case for point \(j1\). The dashed red line gives the distance to the current center \(g\) and the dashed green line gives the distance to
\(r\). Otherwise, if \(d_{jr} < d_{jg}\), as is the case for point \(j2\), then \(j\) is assigned to \(r\) and \(C_{jir} = d_{jr} - d_{jg}\), a negative value, which decreases the overall cost. In
the figure, we can compare the length of the dashed red line to the length of the solid green line, which designates a re-assignment to the candidate center \(r\).
When \(j\) is part of cluster \(i\), then we need to assess whether \(j\) would be assigned to \(r\) or to the next closest center, say \(g\), since \(i\) would no longer be part of the cluster
centers. This is illustrated in Figure 12, which has three options for the location of \(j\) relative to \(g\) and \(r\). In the first case, illustrated by point \(j1\), \(j\) is closer to \(g\) than
to \(r\). This is illustrated by the difference in length between the dashed green line (\(d_{j1r}\)) and the solid green line (\(d_{j1g}\)). More precisely, \(d_{jr} \geq d_{jg}\) so that \(j\) is
now assigned to \(g\). The change in the objective is \(C_{jir} = d_{jg} - d_{ji}\). This value is positive, since \(j\) was part of cluster \(i\) and thus was closer to \(i\) than to \(g\) (compare
the length of the red dashed line between \(j1\) and \(i\) and the length of the line connecting \(j1\) to \(g\)).
If \(j\) is closer to \(r\), i.e., \(d_{jr} < d_{jg}\), then we can distinguish between two cases, one depicted by \(j2\), the other by \(j3\). In both instances, the result is that \(j\) is assigned
to \(r\), but the effect on the objective differs. In the Figure, for both \(j2\) and \(j3\) the dashed green line to \(g\) is longer than the solid green line to \(r\). The change in the objective
is the difference between the new distance and the old one (\(d_{ji}\)), or \(C_{jir} = d_{jr} - d_{ji}\). This value could be either positive or negative, since what matters is that \(j\) is closer
to \(r\) than to \(g\), irrespective of how close \(j\) might have been to \(i\). For point \(j2\), the distance to \(i\) (dashed red line) was smaller than the new distance to \(r\) (solid green
line), so \(d_{jr} - d_{ji} > 0\). In the case of \(j3\), the opposite holds, and the length to \(i\) (dashed red line) is larger than the distance to the new center (solid green line). In this case,
the change to the objective is \(d_{jr} - d_{ji} < 0\).
After the value for \(C_{jir}\) is computed for all \(j\), the sum \(T_{ir}\) is evaluated. This is repeated for every possible pair \(i,r\) (i.e., \(k\) centers to be replaced by \(n-k\) candidate
centers). If the minimum over all pairs is negative, then \(i\) and \(r\) for the selected pair are exchanged, and the process is repeated. If the minimum is positive, the iterations end.
An illustrative worked example is given in the Appendix.
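The SWAP phase can be sketched directly from the description above. The version below recomputes the full cost for each candidate pair rather than accumulating the \(C_{jir}\) terms, which is equivalent but simpler to read (and correspondingly slower):

```python
def pam_swap(dist, medoids):
    """SWAP phase of PAM: repeatedly find the pair (i, r) of a medoid i
    and a non-medoid r whose exchange gives the most negative change
    T_ir in total cost; stop when no swap improves the objective."""
    n = len(dist)
    medoids = list(medoids)
    def cost(meds):
        # sum of distances from every point to its nearest medoid
        return sum(min(dist[j][m] for m in meds) for j in range(n))
    while True:
        best_t, best_pair = 0, None
        for i in medoids:
            for r in range(n):
                if r in medoids:
                    continue
                trial = [r if m == i else m for m in medoids]
                t = cost(trial) - cost(medoids)
                if t < best_t:
                    best_t, best_pair = t, (i, r)
        if best_pair is None:
            return sorted(medoids)  # no improving swap remains
        i, r = best_pair
        medoids[medoids.index(i)] = r

pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
dist = [[abs(a[0] - b[0]) + abs(a[1] - b[1]) for b in pts] for a in pts]
medoids = pam_swap(dist, [1, 2])   # both starting medoids in one group
```

Starting from the poor initial pair (1, 2), the swaps move one medoid to each of the two groups, ending at the central observations of each cluster.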
Improving on the PAM algorithm
The complexity of each iteration in the original PAM algorithm is of the order \(k \times (n - k)^2\), which means it will not scale well to large data sets with potentially large values of \(k\). To
address this issue, Kaufman and Rousseeuw (2005) proposed the algorithm CLARA, based on a sampling strategy.
Instead of considering the full data set, a subsample is drawn. Then PAM is applied to find the best \(k\) medoids in the sample. Next, the distance from all observations (not just those in the
sample) to their closest medoid is computed to assess the overall quality of the clustering.
The sampling process can be repeated for several more samples (keeping the best solution from the previous iteration as part of the sampled observations), and at the end the best solution is
selected. While easy to implement, this approach does not guarantee that the best local optimum solution is found. In fact, if one of the best medoids is never sampled, it is impossible for it to
become part of the final solution. Note that as the sample size is increased, the results will tend to be closer to those given by PAM.^4
In practical applications, Kaufman and Rousseeuw (2005) suggest using a sample size of 40 + 2k and repeating the process 5 times.^5
In Ng and Han (2002), a different sampling strategy is outlined that keeps the full set of observations under consideration. The problem is formulated as finding the best node in a graph that
consists of all possible combinations of \(k\) observations that could serve as the \(k\) medoids. The nodes are connected by edges to the \(k \times (n - k)\) nodes that differ in one medoid (i.e.,
for each edge, one of the \(k\) medoid nodes is swapped with one of the \(n - k\) candidates).
The algorithm CLARANS starts an iteration by randomly picking a node (i.e., a set of \(k\) candidate medoids). Then, it randomly picks a neighbor of this node in the graph. This is a set of \(k\)
medoids where one is swapped with the current set. If this leads to an improvement in the cost, then the new node becomes the new start of the next set of searches (still part of the same iteration).
If not, another neighbor is picked and evaluated, up to maxneighbor times. This ends an iteration.
At the end of the iteration the cost of the last solution is compared to the stored current best. If the new solution constitutes an improvement, it becomes the new best. This search process is
carried out a total of numlocal iterations and at the end the best overall solution is kept. Because of the special nature of the graph, not that many steps are required to achieve a local minimum
(technically, there are many paths that lead to the local minimum, even when starting at a random node).
To make this concept more concrete, consider the toy example used in the Appendix, which has 7 observations. To construct k=2 clusters, any pair of 2 observations from the 7 could be considered a
potential medoid. All those pairs constitute the nodes in the graph. The total number of nodes is given by the binomial coefficient \(\binom{n}{k}\). In our example, \(\binom{7}{2} = 21\).
Each of the 21 nodes in the graph has \(k \times (n - k) = 2 \times 5 = 10\) neighbors that differ only in one medoid connected with an edge. In our example, let’s say we pick (4,7) as a starting
node, as we did in the worked example. It will be connected to all the nodes that differ by one medoid, i.e., either 4 or 7 is replaced (swapped in PAM terminology) by one of the \(n - k = 5\)
remaining nodes. Specifically, this includes the following 10 neighbors: 1-7, 2-7, 3-7, 5-7, 6-7, and 4-1, 4-2, 4-3, 4-5 and 4-6. Rather than evaluating all 10 potential swaps, as we did for PAM,
only a maximum number (maxneighbor) are evaluated. At the end of those evaluations, the best solution is kept. Then the process is repeated, up to the specified total number of iterations, which Ng
and Han (2002) call numlocal.
Let’s say we set maxneighbor to 2. Consider the first step of the random evaluation in which we randomly pick the pair 4-5 from the neighboring nodes. In other words, we replace 7 in the original set by 5. Using the values from the worked example in the Appendix, we have \(T_{45} = 3\), a positive value, so this does not improve the objective function. We increase the iteration count (for maxneighbor) and pick a second random node, say 4-2 (i.e., now replacing 7 by 2). Now the value \(T_{42} = -3\) so the objective is improved to 20 - 3 = 17. Since we have reached the end of maxneighbor, we store this value as best and now repeat the process, picking a different random starting point. We continue this until we have obtained numlocal local optima and keep the best overall solution.
Based on their numerical experiments, Ng and Han (2002) suggest that no more than 2 iterations need to be pursued (i.e., numlocal = 2), with some evidence that more operations are not cost-effective.
They also suggest a sample size of 1.25% of \(k \times (n-k)\).^6
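The randomized node search of CLARANS can be sketched as follows. This is a bare-bones illustration of the numlocal/maxneighbor logic on the same toy distance matrix (parameter defaults here are arbitrary, not the values recommended by Ng and Han):

```python
import random

def clarans(dist, k, numlocal=2, maxneighbor=20, seed=0):
    """CLARANS sketch: start from a random node (a set of k medoids),
    try up to maxneighbor random single-medoid swaps, moving whenever
    the cost drops; repeat numlocal times and keep the best solution."""
    rng = random.Random(seed)
    n = len(dist)
    def cost(meds):
        return sum(min(dist[j][m] for m in meds) for j in range(n))
    best, best_cost = None, float("inf")
    for _ in range(numlocal):
        current = rng.sample(range(n), k)
        tries = 0
        while tries < maxneighbor:
            i = rng.choice(current)
            r = rng.choice([x for x in range(n) if x not in current])
            neighbor = [r if m == i else m for m in current]
            if cost(neighbor) < cost(current):
                current, tries = neighbor, 0   # move to the better node
            else:
                tries += 1
        if cost(current) < best_cost:
            best, best_cost = sorted(current), cost(current)
    return best, best_cost

pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
dist = [[abs(a[0] - b[0]) + abs(a[1] - b[1]) for b in pts] for a in pts]
medoids, c = clarans(dist, 2)
```

Because the search is randomized, the solution is a local optimum and need not match the exhaustive PAM result, although on small data it usually does.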
Both CLARA and CLARANS are large data methods, since for smaller data sizes (say < 100), PAM will be feasible and obtain better solutions (since it implements an exhaustive evaluation).
Further speedup of PAM, CLARA and CLARANS is outlined in Schubert and Rousseeuw (2019), where some redundancies in the comparison of distances in the SWAP phase are removed. In essence, this exploits
the fact that observations allocated to a medoid that will be swapped out, will move to either the second closest medoid or to the swap point. Observations that are not currently allocated to the
medoid under consideration will either stay in their current cluster, or move to the swap point, depending on how the distance to their cluster center compares to the distance to the swap point.
These ideas shorten the number of loops that need to be evaluated and allow the algorithms to scale to much larger problems (details are in Schubert and Rousseeuw 2019, 175). In addition, they provide an option to carry out the swaps for all current k medoids simultaneously, similar to the logic in k-means (this is implemented in the FASTPAM2 algorithm; see Schubert and Rousseeuw 2019).
A second improvement in the algorithm pertains to the BUILD phase. The original approach is replaced by a so-called Linear Approximative BUILD (LAB), which achieves linear runtime in \(n\). Instead
of considering all candidate points, only a subsample from the data is used, repeated \(k\) times (once for each medoid).
The FastPAM2 algorithm tends to yield the best cluster results relative to the other methods, in terms of the smallest sum of distances to the respective medoids. However, especially for large n and
large k, FastCLARANS yields much smaller compute times, although the quality of the clusters is not as good as for FastPAM2. FastCLARA is always much slower than the other two. In terms of the
initialization methods, LAB tends to be much faster than BUILD, especially for larger n and k.
The FastPAM2, FastCLARA and FastCLARANS algorithms from Schubert and Rousseeuw (2019) were ported to C++ in GeoDa from the original Java code by the authors.^7
K-medoids is invoked from the Clusters toolbar, as the third item in the classic clustering subset, as shown in Figure 13. Alternatively, from the menu, it is selected as Clusters > K Medoids.
This brings up the usual variable settings panel.
Variable Settings Panel
The variable settings panel in Figure 14 has the by now familiar layout for the input section. It is identical to that for k-medians, except for the Method selection and the associated options. In
our example with the same six variables as before (z-standardized), we use the default method of FastPAM. The default Initialization Method is LAB, with BUILD available as an alternative. In
practice, LAB is much faster than BUILD.
There are no other options to be set, since the PAM algorithm proceeds with an exhaustive search in the SWAP phase, after an initial set of medoids is selected.
Clicking on Run brings up the cluster map, as well as the usual cluster characteristics in the right-hand panel. In addition, the cluster classifications are saved to the table using the variable
name specified in the dialog (here CLmd1).
Cluster results
The cluster map in Figure 15 shows results that are fairly similar to k-medians, although different in some important respects. In essence, the first cluster changes slightly, now with 26 members,
while CL2 and CL3 swap places. The new CL2 has 21 members (compared to 20 for CL2 in k-median) and CL3 has 18 (compared to 20 for CL2 in k-median).
The two smallest clusters have basically the same size in both applications, but their match differs considerably. CL4 has no overlap between the two methods. On the other hand, CL5 is mostly the
same between the two (7 out of 9 members in common). As we have pointed out before, this highlights how the various methods pick up different aspects of multi-attribute similarity. In many
applications, k-medoids is preferred over k-means, since it tends to be less sensitive to outliers.
One additional useful characteristic of the k-medoids approach is that the cluster centers are actual observations. In Figure 16, these are highlighted for our example. Note that these observations
are centers in multi-attribute space, and clearly not in geographical space (we address this in the next chapter).
A more meaningful substantive interpretation of the cluster centers can be obtained from the cluster characteristics in Figure 17. Below the usual listing of methods and options, the observation
numbers of the cluster medoids are given. These can be used to select the corresponding observations in the table.^8 As before, the values for the different variables in each cluster center are
listed, but now these correspond to actual observations, so we can also look these up in the data table.
While these values are expressed in the original units, the remainder of the summary characteristics are in the units used for the specified standardization, z-standardized in our example. First are
the results for the within and between sum of squares. The summary ratio is 0.414, slightly worse than for k-median (but recall that this is not the proper objective function).
The point of departure for the within-cluster distances is the total distance to the overall medoid for the data. This is the observation for which the sum of distances from all other observations is
the smallest (i.e., the observation with the smallest row or column sum of the elements in the distance matrix).
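This choice of the overall medoid is easy to verify with a few lines of NumPy; the points below are illustrative, not the data set from the text.

```python
import numpy as np

# Illustrative points (not the data set from the text): four on a line plus an outlier.
pts = np.array([[0.0, 0.0], [2.0, 0.0], [3.0, 0.0], [4.0, 0.0], [9.0, 9.0]])
# Manhattan distance matrix
D = np.abs(pts[:, None, :] - pts[None, :, :]).sum(axis=2)
# the overall medoid minimizes the row (equivalently, column) sum of distances
overall_medoid = D.sum(axis=1).argmin()
```

Here the third point (index 2) has the smallest sum of distances to all other observations, so it is the overall medoid.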
In our example, the within-cluster distances to each medoid and their averages show fairly tight clusters for CL1 and CL5, and less so for the other three. The original total distance decreases from
398.5 to 265.1, a ratio of 0.665, compared to 0.673 for k-median. This would suggest a slightly greater success at reducing the overall sum of distances (smaller values are better).
Options and sensitivity analysis
The main option for k-medoids is the choice of the Method. As shown in Figure 18, in addition to the default FastPAM, the algorithms FastCLARA and FastCLARANS are available as well. The latter two
are large data methods and they will always perform (much) worse than FastPAM in small to medium-sized data sets. In our example, it would not be appropriate to use these methods, but we provide a
brief illustration anyway to highlight the deterioration in the solutions found.
In addition, there is a choice of Initialization Method between LAB and BUILD. In most circumstances, LAB is the preferred option, but BUILD is included for the sake of completeness and to allow for
a full range of comparisons.
The two main parameters that need to be specified for the CLARA method are the number of samples considered (by default set to 2) and the sample size. Since n < 100 in our example, the latter is set
to 40 + 2k = 50, as shown in Figure 19. In addition, the option to include the best previous medoids in the sample is checked.
The resulting cluster map is given in Figure 20, with associated cluster characteristics in Figure 21.
The main effect seems to be on CL4, which, although it has the same number of members as under PAM, consists of observations in totally different locations. CL1 increases its membership from 26 to 30. CL2 and CL3
shrink somewhat, mostly due to the (new) presence of the members of CL4. CL5 does not change.
As the summary characteristics show, CL3 and CL4 now have different medoids. Overall, there is a slight deterioration of the quality of the clusters, with the total within-cluster distance now 268.9
(compared to 265.1), for a ratio of 0.675 (compared to 0.665).
In this example, the use of a sample instead of the full data set does not lead to a meaningful deterioration of the results. This is not totally surprising, since the sample size of 50 is almost 59%
of the total sample size. For larger data sets with k=5, the sample size will remain at 50, thus constituting a smaller and smaller share of the actual observations as n increases.^9
To illustrate the effect of sample size, we can set its value to 30. This results in a much larger overall within sum of distances of 273.9, with a ratio of 0.687 (details not shown). On the other
hand, if we set the sample size to 85, then we obtain the exact same results as for PAM.
For CLARANS, the two relevant parameters pertain to the number of iterations and the sample rate, as shown in Figure 22. The former corresponds to the numlocal parameter in Ng and Han (2002), i.e.,
the number of times a local optimum is computed (default = 2). The sample rate pertains to the maximum number of neighbors that will be randomly sampled in each iteration (maxneighbors in the paper).
This is expressed as a fraction of \(k \times (n - k)\). We use the value of 0.025 recommended by Schubert and Rousseeuw (2019), which yields a maximum of 10 neighbors (0.025 x 400) for each
iteration in our example. Unlike PAM and CLARA, there is no initialization option.
The results are presented in Figures 23 and 24. The cluster map is quite different from the result for PAM. All but the original CL5 are considerably affected. CL1 from PAM moves southward and grows
to 32 members. All the other clusters lose members. CL2 from PAM disintegrates and drops to 16 members. CL3 shifts in location, but stays roughly the same size. CL4 moves north.
The cluster medoids listed in Figure 24 show that all cluster centers differ under CLARANS, except for CL5 (which is essentially unchanged). The total within sum of distances is 301.177, quite a bit
larger than 265.147 for PAM. Correspondingly, the ratio of within to total is much higher as well, at 0.756.
CLARANS is a large data method and is optimized for speed (especially with large n and large k). It should not be used in smaller samples, where the exhaustive search carried out by PAM can be
computed in a reasonable time.
Comparison of methods and initialization settings
In order to provide a better idea of the various trade-offs involved in the selection of algorithms and initialization settings, Figures 25 and 26 list the results of a number of experiments, for
different values of k and n.
Figure 25 pertains to a sample of 791 Chicago census tracts and a clustering of socio-economic characteristics that were reduced to 5 principal components.^10 For this number of observations,
computation time is irrelevant, since all results are obtained almost instantaneously (in less than a second).
The number of clusters is considered for k = 5, 10, 30, 77 (the number of community areas in the city), and 150. Recall that the value of k is a critical component of the sample size for CLARA (40 +
2k) and CLARANS (a percentage of \(k \times (n-k)\)). In all cases, PAM with LAB obtains the best local minimum (for k=5, it is tied with PAM-BUILD). Also, CLARANS consistently gives the worst
result. The gap with the best local optimum grows as k gets larger, from about 6% larger for k=5 to more than 20% for k=150. The results for CLARA are always in the middle, with LAB superior for k=5
and 10, and BUILD superior in the other instances.
Figure 26 uses the natregimes sample data set with 3085 observations for U.S. counties. The number of variables to be clustered was increased to 20, consisting of the variables RD, PS, UE, DV and MA
in all four years.
In this instance, compute time does matter, and significantly so starting with k=30. CLARANS is always the fastest, from instantaneous at k=30 to 1 second for k=300 and 2 seconds for k=500. PAM is
also reasonable, but quite a bit slower, with the LAB option always faster than BUILD. For k=30, the difference between the two is minimal (2 seconds vs 3 seconds), but for k=300 PAM-LAB is about
twice as fast (12 seconds vs 26 seconds), and for k=500 almost four times as fast (14 seconds vs 41 seconds). The compute time for CLARA is problematic, especially when using BUILD and for k=300 and
500 (for smaller k, it is on a par with PAM). For k=500, CLARA-BUILD takes some 6 minutes, whereas CLARA-LAB only takes somewhat over a minute.
The best local optimum is again obtained for PAM, with BUILD slightly better for k=10 and 30, whereas for the other cases the LAB option is superior. In contrast to the Chicago example, CLARANS is
not always worst and is better than CLARA for k = 5, 10 and 30, but not for k = 300 and 500.
Overall, it seems that the FastPAM approach with the default setting of LAB performs very reliably and with decent (to superior) computation times. However, it remains a good idea to carry out
further sensitivity analysis. Also, the algorithms only obtain local optima, and there is no guarantee of a global optimum. Therefore, if there is the opportunity to explore different options, they
should be investigated.
Spectral Clustering
Clustering methods like k-means, k-medians or k-medoids are designed to discover convex clusters in the multidimensional data cloud. However, several interesting cluster shapes are not convex, such
as the classic textbook spirals or moons example, or the famous Swiss roll and similar lower dimensional shapes embedded in a higher dimensional space. These problems are characterized by a property
that projections of the data points onto the original orthogonal coordinate axes (e.g., the x, y, z, etc. axes) do not create good separations. Spectral clustering approaches this issue by
reprojecting the observations onto a new axes system and carrying out the clustering on the projected data points. Technically, this will boil down to the use of eigenvalues and eigenvectors, hence
the designation as spectral clustering (recall the spectral decomposition of a matrix discussed in the chapter on principal components).
To illustrate the problem, consider the result of k-means and k-medoids (for k=2) applied to the famous spirals data set from Figure 2. As Figure 27 shows, these methods tend to yield convex
clusters, which fail to detect the nonlinear arrangement of the data. The k-means clusters are arranged above and below a diagonal, whereas the k-medoids result shows more of a left-right pattern.
In contrast, as shown in Figure 28, a spectral clustering algorithm applied to this data set perfectly extracts the two underlying patterns (initialized with a knn parameter of 3, see below for
further details and illustration).
The mathematics underlying spectral clustering views it as a problem of graph partitioning, i.e., separating a graph into subgraphs that are internally well connected, but only weakly connected with
the other subgraphs. We briefly discuss this idea first, followed by an overview of the main steps in a spectral clustering algorithm, including a review of some important parameters that need to be
tuned in practice.
An exhaustive overview of the various mathematical properties associated with spectral clustering is contained in the tutorial by von Luxburg (2007), to which we refer for technical details. An
intuitive description is also given in Han, Kamber, and Pei (2012), pp. 519-522.
Clustering as a graph partitioning problem
So far, the clustering methods we have discussed were based on a dissimilarity matrix and had the objective of minimizing within-cluster dissimilarity. In spectral clustering, the focus is on the
complement, i.e., a similarity matrix, and the goal is to maximize the internal similarity within a cluster. Of course, any dissimilarity matrix can be turned into a similarity matrix using a number
of different methods, such as the use of a distance decay function (inverse distance, negative exponential) or by simply taking the difference from the maximum (e.g., \(d_{max} - d_{ij}\)).
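The transformations mentioned above can be sketched directly in NumPy, using a small hypothetical dissimilarity matrix.

```python
import numpy as np

# Three common ways to turn a dissimilarity matrix D into a similarity matrix,
# as mentioned in the text. The matrix below is a hypothetical example.
D = np.array([[0.0, 1.0, 4.0],
              [1.0, 0.0, 2.0],
              [4.0, 2.0, 0.0]])

s_inverse = 1.0 / (1.0 + D)       # inverse-distance decay (+1 avoids division by zero)
s_exponential = np.exp(-D)        # negative exponential decay
s_complement = D.max() - D        # difference from the maximum, d_max - d_ij
```

Note that the self-similarities on the diagonal differ between the transformations; in practice, the diagonal is often set explicitly, since an observation's similarity to itself is not used in the graph.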
A similarity matrix \(S\) consisting of elements \(s_{ij}\) can be viewed as the basis for the adjacency matrix of a weighted undirected graph \(G = (V,E)\). In this graph, the vertices \(V\) are the
observations and the edges \(E\) give the strength of the similarity between \(i\) and \(j\), \(s_{ij}\). This is identical to the interpretation of a spatial weights matrix that we have seen before.
In fact, the standard notation for the adjacency matrix is to use \(W\), just as we did for spatial weights.
Note that, in practice, the adjacency matrix is typically not the same as the full similarity matrix, but follows from a transformation of the latter to a sparse form (see below for specifics).
The goal of graph partitioning is to delineate subgraphs that are internally strongly connected, but only weakly connected with the other subgraphs. In the ideal case, the subgraphs are so-called
connected components, in that their elements are all internally connected, but there are no connections to the other subgraphs. In practice, this will rarely be the case. The objective thus becomes
one of finding a set of cuts in the graph that create a partitioning of \(k\) subsets so as to maximize internal connectivity and minimize in-between connectivity. A naive application of this principle would lead the least connected vertices to become singletons (similar to what we found in single and average linkage hierarchical clustering) and all the rest to form one large cluster. Better suited partitioning methods include a weighting of the cluster size so as to end up with well-balanced subsets.^11
A very important concept in this regard is the graph Laplacian associated with the adjacency matrix \(W\). We discuss this further in the Appendix.
The spectral clustering algorithm
In general terms, a spectral clustering algorithm consists of four phases:
• turning the similarity matrix into an adjacency matrix
• computing the \(k\) smallest eigenvalues and eigenvectors of the graph Laplacian (alternatively, the \(k\) largest eigenvalues and eigenvectors of the affinity matrix can be calculated)
• using the (rescaled) resulting eigenvectors to carry out k-means clustering
• associating the resulting clusters back to the original observations
The most important step is the construction of the adjacency matrix, which we consider first.
Creating an adjacency matrix
The first phase consists of selecting a criterion to turn the dense similarity matrix into a sparse adjacency matrix, sometimes also referred to as the affinity matrix. The logic is very similar to
that of creating spatial weights by means of a distance criterion.
For example, we could use a distance band to select neighbors that are within a critical distance \(\delta\). In the spectral clustering literature, this is referred to as an epsilon (\(\epsilon\))
criterion. This approach shares the same issues as in the spatial case when the observations are distributed with very different densities. In order to avoid isolates, a max-min nearest neighbor
distance needs to be selected, which can result in a very unbalanced adjacency matrix. An adjacency matrix derived from the \(\epsilon\) criterion is typically used in unweighted form.
A preferred approach is to use k nearest neighbors, although this is not a symmetric property. Consequently, the resulting adjacency matrix is for a directed graph, since \(i\) being one of the k
nearest neighbors of \(j\) does not guarantee that \(j\) is one of the k nearest neighbors for \(i\). We can illustrate this with our toy example, using two nearest neighbors for observations 5 and 6
in Figure 29. In the left hand panel, the two neighbors of observation 5 are shown as 4 and 7. In the right hand panel, 5 and 7 are neighbors of 6. So, while 5 is a neighbor of 6, the reverse is not
true, creating an asymmetry in the affinity matrix (the same is true for 2 and 3: 3 is a neighbor of 2, but 2 is not a neighbor of 3).
Since the eigenvalue computations require a symmetric matrix, there are two approaches to remedy the asymmetry. In one, the affinity matrix is made symmetric as \((1/2) (W + W')\). In other words, if
\(w_{ij} = 1\), but \(w_{ji} = 0\) (or the reverse), a new set of weights is created with \(w_{ij} = w_{ji} = 1/2\). This is illustrated in the left-hand panel of Figure 30, where the associated
symmetric connectivity graph is shown.
Instead of a pure k-nearest neighbor criterion, so-called mutual k nearest neighbors can be defined, which consists of those neighbors among the k-nearest neighbor set that \(i\) and \(j\) have in
common. More precisely, only those connectivities are kept for which \(w_{ij} = w_{ji} = 1\). The corresponding connectivity graph for our example is shown in the right hand panel of Figure 30. The
links between 2 and 3 as well as between 5 and 6 have been removed.
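The two symmetrization rules can be expressed compactly in NumPy. The directed knn matrix `W` below is a hypothetical example, not the toy data from Figure 29.

```python
import numpy as np

# Hypothetical directed knn adjacency matrix: w[i, j] = 1 if j is among the
# k nearest neighbors of i. Note it is not symmetric (e.g., w[0, 2] != w[2, 0]).
W = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [0, 1, 0, 1],
              [0, 1, 1, 0]])

W_knn = (W + W.T) / 2          # symmetric knn: one-way links get weight 1/2
W_mutual = np.minimum(W, W.T)  # mutual knn: keep only links with w_ij = w_ji = 1
```

In the symmetric version, the asymmetric link between observations 0 and 2 is retained with weight 1/2, whereas the mutual criterion drops it altogether.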
Once the affinity matrix has been turned into a symmetric form, the resulting adjacencies are weighted with the original \(s_{ij}\) values.
The knn adjacency matrix can join points in disconnected parts of the graph, whereas the mutual k nearest neighbors will be sparser and tends to connect observations in regions with constant density.
Each has pros and cons, depending on the underlying structure of the data.
A final approach is to compute a similarity matrix that has a built-in distance decay. The most common method is to use a Gaussian density (or kernel) applied to the Euclidean distance between observations: \[s_{ij} = \exp [- \|x_i - x_j\|^2 / 2\sigma^2],\] where the standard deviation \(\sigma\) plays the role of a bandwidth. With the right choice of \(\sigma\), we can make the corresponding adjacency matrix more or less sparse.
Note that, in fact, the Gaussian transformation translates a dissimilarity matrix (Euclidean distances) into a similarity matrix. The new similarity matrix plays the role of the adjacency matrix in
the remainder of the algorithm.
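A minimal sketch of the Gaussian transformation, with illustrative points:

```python
import numpy as np

# Gaussian (heat kernel) similarity from squared Euclidean distances:
# s_ij = exp(-||x_i - x_j||^2 / (2 sigma^2)). The points are illustrative.
X = np.array([[0.0, 0.0], [1.0, 0.0], [5.0, 5.0]])
sigma = 1.0

sq_dist = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
S = np.exp(-sq_dist / (2 * sigma ** 2))
```

With this bandwidth, nearby points (the first two) retain a substantial similarity, while the similarity to the distant third point is effectively zero, which is how the choice of \(\sigma\) controls the sparsity of the implied adjacency structure.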
Clustering on the eigenvectors of the graph Laplacian
With the adjacency matrix in place, the \(k\) smallest eigenvalues and associated eigenvectors of the normalized graph Laplacian can be computed. Since we only need a few eigenvalues, specialized
algorithms are used that only extract the smallest or largest eigenvalues/eigenvectors.^12
When a symmetric normalized graph Laplacian is used, the \(n \times k\) matrix of eigenvectors, say \(U\), is row-standardized such that the norm of each row equals 1. The new matrix \(T\) has
elements:^13 \[t_{ij} = \frac{u_{ij}}{(\sum_{h=1}^k u_{ih}^2)^{1/2}}.\]
The new “observations” consist of the values of \(t_{ij}\) for each \(i\). These values are used in a standard k-means clustering algorithm to yield \(k\) clusters. Finally, the labels for the
clusters are associated with the original observations and several cluster characteristics can be computed.
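The steps just described can be pieced together into a minimal NumPy sketch. This is a simplified illustration of the algorithm, not GeoDa's implementation; the small graph is hypothetical (two triangles joined by a single weak edge), and the k-means step uses a naive deterministic seeding for reproducibility.

```python
import numpy as np

def spectral_clusters(W, k):
    # symmetric normalized graph Laplacian: L = I - D^{-1/2} W D^{-1/2}
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L_sym = np.eye(len(W)) - D_inv_sqrt @ W @ D_inv_sqrt
    # k smallest eigenvalues/eigenvectors (eigh returns ascending order)
    _, vecs = np.linalg.eigh(L_sym)
    U = vecs[:, :k]
    # row-normalize to obtain the T matrix described in the text
    T = U / np.linalg.norm(U, axis=1, keepdims=True)
    # naive k-means on the rows of T, with deterministic farthest-point seeding
    centers = T[[0]]
    for _ in range(k - 1):
        dist = ((T[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2).min(axis=1)
        centers = np.vstack([centers, T[dist.argmax()]])
    for _ in range(50):
        labels = ((T[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2).argmin(axis=1)
        centers = np.array([T[labels == j].mean(axis=0) for j in range(k)])
    return labels

# hypothetical graph: two triangles joined by a single weak edge
W = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5)]:
    W[i, j] = W[j, i] = 1.0
W[2, 3] = W[3, 2] = 0.1
labels = spectral_clusters(W, k=2)
```

Because the connection between the two triangles is weak, the second eigenvector of the Laplacian separates them cleanly, and the k-means step on the rows of \(T\) recovers the two subgraphs.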
In GeoDa, the symmetric normalized adjacency matrix is used, porting to C++ the algorithm of Ng, Jordan, and Weiss (2002) as implemented in Python in scikit-learn.
Spectral clustering parameters
In practice, the results of spectral clustering tend to be highly sensitive to the choice of the parameters that are used to define the adjacency matrix. For example, when using k-nearest neighbors,
the choice of the number of neighbors is an important decision. In the literature, a value of \(k\) (neighbors, not clusters) of the order of \(\log(n)\) is suggested for large \(n\) (von Luxburg
2007). In practice, both \(\ln(n)\) as well as \(\log_{10}(n)\) are used. GeoDa provides both as options.^14
Similarly, the bandwidth of the Gaussian transformation is determined by the value for the standard deviation, \(\sigma\). One suggestion for the value of \(\sigma\) is to take the mean distance to the k nearest neighbor, or \(\sigma \sim \log(n) + 1\) (von Luxburg 2007). Again, either \(\ln(n) + 1\) or \(\log_{10}(n) + 1\) can be implemented. In addition, the default value used in the scikit-learn implementation suggests \(\sigma = \sqrt{1/p}\), where \(p\) is the number of variables (features) used. GeoDa includes all three options.
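The three suggested values are easy to compute; for the spirals example, with n = 300 observations and p = 2 variables:

```python
import numpy as np

# The three default bandwidth options for the Gaussian kernel mentioned in
# the text, evaluated for the spirals data (n = 300, p = 2).
n, p = 300, 2
sigma_sqrt_p = np.sqrt(1.0 / p)   # scikit-learn style default, sqrt(1/p)
sigma_log10 = np.log10(n) + 1     # log10(n) + 1
sigma_ln = np.log(n) + 1          # ln(n) + 1
```

These evaluate to approximately 0.707107, 3.477121 and 6.703782, respectively.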
In practice, these parameters are best set by trial and error, and a careful sensitivity analysis is in order.
Spectral clustering is invoked from the Clusters toolbar, as the next to last item in the classic clustering subset, as shown in Figure 31. Alternatively, from the menu, it is selected as Clusters > Spectral.
We use the spirals data set, with an associated variable settings panel that only contains the variables x and y, as shown in Figure 32.
Variable Settings Panel
The variable settings panel has the same general layout as for the other clustering methods. One distinction is that there are two sets of parameters. One set pertains to the construction of the
Affinity (or adjacency) matrix; the other set is relevant for the k-means algorithm that is applied to the transformed eigenvectors. The k-means options are the standard ones.
The Affinity option provides three alternatives, K-NN, Mutual K-NN and Gaussian, with specific parameters for each. For now, we set the number of clusters to 2 and keep all options to the default
setting. This includes K-NN with 3 neighbors for the affinity matrix, and all the default settings for k-means. The value of 3 for knn corresponds to \(\log_{10}(300) = 2.48\), rounded up to the next integer.
We save the cluster in the field CLs1.
Cluster results
The cluster map in Figure 33 shows a perfect separation of the two spirals, with 150 observations each. As mentioned earlier, neither k-means nor k-medoids is able to extract these highly nonlinear
and non-convex clusters.
The cluster characteristics in Figure 34 list the parameter settings first, followed by the values for the cluster centers (the mean) for the two (standardized) coordinates and the decomposition of
the sum of squares. The ratio of between sum of squares to total sum of squares is a dismal 0.04. This is not surprising, since this criterion provides a measure of the degree of compactness of a cluster, a property that a non-convex cluster like the spirals example does not possess.
In this example, it is easy to visually assess the extent to which the nonlinearity is captured. However, in the typical high-dimensional application, this will be much more of a challenge, since the
usual measures of compactness may not be informative. A careful inspection of the distribution of the different variables across the observations in each cluster is therefore in order.
Options and sensitivity analysis
The results of spectral clustering are extremely sensitive to the parameters chosen to create the affinity matrix. Suggestions for default values are only suggestions, and the particular values may
sometimes be totally unsuitable. Experimentation is therefore a necessity. There are two classes of parameters. One set pertains to the number of nearest neighbors for knn or mutual knn. The other set
relates to the bandwidth of the Gaussian kernel, determined by the standard deviation sigma.
K-nearest neighbors affinity matrix
The two default values for the number of nearest neighbors are contained in a drop-down list, as shown in Figure 35. In our example, with n=300, \(\log_{10}(n) = 2.48\), which rounds up to 3, and \(\ln(n) = 5.70\), which rounds up to 6. These are the two default values provided. Any other value can be entered manually in the dialog as well.
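The two defaults follow directly from rounding up:

```python
import math

# The two default neighbor counts for n = 300, as in the text:
# round log10(n) and ln(n) up to the next integer.
n = 300
knn_log10 = math.ceil(math.log10(n))  # 2.48 -> 3
knn_ln = math.ceil(math.log(n))       # 5.70 -> 6
```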
The cluster map with knn = 6 is shown in Figure 36. Unlike what we found for the default option, this value is not able to yield a clean separation of the spirals. In addition, the resulting clusters
are highly unbalanced, with respectively 241 and 59 members.
The options for a mutual knn affinity matrix have the same entries, as in Figure 37.
Here again, neither of the options yields a satisfactory solution, as illustrated in Figure 38, for knn=3 in the left panel and knn=6 in the right panel. The first solution has members in both
spirals, whereas the second solution does not cross over, but it only picks up part of the separate spiral.
Gaussian kernel affinity matrix
The built-in options for sigma, the standard deviation of the Gaussian kernel, are listed in Figure 39. The smallest value of 0.707107 corresponds to \(\sqrt{1/p}\), where \(p\), the number of
variables, equals 2 in our example. The other two values are \(\log_{10}(n) + 1\) and \(\ln(n) + 1\), yielding respectively 3.477121 and 6.703782 for n=300. In addition, any other value can be
entered in the dialog.
None of the default values yield particularly good results, as illustrated in Figure 40. In the left hand panel, the clusters are shown with \(\sigma = 0.707107\). The result totally fails to extract
the shape of the separate spirals and looks similar to the results for k-means and k-medoids in Figure 27. The result for \(\sigma = 6.703782\) is almost identical, with the roles of cluster 1 and 2 switched. Both are perfectly balanced clusters (the results for \(\sigma = 3.477121\) are similar).
In order to find a solution that provides the same separation as in Figure 33, we need to experiment with different values for \(\sigma\). As it turns out, we obtain the same result as for knn with 3
neighbors for \(\sigma = 0.08\) or \(0.07\), neither of which are even close to the default values. This illustrates how in an actual example, where the results cannot be readily visualized in two
dimensions, it may be very difficult to find the parameter values that discover the true underlying patterns.
Worked example for k-medians
We use the same toy example as in the previous chapters to illustrate the workings of the k-median algorithm. For easy reference, the coordinates of the seven points are listed again in the second
and third columns of Figure 41.
The corresponding dissimilarity matrix is based on the Manhattan distance metric, i.e., the sum of the absolute differences in the x and y coordinates between the points. The result is given in Figure 42.
The median center of the data cloud consists of the median of the x and y coordinates. In our case, this is the point (6,6), as shown in Figure 41. This happens to coincide with one of the observations (4), but in general this would not be the case. The distances from each observation to the median center are listed in the fourth column of Figure 41. The sum of these distances (24) can be
used to gauge the improvement that follows from each subsequent cluster allocation.
As in the k-means algorithm, the first step consists of randomly selecting k starting centers. In our example, with k=2, we select observations 4 and 7, as we did for k-means. Figure 43 gives the
Manhattan distance between each observation and each of the centers. Observations closest to the respective center are grouped into the two initial clusters. In our example, these first two clusters
consist of 1, 2, 3, and 5 assigned to center 4, and 6 assigned to center 7.
Next, we calculate the new cluster medians and compute the total within cluster distance as the sum of the distances from each allocated observation to the median center. This is illustrated in
Figure 44. For the first cluster, the median is (4,5) and the total within cluster distance is 14. In the second cluster, the median is (8.5,7) and the total is 3 (in the case of an even number of
observations in the cluster, the midpoint between the two values closest to the median is chosen, hence 8.5 and 7). The value of the objective function after this first step is thus 14 + 3 = 17.
We repeat the procedure, calculate the distance from each observation to both cluster centers and assign the observation to the closest center. This results in observation 5 moving from cluster 1 to
cluster 2, as illustrated in Figure 45.
The new allocation results in (4,3.5) as the median center for the first cluster, and (8,6) as the center for the second cluster, as shown in Figure 46. The total distance decreases to 10 + 4 = 14.
We compute the distances to the two new centers in Figure 47 and assign the observations to the closest center. Now, observation 4 moves from the first cluster to the second cluster.
The updated median centers are (4,3) and (7.5,6) as shown in Figure 48. The total distance is 5 + 6 = 11.
Finally, computing the distances to the new centers does not induce any more changes, and the algorithm concludes (Figure 49).
The first cluster consists of 1, 2, and 3 with a median center of (4,3), the second cluster consists of 4, 5, 6, and 7 with a median center of (7.5,6). Neither of these are actual observations. The
clustering resulted in a reduction of the total sum of distances to the center from 24 to 11.
The ratio 11/24 = 0.458 gives a measure of the relative improvement of the objective function. It should not be confused with the ratio of within sum of squares to total sum of squares, since the
latter uses a different measure of fit (squared differences instead of absolute differences) and a different reference point (the mean of the cluster rather than the median). So, the two measures are
not comparable.
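The alternating assignment/update logic of the worked example can be sketched generically. The coordinates below are illustrative, not the toy example from the figures.

```python
import numpy as np

def k_medians(X, centers, n_iter=20):
    # Alternate two steps: assign each point to the closest center under
    # Manhattan distance, then update each center as the component-wise
    # median of its cluster.
    centers = centers.astype(float).copy()
    for _ in range(n_iter):
        d = np.abs(X[:, None, :] - centers[None, :, :]).sum(axis=2)  # Manhattan distances
        labels = d.argmin(axis=1)
        for j in range(len(centers)):
            if np.any(labels == j):
                centers[j] = np.median(X[labels == j], axis=0)       # component-wise median
    total = np.abs(X - centers[labels]).sum()  # objective: total L1 distance to centers
    return labels, centers, total

# illustrative data: two tight groups of three points each
X = np.array([[1.0, 1.0], [2.0, 1.0], [1.0, 2.0],
              [8.0, 8.0], [9.0, 8.0], [8.0, 9.0]])
init = X[[0, 3]]  # starting centers, analogous to picking two observations
labels, centers, total = k_medians(X, init)
```

As in the worked example, the final centers are component-wise medians and need not coincide with actual observations.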
Worked example for PAM
We continue to use the same toy example, with the point coordinates as in Figure 41 and associated Manhattan distance matrix in Figure 42.
In the BUILD phase, since \(k=2\), we need to find two starting centers. The first center is the observation that minimizes the row or column sum of the Manhattan distance matrix. In Figure 41, we saw
that the median center coincided with observation 4 (6,6), with an associated sum of distances to the center of 24. That is our point of departure. To find the second center, we need for each
candidate (i.e., the 6 remaining observations) the distance to observation 4, the current closest center (there is only one, so that is straightforward in our example). The associated values are
listed below the observation ID in the second row of Figure 50.
Next, we evaluate for each row-column combination \(i,j\) the expression \(\mbox{max}(d_{j4} - d_{ij},0)\), and enter that in the corresponding element of the matrix. For example, for column 1 and
row 2, that value is \(7 - d_{2,1} = 7 - 3 = 4\). For column 1 and row 5, the corresponding value is \(7 - 8 = -1\), which results in a zero entry.
With all the values entered in the matrix, we compute the row sums. There is a tie between observations 3 and 7. For consistency with our other examples, we pick 7 as the second starting point.
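The BUILD selection rule for the second (and any later) center can be sketched as follows. The one-dimensional five-point example at the end is hypothetical, not the distance matrix of Figure 42.

```python
import numpy as np

def build_next_center(D, centers):
    """PAM BUILD step: pick the candidate that most reduces total distance.

    D: (n, n) symmetric distance matrix; centers: list of indices already
    chosen. For each candidate i the gain from adding i is
        sum over non-center points j of max(d(j, nearest center) - d(i, j), 0),
    and the candidate with the largest gain wins."""
    D = np.asarray(D, dtype=float)
    n = D.shape[0]
    # Distance of each point to its closest current center.
    nearest = D[:, centers].min(axis=1)
    best_gain, best_i = -1.0, None
    for i in range(n):
        if i in centers:
            continue
        gain = sum(max(nearest[j] - D[i, j], 0.0)
                   for j in range(n) if j != i and j not in centers)
        if gain > best_gain:
            best_gain, best_i = gain, i
    return best_i, best_gain

# Hypothetical example: five points on a line, first center at coordinate 2.
coords = np.array([0.0, 1.0, 2.0, 10.0, 11.0])
D = np.abs(coords[:, None] - coords[None, :])
second, gain = build_next_center(D, [2])
```

For this example the rule picks the point at coordinate 10 (index 3), since adding it saves 8 units of distance for the point at coordinate 11.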
The initial stage is the same as for the k-median example, given in Figure 43. The total distance over the two cluster centers equals 20, with cluster 1 contributing 17 and cluster 2, 3.
The first step in the swap procedure consists of evaluating whether center 4 or center 7 can be replaced by any of the current non-centers, i.e., 1, 2, 3, 5, or 6. The comparison involves three
distances for each point: the distance to the closest center, the distance to the second closest center, and the distance to the candidate center. In our example, this is greatly simplified, since
there are only two centers, with distances \(d_{j4}\) and \(d_{j7}\) for each non-candidate and non-center point \(j\). In addition, we need the distance to the candidate center \(d_{jr}\), where
each non-center point is in turn considered as a candidate (\(r\) in our notation).
All the evaluations for the first step are included in Figure 51. There are five main panels, one for each current non-center point. The rows in each panel are the non-center, non-candidate points.
For example, in the top panel, 1 is considered a candidate, so the rows pertain to 2, 3, 5, and 6. The next three columns give the distance to, respectively, center 4 (\(d_{j4}\)), center 7 (\(d_{j7}\)) and candidate center 1 (\(d_{j1}\)).
Columns 5 and 6 give the contribution of each row to the objective with point 1 replacing, respectively 4 (\(C_{j41}\)) and 7 (\(C_{j71}\)). First consider a replacement of 4 by 1. For point 2, the
distances to 4 and 7 are 6 and 9, so point 2 is closest to center 4. As a result 2 will be allocated to either the new candidate center 1 or the current center 7. It is closest to the new center (3
relative to 9). The decrease in the objective from assigning 2 to 1 rather than 4 is 3 - 6 = -3, the entry in the column \(C_{j41}\).
Now consider the contribution of 2 to the replacement of 7 by 1. Since 2 is closer to 4, we have the situation that a point is not closest to the center that is to be replaced. So, now we need to
check whether it would stay with its current center (4) or move to the candidate. We already know that 2 is closer to 1 than to 4, so the gain from the swap is again -3, entered under \(C_{j71}\). We
proceed in the same way for each of the other non-center and non-candidate points and find the sum of the contributions, listed in the row labeled \(T\). For a replacement of 4 by 1, the sum of -3,
1, 1, and 0 gives -1 as the value of \(T_{41}\). Similarly, the value of \(T_{47}\) is the sum of -3, 0, 0, and 1, or -2.
The remaining panels show the results when each of the other current non-centers is evaluated as a center candidate. The minimum value over all pairs \(i,r\) is obtained for \(T_{43} = -5\). This
suggests that center 4 should be replaced by point 3 (there is actually a tie with \(T_{73}\), so in each case 3 should enter the center set; we take it to replace 4). The improvement in the overall
objective function from this step is -5.
This process is repeated in Figure 52, but now using \(d_{j3}\) and \(d_{j7}\) as reference distances. The smallest value for \(T\) is found for \(T_{75} = -2\), which is also the improvement to the
objective function (note that the improvement is smaller than for the first step, something we would expect from a gradient descent method). This suggests that 7 should be replaced by 5.
In the next step, we repeat the calculations, using \(d_{j3}\) and \(d_{j5}\) as the distances. The smallest value for \(T\) is found for \(T_{32} = -1\), suggesting that 3 should be replaced by 2.
The improvement in the objective is -1 (again, smaller than in the previous step).
In the last step, we compute everything again for \(d_{j2}\) and \(d_{j5}\). At this stage, none of the \(T\) yield a negative value, so the algorithm has reached a local optimum and stops.
The final result consists of a cluster of three elements, centered on 2, and a cluster of four elements, centered on 5. As Figure 55 shows, both clusters contribute 6 to the total sum of deviations,
for a final value of 12. This also turns out to be 20 - 5 - 2 - 1, or the total effect of each swap on the objective function.
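A compact way to sanity-check a SWAP step is to recompute the whole objective for every candidate swap. This brute-force recomputation is equivalent in outcome to summing the per-point contributions \(C_{jir}\) described above (PAM itself computes the change incrementally, which is faster); the distance matrix in the example is hypothetical.

```python
import numpy as np

def total_distance(D, centers):
    """Sum over all points of the distance to the closest center
    (a center's own contribution is zero)."""
    return float(np.asarray(D, dtype=float)[:, centers].min(axis=1).sum())

def best_swap(D, centers):
    """One SWAP step: try replacing each current center i by each non-center
    candidate r, and return the most negative change in the objective
    as (delta, i, r); delta = 0 with i = r = None means no improving swap."""
    n = len(D)
    base = total_distance(D, centers)
    best = (0.0, None, None)
    for i in centers:
        for r in range(n):
            if r in centers:
                continue
            trial = [r if c == i else c for c in centers]
            delta = total_distance(D, trial) - base
            if delta < best[0]:
                best = (delta, i, r)
    return best

# Hypothetical example: five points on a line, current centers at indices 2, 3.
coords = np.array([0.0, 1.0, 2.0, 10.0, 11.0])
D = np.abs(coords[:, None] - coords[None, :])
delta, i, r = best_swap(D, [2, 3])
```

Here the best swap replaces the center at index 2 by the point at index 1, improving the objective by 1.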
The graph Laplacian
A useful quantity associated with the adjacency matrix \(W\) of a graph is the degree of a vertex \(i\): \[d_i = \sum_j w_{ij},\] the row-sum of the similarity weights. Previously, we used an identical concept to characterize the connectivity structure of a spatial weights matrix, and referred to it as neighbor cardinality.
The graph Laplacian is the following matrix: \[L = D - W,\] where \(D\) is a diagonal matrix containing the degree of each vertex. The graph Laplacian has the property that all its eigenvalues are
real and non-negative, and, most importantly, that its smallest eigenvalue is zero.
In the (ideal) case where the adjacency matrix can be organized into separate partitions (unconnected to each other), it takes on a block-diagonal form, with each block containing the partitioning sub-matrix for the matching group. The corresponding graph Laplacian will similarly have a block-diagonal structure. Since each of these sub-blocks is itself a graph Laplacian (for the subnetwork
corresponding to the partition), its smallest eigenvalue is zero as well. An important result is then that if the graph is partitioned into \(k\) disconnected blocks, the graph Laplacian will have \(k\) zero eigenvalues. This forms the basis for the logic of using the \(k\) smallest eigenvalues of \(L\) to find the corresponding clusters.
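The claim that \(k\) disconnected components produce \(k\) zero eigenvalues is easy to verify numerically. The sketch below uses a hypothetical graph made of two disconnected triangles (NumPy assumed available):

```python
import numpy as np

# Hypothetical graph: two disconnected triangles, so the adjacency matrix W
# is block-diagonal with two blocks.
W = np.zeros((6, 6))
for a, b in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]:
    W[a, b] = W[b, a] = 1.0

D = np.diag(W.sum(axis=1))   # diagonal matrix of vertex degrees
L = D - W                    # unnormalized graph Laplacian

# L is symmetric, so its eigenvalues are real; they are also non-negative.
eigvals = np.sort(np.linalg.eigvalsh(L))
num_zero = int(np.sum(np.isclose(eigvals, 0.0)))
```

With two components, `num_zero` comes out as 2; connecting the triangles with one extra edge would drop it to 1.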
While it can be used to proceed with spectral clustering, the unnormalized Laplacian \(L\) has some undesirable properties. Instead, the preferred approach is to use a so-called normalized Laplacian.
There are two ways to normalize the adjacency matrix and thus the associated Laplacian. One is to row-standardize the adjacency matrix, or \(D^{-1}W\). This is exactly the same idea as
row-standardizing a spatial weights matrix. When applied to the Laplacian, this yields: \[L_{rw} = D^{-1}L = D^{-1}D - D^{-1}W = I - D^{-1}W.\] This is referred to as a random walk normalized graph
Laplacian, since the row elements can be viewed as transition probabilities from state \(i\) to each of the other states \(j\). As we saw with spatial weights, the resulting normalized matrix is no
longer symmetric, although its eigenvalues remain real, with the smallest eigenvalue being zero. The associated eigenvector is \(\iota\), a vector of ones.
A second transformation pre- and post-multiplies the Laplacian by \(D^{-1/2}\), the inverse of the square root of the degree matrix. This yields a symmetric normalized Laplacian: \[L_{sym} = D^{-1/2}LD^{-1/2} = D^{-1/2}DD^{-1/2} - D^{-1/2}WD^{-1/2} = I - D^{-1/2}WD^{-1/2}.\] Again, the smallest eigenvalue is zero, but the associated eigenvector is \(D^{1/2}\iota\).
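Both normalized Laplacians, and their zero-eigenvalue eigenvectors \(\iota\) and \(D^{1/2}\iota\), can be checked on a small hypothetical graph; a path on four vertices is used here because its unequal degrees make \(L_{rw}\) visibly asymmetric:

```python
import numpy as np

# Hypothetical adjacency matrix: a path graph on 4 vertices (degrees 1,2,2,1).
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
d = W.sum(axis=1)                       # vertex degrees
L = np.diag(d) - W                      # unnormalized Laplacian

L_rw = np.diag(1.0 / d) @ L             # I - D^{-1} W (random walk version)
L_sym = (np.diag(1.0 / np.sqrt(d)) @ L
         @ np.diag(1.0 / np.sqrt(d)))   # I - D^{-1/2} W D^{-1/2} (symmetric)

iota = np.ones(4)
resid_rw = L_rw @ iota                  # eigenvector iota, eigenvalue 0
resid_sym = L_sym @ np.sqrt(d)          # eigenvector D^{1/2} iota, eigenvalue 0
```

Both residual vectors are (numerically) zero, confirming the stated eigenvectors, while `L_rw` is not equal to its transpose.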
Spectral clustering algorithms differ by whether the unnormalized or normalized Laplacian is used to compute the \(k\) smallest eigenvalues and associated eigenvectors and whether the Laplacian or
the adjacency matrix is the basis for the calculation.
Specifically, as an alternative to using the smallest eigenvalues and associated eigenvectors of the normalized Laplacian, the largest eigenvalues/eigenvectors of the normalized adjacency (or
affinity) matrix can be computed. The standard eigenvalue expression is the following equality: \[Lu = \lambda u,\] where \(u\) is the eigenvector associated with eigenvalue \(\lambda\). If we
subtract both sides from \(Iu\), we get: \[(I - L)u = (1 - \lambda)u.\] In other words, if the smallest eigenvalue of \(L\) is 0, then the largest eigenvalue of \(I - L\) is 1. Moreover: \[I - L = I
- [I - D^{-1/2}WD^{-1/2}] = D^{-1/2}WD^{-1/2},\] so that the search for the smallest eigenvalue of \(L\) is equivalent to the search for the largest eigenvalue of \(D^{-1/2}WD^{-1/2}\), the
normalized adjacency matrix.
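The equivalence derived above, namely that the eigenvalues of \(L_{sym}\) and of the normalized adjacency matrix are related by \(\lambda = 1 - \mu\), can be confirmed numerically on a hypothetical graph (a star on four vertices here):

```python
import numpy as np

# Hypothetical adjacency matrix: a star with center 0 and three leaves.
W = np.array([[0, 1, 1, 1],
              [1, 0, 0, 0],
              [1, 0, 0, 0],
              [1, 0, 0, 0]], dtype=float)
d = W.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(d))

A_norm = D_inv_sqrt @ W @ D_inv_sqrt    # normalized adjacency D^{-1/2} W D^{-1/2}
L_sym = np.eye(4) - A_norm              # symmetric normalized Laplacian

mu = np.sort(np.linalg.eigvalsh(A_norm))    # eigenvalues of the adjacency side
lam = np.sort(np.linalg.eigvalsh(L_sym))    # eigenvalues of the Laplacian side

# Each lambda equals 1 - mu, so the sorted spectra mirror each other.
mirrored = np.sort(1.0 - mu)
```

For this connected graph the smallest Laplacian eigenvalue is 0 and, equivalently, the largest eigenvalue of the normalized adjacency matrix is 1.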
Anselin, Luc. 2019. “Quantile Local Spatial Autocorrelation.” Letters in Spatial and Resource Sciences 12 (2): 155–66.
Han, Jiawei, Micheline Kamber, and Jian Pei. 2012. Data Mining (Third Edition). Amsterdam: Morgan Kaufmann.
Hastie, Trevor, Robert Tibshirani, and Jerome Friedman. 2009. The Elements of Statistical Learning (2nd Edition). New York, NY: Springer.
Hoon, Michiel de, Seiya Imoto, and Satoru Miyano. 2017. “The C Clustering Library.” Tokyo, Japan: The University of Tokyo, Institute of Medical Science, Human Genome Center.
Kaufman, L., and P. Rousseeuw. 2005. Finding Groups in Data: An Introduction to Cluster Analysis. New York, NY: John Wiley.
Ng, Andrew Y., Michael I. Jordan, and Yair Weiss. 2002. “On Spectral Clustering: Analysis and an Algorithm.” In Advances in Neural Information Processing Systems 14, edited by T. G. Dietterich, S.
Becker, and Z. Ghahramani, 849–56. Cambridge, MA: MIT Press.
Ng, Raymond T., and Jiawei Han. 2002. “CLARANS: A Method for Clustering Objects for Spatial Data Mining.” IEEE Transactions on Knowledge and Data Engineering 14: 1003–16.
Schubert, Erich, and Peter J. Rousseeuw. 2019. “Faster k-Medoids Clustering: Improving the PAM, CLARA, and CLARANS Algorithms.” In Similarity Search and Applications, SISAP 2019, edited by Giuseppe
Amato, Claudio Gennaro, Vincent Oria, and Miloš Radovanović, 171–87. Cham, Switzerland: Springer Nature.
Shi, Jianbo, and Jitendra Malik. 2000. “Normalized Cuts and Image Segmentation.” IEEE Transactions on Pattern Analysis and Machine Intelligence 22 (8): 888–905.
von Luxburg, Ulrike. 2007. “A Tutorial on Spectral Clustering.” Statistics and Computing 17 (4): 395–416.
1. University of Chicago, Center for Spatial Data Science – anselin@uchicago.edu↩︎
2. Strictly speaking, k-medoids minimizes the average distance to the representative center, but the sum is easier for computational reasons.↩︎
3. As Kaufman and Rousseeuw (2005) show in Chapter 2, the problem is identical to some facility location problems for which integer programming branch and bound solutions have been suggested. But
these tend to be limited to smaller-sized problems.↩︎
4. Clearly, with a sample size of 100%, CLARA becomes the same as PAM.↩︎
5. More precisely, the first sample consists of 40 + 2k random points. From the second sample on, the best k medoids found in a previous iteration are included, so that there are 40 + k additional
random points. Also, Schubert and Rousseeuw (2019) suggest using 80 + 4k and 10 repetitions for larger data sets. In the implementation in GeoDa, the latter is used for data sets larger than 100.↩︎
6. Schubert and Rousseeuw (2019) also consider 2.5% in larger data sets with 4 iterations instead of 2.↩︎
7. The Java code is contained in the open source ELKI software, available from https://elki-project.github.io.↩︎
8. This is the approach used to obtain the selections in Figure 16.↩︎
9. As mentioned, for sample sizes > 100, GeoDa uses a sample size of 80 + 4k.↩︎
10. The data set is available as one of the GeoDa Center sample data. Details about the computation of the principal components and other aspects of the data are given in Anselin (2019).↩︎
11. For details, see Shi and Malik (2000), as well as the discussion of a “graph cut point of view” in von Luxburg (2007).↩︎
12. GeoDa uses the Spectra C++ library for large scale eigenvalue problems. Note that the particular routines implemented in this library extract the largest eigenvalues/eigenvectors from the
symmetric normalized affinity matrix, not the smallest from the graph Laplacian. The latter is the textbook explanation, but not always the most efficient implementation in practice. The
equivalence between the two approaches is detailed in the Appendix.↩︎
13. Alternative normalizations are used as well. For example, in the implementation in scikit-learn, the eigenvectors are rescaled by the inverse square root of the eigenvalues.↩︎
14. In GeoDa, the values for the number of nearest neighbors are rounded up to the nearest integer.↩︎
Find the angle between the vectors $\hat{i}-2\hat{j}+3\hat{k}$ and $3\hat{i}-2\hat{j}+\hat{k}$.
Hint: The problem asks us to find the angle between the vectors with the help of the scalar product. Assign a variable to each vector, so that one vector is $\vec{a}$ and the other is $\vec{b}$. With that in place, let's look at the approach to the problem.
Complete Step by Step Solution:
The given problem statement is to find the angle between the vectors $\hat{i}-2\hat{j}+3\hat{k}$ and $3\hat{i}-2\hat{j}+\hat{k}$.
Now, the first thing is to assume each of the vectors, that means, we will let $\hat{i}-2\hat{j}+3\hat{k}$is $\vec{a}$and $3\hat{i}-2\hat{j}+\hat{k}$ is $\vec{b}$.
So, if we write the vectors in a different manner, that means,
$\Rightarrow \vec{a}=1\hat{i}-2\hat{j}+3\hat{k}$
$\Rightarrow \vec{b}=3\hat{i}-2\hat{j}+1\hat{k}$
Now, we will use the scalar product formula, that means, we get,
$\Rightarrow \vec{a}.\vec{b}=|\vec{a}||\vec{b}|\cos \theta $ where ,$\theta $is the angle between $\vec{a}$ and $\vec{b}$.
Now, we will find$\vec{a}.\vec{b}$, that means, we get,
$\Rightarrow \vec{a}.\vec{b}=(1\hat{i}-2\hat{j}+3\hat{k}).(3\hat{i}-2\hat{j}+1\hat{k})$
Now, when we solve this above equation, we get,
$\Rightarrow \vec{a}.\vec{b}=[(1\hat{i}.3\hat{i})+(-2\hat{j}.-2\hat{j})+(3\hat{k}.1\hat{k})]$
We have kept the $\hat{i}.\hat{i}$, $\hat{j}.\hat{j}$ and $\hat{k}.\hat{k}$ terms because each of these equals 1, whereas the cross terms $\hat{i}.\hat{j}$, $\hat{j}.\hat{k}$ and $\hat{k}.\hat{i}$ all equal 0.
$\Rightarrow \vec{a}.\vec{b}=[(1.3)+(-2.-2)+(3.1)]$
$\Rightarrow \vec{a}.\vec{b}=3+4+3$
Now, when we solve, we get,
$\Rightarrow \vec{a}.\vec{b}=10$
Now, we will find the magnitude of $\vec{a}$, that means, we get,
$\Rightarrow \vec{a}=1\hat{i}-2\hat{j}+3\hat{k}$
$\Rightarrow |\vec{a}|=\sqrt{{{(1)}^{2}}+{{(-2)}^{2}}+{{(3)}^{2}}}$
Now, when we solve, we get,
$\Rightarrow |\vec{a}|=\sqrt{1+4+9}$
$\Rightarrow |\vec{a}|=\sqrt{14}$
Similarly, we will find the magnitude of $\vec{b}$, that means, we get,
$\Rightarrow \vec{b}=3\hat{i}-2\hat{j}+1\hat{k}$
$\Rightarrow |\vec{b}|=\sqrt{{{(3)}^{2}}+{{(-2)}^{2}}+{{(1)}^{2}}}$
Now, when we solve, we get,
$\Rightarrow |\vec{b}|=\sqrt{9+4+1}$
$\Rightarrow |\vec{b}|=\sqrt{14}$
Now, we will put the respective values in the formula $\vec{a}.\vec{b}=|\vec{a}||\vec{b}|\cos \theta $, we get,
$\Rightarrow 10=\sqrt{14}\sqrt{14}\cos \theta $
Now, we will rearrange the equation, we get,
$\Rightarrow \dfrac{10}{\sqrt{14}\sqrt{14}}=\cos \theta $
As we know $\sqrt{y}\sqrt{y}=y$; applying this to the equation and then reducing the fraction to lowest terms, we get,
$\Rightarrow \dfrac{10}{14}=\cos \theta $
$\Rightarrow \dfrac{5}{7}=\cos \theta $
Now when we rearrange we will get the value of $\theta $, we get,
$\Rightarrow {{\cos }^{-1}}(\dfrac{5}{7})=\theta $
After rearranging the equation, we get,
$\Rightarrow \theta ={{\cos }^{-1}}(\dfrac{5}{7})$
Therefore, the value of $\theta $or the angle between two vectors is ${{\cos }^{-1}}(\dfrac{5}{7})$.
In the above problem statement, we have used the concept of the scalar product. The scalar product is also known as the dot product. The formula used in the scalar product is $\vec{a}.\vec{b}=|\vec{a}||\vec{b}|\cos \theta $. You need to note that the scalar product is commutative as well as distributive.
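The calculation above is easy to verify numerically; the short NumPy sketch below reproduces $\cos \theta =\dfrac{5}{7}$:

```python
import numpy as np

a = np.array([1, -2, 3])   # i - 2j + 3k
b = np.array([3, -2, 1])   # 3i - 2j + k

dot = a.dot(b)                                    # 3 + 4 + 3 = 10
cos_theta = dot / (np.linalg.norm(a) * np.linalg.norm(b))
theta = np.arccos(cos_theta)                      # angle in radians
```

Both vectors have norm $\sqrt{14}$ and the dot product is 10, so the cosine of the angle is 10/14 = 5/7, matching the result above.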
Public University of Navarre
Module/Subject matter
Basic training / Mathematics.
• Functions of several variables: limits, continuity, differentiation, Taylor series and graphics.
• Multiple integration. Applications.
• Vector calculus.
• Ordinary differential equations.
Elements of differential and integral calculus in several variables. Ordinary differential equations.
General proficiencies
General proficiencies that a student should acquire in this course:
• G8 Knowledge of basic and technological subjects to have the ability to learn new methods and theories, and versatility to adapt to new situations
• G9 Problem solving proficiency with personal initiative, decision making, creativity and critical reasoning. Ability to elaborate and communicate knowledge, abilities and skills in computer
• T1 Analysis and synthesis ability
• T3 Oral and written communication
• T4 Problem solving
• T8 Self-learning
Specific proficiencies
Specific proficiencies that a student should acquire in this course:
• FB1 Ability to solve mathematical problems in engineering. Ability to apply theoretical knowledge on linear algebra, differential and integral calculus, numerical methods, numerical algorithms,
statistics and optimization
• FB3 Ability to understand and master the basic concepts of discrete mathematics, logic, algorithmics and computational complexity, and their applications to problem solving in engineering.
Learning outcomes
At the end of the course, the student is able to:
• O1: Apply the basic elements of differential calculus in several variables: limits, continuity, differentiability.
• O2: Formulate and solve unconstrained and constrained optimization problems.
• O3: Apply the basic elements of integral calculus in several variables, e.g., to determine the length of a curve, the area of a surface, the volume of a solid, etc.
• O4: Understand the basic elements of vector calculus: flux integral, gradient, divergence, curl, integral theorems.
• O5: Recognize and solve some basic types of ordinary differential equations.
The following table shows the distribution of activities in the course:
│Methodology - Activity │On-site hours│Off-site hours│
│A-1: Theoretical lectures │45 │ │
│A-2: Practical lectures │15 │ │
│A-3: Self-study │ │80 │
│A-4: Exams and assessment │5 │ │
│A-5: Tutoring │5 │ │
│TOTAL │70 │80 │
Continuous assessment is considered along the semester based on the following activities:
│Learning outcome│Assessment activity │Weight (%) │Resit assessment│
│O1, O2 │Midterm exam A on lessons 1 and 2│35 │Yes (final exam)│
│O3, O4 │Midterm exam B on lessons 3 and 4│45 │Yes (final exam)│
│O5 │Midterm exam C on lesson 5 │20 (Minimum to be considered in the final mark: 3 out of 10) │Yes (final exam)│
In order to pass the subject, one of the following conditions must be fulfilled:
• the mark of the midterm exam C is not less than 3 (out of 10) and the weighted mark of all three midterm exams is not less than 5 (out of 10);
• the mark of the final exam covering the whole course (to be scheduled during the resit assessment period) is not less than 5 (out of 10). Only students who did not pass the course by continuous
assessment can sit this exam.
1. Functions, limits and continuity in R^n. Definition. Scalar and vector functions. Limits. Continuity.
2. Differential calculus in R^n. Partial and directional derivatives. Gradient vector and Jacobian matrix. Differentiability. The chain rule. Higher-order derivatives. Hessian matrix. Taylor
polynomials. Relative extrema. Constrained optimization: the theorem of Lagrange multipliers. Absolute extrema on closed bounded regions.
3. Integral calculus in R^n. The Riemann integral. Elementary regions. Fubini's theorem. Change of variables. Polar, cylindrical and spherical coordinates.
4. Vector calculus. Conservative fields. Potential function. Line and surface integrals. Circulation and flux. Divergence and curl. Green's, divergence and Stokes' theorems.
5. Ordinary differential equations. Basic notions on differential equations. First-order ordinary differential equations. Existence and uniqueness of solution. Some elementary integration methods.
Higher-order linear differential equations. Applications.
Access the bibliography that your professor has requested from the Library.
Basic bibliography:
• R.A. Adams. Calculus: a complete course. Addison-Wesley.
• E. Kreyszig. Advanced engineering mathematics. John Wiley & Sons.
• J.E. Marsden, A.J. Tromba. Vector calculus. W.H. Freeman.
• R.K. Nagle, E.B. Saff, A.D. Snider. Fundamentals of differential equations and boundary value problems. Addison-Wesley.
Additional bibliography:
• M. Braun. Differential equations and their applications: an introduction to applied mathematics. Springer-Verlag.
• R.E. Larson, R.P. Hostetler. Cálculo y geometría analítica. McGraw-Hill.
• S.L. Salas, E. Hille, G.J. Etgen. Calculus: una y varias variables. Reverté.
• D.G. Zill. Ecuaciones diferenciales con aplicaciones de modelado. Thomson.
Lecture room building at Arrosadia Campus.
OR in an OB World
I was asked a question about mixed integer programs (MIPs) that I'm pretty sure I've seen on forums before, so I'll summarize my answer here. The question was in the context of the CPLEX solver, but the answer applies more generally.
Consider a MIP model of the form \[ \begin{array}{lrclr} \mathrm{maximize} & s\\ \mathrm{s.t.} & Ax+s & \le & b\\ & s & \ge & 0\\ & x & \in & X \end{array} \]where $X$ is some domain and $x$ may be a
mix of integer and continuous variables. The questioner knows
a priori
that $s\le\bar{s}$ for some constant $\bar{s}$. When he adds $\bar{s}$ as an upper bound for $s$, though, his solution time roughly triples. So (a) why does that happen and (b) how can knowledge of $
\bar{s}$ be exploited?
Answering the second question first, the main virtue of knowing $\bar{s}$ is that we can stop the solver as soon as it finds a solution with $s = \bar{s}$. Of course, this is only helpful if the bound is tight, i.e., if a feasible
solution does exist with $s = \bar{s}$. If so, I think the best way to accomplish the desired end is to attach a callback that monitors incumbents and, when it sees one with $s = \bar{s}$, tells the
solver to terminate the search.
Solvers (including CPLEX) typically incorporate a relative convergence criterion, which stops the solver if an incumbent $(x, s)$ is found with\[\frac{\hat{s}-s}{\hat{s}}\le\delta,\]where $\hat{s}$
is the current best bound and $\delta$ is a parameter. Asserting the known a priori bound, this effectively becomes\[\frac{\min(\hat{s},\bar{s})-s}{\min(\hat{s},\bar{s})}\le\delta\]with the left side
of the new inequality less than or equal to the left side of the previous inequality (left to the reader as an exercise). Thus the relative convergence termination criterion might be triggered sooner
(a good thing), provided that the bound $\bar{s}$ could be communicated to the solver in a benign way.
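To make the effect concrete, here is a small sketch (with hypothetical numbers) of the relative-gap test, first with $\hat{s}$ alone and then with $\hat{s}$ replaced by $\min(\hat{s},\bar{s})$:

```python
def relative_gap(best_bound, incumbent, known_bound=None):
    """Relative gap for a maximization MIP; if an a priori upper bound on the
    objective is supplied, it tightens the best bound first. (Hypothetical
    helper -- solvers compute this internally.)"""
    if known_bound is not None:
        best_bound = min(best_bound, known_bound)
    return (best_bound - incumbent) / best_bound

gap_plain = relative_gap(120.0, 90.0)         # (120 - 90) / 120 = 0.25
gap_tight = relative_gap(120.0, 90.0, 100.0)  # (100 - 90) / 100 = 0.10
```

With a tolerance of, say, delta = 0.15, the search would stop in the second case but not the first.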
There are at least two more effects, though, that are harder to predict. Suppose that, in the absence of $\bar{s}$, the solver at some juncture is staring at a number of live nodes with bounds
$s_1,s_2,\dots$ greater than $\bar{s}$ and not all identical (which is likely to be the case). When it comes time to leave whatever branch of the tree it is currently exploring, the solver chooses
the node with the best bound (the largest among $\{s_1, s_2,\dots\}$). Now suppose that the solver has been informed, in some way, of the upper bound $\bar{s}$. The bounds of all those live nodes are
truncated to $\{\bar{s},\bar{s},\dots\}$, meaning that they are all tied for best. The solver will select one, not necessarily the same one it would have selected without $\bar{s}$, and begin
exploring it. Thus knowledge of the bound changes the order in which the solution tree is explored, and we have no way of knowing whether this moves the solver from a less productive area of the tree
to a more productive area or vice versa.
CPLEX (and I assume other solvers) also employs a test to decide when to backtrack (shift exploration to nodes higher in the tree without pruning the current node). Let $s_{lp}$ denote the bound provided by the linear relaxation of the current node, let $s_{inc}$ denote the value of the current incumbent solution, and as before let $\hat{s}$ be the best bound (the largest linear relaxation bound of any live node). Backtracking occurs when$$\hat{s}-s_{lp}\ge \alpha(\hat{s}-s_{inc})$$with $\alpha\in [0,1]$ a parameter. Smaller values of $\alpha$ promote backtracking ($\alpha=0$, if allowed, would be pure best-first search), while larger values of $\alpha$ inhibit backtracking ($\alpha=1$, if allowed, would be pure depth-first search). The backtracking inequality can be rewritten as$$(1-\alpha)\hat{s}\ge s_{lp}-\alpha s_{inc}.$$Knowing $\bar{s}$ would effectively replace $\hat{s}$ with $\min(\hat{s},\bar{s})$, making the inequality harder to satisfy and thus making backtracking less likely to occur (pushing the solver closer to depth-first search). Again, this changes the way the solution tree is explored, in a way whose impact is difficult to predict.
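Similarly, the backtracking test can be sketched as a small function (numbers hypothetical); tightening $\hat{s}$ to $\min(\hat{s},\bar{s})$ can flip a node from "backtrack" to "keep diving":

```python
def would_backtrack(best_bound, node_bound, incumbent, alpha, known_bound=None):
    """CPLEX-style backtracking test for a maximization problem (sketch):
    backtrack when best_bound - node_bound >= alpha * (best_bound - incumbent)."""
    if known_bound is not None:
        best_bound = min(best_bound, known_bound)
    return best_bound - node_bound >= alpha * (best_bound - incumbent)

# Hypothetical node: s_hat = 120, s_lp = 110, s_inc = 90, alpha = 0.3.
#   Without the a priori bound: 10 >= 0.3 * 30 = 9,  so backtrack.
#   With s_bar = 115:            5 >= 0.3 * 25 = 7.5 fails, so keep diving.
plain = would_backtrack(120.0, 110.0, 90.0, 0.3)
tightened = would_backtrack(120.0, 110.0, 90.0, 0.3, known_bound=115.0)
```

This illustrates the shift toward depth-first behavior described above.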
The questioner said that his solution time roughly tripled when he added $\bar{s}$ as an upper bound. As far as I can tell, that was just bad luck; the solution time might decrease in other cases
(but also might increase more). Personally, the only use I would make of $\bar{s}$ would be in an incumbent callback to terminate the search as soon as $s=\bar{s}$ occurred -- and I would only do
that if I thought that $\bar{s}$ was really the optimal value (not just a tight bound).
A high proportion of my research projects end up with my writing code that solves a set of test problems and spews results that eventually need to be analyzed. Once upon a time this meant parsing log files, transcribing data to a spreadsheet or statistics program and proceeding from there. In recent years I've taken to adding a database interface to the program, so that it records results (both
intermediate and final) to a database. Add records of problem characteristics (as seen by the program) and parameter values (as passed to the program) and the database becomes a one-stop-shop for any
data I need when it comes time to write the associated paper.
SQLite has proven to be a great choice for the back end, because it does not require a server. It understands a reasonably complete dialect of SQL and is more than sufficiently efficient for my
purposes. Drivers are readily available for it. (I program in Java, and I've been having very good luck with SQLite JDBC.) As with pretty much everything I use, it is open-source.
I typically create the database outside my program, and once my program has done its thing I need to access the results stored in the database. There are a variety of client programs for SQLite. The
best one I've found so far is SQLite Studio (open-source, available for multiple platforms). It has a very complete feature set and is remarkably easy to use. I recently had to merge data from
multiple tables of one database into the corresponding tables of another one. SQLite Studio allowed me to open both databases and then do the merge with simple SQL queries; it handled the mechanics
of "attaching" one database to the other silently in the background.
So I thought I should give SQLite, SQLite JDBC and especially SQLite Studio a shout-out.
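The merge described above, attaching one database to another and copying rows with plain SQL, can also be done directly from Python's standard sqlite3 module; the table and column names below are hypothetical:

```python
import os
import sqlite3
import tempfile

tmp = tempfile.mkdtemp()
src_path = os.path.join(tmp, "runs_a.db")
dst_path = os.path.join(tmp, "runs_b.db")

# A source database holding some (hypothetical) computational results.
with sqlite3.connect(src_path) as src:
    src.execute("CREATE TABLE results (problem TEXT, objective REAL)")
    src.execute("INSERT INTO results VALUES ('p1', 42.0), ('p2', 17.5)")
src.close()

# Merge: attach the source database and copy its rows over with one INSERT.
with sqlite3.connect(dst_path) as dst:
    dst.execute("CREATE TABLE results (problem TEXT, objective REAL)")
    dst.execute("ATTACH DATABASE ? AS other", (src_path,))
    dst.execute("INSERT INTO results SELECT problem, objective FROM other.results")
    n_rows = dst.execute("SELECT COUNT(*) FROM results").fetchone()[0]
dst.close()
```

SQLite silently handles the cross-database copy once the source is attached, which is essentially what SQLite Studio did for me behind the scenes.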
I ran into a momentarily scary bug in the NetBeans 7.2 IDE as I was working on a Java program. I'm posting details here in case anyone else is doing what I did -- desperately googling for a fix.
I had just successfully run my code from the IDE and committed the changes (using Git, although I doubt that's relevant). Somewhere between doing the commit and possibly starting to edit a file -- I
forget exactly where -- the IDE gave me a low memory warning. So I exited and restarted the IDE, thinking that might free up some memory. When I went to run my code (unmodified from the previous
successful run), the program would not start. Instead, I was informed in the terminal window that a ClassNotFoundException had occurred. The named class was definitely there, at least in source form.
I tried a "clean and build" to recompile the code, but the exception repeated. Restarting the IDE (again) did not help. Rebooting the PC did not help.
So I went off to Google to hunt for the reason for the exception, and found a variety of bug reports (from 2011 onward) that might or might not be pertinent. Fortunately, one of them contained a
suggestion to turn off the Compile on Save feature. You can access this, which is on by default (at least for me), by right-clicking the project in the Projects window and clicking Properties > Build
> Compiling. I turned it off, and sure enough the program would again run from the IDE. So I tried a "clean and build", verified the program ran, switched Compile on Save back on, and the bug returned.
Compile on Save does what it sounds like: it compiles source files when you save them. This saves time when you run or debug the program, so I was not entirely thrilled at having to turn it off.
Before submitting a bug report, I did a quick search of the NetBeans issue tracker, and found only one bug report (again, from 2011) that mentioned both Compile on Save and ClassNotFoundException.
The person who filed that report resolved his problem by deleting the NetBeans cache (which NetBeans rebuilds when next run). So I tracked down the cache (on my Linux Mint PC, it's in the folder ~
/.cache/netbeans/7.2), deleted it, restarted NetBeans, turned Compile on Save back on, and happily discovered that the program once again runs.
[APOLOGY: Not too long ago I added to the blog a "feature" provided by Google that hypothetically merges comments made on the blog with comments made to the Google+ post in which I announce the blog
entry. I should have read the fine print more carefully. This (a) disenfranchised anyone without a Google account from commenting, (b) hid the presence of comments on the blog until you clicked the
"$n$ comments" link, and (c) set $n$ to be the number of comments posted directly to the blog, not the number of comments in the merged stream. The effect of (c) was that this post was showing "0
comments" when in fact there were four or five, all on the G+ post. So I turned off the feature ... which appears to have deleted the existing comments from the G+ post (unless Google is messing with
me). Google messing with me is entirely possible: they logged me out in the middle of the edit below, and when I logged back in they had lost all the changes, even though I'd done several previews,
which typically has the effect of saving changes. Anyone know if Microsoft recently bought Google??]
Stripped of some unnecessary detail, the following showed up on a support forum today: given a set $x_1,\dots,x_n$ of binary variables and a parameter $K$, how can one specify in a mathematical
program that (a) $\sum_{i=1}^n x_i$ is either 0 or $K$ and (b) that, in the latter case, the $K$ variables with value 1 must be consecutive?
The first part is easy. Define a new integer variable $s\in\{0,K\}$ and add the constraint $$\sum_{i=1}^n x_i = s.$$Some modeling languages (AMPL comes to mind) allow you to specify directly the
domain $\{0,K\}$; in other cases, replace $s$ with $K\hat{s}$ where $\hat{s}$ is a binary variable.
A couple of methods come to mind for enforcing requirement (b). I'll assume that $1<K<n$ to avoid trivial cases. One method is to add the constraints
$$\begin{eqnarray*} x_{1} & \le & x_{2}\\ 2x_{j} & \le & x_{j-1}+x_{j+1}\quad \forall j\in\left\{ 2,\dots,n-1\right\} \\ x_{n} & \le & x_{n-1}. \end{eqnarray*}$$[END NONSENSE]
I suffered a rather severe brain cramp when I wrote the above. Several comments pointed out that this is erroneous. The corrected formulation (which I've tested, and thus post with a modicum of
possibly misplaced confidence), is:$$(K-1)(x_{i-1}-x_i)\le \sum_{j=\max(1,i-K)}^{i-2}x_j\quad \forall i\in\{2,\dots,n\}$$(with an empty sum understood to be 0).
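For small instances, the corrected formulation can be brute-force checked against the intended feasible set (the all-zero vector plus every single run of $K$ consecutive ones). The sketch below is mine, not from the post; it assumes $1<K<n$ as above and encodes requirement (a) directly as $\sum_i x_i \in \{0, K\}$:

```python
from itertools import product

def feasible(x, K):
    n = len(x)
    if sum(x) not in (0, K):            # requirement (a)
        return False
    for i in range(2, n + 1):           # corrected constraints for (b), 1-based i
        lhs = (K - 1) * (x[i - 2] - x[i - 1])
        # empty sum (when i - 2 < max(1, i - K)) is understood to be 0
        rhs = sum(x[j - 1] for j in range(max(1, i - K), i - 1))
        if lhs > rhs:
            return False
    return True

def brute_force_check(n, K):
    # Intended feasible set: all zeros, or K consecutive ones somewhere.
    intended = {tuple([0] * n)}
    for start in range(n - K + 1):
        v = [0] * n
        v[start:start + K] = [1] * K
        intended.add(tuple(v))
    found = {x for x in product((0, 1), repeat=n) if feasible(x, K)}
    return found == intended
```

Enumerating all $2^n$ binary vectors for a few small $(n, K)$ pairs confirms that exactly the intended vectors satisfy the constraints.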
Another is to replace binary variable $x_i$ with continuous variable $\tilde{x}_i$ and introduce a new set of binary variables $y_1,\dots,y_{n-K+1}$ where $y_j=1$ signals that $\tilde{x}_j$ starts a
run of $K$ consecutive ones. In this approach, we do not need $s$; requirement (a) becomes $$\sum_{j=1}^{n-K+1}y_j \le 1.$$Requirement (b) translates to$$ \tilde{x}_{j} = \sum_{i=\max\left\{ 1,j-K+1\right\} }^{j}y_{i}\quad \forall j.$$With the second approach, a presolver can easily eliminate the $\tilde{x}_j$ variables, although it may be preferable to retain them. (Retaining the $\tilde{x}_j$
increases both the number of variables and number of constraints; but in cases where the original $x_j$ appear in more than one expression, eliminating the $\tilde{x}_j$ by substitution increases the
density of the constraint matrix.)
This model describes the flow characteristics of a single vessel with resistance to flow, R, nonlinear compliance, C, and input flow of Fin.
The model simulates pressure and division of fluid flow through a compliant
vessel with resistance to flow, R, and vessel compliance, C, given an input
flow of Fin. The compliant vessel follows a nonlinear pressure volume
curve which takes into account the resistance to vessel collapse that
occurs at low or negative pressures and the concave form of the curve at
high pressures. The pressure-volume function used is that proposed by
Drzewiecki et al. (Am J Physiol, Heart Circ Physiol 273:H2030-H2043, 1997).
The flow out of the vessel is related to the resistance by the fluid
equivalent of Ohm's Law.
Fout = (Pin - Pout) / R
where Pin is the pressure at the vessel entrance, Pout is the pressure at
the end of the vessel and R is determined from Poiseuille's Law as:
R = 128*mu*L / (pi*D^4)
where mu is the fluid viscosity, L is the vessel length, and D is the
vessel diameter.
Pin follows the formulation of Drzewiecki et al. and is given by the following expression:
Pin = a * ( exp( b * (V/L - Ab) / Ab ) - 1 ) - EIhat * ( (Ab*L/V)^n - 1 ) + Pb
where a and b are constants which determine the nonlinear shape of the
vessel elastance, V is the vessel volume, Ab is the vessel cross-
sectional area at buckling, EIhat is the normalized flexural rigidity
of the vessel wall, n determines the curvature of the pressure volume
curve in compression and Pb is the intraluminal pressure at buckling
which is given by:
Pb = a * ( exp( b * (Ab - A0) / A0 ) - 1 )
where A0 is the vessel cross-sectional area at an intraluminal
pressure of zero. For more details about this formulation and the
parameter values given for different vessels see Drzewiecki et al.
The flow into the vessel and the flow out of the vessel are different
because of the change in volume which adds or subtracts flow from that
leaving the vessel depending on whether the pressure is increasing or
decreasing in the vessel. So we have for vessel compliance, Fcomp:
Fcomp = Fin - Fout
The change in vessel volume as a function of time is determined from
the flow attributed to the vessel diameter change and is given by:
dV/dt = Fcomp
It should be noted in the code that the compliance, C, is calculated by
taking the inverse of the derivative of the Pin with respect to V.
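The relations above can be sketched as a small numerical model. The parameter values below are illustrative placeholders only (not physiological values from Drzewiecki et al.), and the explicit Euler step is the simplest possible integrator:

```python
import math

# Illustrative placeholder parameters (NOT from Drzewiecki et al. 1997)
a, b, n_exp = 1.0, 1.0, 2.0
L, Ab, A0, EIhat = 10.0, 0.5, 0.4, 0.1
mu, D = 0.004, 0.8
R = 128.0 * mu * L / (math.pi * D ** 4)   # Poiseuille resistance

def Pb():
    # Intraluminal pressure at buckling
    return a * (math.exp(b * (Ab - A0) / A0) - 1.0)

def Pin(V):
    # Nonlinear pressure-volume relation for the vessel
    return (a * (math.exp(b * (V / L - Ab) / Ab) - 1.0)
            - EIhat * ((Ab * L / V) ** n_exp - 1.0) + Pb())

def step(V, Fin, Pout, dt):
    # dV/dt = Fcomp = Fin - Fout, with Fout = (Pin - Pout) / R  (explicit Euler)
    Fout = (Pin(V) - Pout) / R
    return V + dt * (Fin - Fout)
```

Note that at V = Ab*L both volume-dependent terms vanish, so Pin reduces to Pb, which is a quick consistency check on the formula.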
The equations for this model may be viewed by running the JSim model applet and clicking on the Source tab at the bottom left of JSim's Run Time graphical user interface. The equations are written in JSim's Mathematical Modeling Language (MML). See the Introduction to MML and the MML Reference Manual. Additional documentation for MML can be found by using the search option at the Physiome home page.
Drzewiecki G, Field S, Moubarak I and Li JKJ.
Vessel growth and collapsible pressure-area relationship
American Journal of Physiology, Heart and Circulatory Physiology
273:H2030-H2043, 1997.
Please cite https://www.imagwiki.nibib.nih.gov/physiome in any publication for which this software is used and send one reprint to the address given below:
The National Simulation Resource, Director J. B. Bassingthwaighte, Department of Bioengineering, University of Washington, Seattle WA 98195-5061.
Model development and archiving support at https://www.imagwiki.nibib.nih.gov/physiome provided by the following grants: NIH U01HL122199 Analyzing the Cardiac Power Grid, 09/15/2015 - 05/31/2020, NIH
/NIBIB BE08407 Software Integration, JSim and SBW 6/1/09-5/31/13; NIH/NHLBI T15 HL88516-01 Modeling for Heart, Lung and Blood: From Cell to Organ, 4/1/07-3/31/11; NSF BES-0506477 Adaptive Multi-Scale
Model Simulation, 8/15/05-7/31/08; NIH/NHLBI R01 HL073598 Core 3: 3D Imaging and Computer Modeling of the Respiratory Tract, 9/1/04-8/31/09; as well as prior support from NIH/NCRR P41 RR01243
Simulation Resource in Circulatory Mass Transport and Exchange, 12/1/1980-11/30/01 and NIH/NIBIB R01 EB001973 JSim: A Simulation Analysis Platform, 3/1/02-2/28/07.
Each 5-mL portion of Maalox contains 400 mg of magnesium hydroxide, and 40 mg of simethicone. If the recommended dose is 2 teaspoons 4 times a day, how many grams of each substance would an
individual take in a 24-hour period? (1 teaspoon= 5 mL)
1 Answer
Your patient would get 8 times the amount of magnesium hydroxide and simethicone that a single teaspoon contains.
So, you know that you have $\text{5-mL}$ portions of your drug to work with. The recommended dosage is two teaspoons, or two $\text{5-mL}$ doses, 4 times a day. Since a day has 24 hours, this means that your patient gets 2 teaspoons every 6 hours.
This means that, during a 24-hour period, your patient will get
$\text{24 hours" * "2 teaspoons"/"6 hours" = "8 teaspoons}$
Since a teaspoon holds $\text{5-mL}$ of your drug, the amounts of magnesium hydroxide and simethicone he'll get are
$\text{8 teaspoons" * "5 mL"/"1 teaspoon" * "400 mg"/"5 mL" = "3200 mg }$$M g {\left(O H\right)}_{2}$
$\text{8 teaspoons" * "5 mL"/"1 teaspoon" * "40 mg"/"5 mL" = "320 mg simethicone}$
If you round these values to one sig fig (the number of sig figs in 5 mL, 400 mg, and 40 mg), the answer will be
${m}_{M g {\left(O H\right)}_{2}} = \text{3000 mg }$ and
$m_{\text{simethicone}} = \text{300 mg}$
Since you must give the answer in grams, use a simple conversion factor to go from mg to g
$\text{3000 mg "Mg(OH)_2 * "1 g"/"1000 mg" = "3 g }$$M g {\left(O H\right)}_{2}$
$\text{300 mg simethicone" * "1 g"/"1000 mg" = "0.3 g simethicone}$
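The whole chain of conversions can be compressed into a few lines. The function below is an illustrative sketch (the name and defaults are mine), returning the unrounded daily amount in grams; the answer above then rounds it to one significant figure:

```python
def daily_dose_g(mg_per_5ml, ml_per_dose=10.0, doses_per_day=4):
    # (dose volume / 5 mL) counts 5-mL portions per dose;
    # scale by mg per portion and doses per day, then convert mg -> g.
    mg_per_day = (ml_per_dose / 5.0) * mg_per_5ml * doses_per_day
    return mg_per_day / 1000.0

daily_dose_g(400)  # magnesium hydroxide: 3.2 g per day
daily_dose_g(40)   # simethicone: 0.32 g per day
```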
Current Ratio Explained With Formula and Examples.
The current ratio is a liquidity ratio that measures a company's ability to pay short-term and long-term obligations. The current ratio is calculated by dividing a company's current assets by its
current liabilities.
A company's ability to pay its short-term obligations is determined by its current assets, which include cash, accounts receivable, and inventory. A company's ability to pay its long-term obligations
is determined by its long-term assets, which include property, plant, and equipment.
The current ratio is an important financial ratio because it provides insight into a company's ability to pay its obligations. For example, if a company has a current ratio of 2, it means that the
company has twice as many current assets as current liabilities. This is generally considered to be a good thing because it means that the company is in a strong financial position and is unlikely to
default on its obligations.
However, it is important to note that a high current ratio does not necessarily mean that a company is financially healthy. For example, a company may have a high current ratio because it is not
investing its cash properly or because it is carrying too much inventory.
The current ratio is just one of many financial ratios that can be used to assess a company's financial health. Other important financial ratios include the debt-to-equity ratio, the quick ratio, and the operating cash flow ratio.

How do you calculate current ratio in Excel?
To calculate the current ratio in Excel, you will need to use the following formula:
Current Ratio = Current Assets / Current Liabilities
For example, let's say that a company has $1,000 in current assets and $500 in current liabilities. The current ratio would be calculated as follows:
Current Ratio = $1,000 / $500
Current Ratio = 2.0
This means that the company has $2.00 in current assets for every $1.00 in current liabilities.

What affects current ratio?
There are several factors that can affect a company's current ratio. One is the mix of a company's assets and liabilities. For example, if a company has a lot of inventory or other assets that take a long time to convert to cash, its current ratio will be lower than if it had fewer of those types of assets. Another factor that can affect a company's current ratio is how quickly it pays its bills. If a company takes a long time to pay its suppliers, its current ratio will be lower than if it paid its bills more quickly.

Is current ratio calculated in times?
Current ratio is calculated by dividing a company's current assets by its current liabilities. This ratio
is used to measure a company's ability to pay its short-term obligations with its current assets.
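The calculation is trivial in code; a minimal Python sketch mirroring the worked example above:

```python
def current_ratio(current_assets, current_liabilities):
    # Liquidity ratio: current assets divided by current liabilities
    return current_assets / current_liabilities

current_ratio(1000, 500)  # 2.0 -> $2.00 of assets per $1.00 of liabilities
```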
Why do we calculate current ratio?
Current ratio is a liquidity ratio that measures a company's ability to pay its current liabilities with its current assets. The current ratio is calculated by dividing a company's current assets by
its current liabilities. A current ratio of 1.0 means that a company has an equal amount of current assets and current liabilities. A current ratio of less than 1.0 means that a company has more
current liabilities than current assets. A current ratio of greater than 1.0 means that a company has more current assets than current liabilities.
Current ratio is used to assess a company's ability to pay its short-term liabilities with its short-term assets. The current ratio is a liquidity ratio, which means that it measures a company's
ability to pay its short-term liabilities with its short-term assets. Current assets are assets that can be converted into cash within one year. Current liabilities are liabilities that are due
within one year.
The current ratio is not a perfect measure of a company's liquidity, but it is a good general indicator. The current ratio does not take into account the company's long-term liabilities, which may be
due after one year. The current ratio also does not take into account the company's ability to generate future cash flows.
What is the formula for calculating current ratio?
The current ratio is a liquidity ratio that measures a company's ability to pay short-term and long-term obligations. The current ratio is calculated by dividing a company's current assets by its
current liabilities. A company that has a current ratio of 1.5 or higher is considered to be financially healthy.
finding volume of a cube worksheet
Volume of Cubes Worksheets
Volume of a Cube Calculator
Rectangular prisms & cubes worksheets | K5 Learning
Volume of a Cube Worksheets
Volume of Cubes Worksheets
Volume Cube Worksheet Worksheets
Volume of a Cube Worksheets
5th Grade Volume Worksheets
Volume of cubes | 5th grade Math Worksheet | GreatSchools
Volume Cubes - Worksheets
Volume of A Cube - GCSE Maths - Steps, Examples & Worksheet
Volume, Cubes & Cuboids (Year 6) | CGP Plus
Finding Volume with Unit Cubes Worksheet Download
Printable volume and capacity mathematics worksheets for primary ...
Cubes of small numbers | 5th grade Math Worksheet | GreatSchools
Volume of Cubes Worksheets
Surface area of a cube worksheet: Fill out & sign online | DocHub
Finding the Volume by Counting Cubes Activity | Twinkl
Grade 5 Volume Worksheets | Free Printables | Math Worksheets
Printable primary math worksheet for math grades 1 to 6 based on ...
Topic : Finding The Surface Area And Volume Of A Cube- Worksheet 1 ...
Volume Worksheets | Teach Starter
Worksheet on Volume of a Cube and Cuboid |The Volume of a RectangleBox
KS2 Volume of Cuboids Worksheet - Primary Resources - Twinkl
Volume of Cubes Worksheets
Volume of rectangular prisms and cubes - Math Worksheets ...
Find the Volume by Counting Cubes (Year 6) | CGP Plus
Volume of Cubes | PDF printable Measurement Worksheets
Word Problems on Volume of Cube and Cuboid — Printable Math Worksheet
Find the volume count the cubes worksheet | Live Worksheets
Finding the Surface Area and Volume of a Cube Worksheets
Calculate Volume and Surface Area: Cubes Worksheet for 7th - 10th ...
Problems When Teaching Volume Of A Cube Discounts | fiammaespresso.com
Volume of a Cube Lesson Plans & Worksheets :: 25 - 48
Volume of Rectangular Prisms and Cubes With Fractions | Worksheet ...
1419. Maps of the Island Worlds
It was senseless, amusing, and a bit terrifying. In a sea, or maybe in an ocean, or even on a planet completely covered with water there were 40 small islands. There was a castle with its own insignia and name on each island. Each island, or, more exactly, each castle was connected with three neighboring islands. Our neighbors were the Twelfth, the Twenty Fourth, and the Thirtieth.
This is how Sergey Lukyanenko, a famous Russian science fiction writer, describes in his novel a mysterious world that some teenagers have got into. A fictitious world. Or perhaps a possible one? You are
to answer this question.
Well, let's formalize the literary description of the world. Assume that the islands are located at nodes of an integer grid and form a rectangle of width W and height H. The following conditions are satisfied:
1. Every island is connected by bridges with exactly three neighboring islands (two islands are neighboring if the distance between them doesn't exceed 1.42). The bridges do not intersect.
2. For any two islands, there is a chain of bridges connecting them. Moreover, the destruction of any one bridge doesn't violate this property.
To make a travel in their world more convenient, aborigines use maps. A map is a rectangle of width 2W − 1 and height 2H − 1. Islands on the map are denoted by the Latin letter 'O', and bridges are
denoted by an appropriate symbol from the set '-|/\'. Spaces denote an area without islands and bridges.
By accident, you've got a piece of paper that seems to be a map of some world. Try to determine if the image on the paper is a proper map of an Island World, i.e., if it satisfies the conditions
given above.
The first line contains two integers: the width M and the height N of a map. Both integers are odd and lie in the range from 1 to 99. The next N lines contain an image. Each line of the image is of
length M. The cells with odd numbers of rows and columns contain the Latin letters 'O'. Other cells contain characters '-', '|', '/', '\' or spaces.
If the input image is a proper map of an Island World, then output the phrase “Island world”. Otherwise, output “Just a picture”.
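Parsing the ASCII map aside, the two conditions on the island graph are straightforward to check once the bridges are known. The sketch below is mine, not a reference solution: it assumes the bridges have already been extracted as an edge list on islands numbered 0..n-1, and it does not check the map-drawing rules (non-crossing bridges, valid symbols), only the graph conditions:

```python
from collections import defaultdict

def is_island_world(n, edges):
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    # Condition 1: every island is connected to exactly three neighbors.
    if any(len(adj[v]) != 3 for v in range(n)):
        return False
    # Condition 2: connected, and still connected after removing any one bridge.
    def connected(skip=None):
        seen, stack = {0}, [0]
        while stack:
            v = stack.pop()
            for w in adj[v]:
                if skip in ((v, w), (w, v)):
                    continue
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        return len(seen) == n
    return connected() and all(connected(skip=e) for e in edges)
```

For example, the complete graph on four islands satisfies both graph conditions (every node has degree 3 and no single edge disconnects it), while any graph with a degree other than 3, or with two disconnected components, fails.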
input output
O-O-O-O-O-O O-O
|\| | \|/|/|/|
O-O O-O O O O O
\ / /| / |/| Island world
O-O O-O-O-O O-O
|\ | / \ |
O-O O O-O-O O-O
|/ /| |\ |\ \|
O-O-O Just a picture
| |
Problem Author: Aleksandr Klepinin
Problem Source: The Ural State University Championship, October 29, 2005
WSPR reports frequency distribution experiment: 20m
Last week I did an experiment where I transmitted WSPR on a fixed frequency for several days and studied the distribution of the frequency reports I got in the WSPR Database. This can be used to
study the frequency accuracy of the reporters’ receivers.
I was surprised to find that the distribution of reports was skewed. It was more likely for the reference of a reporter to be low in frequency than to be high in frequency. The experiment was done in
the 40m band. Now I have repeated the same experiment in the 20m band, obtaining similar results.
The details of the experiment are contained in the previous post. This time I have been transmitting on 14097100Hz, and after measuring against my GPSDO I have determined that my real transmit
frequency was 14097098.43Hz. The test has run over the course of 6 days, collecting a total of 3643 reports coming from 146 different reporters. Below you can see the distribution of the frequency
error of the reporters.
The distribution is similar to the one obtained on 40m, but it is not quite the same. Nevertheless, both distributions are clearly skewed to the left. Below you can see the histograms showing the
reporters per number of reports. We see that most reporters have only given 10 reports or less.
When I tweeted about this bias in frequency, several people presented theories to try to explain the bias. F4DAV suggested that this could be an artefact of the WSPR decoder. However, I have
contacted Joe Taylor to check if the software could play any role, and he doesn’t think so. He remarks that he has done tests with K9AN using a GPS-locked system and a well calibrated transceiver and
they obtain reports which are accurate to 1Hz (note that 1Hz is 0.1ppm in 40m, and here we are seeing much larger biases).
Miguel Ángel EA4EOZ remarked that it is not so easy to derive the frequency error of the reference of a receiver given the frequency error in the WSPR report (which is measured at the audio output by
the WSPR software). He said that this depends on the receiver architecture and how many stages it has in the case of a superhet receiver. This left me wondering for a while, as it occurred to me that
according as to whether the first LO was below or above the RF frequency you would see the frequency error go in a different direction.
A moment’s thought shows that this is not the case: the architecture of the receiver doesn’t matter at all. At the audio output you will get an error in frequency which equals the relative error of
the frequency reference of the receiver times the RF frequency of the signal. It doesn’t matter if the receiver is a superhet having several stages, a direct conversion zero-IF SDR or a direct
sampling DDC SDR (in the case where the receiver has frequencies derived from different crystal oscillators you should replace the concept of “frequency reference” by a suitable weighted average of
all the crystal oscillators).
My favourite way to see this is that the frequency reference of a receiver is used to measure time. It’s the receiver’s clock. It doesn’t matter how it’s used to bring the RF signal down to AF. If
the receiver’s clock is running slow, the RF will appear higher in frequency and vice versa.
Another way to see it is to think that each stage in a superhet receiver performs a frequency conversion where the RF frequency \(f^{\text{RF}} = f^{\text{IF}}_0\) is transformed into the first IF, \(f^{\text{IF}}_1\), by \(f^{\text{IF}}_j = \pm f^{\text{IF}}_{j-1} \pm f^{\text{LO}}_j\), with the condition that \(f^{\text{IF}}_j \geq 0\), so in particular both signs cannot be \(-\) simultaneously. After several frequency conversions we arrive at the AF, \(f^{\text{AF}} = f^{\text{IF}}_n\), and we can group all the \(f^{\text{LO}}_j\) terms as \(f^{\text{LO}} = -(\pm f^{\text{LO}}_1 \pm \cdots \pm f^{\text{LO}}_k)\). Hence, \(f^{\text{AF}} = \pm f^{\text{RF}} - f^{\text{LO}}\). Now, the fact that the frequency conversion from RF to AF is sideband preserving (i.e., the mode is USB) means that \(f^{\text{RF}}\) should have the sign \(+\). Note that this forces \(0 < f^{\text{LO}} < f^{\text{RF}}\). If the receiver uses only a single crystal, then each of the frequencies \(f^{\text{LO}}_j\) is a constant multiple of the frequency of the crystal, so the same is true of \(f^{\text{LO}}\). If the receiver uses several crystals, then \(f^{\text{LO}}\) is a linear combination of
the frequencies of all the crystals. Note that the coefficients may have positive and negative signs, which can get interesting. In the usual case the crystal whose frequency error dominates the
error in \(f^{\text{LO}}\) will have a positive coefficient in the linear combination, but there can be pathological cases. The same kind of reasoning done here applies for a direct conversion
zero-IF receiver or a DDC.
Miguel Ángel EA4EOZ has also made a key remark: crystals only go down in frequency when they age. This is almost true. As this image shows, there is an increase in frequency due to stress relief and
a decrease in frequency due to mass transfer. In the long run, the decreasing effect wins out. This seems a convincing cause that explains the bias I am seeing in the frequency distribution. Amateur
transceivers are calibrated on fabrication but eventually their crystals age and go down in frequency, and most hams don’t bother to calibrate the transceiver frequently.
Research in Machine Learning
I. Robustness of deep neural networks. In collaboration with E. Pauwels and V. Magron we want to analyze the robustness of deep neural networks with respect to noise in the input. Recent works have
considered semidefinite relaxations to address this robustness issue for RELU neural networks. In fact this relaxation is the first level of the Moment-SOS hierarchy that we have developed and we
claim that we can improve such analysis by solving higher-level semidefinite relaxations of the hierarchy. Indeed the scalability issue of the Moment-SOS hierarchy can be overcome for certain types
of neural nets (e.g. RELU, but not only) by implementing an adequate ``sparse" version of the hierarchy. The same methodology can also be used to provide certified upper bounds on the Lipschitz
constant of Deep Neural Nets. See e.g.
II. The Christoffel function for ML. This research is done mainly in collaboration with Edouard Pauwels (IMT, Toulouse) and more recently also with Mihai Putinar (University of California, Santa
Barbara). The ultimate goal is to promote the Christoffel function as a new and promising tool for some important applications in Machine Learning (ML) and Data Analysis (e.g. data representation and
encoding, outlier detection, density estimation). Our foundational approach is summarized in the three papers:
• Lasserre J.B., Pauwels E. The empirical Christoffel function with applications in data analysis, Adv. Comp. Math. 45 (2019), pp. 1439--1468
• Pauwels E., Putinar M., Lasserre J.B. Data analysis from empirical moments and the Christoffel function, Found. Comp. Math. 21 (2021), pp. 243--273
• Lasserre J.B., Pauwels E. Sorting out typicality with the inverse moment matrix SOS polynomial, Proceedings of NIPS 2016, Barcelona 2016, arXiv:1606.03858
We started from a simple and striking observation. First we draw a cloud of 2-D points and build up the empirical moment matrix M_d (with moments up to order 2d) associated with the points of the cloud. Then we build up the SOS polynomial x -> c_d(x) of degree 2d whose associated Gram matrix is the inverse of M_d. One readily observes that the level sets of c_d capture the shape of the cloud quite well, even for small d! When µ is a measure with compact support K and with a density f w.r.t. the Lebesgue measure on K, then the reciprocal of c_d is the Christoffel function, well known in approximation theory. For instance, when K has a simple geometry (a box, ellipsoid, simplex), it is well known that 1/c_d(x) converges pointwise to the density f times an equilibrium density, intrinsic to K, for x in K, and converges to zero for x outside K.
Therefore, somehow the Christoffel function identifies the support K. What is striking is how well this happens (even for small d) for the Christoffel function associated with an empirical measure on a cloud of points drawn from an unknown distribution µ. It is also worth noticing that the empirical moment matrix M_d is quite easy to construct and to invert (modulo the dimension), with no optimization process involved! This should make c_d(x) an appealing and easy-to-use tool in some important applications of Machine Learning (ML) and statistics. For instance, we have already shown that it is remarkably effective in, e.g., outlier detection and density estimation. In addition, if the cloud of points lies on a manifold, then the kernel of M_d contains a lot of information that can be used, e.g., to learn the dimension of the manifold when it has an algebraic boundary, with relatively little effort compared to more sophisticated methods of the literature.
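To illustrate how little machinery is involved, here is a hedged NumPy sketch (names and the small degree bound are mine, not from the cited papers) that forms the empirical moment matrix of a 2-D point cloud and evaluates the SOS polynomial whose Gram matrix is its inverse:

```python
import numpy as np

def christoffel_polynomial(points, d=2):
    # Monomial exponents (i, j) with total degree i + j <= d
    exps = [(i, j) for i in range(d + 1) for j in range(d + 1 - i)]
    # One row v_d(x, y) of monomial values per point
    V = np.array([[x**i * y**j for (i, j) in exps] for (x, y) in points])
    M = V.T @ V / len(points)      # empirical moment matrix M_d
    Minv = np.linalg.inv(M)        # assumes M_d is nonsingular
    def c(x, y):
        # SOS polynomial c_d(x, y) = v_d(x, y)^T M_d^{-1} v_d(x, y)
        v = np.array([x**i * y**j for (i, j) in exps])
        return float(v @ Minv @ v)
    return c
```

Points near the cloud give small values of c_d, while points far from it give large values, which is what makes thresholding c_d usable for outlier detection.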
All these potential applications in Machine Learning and Data Analysis are described in our forthcoming book
``The Christoffel-Darboux Kernel for Data Analysis" by J.B. Lasserre, E. Pauwels, M. Putinar, Cambridge University Press, 2021
III. New applications and connections of the Christoffel function.
It turns out that a non-standard application of the Christoffel function allows one to recover a function accurately from a sample of its values, in several cases even without the Gibbs phenomenon if the function is discontinuous. To illustrate the basic idea, consider a function f: [0,1] -> [0,1]. We (i) build up the moment matrix of the empirical measure µ on [0,1]×[0,1] supported on the graph of the function at the points where it is evaluated, and (ii) the degree-n Christoffel function C_n(x,y) associated with this measure. Then for each fixed x in [0,1], we approximate f(x) by f_n(x) := argmin_y C_n(x,y), which can be done efficiently as y -> C_n(x,y) is a univariate SOS polynomial. This strategy is described in
We have also shown the potential of the Christoffel function as a simple tool for supervised classification in data analysis (of moderate dimension) in
We have also shown that the Christoffel function of a measure µ on a Cartesian product X×Y disintegrates as the product of the Christoffel function of the marginal of µ on X and the Christoffel function of a measure with the flavor of the conditional probability on Y given x in X, exactly as for Borel measures on X×Y. See
In addition, we have also revealed some (in the author's opinion surprising) connections with other disciplines involving the polynomial Pell's equation, certificates of positivity, equilibrium
measure of compact sets, and a duality result of Nesterov for some cones of positive polynomials. See e.g.
We have also provided a variant of the CF with same computational complexity and same asymptotic dichotomy behavior, and with the advantage that the (usually unknown) equilibrium measure of the
support disappears in the limit.
SAT Physics Conventions and Graphing - Interpreting Graphs
Consider the graph of velocity versus time in Figure 1.2.
The graph tells the story of an object, such as a car, as it moves over a 60-second period of time. At time zero, the object has a velocity of 0 meters per second and is therefore starting from rest.
The y-intercept of a speed versus time graph is the initial velocity of the object, v_0.
What the object is doing during the 60 seconds can be determined by analyzing the slope and area during the separate time intervals. Determine the significance of the slope by dividing the rise units
(y-axis values) by the run units (x-axis values).
The slope units, meters per second squared (m/s²), are the units of acceleration. Thus, the slope of speed versus time is acceleration. Determine the significance of the area between the graphed function and the x-axis by multiplying the units of the y-axis by the units of the x-axis.
area units = height units × base units = (m/s) × s = m
Meters (m) are the units of displacement. The area of a velocity versus time graph is displacement.
To analyze the motion mathematically, divide the graph into a series of line segments and evaluate each section. The following chart shows the acceleration and displacement for the time intervals
corresponding to the graphed line segments.
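The chart itself was lost in extraction, but the slope/area bookkeeping it summarizes is easy to sketch. The velocity values below are made-up placeholders for a graph like Figure 1.2, not the actual figure data:

```python
# Piecewise-linear velocity-time data (t in s, v in m/s): speed up, cruise, slow down.
times = [0.0, 10.0, 30.0, 60.0]
velocities = [0.0, 20.0, 20.0, 0.0]

def analyze(t, v):
    """Slope of each segment = acceleration (m/s^2); area under it = displacement (m)."""
    segments = []
    for t0, t1, v0, v1 in zip(t, t[1:], v, v[1:]):
        accel = (v1 - v0) / (t1 - t0)        # rise over run
        disp = 0.5 * (v0 + v1) * (t1 - t0)   # trapezoid area under the segment
        segments.append((accel, disp))
    return segments

for i, (a, d) in enumerate(analyze(times, velocities), 1):
    print(f"segment {i}: a = {a:+.2f} m/s^2, displacement = {d:.0f} m")
```

Summing the segment areas gives the total displacement over the 60 seconds.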
Graphs That Illustrate Physics Equations
The SAT Subject Test in Physics may ask you to identify which graph correctly matches a given equation. Equations in beginning physics typically take one of four possible forms: linear, quadratic,
square root, and inverse. You can quickly deduce the shape of a graph by looking at the relationship between the dependent and independent variables as shown in Table 1.3.
conductor sizing Archives - Production Technology
Cable power losses, or power drop, are due to the resistive heating that occurs in the conductor when current flows. These cable losses are often called kW losses or I²R losses. This is expressed by the
following formula:
Power losses = 3 × (I²R) /1000
Where: Power losses in kW units, I is the current (in amps) and R (in ohms) is the average conductor resistance.
How to reduce the power lost in the cable?
Power lost in a cable depends on the cable length, the conductor size, and the current through the cable. Therefore, there are three ways to reduce these losses:
• Shorten the length of the cable,
• Increase the size of the conductor,
• Decrease the current through the cable. | {"url":"https://production-technology.org/tag/conductor-sizing/","timestamp":"2024-11-07T19:53:40Z","content_type":"text/html","content_length":"54255","record_id":"<urn:uuid:8a983c3c-6a39-4dd9-b2ab-999ee2e652fd>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00161.warc.gz"} |
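As a quick sketch of the formula above (the function name and example values are mine, not from the article):

```python
def cable_power_loss_kw(current_amps, resistance_ohms):
    """Three-phase cable loss in kW: Power losses = 3 * I^2 * R / 1000."""
    return 3.0 * current_amps**2 * resistance_ohms / 1000.0

# Halving the conductor resistance (e.g. a larger conductor) halves the loss;
# halving the current cuts the loss by a factor of four.
base = cable_power_loss_kw(100.0, 0.2)      # 6.0 kW
bigger_conductor = cable_power_loss_kw(100.0, 0.1)
lower_current = cable_power_loss_kw(50.0, 0.2)
```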
Triangle Types by Sides
Triangles: Learn
Identifying types of triangles by their sides:
Equilateral: all 3 sides are the same length
Isosceles: 2 sides are the same length
Scalene: all 3 sides are different lengths
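The three side-based types can also be checked programmatically. A minimal sketch (the function name is mine):

```python
def triangle_type(a, b, c):
    """Classify a triangle by its side lengths."""
    if not (a + b > c and a + c > b and b + c > a):
        raise ValueError("sides do not form a triangle")
    distinct = len({a, b, c})   # number of distinct side lengths
    return {1: "equilateral", 2: "isosceles", 3: "scalene"}[distinct]
```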
Triangles: Practice
What type of triangle is it?
A cuboidal vessel is 10 m long and 8 m wide. How high must it be made to hold 380 cubic metres of a liquid? The cuboidal vessel must be made 4.75 m high.
A cuboidal vessel is 10 m long and 8 m wide. How high must it be made to hold 380 cubic metres of a liquid?
Given: Length and breadth of the cuboidal vessel are 10 m and 8 m respectively. It must hold 380 m^3 of a liquid.
Since the vessel is cuboidal in shape, the volume of liquid in the vessel will be equal to the volume of the cuboid.
The volume of the cuboid of length l, breadth b, and height h, is V = l × b × h
Let the height of the cuboidal vessel be h.
Length of the cuboidal vessel, l = 10 m
The breadth of the cuboidal vessel, b = 8 m
The capacity of the cuboidal vessel (V) = 380 m^3
Volume of the liquid in the cuboidal vessel = l × b × h
l × b × h = 380 m^3
10 m × 8 m × h = 380 m^3
h = 380 / (10 × 8)
h = 4.75 m
Thus, the cuboidal vessel must be made 4.75 m high.
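The computation above in a few lines of Python, as a sanity check:

```python
# V = l * b * h, solved for the height of the cuboidal vessel.
volume, length, breadth = 380.0, 10.0, 8.0
height = volume / (length * breadth)
print(height)  # 4.75
```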
☛ Check: NCERT Solutions for Class 9 Maths Chapter 13
Video Solution:
A cuboidal vessel is 10 m long and 8 m wide. How high must it be made to hold 380 cubic metres of a liquid?
Class 9 Maths NCERT Solutions Chapter 13 Exercise 13.5 Question 3
It is given that there is a cuboidal vessel that is 10 m long and 8 m wide that can hold 380 cubic metres of liquid. We have found that the cuboidal vessel must be made 4.75 m high.
Cophenetic correlation coefficient
c = cophenet(Z,Y)
[c,d] = cophenet(Z,Y)
c = cophenet(Z,Y) computes the cophenetic correlation coefficient for the hierarchical cluster tree represented by Z. Z is the output of the linkage function. Y contains the distances or
dissimilarities used to construct Z, as output by the pdist function. Z is a matrix of size (m–1)-by-3, with distance information in the third column. Y is a vector of size m*(m–1)/2.
[c,d] = cophenet(Z,Y) returns the cophenetic distances d in the same lower triangular distance vector format as Y.
The cophenetic correlation for a cluster tree is defined as the linear correlation coefficient between the cophenetic distances obtained from the tree, and the original distances (or dissimilarities)
used to construct the tree. Thus, it is a measure of how faithfully the tree represents the dissimilarities among observations.
The cophenetic distance between two observations is represented in a dendrogram by the height of the link at which those two observations are first joined. That height is the distance between the two
subclusters that are merged by that link.
The output value, c, is the cophenetic correlation coefficient. The magnitude of this value should be very close to 1 for a high-quality solution. This measure can be used to compare alternative
cluster solutions obtained using different algorithms.
The cophenetic correlation between Z(:,3) and Y is defined as
$c=\frac{{\sum }_{i<j}\left({Y}_{ij}-y\right)\left({Z}_{ij}-z\right)}{\sqrt{{\sum }_{i<j}{\left({Y}_{ij}-y\right)}^{2}{\sum }_{i<j}{\left({Z}_{ij}-z\right)}^{2}}}$
• Y[ij] is the distance between objects i and j in Y.
• Z[ij] is the cophenetic distance between objects i and j, from Z(:,3).
• y and z are the average of Y and Z(:,3), respectively.
X = [rand(10,3); rand(10,3)+1; rand(10,3)+2];
Y = pdist(X);
Z = linkage(Y,'average');
% Compute Spearman's rank correlation between the
% dissimilarities and the cophenetic distances
[c,D] = cophenet(Z,Y);
r = corr(Y',D','type','spearman')
r =
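The truncated output above is left as-is. For readers without MATLAB, the definition of c (the linear correlation between the original distances and the cophenetic distances) can be illustrated in plain Python; the three-point example and its single-linkage tree below are my own, worked by hand:

```python
import math

def pearson(a, b):
    """Linear (Pearson) correlation coefficient between two equal-length lists."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b))
    return num / den

# Three observations with pairwise distances d12=2, d13=6, d23=7.
# Single linkage merges {1,2} at height 2, then adds 3 at height min(6,7)=6,
# so the cophenetic distances are z12=2, z13=6, z23=6.
Y = [2.0, 6.0, 7.0]   # original dissimilarities (pdist order)
Z = [2.0, 6.0, 6.0]   # cophenetic distances read off the dendrogram
c = pearson(Y, Z)      # close to 1: the tree represents Y faithfully
```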
Version History
Introduced before R2006a | {"url":"https://au.mathworks.com/help/stats/cophenet.html","timestamp":"2024-11-04T08:29:01Z","content_type":"text/html","content_length":"72011","record_id":"<urn:uuid:25e3c638-3bb0-4493-916e-fc81b857b50e>","cc-path":"CC-MAIN-2024-46/segments/1730477027819.53/warc/CC-MAIN-20241104065437-20241104095437-00243.warc.gz"} |
Radius of Investigation - TestWells
The radius of investigation concept is one of the most important concepts in pressure transient analysis. We will go through a bit of theory, define the radius of investigation and see how we can
derive it from well test analysis.
Let’s assume a fully completed vertical well producing at constant rate q during Δt in an infinite homogeneous isotropic reservoir with constant properties, bounded above and below by impermeable
planes. At the start of production, the reservoir is at equilibrium state with a uniform pressure equal to the initial pressure Pi.
In these conditions, a solution of the diffusivity equation is given by:
With the diffusivity coefficient:
and the exponential integral function:
Fortunately, there is a log approximation of the Ei function:
As a result, the solution of the diffusivity equation could be simplified as:
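The displayed equations in this section were rendered as images on the original page and did not survive extraction. For reference, a standard form of the line-source solution and the related definitions is sketched below in consistent units; this is a reconstruction, not the article's exact notation:

```latex
% Line-source solution of the radial diffusivity equation:
P_i - P(r,\Delta t) \;=\; \frac{q\mu}{4\pi k h}\,
    \mathrm{Ei}\!\left(\frac{r^2}{4\eta\,\Delta t}\right),
\qquad
\eta = \frac{k}{\phi\,\mu\,c_t},
\qquad
\mathrm{Ei}(x) = \int_x^{\infty}\frac{e^{-u}}{u}\,du .

% Log approximation of Ei, valid for small x = r^2/(4\eta\,\Delta t):
\mathrm{Ei}(x) \approx -\ln x - \gamma,
\quad \gamma \approx 0.5772,
\qquad\Longrightarrow\qquad
P_i - P(r,\Delta t) \approx \frac{q\mu}{4\pi k h}
    \left[\ln\frac{4\eta\,\Delta t}{r^2} - \gamma\right].
```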
At the wellbore (r = r[w]), we have:
This solution defines the fundamental flow regime in well testing: radial flow regime. In the bedding plane, the flow will have the following cylindrical path:
When the reservoir production is established, the flow-lines converge towards the well with a radial geometry.
Within the reservoir, the pressure around the well decreases due to production, with the following pressure change:
This pressure change ∆P increases with time and decreases as the radius r (distance from the well) increases. This defines the transient effect. It is also worth noticing that the pressure
disturbance is proportional to the well rate q.
The figure below shows the pressure profile in the reservoir versus the radius r, at an instant t, which is created by the well producing at a constant rate q.
The minimum pressure is at the wellbore with p(r[w],∆t). Pressure increases with the distance from the well and tends towards the initial reservoir pressure Pi further away from the well.
The figure below shows the evolution of the pressure profile as the production time increases.
At a time ∆t[1] and at a reservoir radius R[1], we have:
At a time ∆t[2], the pressure at the point R[1] and time ∆t[1] will move to a distance R[2 ] in the reservoir:
As a result, the pressure that was in the reservoir at the point R[1] and time ∆t[1] will move by the time ∆t[2] to:
On a log scale, the pressure profile is shifted by 0.5 log(∆t[2])-0.5 log(∆t[1]), as shown below.
As it can be observed on the figure above, a larger part of the reservoir is affected by this pressure disturbance as the production time increases. The transient response evolves away from the well.
The radius from the well to the affected reservoir region is called the radius of investigation Ri. As production time increases, the radius of investigation also increases.
Definition of the Radius of Investigation Ri
Let’s first use a very simple definition of the radius of investigation at the time ∆t as p(Ri,∆t)= Pi. At this stage, a more complex definition of Ri could be argued.
In this case, the radius of investigation is defined as:
Here we used a simplistic approach to define the radius of investigation. In addition, we used the log approximation of the Ei function, but this is only valid for small values of r^2/(4η∆t). When
using the exponential integral function, the definition of ΔP = 0 will not result in a finite Ri value and therefore will not work.
We could define the radius of investigation as the point where the pressure variation is the fastest.
The pressure variation is defined as the derivative of pressure with respect to time:
The maximum of the pressure variation is defined as:
In this case, the radius of investigation is defined as:
In the oilfield units, this gives:
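The formulas themselves were images on the original page. A commonly used reconstruction (my assumption, not taken verbatim from the article): maximizing the pressure variation of the Ei solution gives, in consistent units,

```latex
r_i \;=\; 2\sqrt{\eta\,\Delta t},
\qquad \eta = \frac{k}{\phi\,\mu\,c_t}.

% In oilfield units (k in md, \Delta t in hours, r_i in ft,
% using \eta = 0.0002637\,k/(\phi\mu c_t)):
r_i \;\approx\; 0.032\,\sqrt{\frac{k\,\Delta t}{\phi\,\mu\,c_t}}.
```

Constants between roughly 0.029 and 0.033 appear in the literature depending on the exact definition, which is why several author groups are listed here.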
(Muskat, Van Poolen, Matthews & Russell, Lee, Streltsova, Bourdarot)
Below is another definition of the radius of investigation, in the oilfield units, from Earlougher:
More definitions of the radius of investigation are available in SPE 120515.
For all these definitions, the radius of investigation increases with time with the square root of ηΔt. This is our third golden rule in the following post: Conventional Well Test Derivative.
We can also observe that the radius of investigation doesn’t directly depend on the rate q. However, the rate will affect the amplitude of the pressure signal.
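A small sketch of these observations, using the r_i = 2√(ηΔt) form of the definition (other definitions differ only by a constant; the function names and example values are mine):

```python
import math

def diffusivity(k, phi, mu, ct):
    """Hydraulic diffusivity eta = k / (phi * mu * ct), consistent SI units."""
    return k / (phi * mu * ct)

def radius_of_investigation(eta, dt):
    """r_i = 2 * sqrt(eta * dt): grows with the square root of eta * dt."""
    return 2.0 * math.sqrt(eta * dt)

# Example in SI units: k = 1e-13 m^2 (~100 md), phi = 0.2, mu = 1e-3 Pa.s, ct = 1e-9 1/Pa
eta = diffusivity(1e-13, 0.2, 1e-3, 1e-9)            # 0.5 m^2/s
ri_1h = radius_of_investigation(eta, 3600.0)          # ~85 m after 1 hour
ri_4h = radius_of_investigation(eta, 4 * 3600.0)
# Quadrupling the time only doubles the radius of investigation (sqrt growth),
# and the rate q never enters: it scales the pressure amplitude, not r_i.
```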
So far, we have seen that the theory is applicable to a constant-rate production period, with a vertical well in a homogeneous reservoir of infinite extent. In this particular case, the flow is
radial/circular towards the well, following the radial flow regime.
In practice, it is difficult to define the radius of investigation from a production period.
The theory above is applicable to a constant-rate production period, when radial flow regime or the infinite-acting transient period is visible.
It is quite difficult in practice to keep a production rate constant for a long time period. The time ∆t from the equations above cannot be defined as the longest production time period. It has to be
the time when radial flow regime is present. The radius of investigation can then be estimated from the time when the radial flow regime ends.
One way to achieve a constant-rate period over a long time interval is to shut the well in and to look at the PBU test.
PBU radius of investigation
When the PBU and the drawdown responses are consistent, the radius of investigation can be derived from the PBU test.
To do so, the stabilization line is selected on the derivative plot. This defines the radial flow regime and results in a KH value. Then the radius of investigation is calculated from the formula
The radius of investigation can be estimated from the time when radial flow regime ends. In the PIE software, this is marked by a “cross” on the last point of the derivative stabilization line. In
this example, Ri= 255 ft.
Whenever possible, the derivative plot should be used to estimate the radius of investigation. If the derivative is noisy, the superposition plot could be used.
In this case, the radius of investigation Ri depends only on the duration of the PBU test. While it is in theory independent of the previous production rate and duration, the PBU radius of
investigation is limited by the accuracy of the pressure gauge. We can imagine that for a very long PBU test, the pressure changes recorded at the end of the PBU may become too small to be detected.
Nowadays with high quality gauges, this is no longer a common issue. However, some care must be taken when designing the PBU test. The company may have to increase the duration of the drawdown period
or increase the flow rate.
Extend the radius of investigation with Deconvolution
The above method to extract the radius of investigation from the conventional derivative is limited by the PBU duration, i.e. by the data from a single flow period.
The Deconvolution technique could then be used to extend the radius of investigation. Indeed, Deconvolution results in the equivalent initial constant-rate production period, of duration equal to the
entire duration of the test sequence. More information on the Deconvolution technique is available here: Deconvolution.
Using the example above, the deconvolution in red is defined over a longer time interval. By using the same technique with the derivative stabilization line, the radius of investigation is tripled
for this particular case. In this example, Ri= 759 ft.
Does it work in reservoirs with boundaries ?
Strictly speaking, the radius of investigation is only applicable during the radial flow regime, or the infinite-acting transient period, when the flow is radial towards the well.
In case of boundaries, then the user can define the radius of investigation by using the last points in the derivative stabilization, as such:
In this derivative plot, the increase in derivative marks the presence of boundaries.
For this case, we can define the minimum connected volume. This is the minimum hydrocarbon volume that the test proves. More info on this is available here: How to define the reserves associated
with a well.
An example
Let’s assume the reservoir below, with an increase in reservoir net thickness on one direction, and with the presence of boundaries.
The following test was performed: (note that this test sequence is not optimized…)
What should we do to calculate the radius of investigation ?
And which time ∆t should we use ? The production time, the longest drawdown period, the entire PBU time ?
The first thing we need to do is to plot all the PBU tests in the log-log plot (or derivative plot).
This helps us to gain some confidence in the data and it is easier to identify the radial flow regime. We then draw the stabilization line to highlight the radial flow regime, which then results in a
permeability-thickness KH value.
The ∆t to use for the radius of investigation is the last point of the derivative stabilization indicative of radial flow regime. In PIE, it is automatically calculated, based on the “cross” marker
in the stabilization line. As shown in the plot, this gives a radius of investigation Ri= 285 ft.
We derived below the deconvolution response in red and ensured its validity.
The deconvolved derivative in red is consistent with the conventional derivative in blue until 3 hours. Then we can note a discrepancy between deconvolution and the conventional derivative. This is
because of a distortion on the conventional derivative. While deconvolution is derived with respect to the log of time, the superposition time is used for the conventional derivative. This brings
some error in the conventional derivative from 3 hours onwards.
From deconvolution, we can extract a larger radius of investigation, Ri= 449 ft.
The log-log plot below shows one of the models that matches the deconvolution data. We also verified that all the other plots are matched. (click on the plot to enlarge it)
with the following legend:
To obtain a minimum connected volume, we can close the interpretation model by adding some fictitious boundaries. Then we can reduce the distances from the well to these fictitious boundaries until a
discrepancy is noted on the derivative at late times (SPE 102483). This is shown below.
The minimum pore-volume is 136.3 million cubic feet.
As a result, this test proves a minimum connected oil volume of 8.3 million barrels of oil at surface.
Another technique can be used, which is to place a unit-slope straight line on the final point of the deconvolved derivative (SPE 116575). This results in a conservative minimum connected volume of
7.2 million barrels of oil at surface.
15 comments on “Radius of Investigation”
thank you very much for this gift in this end of year.
Hi hadi_ansari238, thanks very much for this kind comment!
We are pleased to be able to help!
The TestWells team
[email protected]
Thanks a lot for sharing this.
What would be the Rinv in an injection well? when dealing with heavy viscous oil?
Hi hhendizadeh,
Thanks for your comment. With a (thermal) fracture and multiphase flow in the reservoir, the radius of investigation should -strictly speaking- not be applied to injection wells. However, it
could give you some ideas, in particular about the distance to the water/oil interface or water front. This could be supported by a volume calculation. More information is available in the
training course called “Pressure Transient Analysis in Producing Fields” (http://testwells.com/pta-in-producing-fields-content).
We hope this helps.
Best Regards,
The TestWells team
Thanks so much, really helpful article…
Appreciate your efforts
Thanks OmarBittar for your comment, much appreciated.
Ali AlFarsi
Thank you for this.
Since Radius of investigation is not applicable for the boundaries then, how is it possible that Saphir and PIE calculate the distance to the second boundary or even the third given that these
boundaries are not equally spaced to the well location???
Are these wrong practices???
Hi Ali,
Thanks for your comment. Distances to these boundaries are obtained from the interpretation model. The radius of investigation formula can be used to get some “idea” about the distances to
the boundaries or the size of the reservoir, but strictly speaking, the equation should be applied only to the radial flow regime (circular flow geometry) in a homogeneous reservoir.
We hope this helps,
Best Regards,
The TestWells team
Ali AlFarsi
Appreciate your quick reply.
Thank you this is very helpful.
Hello dear welltest.
It is mentioned using above techniques one can know minimum connected volume of oil at surface. What pressure and temperature conditions does surface means in this context?
Hi dh,
Thanks for your comment. This should be standard surface conditions.
Best Regards,
The TestWells team
Ahmed Nour
Why there is a difference in the Rinv value when using “stab line” vs Fault.?
Hi Ahmed,
Thanks for your comment.
There should be a small difference between the two, as there is a transition between the end of the radial flow regime and the development of the fault response. The radius of investigation from
the radial flow regime should be lower than the distance to the fault.
If the Rinv is obtained from a distorted conventional derivative, then the difference may be more significant.
We hope this helps,
Best Regards,
The TestWells team
Sorry my last comment – cant recall if i stated the expansion factor – try 76
Hi testwell teams,
Thanks alot for this sharing.
if we have radial composite case which has 2 radial flow regime, so we have two different of permeability number, so which one permeability data that we used for calculate the radius
investigation ?. As we know in saphir investigation number generete depend on we put the radial flow regime line.
You must be logged in to post a comment.
This site uses Akismet to reduce spam. Learn how your comment data is processed. | {"url":"https://testwells.com/radius-of-investigation","timestamp":"2024-11-04T01:23:44Z","content_type":"text/html","content_length":"120175","record_id":"<urn:uuid:9205d29c-1882-4fc6-80cd-e2bfdf7cc842>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00378.warc.gz"} |
Green's Function for the Two-Dimensional, Radial Laplacian
November 12th, 2016
In a previous blog post I derived the Green’s function for the three-dimensional, radial Laplacian in spherical coordinates. That post showed how the actual derivation of the Green’s function was
relatively straightforward, but the verification of the answer was much more involved. At the end of the post, I commented on how many textbooks simplify the expression for the Green’s function,
ignoring the limiting behavior around the origin (\(r = 0\)).
I decided to follow up that post with a similar derivation for the two-dimensional, radial Laplacian. The derivations are almost identical - in fact, throughout this post, I will refer to the
spherical Laplacian in a few places. You might want to read over the previous blog post before tackling this one.
Derivation of the Green’s Function
Consider Poisson’s equation in polar coordinates.
\[$$\nabla^2 \psi = f$$\]
We can expand the Laplacian in terms of the \((r,\theta)\) coordinate system. I looked up the full Laplacian on Wolfram Mathworld (i.e. cylindrical coordinates without the z-component).
\[$$\frac{1}{r} \frac{\partial}{\partial r} \left ( r \frac{\partial \psi}{\partial r} \right ) + \frac{1}{r^2} \frac{\partial^2 \psi}{\partial \theta^2} = f$$\]
Some textbooks write the equation in a slightly different form; they expand the derivatives with respect to \(r\), yielding:
\[$$\frac{\partial^2 \psi}{\partial r^2} + \frac{1}{r} \frac{\partial \psi}{\partial r} + \frac{1}{r^2} \frac{\partial^2 \psi}{\partial \theta^2} = f$$\]
In this derivation, I am going to keep the original form of the Laplacian because it will make it easier to use integration by parts. We are only interested in the radial Green’s function here, so we
can immediately simplify the expression to include only radial terms.
\[$$\frac{1}{r} \frac{\partial}{\partial r} \left ( r \frac{\partial \psi}{\partial r} \right ) = f$$\]
We are looking for a Green’s function \(G\) that satisfies:
\[$$\nabla^2 G = \frac{1}{r} \frac{d}{dr} \left ( r \frac{dG}{dr} \right ) = \delta(r)$$\]
Let’s point something out right off the bat. In the previous blog post, I set the Green’s function equal to \(4\pi \delta(r)\) wheras here, I set it equal to \(\delta(r)\) without the constant. Why
is that?
It’s because physicists love to mix all sorts of conventions and confuse people :P
Traditionally, mathematicians use \(\delta(r)\) in all cases. However, it is convenient to use \(4\pi\delta(r)\) for the derivation of the three-dimensional Green’s function for the Laplacian in
spherical coordinates for a number of reasons. The final ouput is simple (no constants aside from a negative sign), the constant describes the solid angle of a sphere, and the answer aligns with CGS
units in electromagnetism. However, I have not come across any sources where the 2D Green’s function is set to \(2\pi\delta(r)\). Thus, sticking with tradition, I will use \(\delta(r)\).
If this change of notation rubs you the wrong way, you can go back to my previous blog post and re-derive the Green’s function using \(\delta(r)\) without the constant of \(4\pi\). Your answer will
(unsurprisingly) be \(G = -1/(4\pi r)\) rather than \(G = -1/r\).
Let’s integrate both sides of the expression above with respect to \(r\). This will eliminate the Dirac delta function in the expression.
\[$$\int \frac{1}{r} \frac{d}{dr} \left ( r \frac{dG}{dr} \right ) dr = \int \delta(r) dr$$\] \[$$\int \frac{1}{r} \frac{d}{dr} \left ( r \frac{dG}{dr} \right ) dr = 1$$\]
Whenever we see derivatives within an integral, there is a chance that the integral might be simplified by integration by parts. It turns out that integration by parts works splendidly in this case.
\[$$u = \frac{1}{r} \Rightarrow du = -\frac{1}{r^2} dr$$\] \[$$dv = \frac{d}{dr} \left [ r \frac{dG}{dr} \right ] dr \Rightarrow v = r \frac{dG}{dr}$$\]
The expression becomes:
\[$$\left ( \frac{1}{r} \right ) \left ( r \frac{dG}{dr} \right ) - \int \left ( r \frac{dG}{dr} \right ) \left ( -\frac{1}{r^2} dr \right ) = 1$$\] \[$$\frac{dG}{dr} + \int \frac{1}{r} \frac{dG}{dr}
dr = 1$$\]
Similar to the radial Laplacian in spherical coordinates, we can use integration by parts a second time to simplify the integral in the expression above.
\[$$u = \frac{1}{r} \Rightarrow du = -\frac{1}{r^2} dr$$\] \[$$dv = \frac{dG}{dr} dr \Rightarrow v = G$$\]
We expand the expression using integration by parts once again.
\[$$\frac{dG}{dr} + \left [ \left (\frac{1}{r} \right ) \left ( G \right ) - \int G \left ( -\frac{1}{r^2} dr\right ) \right ] = 1$$\] \[$$\frac{dG}{dr} + \frac{G}{r} + \int \frac{G}{r^2} dr = 1$$\]
It is generally preferable to work with differential equations rather than integral equations. Thus, on each side of the expression above, we can take the derivative with respect to \(r\); this will
eliminate the integral in the expression. Remember that \(G\) is a function of \(r\), so we have to use the quotient rule on the \(G/r\) term.
\[$$\frac{d^2 G}{dr^2} + \left ( \frac{1}{r} \frac{dG}{dr} - \frac{G}{r^2} \right ) + \frac{G}{r^2} = 0$$\] \[$$\frac{d^2 G}{dr^2} + \frac{1}{r} \frac{dG}{dr} = 0$$\]
Similar to the case of the radial, spherical Laplacian, we have simplified the expression for \(G\) into a straightforward differential equation. I am going to guess and check the solution \(G = A \ln \vert r \vert + B\) where \(A\) and \(B\) are some constants.
\[ \begin{aligned} \frac{d^2 G}{dr^2} + \frac{1}{r}\frac{dG}{dr} &= \frac{d^2}{dr^2} \left [ A \ln \vert r \vert + B \right ] + \frac{1}{r}\frac{d}{dr} \left [ A \ln \vert r \vert + B \right ] \\ &= -\frac{A}{r^2} + \frac{1}{r} \left ( \frac{A}{r} \right ) \\ &= -\frac{A}{r^2} + \frac{A}{r^2} \\ &= 0 \end{aligned} \]
Thus, we see that the solution \(G = A \ln \vert r \vert + B\) solves the differential equation. Notice how \(B\) is simply an additive constant in the expression. We usually set this constant equal
to zero in order to obtain the fundamental solution of the differential equation. Thus, we will use the function \(G = A \ln \vert r \vert\) for the remainder of this post.
Checking the Solution
Like the derivation in the previous blog post, we need to check that our solution makes sense. And like the previous blog post, we run into a small problem.
\[ \begin{aligned} \nabla^2 G &= \frac{1}{r} \frac{d}{dr} \left [ r \frac{d}{dr} \left ( A \ln \vert r \vert \right ) \right ] \\ &= \frac{1}{r} \frac{d}{dr} \left [ r \left ( \frac{A}{r} \right ) \right ] \\ &= \frac{1}{r} \frac{d}{dr} \left [ A \right ] \\ &= 0 \end{aligned} \]
Zero is not our expected output - we were expecting a Dirac delta function. The reason that we are not getting the expected output is that the domain of the solution does not include the origin \(r =
0\) (we divided by \(r\) throughout the derivation). We need to analyze the solution in the limiting case of \(r \rightarrow 0\). The strategy is identical to the previous blog post.
Define a function \(\Phi_\epsilon\) where:
\[$$\Phi_\epsilon = A \ln \vert r + \epsilon \vert$$\]
As \(\epsilon\) approaches zero, \(\Phi_\epsilon\) approaches the Green’s function \(G\). Let’s plug the expression \(\Phi_\epsilon\) into the Laplacian and simplify.
\[ \begin{aligned} \nabla^2 \Phi_\epsilon &= \frac{1}{r} \frac{d}{dr} \left [ r \frac{d}{dr} \left ( A \ln \vert r + \epsilon \vert \right ) \right ] \\ &= \frac{1}{r} \frac{d}{dr} \left [ r \left ( \frac{A}{r+\epsilon} \right ) \right ] \\ &= \frac{1}{r} \frac{d}{dr} \left [ \frac{A r}{r+\epsilon} \right ] \\ &= \frac{1}{r} \left [ \frac{A}{r+\epsilon} - \frac{Ar}{(r+\epsilon)^2} \right ] \\ &= \frac{1}{r} \left [ \frac{A(r + \epsilon)}{(r+\epsilon)^2} - \frac{Ar}{(r+\epsilon)^2} \right ] \\ &= \frac{1}{r} \left [ \frac{Ar + A\epsilon - Ar}{(r+\epsilon)^2} \right ] \\ &= \frac{1}{r} \left [ \frac{A\epsilon}{(r+\epsilon)^2} \right ] \\ &= \frac{A\epsilon}{r(r+\epsilon)^2} \end{aligned} \]
Remember that by the definition of a Green’s function, the Laplacian of \(G\) (or more correctly, the Laplacian of the limiting case of \(\Phi_\epsilon\)), will yield \(\delta(r)\). If that is true,
I should be able to integrate \(\nabla^2 \Phi_\epsilon\) over all space and get one. Let’s try doing just that:
\[ \begin{aligned} \int \nabla^2 \Phi_\epsilon \, dA &= \int_0^R \int_0^{2\pi} \frac{A\epsilon}{r(r+\epsilon)^2} \, r \, dr \, d\theta \\ &= 2\pi \int_0^R \frac{A\epsilon}{r(r+\epsilon)^2} \, r \, dr \\ &= 2\pi A\epsilon \int_0^R \frac{1}{(r+\epsilon)^2} \, dr \\ &= 2\pi A\epsilon \left [ - \frac{1}{r+\epsilon} \right ]_0^R \\ &= - 2\pi A\epsilon \left [ \frac{1}{R+\epsilon} - \frac{1}{\epsilon} \right ] \\ &= - 2\pi A\epsilon \left [ - \frac{R}{\epsilon(R+\epsilon)} \right ] \\ &= \frac{2\pi A\epsilon R}{\epsilon(R+\epsilon)} \\ &= \frac{2\pi A R}{R+\epsilon} \end{aligned} \]
If we are going to integrate over all space, we need to take the limit of this expression as \(R \rightarrow \infty\). Once again, we turn to L’Hopital’s rule.
\[ \begin{aligned} \lim_{R \rightarrow \infty} \int \nabla^2 \Phi_\epsilon \, dA &= \lim_{R \rightarrow \infty} \frac{2\pi A R}{R+\epsilon} \\ &= \lim_{R \rightarrow \infty} 2\pi A \\ &= 2 \pi A \end{aligned} \]
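The closed-form value of the integral over a finite disc can be spot-checked numerically. The quadrature scheme and parameter values below are my own choices:

```python
import math

# Check: with A = 1/(2*pi), the integral of the Laplacian of Phi_eps over a
# disc of radius R is 2*pi*A*eps * integral_0^R dr/(r+eps)^2 = R/(R+eps),
# which tends to 1 as R grows, consistent with a unit Dirac delta as eps -> 0.
A = 1.0 / (2.0 * math.pi)
R, eps = 10.0, 0.01

n = 100_000                     # midpoint-rule quadrature
h = R / n
integral = h * sum(1.0 / ((i + 0.5) * h + eps) ** 2 for i in range(n))
numeric = 2.0 * math.pi * A * eps * integral

closed_form = R / (R + eps)     # about 0.999 for these values
```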
When \(A = 1/(2\pi)\), the Laplacian of \(\Phi_\epsilon\) satisfies the condition that the integral must equal one. Finally, let’s verify that the Laplacian of \(\Phi_\epsilon\) is actually a Dirac
delta function. We will define one final function, \(f_\epsilon(r)\):
\[ f_\epsilon (r) = \frac{1}{2 \pi A} \nabla^2 \Phi_\epsilon (r) = \frac{1}{2\pi A} \frac{A\epsilon}{r(r+\epsilon)^2} = \frac{1}{2\pi} \frac{\epsilon}{r(r+\epsilon)^2} \]
What is the value of this function in the limit of \(\epsilon \rightarrow 0\)? If \(r\) is non-zero, the limit is very simple:
\[ \lim_{\epsilon \rightarrow 0} f_\epsilon (r) = \lim_{\epsilon \rightarrow 0} \frac{1}{2\pi} \frac{0}{r^3} = 0 \]
However, if \(r\) is equal to zero, we need to use L’Hopital’s rule to simplify the expression.
\[ \lim_{\epsilon \rightarrow 0} f_\epsilon (r) = \lim_{\epsilon \rightarrow 0} \frac{1}{2\pi} \frac{1}{2r(r+\epsilon)} = \frac{1}{2\pi} \frac{1}{2r(r)} = \frac{1}{4 \pi r^2} \]
This becomes infinitely large when \(r = 0\).
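Numerically, the delta-like behavior is easy to see (again, my own illustration): for any fixed \(r>0\) the value of \(f_\epsilon(r)\) shrinks as \(\epsilon \rightarrow 0\), while near the origin it blows up — at \(r = \epsilon\), for instance, \(f_\epsilon\) grows like \(1/(8\pi\epsilon^2)\):

```python
import math

def f(r, eps):
    # f_eps(r) = (1/(2*pi)) * eps / (r * (r + eps)^2)
    return (1 / (2 * math.pi)) * eps / (r * (r + eps) ** 2)

# fixed r = 1: the value vanishes as eps -> 0
for eps in (1e-1, 1e-3, 1e-5):
    print(eps, f(1.0, eps))

# near the origin (r = eps): the peak grows without bound, like 1/(8*pi*eps^2)
for eps in (1e-1, 1e-3, 1e-5):
    print(eps, f(eps, eps), 1 / (8 * math.pi * eps ** 2))
```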
The Final Solution
We can now tie everything together in a similar method as the previous post. In the limit of \(\epsilon \rightarrow 0\), the function \(f_\epsilon(r)\) is infinitely large at the origin and zero
everywhere else. Thus, \(f_\epsilon(r)\) is a Dirac delta function \(\delta(r)\).
The function \(f_\epsilon(r)\) is equal to the Laplacian of \(\Phi_\epsilon\) (remember that the constant \((2 \pi A)^{-1}\) is equal to one since \(A=(2\pi)^{-1}\)). This means that in the limit as
\(\epsilon \rightarrow 0\), the function \(\nabla^2\Phi_\epsilon\) is equal to a Dirac delta function.
Finally, the limiting case of \(\Phi_\epsilon\) as \(\epsilon \rightarrow 0\) is equal to the Green’s function \(G = A \ln \vert r \vert = (2\pi)^{-1} \ln \vert r \vert\). Thus, the Laplacian of the
Green’s function is a Dirac delta function. Just what we needed.
Until next time!
PFE: Pfizer | Logical Invest
What do these metrics mean?
'The total return on a portfolio of investments takes into account not only the capital appreciation on the portfolio, but also the income received on the portfolio. The income typically consists of
interest, dividends, and securities lending fees. This contrasts with the price return, which takes into account only the capital gain on an investment.'
Using this definition on our asset we see for example:
• Compared with the benchmark SPY (101.5%) in the period of the last 5 years, the total return, or performance of -4.9% of Pfizer is lower, thus worse.
• Compared with SPY (29.7%) in the period of the last 3 years, the total return, or performance of -26.8% is smaller, thus worse.
'The compound annual growth rate (CAGR) is a useful measure of growth over multiple time periods. It can be thought of as the growth rate that gets you from the initial investment value to the ending
investment value if you assume that the investment has been compounding over the time period.'
Applying this definition to our asset in some examples:
• Compared with the benchmark SPY (15.1%) in the period of the last 5 years, the annual return (CAGR) of -1% of Pfizer is smaller, thus worse.
• Looking at the compounded annual growth rate (CAGR) of -9.9% in the period of the last 3 years, we see it is relatively lower, thus worse in comparison to SPY (9.1%).
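As a concrete illustration of the CAGR formula (using made-up numbers, not Pfizer's actual data), the growth rate that compounds an initial value into an ending value over a number of years is:

```python
def cagr(initial_value, ending_value, years):
    # compound annual growth rate: the constant rate that compounds
    # initial_value into ending_value over the given number of years
    return (ending_value / initial_value) ** (1 / years) - 1

# doubling an investment over 5 years
print(f"{cagr(100, 200, 5):.2%}")  # ≈ 14.87% per year
```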
'In finance, volatility (symbol σ) is the degree of variation of a trading price series over time as measured by the standard deviation of logarithmic returns. Historic volatility measures a time
series of past market prices. Implied volatility looks forward in time, being derived from the market price of a market-traded derivative (in particular, an option). Commonly, the higher the
volatility, the riskier the security.'
Applying this definition to our asset in some examples:
• Looking at the historical 30 days volatility of 27.2% in the last 5 years of Pfizer, we see it is relatively greater, thus worse in comparison to the benchmark SPY (20.9%)
• Looking at the 30 days standard deviation of 26.3% in the period of the last 3 years, we see it is relatively larger, thus worse in comparison to SPY (17.6%).
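The volatility definition above — the standard deviation of logarithmic returns — can be sketched as follows; the sample prices and the annualization factor of 252 trading days are illustrative assumptions:

```python
import math
import statistics

def annualized_volatility(prices, periods_per_year=252):
    # standard deviation of log returns, scaled to an annual figure
    log_returns = [math.log(b / a) for a, b in zip(prices, prices[1:])]
    return statistics.stdev(log_returns) * math.sqrt(periods_per_year)

daily_prices = [100, 101, 99, 102, 100]
print(f"{annualized_volatility(daily_prices):.1%}")
```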
'The downside volatility is similar to the volatility, or standard deviation, but only takes losing/negative periods into account.'
Which means for our asset as example:
• Looking at the downside volatility of 18.3% in the last 5 years of Pfizer, we see it is relatively greater, thus worse in comparison to the benchmark SPY (14.9%)
• During the last 3 years, the downside risk is 17.7%, which is larger, thus worse than the value of 12.3% from the benchmark.
'The Sharpe ratio (also known as the Sharpe index, the Sharpe measure, and the reward-to-variability ratio) is a way to examine the performance of an investment by adjusting for its risk. The ratio
measures the excess return (or risk premium) per unit of deviation in an investment asset or a trading strategy, typically referred to as risk, named after William F. Sharpe.'
Applying this definition to our asset in some examples:
• Looking at the risk / return profile (Sharpe) of -0.13 in the last 5 years of Pfizer, we see it is relatively lower, thus worse in comparison to the benchmark SPY (0.6)
• During the last 3 years, the Sharpe Ratio is -0.47, which is lower, thus worse than the value of 0.37 from the benchmark.
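The Sharpe ratio definition can be computed directly from a return series; the sample monthly returns below are invented for illustration, and the figure is per-period (not annualized):

```python
import statistics

def sharpe_ratio(returns, risk_free_rate=0.0):
    # mean excess return per unit of standard deviation of excess returns
    excess = [r - risk_free_rate for r in returns]
    return statistics.mean(excess) / statistics.stdev(excess)

monthly_returns = [0.02, -0.01, 0.03, 0.01, -0.02, 0.04]
print(round(sharpe_ratio(monthly_returns), 3))
```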
'The Sortino ratio improves upon the Sharpe ratio by isolating downside volatility from total volatility by dividing excess return by the downside deviation. The Sortino ratio is a variation of the
Sharpe ratio that differentiates harmful volatility from total overall volatility by using the asset's standard deviation of negative asset returns, called downside deviation. The Sortino ratio takes
the asset's return and subtracts the risk-free rate, and then divides that amount by the asset's downside deviation. The ratio was named after Frank A. Sortino.'
Applying this definition to our asset in some examples:
• The downside risk / excess return profile over 5 years of Pfizer is -0.19, which is smaller, thus worse compared to the benchmark SPY (0.84) in the same period.
• Looking at the downside risk / excess return profile of -0.7 in the period of the last 3 years, we see it is relatively smaller, thus worse in comparison to SPY (0.53).
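The Sortino ratio replaces the Sharpe denominator with the downside deviation. Conventions for downside deviation vary; the sketch below divides the sum of squared negative excess returns by the total number of periods, which is one common choice:

```python
import statistics

def sortino_ratio(returns, risk_free_rate=0.0):
    # mean excess return divided by downside deviation
    excess = [r - risk_free_rate for r in returns]
    downside_sq = [r * r for r in excess if r < 0]
    downside_dev = (sum(downside_sq) / len(excess)) ** 0.5
    return statistics.mean(excess) / downside_dev

monthly_returns = [0.02, -0.01, 0.03, 0.01, -0.02, 0.04]
print(round(sortino_ratio(monthly_returns), 3))
```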
'The Ulcer Index is a technical indicator that measures downside risk, in terms of both the depth and duration of price declines. The index increases in value as the price moves farther away from a
recent high and falls as the price rises to new highs. The indicator is usually calculated over a 14-day period, with the Ulcer Index showing the percentage drawdown a trader can expect from the high
over that period. The greater the value of the Ulcer Index, the longer it takes for a stock to get back to the former high.'
Applying this definition to our asset in some examples:
• The Ulcer Index over 5 years of Pfizer is 28, which is larger, thus worse compared to the benchmark SPY (9.32) in the same period.
• Looking at the downside risk index of 35 in the period of the last 3 years, we see it is relatively greater, thus worse in comparison to SPY (10).
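The Ulcer Index definition — drawdown depth and duration together — amounts to a root-mean-square of percentage drawdowns from the running peak; the price series here is made up:

```python
def ulcer_index(prices):
    # root-mean-square of percentage drawdowns from the running high
    peak = prices[0]
    squared_dd = []
    for p in prices:
        peak = max(peak, p)
        squared_dd.append(((p - peak) / peak * 100) ** 2)
    return (sum(squared_dd) / len(squared_dd)) ** 0.5

prices = [100, 120, 90, 110, 130, 80, 95]
print(round(ulcer_index(prices), 2))
```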
'Maximum drawdown measures the loss in any losing period during a fund’s investment record. It is defined as the percent retrenchment from a fund’s peak value to the fund’s valley value. The drawdown
is in effect from the time the fund’s retrenchment begins until a new fund high is reached. The maximum drawdown encompasses both the period from the fund’s peak to the fund’s valley (length), and
the time from the fund’s valley to a new fund high (recovery). It measures the largest percentage drawdown that has occurred in any fund’s data record.'
Using this definition on our asset we see for example:
• Looking at the maximum reduction from previous high of -54.8% in the last 5 years of Pfizer, we see it is relatively smaller, thus worse in comparison to the benchmark SPY (-33.7%)
• Looking at the maximum drop from peak to valley of -54.8% in the period of the last 3 years, we see it is relatively lower, thus worse in comparison to SPY (-24.5%).
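The maximum drawdown definition above — the largest percentage retrenchment from a peak to a subsequent valley — can be sketched as (sample prices invented):

```python
def max_drawdown(prices):
    # worst percentage drop from a running peak to any subsequent trough
    peak = prices[0]
    worst = 0.0
    for p in prices:
        peak = max(peak, p)
        worst = min(worst, (p - peak) / peak)
    return worst

prices = [100, 120, 90, 110, 130, 80, 95]
print(f"{max_drawdown(prices):.1%}")  # drop from the 130 peak to the 80 trough
```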
'The Drawdown Duration is the length of any peak to peak period, or the time between new equity highs. The Max Drawdown Duration is the worst (the maximum/longest) amount of time an investment has
seen between peaks (equity highs). Many assume Max DD Duration is the length of time between new highs during which the Max DD (magnitude) occurred. But that isn’t always the case. The Max DD
duration is the longest time between peaks, period. So it could be the time when the program also had its biggest peak to valley loss (and usually is, because the program needs a long time to recover
from the largest loss), but it doesn’t have to be.'
Applying this definition to our asset in some examples:
• Compared with the benchmark SPY (488 days) in the period of the last 5 years, the maximum time in days below previous high water mark of 723 days of Pfizer is larger, thus worse.
• Looking at the maximum days below previous high of 723 days in the period of the last 3 years, we see it is relatively higher, thus worse in comparison to SPY (488 days).
'The Average Drawdown Duration is an extension of the Maximum Drawdown. However, this metric does not explain the drawdown in dollars or percentages, rather in days, weeks, or months. The Avg
Drawdown Duration is the average amount of time an investment has seen between peaks (equity highs), or in other terms the average of time under water of all drawdowns. So in contrast to the Maximum
duration it does not measure only one drawdown event but calculates the average of all.'
Which means for our asset as example:
• Compared with the benchmark SPY (123 days) in the period of the last 5 years, the average days below previous high of 243 days of Pfizer is greater, thus worse.
• Compared with SPY (177 days) in the period of the last 3 years, the average days under water of 352 days is higher, thus worse.
Superrational Agents Kelly Bet Influence! — AI Alignment Forum
As a follow-up to the Walled Garden discussion about Kelly betting, Scott Garrabrant made some super-informal conjectures to me privately, involving the idea that some class of "nice" agents would
"Kelly bet influence", where "influence" had something to do with anthropics and acausal trade.
I was pretty incredulous at the time. However, as soon as he left the discussion, I came up with an argument for a similar fact. (The following does not perfectly reflect what Scott had in mind, by
any means. His notion of "influence" was very different, for a start.)
The meat of my argument is just Critch's negotiable RL theorem. In fact, that's practically the entirety of my argument. I'm just thinking about the consequences in a different way from how I have before.
Rather than articulating a real decision theory that deals with all the questions of acausal trade, bargaining, commitment races, etc, I'm just going to imagine a class of superrational agents which
solve these problems somehow. These agents "handshake" with each other and negotiate (perhaps acausally) a policy which is Pareto-optimal wrt each of their preferences.
Negotiable RL
Critch's negotiable RL result studies the question of what an AI should do if it must serve multiple masters. For this post, I'll refer to the masters as "coalition members".
He shows the following:
Any policy which is Pareto-optimal with respect to the preferences of coalition members, can be understood as doing the following. Each coalition member is assigned a starting weight, with weights
summing to one. At each decision, the action is selected via the weighted average of the preferences of each coalition member, according to the current weights. At each observation, the weights are
updated via Bayes' Law, based on the beliefs of coalition members.
He was studying what an AI's policy should be, when serving the coalition members; however, we can apply this result to a coalition of superrational agents who are settling on their own policy,
rather than constructing a robotic servant.
Critch remarks that we can imagine the weight update as the result of bets which the coalition members would make with each other. I've known about this for a long time, and it made intuitive sense
to me that they'll happily bet on their beliefs; so, of course they'll gain/lose influence in the coalition based on good/bad predictions.
What I didn't think too hard about was how they end up betting. Sure, the fact that it's equivalent to a Bayesian update is remarkable. But it makes sense once you think about the proof.
Or does it?
To foreshadow: the proof works from the assumption of Pareto optimality. So it collectively makes sense for the agents to bet this way. But the "of course it makes sense for them to bet on their
beliefs" line of thinking tricks you into thinking that it individually makes sense for the agents to bet like this. However, this need not be the case.
Kelly Betting & Bayes
The Kelly betting fraction can be written as:

\( f = \frac{pr - 1}{r - 1} \)

where \(p\) is your probability for winning, and \(r\) is the return rate if you win (i.e., if you stand to double your money, \(r=2\); etc.).
Now, it turns out, betting f of your money (and keeping the rest in reserve) is equivalent to betting p of your money and putting (1-p) on the other side of the bet. Betting against yourself is a
pretty silly thing to do, but since you'll win either way, there's no problem:
Betting \(f\) of your money:
• If you win, you've got \(fr\), plus the \(1-f\) you held.
□ So the sum \(= fr + (1-f) = pr\) of your initial money.
• If you lose, you've still got \(1-f\) of what you had.
Betting against yourself, with fractions like your beliefs:
• If you win, you've got \(pr\) of your money.
• If you lose, the payoff ratio (assuming you can get the reverse odds for the reverse bet) is \(\frac{r}{r-1}\). So, since you put down \(1-p\), you get \(\frac{r(1-p)}{r-1} = 1-f\).
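To check this equivalence numerically (my own sketch, using the post's convention that \(r\) is the gross return multiple): betting the Kelly fraction and holding the rest produces exactly the same two payoffs as splitting your whole bankroll \(p\) / \(1-p\) across both sides at fair two-sided odds:

```python
def kelly_fraction(p, r):
    # Kelly fraction when winning multiplies your stake by r (r=2 doubles it)
    return (p * r - 1) / (r - 1)

def kelly_payoffs(p, r):
    # bet fraction f, keep 1-f in reserve -> (wealth if win, wealth if lose)
    f = kelly_fraction(p, r)
    return (f * r + (1 - f), 1 - f)

def both_sides_payoffs(p, r):
    # bet p of wealth at return r, and 1-p at the reverse odds r/(r-1)
    return (p * r, (1 - p) * r / (r - 1))

print(kelly_payoffs(0.6, 3.0))
print(both_sides_payoffs(0.6, 3.0))  # identical pair of payoffs
```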
But now imagine that a bunch of bettors are using the second strategy to make bets with each other, with the "house odds" being the weighted average of all their beliefs (weighted by their bankrolls,
that is). Aside from the betting-against-yourself part, this is a pretty natural thing to do: these are the "house odds" which make the house revenue-neutral, so the house never has to dig into its
own pockets to award winnings.
You can imagine that everyone is putting money on two different sides of a table, to indicate their bets. When the bet is resolved, the losing side is pushed over to the winning side, and everyone
who put money on the winning side picks up a fraction of money proportional to the fraction they originally contributed to that side. (And since payoffs of the bet-against-yourself strategy are
exactly identical to Kelly betting payoffs, a bunch of Kelly bets at house odds rearrange money in exactly the same way as this.)
But this is clearly equivalent to how hypotheses redistribute weight during Bayesian updates!
So, a market of Kelly bettors re-distributes money according to Bayesian updates.
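Here's a small simulation of that claim (my own sketch): each bettor splits their bankroll \(p\) / \(1-p\) across the two sides of the table, the winning side takes the whole pot in proportion to contributions, and the resulting wealths match a Bayesian update with wealths as prior weights:

```python
def market_update(wealths, probs, outcome):
    # each bettor puts w*p on "yes" and w*(1-p) on "no"; the winning side
    # splits the entire pot in proportion to what each contributed to it
    winning = [w * (p if outcome else 1 - p) for w, p in zip(wealths, probs)]
    pot = sum(wealths)
    return [pot * c / sum(winning) for c in winning]

def bayes_update(wealths, probs, outcome):
    # wealths as prior weights, p (or 1-p) as likelihood; renormalized so
    # total wealth is conserved
    unnorm = [w * (p if outcome else 1 - p) for w, p in zip(wealths, probs)]
    scale = sum(wealths) / sum(unnorm)
    return [u * scale for u in unnorm]

w, p = [0.5, 0.3, 0.2], [0.9, 0.5, 0.1]
print(market_update(w, p, True))
print(bayes_update(w, p, True))  # identical lists
```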
Altruistic Bets
Therefore, we can interpret the superrational coalition members as betting their coalition weight, according to the Kelly criterion.
But, this is a pretty weird thing to do!
I've argued that the main sensible justification for using the Kelly criterion is if you have utility logarithmic in wealth. Here, this translates to utility logarithmic in coalition weight.
It's possible that under some reasonable assumptions about the world, we can argue that utility of coalition members will end up approximately logarithmic. But Critch's theorem applies to lots of
situations, including small ones where there isn't any possibility for weird things to happen over long chains of bets as in some arguments for Kelly.
Typically, final utility will not even be continuous in coalition weight: small changes in coalition weight often won't change the optimal strategy at all, but at select tipping points, the optimal
strategy will totally change to reflect the reconfigured trade-offs between preferences.
Intuitively, these tipping points should factor significantly in a coalition member's betting strategy; you'd be totally indifferent to small bets which can't change anything, but avoid specific
transitions strongly, and seek out others. If the coalition members were betting based on their selfish preferences, this would be the case.
Yet, the coalition members end up betting according to a very simple formula, which does not account for any of this.
We can't justify this betting behavior from a selfish perspective (that is, not with the usual decision theories); as I said, the bets don't make sense.
But we're not dealing with selfish agents. These agents are acting according to a Pareto-optimal policy.
And that's ultimately the perspective we can justify the bets from: these are altruistically motivated bets. Exchanging coalition weight in this way is best for everyone. It keeps you Pareto-optimal!
This is very counterintuitive. I suspect most people would agree with me that there seems to be no reason to bet, if you're being altruistic rather than selfish. Not so! They're not betting for their
personal benefit. They're betting for the common good!
Of course, that fact is a very straightforward consequence of Critch's theorem. It shouldn't be surprising. Yet, somehow, it didn't stick out to me in quite this way. I was too stuck in the frame of
trying to interpret the bets selfishly, as Pareto-improvements which both sides happily agree to.
I'm quite curious whether we can say anything interesting about how altruistic agents would handle money, based on this. I don't think it means altruists should Kelly bet money; money is a very
different thing from coalition weight. Coalition weights are like exchange rates or prices. Money is more of a thing being exchanged. You do not pay coalition weight in order to get things done.
I think you are right, I was confused. A pareto-improvement bet is necessarily one which all parties would selfishly consent to (at least not actively object to).
Can you justify Kelly "directly" in terms of Pareto-improvement trades rather than "indirectly" through Pareto-optimality? I feel this gets at the distinction between the selfish vs altruistic view.
I also looked into this after that discussion. At the time I thought that this might have been something special about Kelly, but when I did some calculations afterwards I found that I couldn't
get this to work in the other direction.
I'm not sure what you mean here. What is "this" in "looked into this" -- Critch's theorem? What is "the other direction"?
Everything you've written (as I currently understand it) also applies for many other betting strategies. eg if everyone was betting (the same constant) fractional Kelly.
Specifically the market will clear at the same price (weighted average probability) and "everyone who put money on the winning side picks up a fraction of money proportional to the fraction they
originally contributed to that side".
It seems obvious to me that the market will clear at the same price if everyone is using the same fractional Kelly, but if people are using different Kelly fractions, the weighted sum would be
correspondingly skewed, right? Anyway, that's not really important here...
The important thing for the connection to Critch's theorem is: the total wealth gets adjusted like Bayes' Law. Other betting strategies may not have this property; for example, fractional Kelly means
losers lose less, and winners win less. This doesn't limit us to exactly Kelly (for example, the bet-against-yourself strategy in the post also has the desired property); however, all such strategies
must be equivalent to Kelly in terms of the payoffs (otherwise, they wouldn't be equivalent to Bayes in terms of the updates!).
For example, if everyone uses fractional Kelly with the same fraction, then on the first round of betting, the market clears with all the right prices, since everyone is just scaling down how much
they bet. However, the subsequent decisions will then get messed up, because the everyone has the wrong weights (weights changed less than they should).
6 comments
Maybe I'm missing something, but it seems to me that all of this is straightforwardly justified through simple selfish pareto-improvements.
Take a look at Critch's cake-splitting example in section 3.5. Now imagine varying the utility of splitting. How high does it need to get before [red->Alice;green->Bob] is no longer a Pareto
improvement over [(split)] from both players' selfish perspectives before the observation? It's 27, and that's also exactly where the decision flips when weighing Alice 0.9 and Bob 0.1 in red, and
Alice 0.1 and Bob 0.9 in green.
Intuitively, I would say that the reason you don't bet influence all-or-nothing, or with some other strategy, is precisely because influence is not money. Influence can already be all-or-nothing all
by itself, if one player never cares that much more than the other. The influence the "losing" bettor retains in the world where he lost is not some kind of direct benefit to him, the way money would
be: it functions instead as a reminder of how bad a treatment he was willing to risk in the unlikely world, and that is of course proportional to how unlikely he thought it is.
So I think all this complicated strategizing you envision in influence betting actually just comes out exactly to Critch's results. It's true that there are many situations where this leads to
influence bets that don't matter to the outcome, but they also don't hurt. The theorem only says that actions must be describable as following a certain policy; it doesn't exclude that they can be
described by other policies as well.
Suppose instead of a timeline with probabilistic events, the coalition experiences the full tree of all possible futures - but we translate everything to preserve behavior. Then beliefs encode which
timelines each member cares about, and bets trade influence (governance tokens) between timelines.
I also looked into this after that discussion. At the time I thought that this might have been something special about Kelly, but when I did some calculations afterwards I found that I couldn't get
this to work in the other direction. I haven't fully parsed what you mean by:
(And since payoffs of the bet-against-yourself strategy are exactly identical to Kelly betting payoffs, a bunch of Kelly bets at house odds rearrange money in exactly the same way as this.)
But this is clearly equivalent to how hypotheses redistribute weight during Bayesian updates!
So, a market of Kelly betters re-distributes money according to Bayesian updates.
So take the following with a (large) grain of salt before I can recheck my reasoning, but:
Everything you've written (as I currently understand it) also applies for many other betting strategies. eg if everyone was betting (the same constant) fractional Kelly.
Specifically the market will clear at the same price (weighted average probability) and "everyone who put money on the winning side picks up a fraction of money proportional to the fraction they
originally contributed to that side".
Variance Goal
Limit white-noise impact on specified output signals, when using Control System Tuner.
Variance Goal imposes a noise attenuation constraint that limits the impact on specified output signals of white noise applied at specified inputs. The noise attenuation is measured by the ratio of
the noise variance to the output variance.
For stochastic inputs with a nonuniform spectrum (colored noise), use Weighted Variance Goal instead.
In the Tuning tab of Control System Tuner, select New Goal > Signal variance attenuation to create a Variance Goal.
Command-Line Equivalent
When tuning control systems at the command line, use TuningGoal.Variance to specify a constraint on noise amplification.
I/O Transfer Selection
Use this section of the dialog box to specify noise input locations and response outputs. Also specify any locations at which to open loops for evaluating the tuning goal.
• Specify stochastic inputs
Select one or more signal locations in your model as noise inputs. To constrain a SISO response, select a single-valued input signal. For example, to constrain the gain from a location named 'u'
to a location named 'y', click Add signal to list and select 'u'. To constrain the noise amplification of a MIMO response, select multiple signals or a vector-valued signal.
• Specify stochastic outputs
Select one or more signal locations in your model as outputs for computing response to the noise inputs. To constrain a SISO response, select a single-valued output signal. For example, to
constrain the gain from a location named 'u' to a location named 'y', click Add signal to list and select 'y'. To constrain the noise amplification of a MIMO response, select multiple signals or
a vector-valued signal.
• Compute output variance with the following loops open
Select one or more signal locations in your model at which to open a feedback loop for the purpose of evaluating this tuning goal. The tuning goal is evaluated against the open-loop configuration
created by opening feedback loops at the locations you identify. For example, to evaluate the tuning goal with an opening at a location named 'x', click Add signal to list and select 'x'.
To highlight any selected signal in the Simulink^® model, click the highlight button. To remove a signal from the input or output list, click the delete button. When you have selected multiple signals, you can reorder them using the up and down arrow buttons.
For more information on how to specify signal locations for a tuning goal, see Specify Goals for Interactive Tuning.
Use this section of the dialog box to specify additional characteristics of the variance goal.
• Attenuate input variance by a factor
Enter the desired noise attenuation from the specified inputs to outputs. This value specifies the maximum ratio of noise variance to output variance.
When you tune a control system in discrete time, this requirement assumes that the physical plant and noise process are continuous, and interprets the desired noise attenuation as a bound on the
continuous-time H[2] norm. This assumption ensures that continuous-time and discrete-time tuning give consistent results. If the plant and noise processes are truly discrete, and you want to
bound the discrete-time H[2] norm instead, divide the desired attenuation value by $\sqrt{{T}_{s}}$, where T[s] is the sample time of the model you are tuning.
• Adjust for signal amplitude
When this option is set to No, the closed-loop transfer function being constrained is not scaled for relative signal amplitudes. When the choice of units results in a mix of small and large
signals, using an unscaled transfer function can lead to poor tuning results. Set the option to Yes to provide the relative amplitudes of the input signals and output signals of your transfer function.
For example, suppose the tuning goal constrains a 2-input, 2-output transfer function. Suppose further that second input signal to the transfer function tends to be about 100 times greater than
the first signal. In that case, select Yes and enter [1,100] in the Amplitude of input signals text box.
Adjusting signal amplitude causes the tuning goal to be evaluated on the scaled transfer function D[o]^–1T(s)D[i], where T(s) is the unscaled transfer function. D[o] and D[i] are diagonal
matrices with the Amplitude of output signals and Amplitude of input signals values on the diagonal, respectively.
• Apply goal to
Use this option when tuning multiple models at once, such as an array of models obtained by linearizing a Simulink model at different operating points or block-parameter values. By default,
active tuning goals are enforced for all models. To enforce a tuning requirement for a subset of models in an array, select Only Models. Then, enter the array indices of the models for which the
goal is enforced. For example, suppose you want to apply the tuning goal to the second, third, and fourth models in a model array. To restrict enforcement of the requirement, enter 2:4 in the
Only Models text box.
For more information about tuning for multiple models, see Robust Tuning Approaches (Robust Control Toolbox).
• When you use this requirement to tune a control system, Control System Tuner attempts to enforce zero feedthrough (D = 0) on the transfer that the requirement constrains. Zero feedthrough is
imposed because the H[2] norm, and therefore the value of the tuning goal (see Algorithms), is infinite for continuous-time systems with nonzero feedthrough.
Control System Tuner enforces zero feedthrough by fixing to zero all tunable parameters that contribute to the feedthrough term. Control System Tuner returns an error when fixing these tunable
parameters is insufficient to enforce zero feedthrough. In such cases, you must modify the requirement or the control structure, or manually fix some tunable parameters of your system to values
that eliminate the feedthrough term.
When the constrained transfer function has several tunable blocks in series, the software’s approach of zeroing all parameters that contribute to the overall feedthrough might be conservative. In
that case, it is sufficient to zero the feedthrough term of one of the blocks. If you want to control which block has feedthrough fixed to zero, you can manually fix the feedthrough of the tuned
block of your choice.
To fix parameters of tunable blocks to specified values, see View and Change Block Parameterization in Control System Tuner.
• This tuning goal also imposes an implicit stability constraint on the closed-loop transfer function between the specified inputs to outputs, evaluated with loops opened at the specified
loop-opening locations. The dynamics affected by this implicit constraint are the stabilized dynamics for this tuning goal. The Minimum decay rate and Maximum natural frequency tuning options
control the lower and upper bounds on these implicitly constrained dynamics. If the optimization fails to meet the default bounds, or if the default bounds conflict with other requirements, on
the Tuning tab, use Tuning Options to change the defaults.
When you tune a control system, the software converts each tuning goal into a normalized scalar value f(x). Here, x is the vector of free (tunable) parameters in the control system. The software then
adjusts the parameter values to minimize f(x) or to drive f(x) below 1 if the tuning goal is a hard constraint.
For Variance Goal, f(x) is given by:
$f(x) = {\left\| \text{Attenuation} \cdot T(s,x) \right\|}_2.$
T(s,x) is the closed-loop transfer function from Input to Output. ${\left\| \cdot \right\|}_2$ denotes the H[2] norm (see norm).
For tuning discrete-time control systems, f(x) is given by:
$f(x) = {\left\| \frac{\text{Attenuation} \cdot T(z,x)}{\sqrt{T_s}} \right\|}_2.$
T[s] is the sample time of the discrete-time transfer function T(z,x).
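Control System Tuner evaluates this norm internally. Purely as an illustration of what the H[2] norm measures — this is a NumPy sketch of my own, not MathWorks code — it can be computed for a strictly proper state-space model via the controllability Gramian:

```python
import numpy as np

def h2_norm(A, B, C):
    # ||G||_2^2 = trace(C P C^T), where P solves A P + P A^T + B B^T = 0
    n = A.shape[0]
    # vectorized Lyapunov equation: (I (x) A + A (x) I) vec(P) = -vec(B B^T)
    K = np.kron(np.eye(n), A) + np.kron(A, np.eye(n))
    P = np.linalg.solve(K, -(B @ B.T).reshape(-1)).reshape(n, n)
    return float(np.sqrt(np.trace(C @ P @ C.T)))

# first-order low-pass G(s) = 1/(s+1); its H2 norm is 1/sqrt(2)
A = np.array([[-1.0]]); B = np.array([[1.0]]); C = np.array([[1.0]])
print(h2_norm(A, B, C))  # ≈ 0.7071
```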
Related Topics
Crinkler secrets, 4k intro executable compressor at its best
(Edit 5 Jan 2011: New Compression results section and small crinkler x86 decompressor analysis)
If you are not familiar with 4k intros, you may wonder how things are organized at the executable level to achieve this kind of packing performance. Probably the most important and essential part of 4k-64k intros is the compressor, and surprisingly, 4k intros have been well equipped for the past five years, as Crinkler is the best compressor developed so far for this category. It has been created by Blueberry (Loonies) and Mentor (TBC), two of the greatest demomakers around.
Last year, I started to learn a bit more about the compression technique used in Crinkler. It started from some pouet comments that intrigued me, like "crinkler needs several hundred megabytes to compress/decompress a 4k intro" (wow) or "when you want to compress an executable, it can take hours, depending on the compressor parameters"... I also observed some bad compression results while trying to convert some parts of C++ code to asm code using crinkler... From this, I realized that in order to achieve a better compression ratio, you need code that is compression friendly but not necessarily smaller. Or in other terms, the smaller asm code is not always the best candidate for better compression under crinkler... so right, I needed to understand how crinkler was working in order to write crinkler-friendly code...
I just had a basic knowledge about compression; probably the last book I bought about compression was more than 15 years ago, to make a presentation about JPEG compression for a physics course (that was a way to talk about computer related things in a non-computer course!)... I remember that I didn't go further in the book, and stopped just before arithmetic coding. Too bad, that's exactly one part of crinkler's compression technique, and it has been widely used for the past few years (and studied for the past 40 years!), especially in codecs like H.264!
So wow, it took me a substantial amount of time to jump back on the compressor's train and to read all those complicated statistical articles to understand how things are working... but that was worth it! At the same time, I spent a bit of my time dissecting crinkler's decompressor, extracting the decompressor code in order to comment it and to compare its implementation with my own little test in this field... I had a great time doing this, although, in the end, I found that whatever I could do, under 4k, Crinkler is probably the best compressor ever.
You will find here an attempt to explain a little bit more what's behind Crinkler. I'm far from being a compressor expert, so if you are familiar with context modeling, this post may sound a bit light, but I'm sure it could be of some interest for people like me who are discovering things like this and want to understand how they make 4k intros possible!
Crinkler main principles
If you want a bit more information, you should have a look at the "manual.txt" file in crinkler's archive. You will find there lots of valuable information, ranging from why this project was created to what kind of options you can set up for crinkler. There is also an old but still accurate and worthwhile powerpoint presentation from the authors themselves that is available online.
First of all, you will find that crinkler is not strictly speaking an executable compressor but rather an integrated linker-compressor. In fact, in the intro dev tool chain, it's used as part of the building process, in place of your traditional linker... while also having the ability to compress its output. Why is crinkler better suited at this place? Most notably because at the linker level, crinkler has access to the portions of your code and your data, and is able to move them around in order to achieve better compression. Though, about this choice, I'm not completely sure; this could also be implemented as a standard exe compressor, relying on relocation tables in the PE sections of the executable and a good disassembler in order to move the code around and update references... So, crinkler, cr-linker, compressor-linker, is a linker with an integrated compressor.
Secondly, crinkler is using a compression method that is far more aggressive and efficient than any old dictionary-coder-LZ method: it's called context modeling coupled with an arithmetic coder. As mentioned in crinkler's manual, the best place I found to learn about this was Matt Mahoney's resource website. This is definitely the place to start when you want to play with context modeling, as there is lots of source code and previous versions of the PAQ program, from which you can gradually learn how to build such a compressor (more particularly in the earlier versions of the program, when the design was still simple to handle). Building a context-modelling based compressor/decompressor is accessible to almost any developer, but one of the strengths of crinkler is its decompressor size: around 210-220 bytes, which makes it probably the most efficient and smallest context-modelling decompressor in the world. We will also see that crinkler made one of the simplest choices for a context-modelling compressor, using a semi-static model in order to achieve better compression for 4k of data, resulting in less complex decompressor code as well.
Thirdly, crinkler is optimizing the usage of the exe-PE file (which is the Windows Portable Executable format, the binary format of a windows executable file; the official description is available online). Mostly by removing the standard import table and DLL loading in favor of a custom loader that exploits internal windows structures, as well as storing function hashes in the header of the PE file to recover DLL functions.
Compression method
Arithmetic coding
The whole compression problem in crinkler can be summarized like this: what is the probability of the next bit to compress/decompress being a 1? The better the probability matches the actual bit, the better the compression ratio. Hence, crinkler needs to be a little bit psychic?!
First of all, you probably wonder why probability is important here. This is mainly due to one compression technique called arithmetic coding. I won't go into the details here and encourage the reader to read the wikipedia article and related links. The main principle of arithmetic coding is its ability to encode into a single number a set of symbols for which you know the probability of occurring. The higher the probability of a known symbol, the lower the number of bits required to encode its compressed counterpart. At the bit level, things get even simpler, since the symbols are only 1 or 0. So if you can provide a probability for the next bit (even if this probability is completely wrong), you are able to encode it through an arithmetic coder.
A simple binary arithmetic coder interface could look like this:
/// Simple ArithmeticCoder interface
class ArithmeticCoder {
public:
    /// Decode a bit for a given probability.
    /// Decode returns the decoded bit 1 or 0
    int Decode(Bitstream inputStream, double probabilityForNextBit);

    /// Encode a bit (nextBit) with a given probability
    void Encode(Bitstream outputStream, int nextBit, double probabilityForNextBit);
};
And a simple usage of this ArithmeticCoder could look like this:
// Initialize variables
Bitstream inputCompressedStream = ...;
Bitstream outputStream = ...;
ArithmeticCoder coder;
Context context = ...;

// Simple decoder implem using an arithmetic coder
for(int i = 0; i < numberOfBitsToDecode; i++) {
    // Make usage of our psychic alias Context class
    double nextProbability = context.ComputeProbability();

    // Decode the next bit from the compressed stream, based on this
    // probability
    int nextBit = coder.Decode(inputCompressedStream, nextProbability);

    // Update the psychic and tell him how wrong or right he was!
    context.UpdateModel(nextBit, nextProbability);

    // Output the decoded bit
    outputStream.Write(nextBit);
}
So a binary arithmetic coder is able to compress a stream of bits, if you are able to tell it the probability for the next bit in the stream. Its usage is fairly simple, although implementations are often really tricky and sometimes quite obscure (a real arithmetic coder implementation has to face lots of small problems: renormalization, underflow, overflow... etc.). Working at the bit level here wouldn't have been possible 20 years ago, as it requires a tremendous amount of CPU (and memory for the psychic-context) in order to calculate/encode a single bit, but with nowadays' computer power, it's less of a problem... Lots of implementations work at the byte level for better performance; some of them can work at the bit level while still batching the decoding/encoding results at the byte level. Crinkler doesn't care about this and works at the bit level, making the arithmetic decoder fit in less than 20 x86 ASM instructions.
The C++ pseudo-code for an arithmetic decoder is like this:
int ArithmeticCoder::Decode(Bitstream inputStream, double nextProbability) {
    int output = 0; // the decoded symbol

    // renormalization
    while (range < 0x80000000) {
        range <<= 1;
        value <<= 1;
        value += inputStream.GetNextBit();
    }

    unsigned int subRange = (range * nextProbability);
    range = range - subRange;

    if (value >= range) { // we have the symbol 1
        value = value - range;
        range = subRange;
        output++; // output = 1
    }

    return output;
}
This is almost exactly what is used in crinkler, but done in only 18 asm instructions! The crinkler arithmetic coder is using a 33 bit precision. The decoder only needs to handle renormalization up to the 0x80000000 limit, while the encoder needs to work on 64 bits to handle the 33 bit precision. It is much more convenient to work at this precision for the decoder, as it can easily detect renormalization (0x80000000 is in fact a negative number when the range is interpreted as signed; the loop could have been formulated as while (range >= 0), and this is how it is done in asm).
So the arithmetic coder is the basic component used in crinkler. You will find plenty of arithmetic coder examples on the Internet. Even if you don't fully understand the theory behind them, you can use them quite easily. For example, I found an interesting project which provides a tool to produce arithmetic coder code based on a formal description (for example, a 32bit precision arithmetic coder description in its language), pretty handy to understand how different coder behaviors are translated.
But, ok, the real brain here is not the arithmetic coder... but the psychic-context (the Context class above), which is responsible for providing a probability and updating its model based on the previous expectation. This is where a compressor makes the difference.
Context modeling - Context mixing
This is one great point about using an arithmetic coder: it can be decoupled from the component responsible for providing the probability for the next symbol. This component is called a context modeler.
What is the context? It is whatever data can help your context-modeler to evaluate the probability for the next symbol to occur. Thus, the most obvious choice for a compressor-decompressor is to use the previously decoded data to update its internal probability table.
Suppose you have the following sequence of 8 bytes that is already decoded. What will the next bit be? It is most certainly a 1, and you could bet on it with a probability as high as 98%.
So it is no surprise that using the history of the data is the key for the context modeler to predict the next bit (and well, we have to admit that our computer-psychic is not as good as he claims, as he needs to know the past to predict the future!).
Now that we know that we need historic data to produce a probability for the next bit, how is crinkler using it? Crinkler is in fact maintaining a table of probabilities, keyed by up to the 8 previous bytes plus the current bits already read before the next bit. In the context-modeling jargon, this is often called the order (before context modeling, there were techniques like PPM, for Partial Prediction Matching, and DMC, for dynamic markov compression). But crinkler is not only using the last x bytes (up to 8); it is using a sparse mode (as it is called in PAQ compressors): a combination of some of the last 8 bytes plus the current bits already read. Crinkler calls this a model. It is stored in a single byte:
• The 0x00 model says that it doesn't use any previous bytes other than the current bits being read.
• The 0x80 model says that it is using the previous byte + the current bits being read.
• The 0x81 model says that it is using the previous byte and the -8th byte + the current bits being read.
• The 0xFF model says that all 8 previous bytes are used + the current bits being read.
You probably don't see yet how this is used. We are going to take a simple case here: use the previous byte to predict the next bit (the model 0x80).
Suppose the sequence of data:
0xFF, 0x80, 0xFF, 0x85, 0xFF, 0x88, 0xFF, ???nextBit???
(0)          (1)          (2)          (3) | => decoder position
• At position 0, we know that 0xFF is followed by bit 1 (0x80 <=> 10000000b). So n0 = 0, n1 = 1 (n0 denotes the number of 0s that follow 0xFF, n1 the number of 1s that follow 0xFF).
• At position 1, we know that 0xFF is still followed by bit 1: n0 = 0, n1 = 2.
• At position 2, n0 = 0, n1 = 3.
• At position 3, we have n0 = 0, n1 = 3, making the probability of a one p(1) = (n1 + eps) / (n0 + eps + n1 + eps). eps stands for epsilon; let's take 0.01. We have p(1) = (3+0.01)/(0+0.01 + 3+0.01) ≈ 99.67%.
So we have a probability of 99.67% at position (3) that the next bit is a 1.
The principle here is simple: for each model and historic value, we associate n0 and n1, the number of 0 bits found (n0) and 1 bits found (n1). Updating those n0/n1 counters needs to be done carefully: a naive approach would be to increment the corresponding counter whenever a particular training bit is found... but there is a good chance that recent values are more relevant than older ones... Matt Mahoney explained this in The PAQ1 Data Compression Program, 2002 (describes PAQ1), where he describes how to efficiently update those counters for a non-stationary source of data:
• If the training bit is y (0 or 1) then increment ny (n0 or n1).
• If n(1-y) > 2, then set n(1-y) = n(1-y) / 2 + 1 (rounding down if odd).
Suppose for example that n0 = 3 and n1 = 4 and we encounter a new bit 1. Then n0 becomes n0/2 + 1 = 3/2 + 1 = 2 and n1 = n1 + 1 = 5.
Now we know how to produce a single probability for a single model... but working with a single model (for example, only the previous byte) wouldn't be enough to evaluate the next bit correctly. Instead, we need a way to combine different models (different selections of historic data). This is called context mixing, and this is the real power of context modeling: whatever your method to collect and calculate a probability, you can, at some point, mix several estimators to calculate a single probability.
There are several ways to mix those probabilities. In the pure context-modeling jargon, the model is the way you mix probabilities, and for each model you have a weight:
• static: you determine the weights regardless of the data.
• semi-static: you perform a 1st pass over the data to compress to determine the weights for each model, and then a 2nd pass with the best weights.
• adaptive: weights are updated dynamically as new bits are discovered.
Crinkler is using a semi-static context-mixing approach, but is somewhat also "semi-adaptive", because it is using different weights for the code of your exe and for the data of your exe, as they have a different binary layout.
So how is this mixed up? Crinkler needs to determine the best context-models (the combinations of historic data) that it will use, and assign each of those contexts a weight. The weights are then used to calculate the final probability.
For each selected historic model (i), with an associated model weight wi and bit counters ni0/ni1, the final probability p(1) is calculated like this:
p(1) = Sum( wi * ni1 / (ni0 + ni1) ) / Sum( wi )
This is exactly what is done in the code above, and this is exactly what crinkler is doing.
In the end, crinkler selects a list of models for each type of section in your exe: a set of models for the code section, and a set of models for the data section.
How many models does crinkler select? It depends on your data. For example, for the ergon intro, crinkler selects the following models:
For the code section:
Model {0x00,0x20,0x60,0x40,0x80,0x90,0x58,0x4a,0xc0,0xa8,0xa2,0xc5,0x9e,0xed,}
Weight { 0, 0, 0, 1, 2, 2, 2, 2, 3, 3, 3, 4, 6, 6,}
For the data section:
Model {0x40,0x60,0x44,0x22,0x08,0x84,0x07,0x00,0xa0,0x80,0x98,0x54,0xc0,0xe0,0x91,0xba,0xf0,0xad,0xc3,0xcd,}
Weight { 0, 0, 0, 0, 0, 0, 0, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4, 4, 5,}
(note that in crinkler, the final weight used to multiply n1/(n0+n1) is 2^wi, and not wi itself).
Wow, does it mean that crinkler needs to store those data in your exe, (14 bytes + 20 bytes) * 2 = 68 bytes? Well, crinkler's authors are smarter than this! In fact the models are stored, but the weights are only stored in a single int (32 bits for each section). Yep, a single int to store those weights? Indeed: if you look at those weights, they are increasing, sometimes equal... so the authors found a clever way to store a compact representation of those weights in 32 bit form. Starting with a weight of 1, the 32 bit weight is read one bit at a time: if the bit is 0, the currentWeight doesn't change; if the bit is 1, currentWeight is incremented by 1 (in this pseudo-code, the shift is done to the right):
int currentWeight = 1;
int compactWeight = ....;
foreach (model in models) {
    if (compactWeight & 1)
        currentWeight++;
    compactWeight = compactWeight >> 1;
    // ... use currentWeight for the current model
}
This way, crinkler is able to store a compact form of pairs (model/weight) for each type of data in your executable (code or pure data).
Model selection
Model selection is one of the key processes of crinkler. For a particular set of data, what is the best selection of models? You start with 256 models (all the combinations of the 8 previous bytes) and you need to determine the best selection among them. You have to take into account that each time you use a model, you need one byte in your final executable to store it. Model selection is part of the crinkler compressor, but not of the crinkler decompressor: the decompressor just needs to know the list of the final models used to compress the data, and doesn't care about intermediate results. On the other hand, the compressor needs to test many combinations of models, and find an appropriate weight for each model.
I have tested several methods in my test code and tried to recover the method used in crinkler, without achieving a comparable compression ratio... I tried some brute force algorithms without any success... The selection algorithm is probably a bit more clever than the ones I have tested, and would probably require laying out some mathematics/statistics to select an accurate method.
Finally, blueberry has given their method (thanks!):
To answer your question about the model selection process, it is actually not very clever. We step through the models in bit-mirrored numerical order (i.e. 00, 80, 40, C0, 20 etc.) and for each step
do the following:
- Check if compression improves by adding the model to the current set of models (taking into account the one extra byte to store the model).
- If so, add the model, and then step through every model in the current set and remove it if compression improves by doing so.
The difference between FAST and SLOW compression is that SLOW optimizes the model weights for every comparison between model sets, whereas FAST uses a heuristic for the model weights (number of bits
set in the model mask).
On the other hand, I tried a fully adaptive context modelling approach, using the dynamic weight calculation explained by Matt Mahoney with neural networks and stretch/squash functions (look at PAQ on wikipedia). It was really promising, as I was sometimes able to achieve better compression ratios than crinkler... but at the cost of a decompressor 100 bytes heavier... and even if I was able to save 30 to 60 bytes on the compressed data, I was still off by 40-70 bytes... so under 4k, this approach was definitely not as efficient as the semi-static approach chosen by crinkler.
Storing probabilities
If you have correctly followed the previous model selection, crinkler is now working with a set of models (selections of history data); for each bit that is decoded, each model's probabilities must be updated.
But think about it: if, for example, to predict the following bit we are using the probabilities of the 8 previous bytes, it means that for every combination of 8 bytes already found in the decoded data, we would need a pair of n0/n1 counters?
That would mean that we could have the following probabilities to update for the context of model 0xFF (8 previous bytes):
- "00 00 00 00 c0 00 00" => some n0/n1
- "00 00 70 00 00 00 00 F2 01" => another n0/n1
- "00 00 00 40 00 00 00" => another n0/n1
and if we have other models, like 0x80 (previous byte) or 0xC0 (the last 2 previous bytes), we would also have different counters for them:
// For model 0x80 // For model 0xC0
- "00" => some n0/n1 - "50 00" => some bis n0/n1
- "01" => another n0/n1 - "F2 01" => another bis n0/n1
- "02" => yet another n0/n1 - "30 02" => yet another bis n0/n1
... ...
In the previous model context examples, I slightly oversimplified: not only the previous bytes are used, but also the current bits being read. In fact, when we are using for example the model 0x80 (using the previous byte), the context of the historic data is composed not only of the previous byte, but also of the bits already read in the current octet. This obviously implies that for every bit read, there is a different context. Suppose we have the sequence 0x75, 0x86 (in binary 10000110b), the position of the encoded bits is just after the 0x75 value, and we are using the previous byte + the bits currently read:
First, we start on a byte boundary:
- 0x75 with 0 bits read (we start with 0) is followed by bit 1 (the leading 1 of 0x86). The context is 0x75 + 0 bits read.
- We read one more bit; we have a new context: 0x75 + bit 1. This context is followed by a 0.
- We read one more bit; we have a new context: 0x75 + bits 10. This context is followed by a 0.
- ... and so on, until:
- We read one more bit; we have a new context: 0x75 + bits 1000011, which is followed by a 0 (and we end on a byte boundary).
Reading 0x75 followed by 0x86, with a model using only the previous byte, we finally have 8 contexts with their own n0/n1 to store in the probability table.
As you can see, it is obviously difficult to store all the contexts found (i.e. for each single bit decoded there is a different context of historic bytes) and their respective exact probability counters, without exploding the RAM. Even more so if you think about the number of models used by crinkler: 14 different selections of historic bytes for ergon's code alone!
This kind of problem is often handled using a hashtable, with collision handling. This is what is done in some of the PAQ compressors. Crinkler is also using a hashtable to store the counter probabilities, with the association context_history_of_bytes => (n0/n1), but it is not handling collisions, in order to keep the size of the decompressor minimal. As usual, the hash function used by crinkler is really tiny while still giving really good results.
So instead of having the direct association context_history_of_bytes => n0/n1, we use a hash function: hash(context_history_of_bytes) => n0/n1. The dictionary storing all those associations then needs to be dimensioned large enough to store as many as possible of the associations found while decoding/encoding the data.
Like in PAQ compressors, crinkler is using one byte for each counter, meaning that n0 and n1 together take 16 bits, 2 bytes. So if you instruct crinkler to use a 100MB hashtable, it will be possible to store 50 million different keys, meaning different historic byte contexts and their respective probability counters. There is one remark about crinkler and the byte counters: in PAQ compressors, limits are handled, meaning that if a counter goes above 255, it will stick at 255... but crinkler made the choice not to test the limits, in order to keep the code smaller (although that would take less than 6 bytes to test the limit). What is the impact of this choice? Well, if you know crinkler, you are aware that crinkler doesn't handle well large sections of "zeros" or other empty initialized data. This is just because the counters wrap from 255 to 0, meaning that you jump from a near-100% probability (probably accurate) to an almost 0% probability (probably wrong) every 256 bytes. Is this really hurting the compression? Well, it would hurt a lot if crinkler was used for larger executables, but in a 4k it's not hurting much (although it could hurt if you really have large portions of initialized data). Also, not all the contexts are reset at the same time (an 8 byte context will not reset as often as a 1 byte context), so the final probability calculation stays reasonably accurate: while one counter has just been reset, other models with their own counters are still counting... so this is not a huge issue.
What happens if the hashes of two different contexts collide? Well, the model then updates the wrong probability counters. If the hashtable is too small, the probability counters may be disturbed too much and provide a less accurate final probability. But if the hashtable is large enough, collisions are less likely to happen. Thus, it is quite common to use a hashtable as large as 256 to 512MB, although 256MB is often enough: the larger your hashtable, the fewer the collisions, and the more accurate your probabilities. Recall the comment from the beginning of this post, and you should now understand why "crinkler can take several hundreds of megabytes to decompress"... simply because of this hashtable, which stores the next-bit probabilities for all the model combinations used.
If you are familiar with crinkler, you already know the option to find the best possible hash size for an initial hashtable size and a number of tries (the hashtries option). This part is responsible for testing different hashtable sizes (like starting from 100MB, slightly reducing the size 30 times, and testing the final compression result each time). This is a way to empirically reduce collision effects by selecting the hash size that gives the best compression ratio (meaning fewer collisions in the hash). Although this option is only able to help you save a couple of bytes, no more.
Data reordering and type of data
Reordering or organizing the data differently to get better compression is one of the common techniques in compression methods. Sometimes, for example, it's better to store deltas of values than the values themselves... etc.
Crinkler is using this principle to perform data reordering. At the linker level, crinkler has access to portions of data and code, and is able to move those portions around in order to achieve a better compression ratio. This is really easy to understand: suppose that you have a series of initialized zero values in your data section. If those values are interleaved with non-zero values, the counter probabilities will switch from "there are plenty of zeros here" to "oops, there is some other data"... and the final probability will swing between 90% and 20%. Grouping similar data is a way to improve the overall probability correctness.
This part is the most time consuming, as it needs to move and rearrange all portions of your executable and test which arrangement gives the best compression result. But it pays to use this option, as you may be able to save 100 bytes in the end just with it.
One thing also related to data reordering is the way crinkler handles separately the binary code and the data of your executable. Why? Because their binary representations are different, leading to completely different sets of probabilities. If you look at the selected models for ergon, you will see that the code and data models are quite different. Crinkler uses this to achieve better performance here. In fact, crinkler compresses the code and the data completely separately. Code has its own models and weights, data another set of models and weights. What does this mean internally? Crinkler uses one set of models and weights to decode the code section of your executable. Once finished, it erases the probability counters stored in the hashtable-dictionary, and goes on to the data section with new models and weights. Resetting all counters to 0 in the middle of decompression improves compression by a factor of 2-4%, which is quite impressive and valuable for a 4k (around 100 to 150 bytes).
I found that even with an adaptive model (with a neural network dynamically updating the weights), it is still worth resetting the probabilities between code and data decompression. In fact, resetting the probabilities is an empirical way to tell the context modeler that the data are so different that it's better to start from scratch with new probability counters. If you think about it, an improved demo compressor (for larger executables, for example under 64k) could cleverly detect those portions of data that are different enough that it would be better to reset the dictionary than to keep it as is.
There is just one last thing about weight handling in crinkler. When decoding/encoding, it seems that crinkler artificially increases the weights for the first bits discovered. This little trick improves the compression ratio by about 1 to 2%, which is not bad. Having higher weights at the beginning enables a better response of the compressor/decompressor, even when it doesn't yet have enough data to compute a correct probability. Increasing the weights helps the compression ratio at cold start.
Crinkler is also able to transform the x86 code of the executable part to improve the compression ratio. This technique is widely used and consists of replacing relative jumps (conditional jumps, function calls... etc.) with absolute addresses, leading to a better compression ratio.
Custom DLL LoadLibrary and PE file optimization
In order to strip down the size of an executable, it's necessary to exploit as much as possible the organization of a PE file.
The first thing crinkler exploits is that lots of parts of a PE file are not used at all. If you want to know how much a windows executable PE file can be reduced, I suggest you read the Tiny PE article, which is a good way to understand what is actually used by a PE loader. Unlike the Tiny PE sample, where the author moves the PE header into the DOS header, crinkler made the choice to use this unused space to store hash values that are used to reference the DLL functions used.
This trick is called import by hashing and is quite common in intro compressors. What probably makes crinkler a little bit more advanced is that to perform the "GetProcAddress" (which is responsible for getting the pointer to a function from a function name), crinkler navigates inside internal windows process structures in order to directly get the addresses of the functions from the in-memory tables of the loaded DLLs.
Indeed, you won't find any import section table in a crinklerized executable. Everything is re-discovered through internal windows structures. Those structures are not officially documented, but you can find some valuable information around, most notably online.
If you look at crinkler's code stored in the crinkler import section, which is the code injected just before the intro starts in order to load all the DLL functions, you will find these cryptic instructions:
(0) MOV EAX, FS:[BX+0x30]
(1) MOV EAX, [EAX+0xC]
(2) MOV EAX, [EAX+0xC]
(3) MOV EAX, [EAX]
(4) MOV EAX, [EAX]
(5) MOV EBP, [EAX+0x18]
This is done by going through internal structures:
• (0) First crinkler gets a pointer to the PROCESS ENVIRONMENT BLOCK (PEB) with the instruction MOV EAX, FS:[BX+0x30]. EAX now points to the PEB:
Public Type PEB
InheritedAddressSpace As Byte
ReadImageFileExecOptions As Byte
BeingDebugged As Byte
Spare As Byte
Mutant As Long
SectionBaseAddress As Long
ProcessModuleInfo As Long ‘ // <---- PEB_LDR_DATA
ProcessParameters As Long ‘ // RTL_USER_PROCESS_PARAMETERS
SubSystemData As Long
ProcessHeap As Long
... struct continue
• (1) Then it gets a pointer to the "ProcessModuleInfo/PEB_LDR_DATA" MOV EAX, [EAX+0xC]
Public Type _PEB_LDR_DATA
Length As Integer
Initialized As Long
SsHandle As Long
InLoadOrderModuleList As LIST_ENTRY // <---- LIST_ENTRY InLoadOrderModuleList
InMemoryOrderModuleList As LIST_ENTRY
InInitOrderModuleList As LIST_ENTRY
EntryInProgress As Long
End Type
• (2) Then it gets a pointer to the next InLoadOrderModuleList LIST_ENTRY: MOV EAX, [EAX+0xC].
Public Type LIST_ENTRY
Flink As LIST_ENTRY
Blink As LIST_ENTRY
End Type
• (3) and (4) Then it navigates through the LIST_ENTRY linked list with MOV EAX, [EAX]. This is done 2 times: the first time, we get a pointer to NTDLL.DLL; the second time, we get a pointer to
KERNEL32.DLL. Each LIST_ENTRY is in fact followed by the LDR_MODULE structure:
Public Type LDR_MODULE
InLoadOrderModuleList As LIST_ENTRY
InMemoryOrderModuleList As LIST_ENTRY
InInitOrderModuleList As LIST_ENTRY
BaseAddress As Long
EntryPoint As Long
SizeOfImage As Long
FullDllName As UNICODE_STRING
BaseDllName As UNICODE_STRING
Flags As Long
LoadCount As Integer
TlsIndex As Integer
HashTableEntry As LIST_ENTRY
TimeDateStamp As Long
LoadedImports As Long
EntryActivationContext As Long ‘ // ACTIVATION_CONTEXT
PatchInformation As Long
End Type
Then, from the BaseAddress of the kernel32.dll module, crinkler goes to the export section, where the function addresses are already resolved in memory. The first hashed function that crinkler
resolves there is the LoadLibrary function. After this, crinkler is able to load all the dependent DLLs, navigate their export tables, recompute the hash of every exported function name, and try to
match the hashes stored in the PE header. If a match is found, the function entry point is stored.
This way, crinkler is able to call OS functions stored in kernel32.dll without even linking explicitly to that DLL, as it is automatically loaded into every process, thus achieving a way to import
all the functions used by an intro with a custom import loader.
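The import-by-hash idea can be sketched in C. The hash function and the parallel name/address arrays below are illustrative stand-ins, not crinkler's actual hash or the real export table layout:

```c
#include <stdint.h>
#include <stddef.h>

/* Sketch of "import by hash" (the hash and the data layout here are
 * illustrative, not crinkler's): instead of storing full function
 * names in an import table, the packer stores only a small hash per
 * function, then scans the export names of the loaded DLL and keeps
 * the address whose name hashes to the stored value. */
static uint32_t name_hash(const char *s)
{
    uint32_t h = 5381;                  /* djb2, as a stand-in hash */
    while (*s)
        h = h * 33 + (uint8_t)*s++;
    return h;
}

static void *resolve_by_hash(const char *names[], void *addrs[],
                             size_t count, uint32_t wanted)
{
    for (size_t i = 0; i < count; i++)
        if (name_hash(names[i]) == wanted)
            return addrs[i];            /* entry point of the match */
    return NULL;                        /* hash not found */
}
```

In the real thing, `names` and `addrs` come from the export directory of the DLL mapped in memory, and the wanted hashes come from the space freed up in the PE header.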
Compression results
So finally, you may ask: how good is crinkler at compressing? How does it compare to other compression methods? What does the entropy look like in a crinklerized exe?
I'll take the example of the Ergon exe. You can already find a
detailed analysis
for this particular exe.
Comparison with other compression methods
In order to make a fair comparison between crinkler and the other compressors, I have used the data that is actually compressed by crinkler after the reordering of code and data (this is done by
unpacking a crinklerized ergon.exe and extracting only the compressed data). This comparison is accurate in that all compressors are working on exactly the same data.
Also, to be fair, the size of 3652 bytes does not take into account the PE header plus the crinkler decompressor code (which together total 432 bytes).
To perform this comparison, I have used 7z, which has at least 3 interesting methods to test against:
• Standard Deflate Zip
• PPMd with a 256 MB dictionary
• LZMA with a 256 MB dictionary
I have also included a comparison with a more advanced packing method from Matt Mahoney's resources,
one of the versions of the PAQ family, using neural networks and several context modeling methods.
Program    Compression method      Size in bytes   Ratio vs crinkler
none       uncompressed            9796            -
crinkler   ctx-model 256 MB        3652            +0.00%
7z         deflate 32 KB           4526            +23.93%
7z         PPMd 256 MB             4334            +18.67%
7z         LZMA 256 MB             4380            +19.93%
Paq8l      dyn-ctx-model 256 MB    3521            -3.59%
As you can see,
crinkler is far more efficient than any of the "standard" compression methods
(Zip, PPMd, LZMA). And that's without mentioning that
a true comparison would include the decompressor size, so the ratio would certainly be even worse for all the standard methods!
Paq8l is of course slightly better... but if you take into account that the Paq8l decompressor is itself a 37 KB exe, compared to the 220 bytes of crinkler's, you should understand by now how
efficient crinkler is in its own domain! (Remember? 4k!)
In order to measure the entropy of a crinklerized exe, I have developed a very small program in C# that displays the entropy of an exe, from green (low entropy: fewer bits necessary to encode the
information) to red (high entropy: more bits necessary).
I have done this on 3 different Ergon executables:
• The uncompressed ergon.exe (28 KB). It is the standard output of a binary exe built with MSVC++ 2008.
• The raw crinklerized ergon.exe, with the code and data sections extracted and reordered, but not compressed (9796 bytes).
• The final crinklerized ergon.exe file (4070 bytes).
Ergon standard exe entropy
Ergon code and data crinklerized, uncompressed reordered data
Ergon executable crinklerized
As expected,
the entropy is very high in a crinklerized exe.
Compare it with the waste of information in a standard Windows executable. You can also appreciate how important the reordering and packing of data (before any compression) performed by crinkler is.
Some notes about the x86 crinkler decompressor asm code
I have often said that crinkler's decompressor is truly a piece of x86 art. It is hard to describe the techniques used here; there are lots of standard x86 size optimizations and some really nice
tricks. Most notably:
1. using all the registers;
2. using the stack intensively to save/restore all the registers with the pushad/popad x86 instructions. This is done (1 + number_of_models) times per bit: if you have 15 models, there will be a
total of 16 pushad/popad pairs for a single bit to be decoded! You may wonder why so many pushes? It's the only way to efficiently use all the registers (rule #1) without having to store
particular registers in a buffer. Of course, the push/pop instructions are also used at several other places in the code;
3. as a result of 1) and 2), apart from the hash dictionary, no intermediate structures are used to perform the context modeling calculation;
4. deferred conditional jumps: usually, a conditional test in x86 is immediately followed by a conditional jump (like cmp eax, 0; jne go_for_bla). In crinkler, a conditional test is sometimes used
several instructions later (for example: cmp eax, 0; push eax; mov eax, 5; jne go_for_bla, where the jne uses the result of the cmp eax, 0 comparison). It makes the code a LOT harder to read.
Sometimes the flags are even used after a direct jump! This is probably the part of crinkler's decompressor that impressed me the most. This is of course quite common in heavily size-optimized
x86 asm code... you need to know which instructions do not modify the CPU flags in order to achieve this kind of optimization!
Final words
I would like to apologize for the lack of charts and pictures to explain a little how things work. This article is probably still obscure for a casual reader and should be considered a draft
version. It was a quick and dirty post: I had wanted to write it for a long time, so here it is, not as polished as it should be, but it may be improved in future versions!
As you can see, crinkler is really worth looking at. The effort put into making it so efficient is impressive, and there is little doubt that there won't be any crinkler competitor for a long time,
at least for 4k executables. Above 4k, I'm quite confident that there are still lots of areas that could be improved, and kkrunchy is probably far from being the ultimate packer under 64k... Still,
if you want a packer, you need to code it, and that's not so trivial!
19 comments:
1. great article, thx!
2. Nice article, I've been meaning to write one like this for a while too!
3. Nice write up. Got a headache now but still. well done... ;-)
4. Wow, best article I've ever read about compression methods!!!
5. Great article, and amazingly accurate! Quite impressive work you have done there dissecting all the little tidbits of Crinkler's header and import code. Must have been loads of fun. ;)
I only spotted three minor errors:
- To get the actual model weight used, rather than adding one, it is exponentiated, i.e. for weight w, the counter values for that model are multiplied by 2^w.
- Since a particular context can only occur once per byte, the counters wrap once every 256 bytes, rather than 256 bits (for areas where all bytes have the same value).
- The import code does not call GetProcAddress as such. Rather, it digs into internal DLL structures to imitate the functionality of GetProcAddress. If we were to use GetProcAddress for this, we
would run into a chicken-and-egg problem in getting the address of GetProcAddress in the first place. This is also the reason Crinkler does not support forwarded RVAs, which is a different way of
storing the function reference internally in the DLL.
To answer your question about the model selection process, it is actually not very clever. We step through the models in bit-mirrored numerical order (i.e. 00, 80, 40, C0, 20 etc.) and for each
step do the following:
- Check if compression improves by adding the model to the current set of models (taking into account the one extra byte to store the model).
- If so, add the model, and then step through every model in the current set and remove it if compression improves by doing so.
The difference between FAST and SLOW compression is that SLOW optimizes the model weights for every comparison between model sets, whereas FAST uses a heuristic for the model weights (number of
bits set in the model mask).
6. Thanks all for your kind comments! Happy to see that it was interesting, although it is still lacking lots of explanations and diagrams to be easier to read!
Hey Blueberry! Glad to see you around and to learn that this post is not completely wrong! :)
Thanks for spotting these errors, I have fixed them! For the weights, indeed, I now recall the 2^w, but I didn't look at the code when I wrote this article... and probably, one and a half years
later, I oversimplified the way it was calculated!
Whoops, of course, for the counter wrapping... there is a context for every single bit that is read... so yep, it's every 256 bytes that the probability is reset!
GetProcAddress: oops, again, you are right, you are mimicking GetProcAddress but not calling it. The only function that is called explicitly from the import code is the LoadLibrary function,
which was recovered through the handwritten GetProcAddress equivalent.
And thanks for the model selection explanation! Weird, this is almost the method I tried while using dynamic neural networks, removing other models and re-testing compression once a model was
selected... but I probably did it wrong... need to check this.
7. Great article! Didn't know this blog - and subscribed!
8. @lx: sorry for being a noob, but how would you use crinkler with .net executables? I tried but crinkler complains about "unsupported file type"
9. @Jason: crinkler is a linker-compressor and only works directly with plain native .obj files generated by a C/C++/asm compiler.
Thus, it won't work with .NET executables. You could still use crinkler (or better kkrunchy, as crinkler is targeted at 4 KB exes and won't work for any compressed exe > 64 KB) with .NET
executables, but it requires developing a mini native exe acting as a CLR host that loads the executable assembly (embedded into the native exe in whatever way: resources, plain property,
etc.). The drawback of this solution is that you won't be able to run as AnyCPU but only on the x86 platform.
It's better to rely on a pure .NET solution for .NET assemblies. Unfortunately, none of the currently existing solutions (netz, sixxpack, exepack, mpress) are good packers compared to context
modeling techniques. I have already developed a .NET exe compressor using context modeling, with a far better compression ratio than the other methods, but it's not released yet (I will try to
release it during this year).
10. Ahh okay, I see. Yes, I was wondering how it would compare with other zip-based packers, and I was especially curious about how it would work with the metadata stored in the .NET assembly, but
I guess it doesn't ;)
Yeah, I'd be very interested to see how your context modeling .NET compressor works!!
11. >> I was wondering how it would compare with other zip-based packers
From my tests, it's a lot more efficient. On a test I did on a .NET executable of 64 KB:
- mpress: 32 KB
- zip method: 50 KB
- context modeling method: 20 KB
12. Superb article, ever since I first encountered Crinkler I have wanted to learn more about how it works. Thanks for taking the time to collect all this information and sharing it.
13. Fantastic article, really well written, researched, and presented. It really made my day to read this, and your analysis/explanation of PAQ-style encoders is great for those who haven't been
exposed to them.
14. Fantastic article :) I would've appreciated some references for further reading, and possibly some "bug fixing" (the description of how crinkler uses GetProcAddress is wrong, for example), but
aside from that, this is ace! Thanks for the read :)
15. Amazing article. It was extremely well worded and yet absolutely fantastic with a lot of details. Kudos for the thorough and detailed research.
16. Thanks Ferris and Decipher!
Ferris, could you elaborate on the missing references? What would be helpful? (I mainly put Mahoney, who is the main reference here for context modeling with neural networks.) Also, what exactly
is wrong in the description of crinkler simulating GetProcAddress? (I thought it was incomplete but not wrong! :) For example, I'm not detailing how crinkler navigates the module's tables
in memory after getting the address of the DLL module... but apart from that, I should not be wrong.)
17. What you had was correct about moving through the loaded .dll structures :) I was referring to the use of GetProcAddress in the first place - one of the benefits of import by hash is the complete
bypass of this function! With the actual hash and moving through the .dll header, you're actually emulating GetProcAddress' functionality altogether :) .
Another correction (if I might add) is that crinkler actually DOES embed the PE header inside the MZ header, just a few bytes after where the TinyPE example does. If you look at a crinkler .exe's
entry point, it's between the "MZ" and "PE" signatures near the very beginning of the file. But you're completely right about it using most of this space to store hashes :) .
And about references, I'm not accusing you of plagiarism or anything, I was just left more curious about some of the things (particularly arithmetic encoding and context modelling) and failed to
notice many of the links you've included. Apologies :) .
I'd like to reiterate, though, how awesome I find this article :) . Seriously, well done.
18. ah, apologies again, it seems you said exactly what I just wrote about emulating GetProcAddress. Sorry again :)
19. Oh ok, thanks for your feedback!
I agree that the GetProcAddress part and the PE file optimizations are far from complete. I wrote this part in a couple of minutes just before releasing the article... and was a bit too lazy to
explain in more depth how things are tightly organized! So you are right about mentioning the PE/MZ header collapsing, as this is the first thing you can see when you inspect a crinklerized PE.
About references, yeah, links are spread throughout the document and I didn't assemble them at the bottom of this post. The thing also is that I worked on dissecting crinkler and coding my own
context-modeling compressor around mid-2009, while I wrote this article in Dec 2010... in the meantime, I forgot lots of details that would have been even more helpful here! Also, I spent a lot
more time trying several context-modeling compression techniques than dissecting crinkler (which was done in just a few days)... all this work on context modeling for developing small compressors
is worth an article in itself, as context-modeling compression is quite a fascinating method!
Estimating disease severity while correcting for reporting delays
Understanding disease severity, and especially the case fatality risk (CFR), is key to outbreak response. During an outbreak there is often a delay between cases being reported and the outcomes (for
the CFR, deaths) of those cases being known. Simply dividing total deaths to date by total cases to date may therefore underestimate the CFR in real time, because many cases have outcomes that
are not yet known.
Knowing the distribution of these delays from previous outbreaks of the same (or similar) diseases, and accounting for them, can therefore help ensure less biased estimates of disease severity. See
the Concept section at the end of this vignette for more on how reporting delays bias CFR estimates.
The severity of a disease can be estimated while correcting for delays in reporting using the methods outlined in Nishiura et al. (2009), which are implemented in the cfr package.
Use case
A disease outbreak is underway. We want to know how severe the disease is in terms of the case fatality risk (CFR), but there is a delay between cases being reported, and the outcomes of those cases
— whether recovery or death — being known. This is the reporting delay, and can be accounted for by knowing the reporting delay from past outbreaks.
What we have
• A time-series of cases and deaths, (cases may be substituted by another indicator of infections over time);
• Data on the distribution of delays, describing the probability an individual will die \(t\) days after they were initially infected.
What we assume
• That data on reporting delays from past outbreaks is informative about reporting delays in the current outbreak.
First we load the cfr package.
Case and death data
Data on cases and deaths may be obtained from a number of publicly accessible sources, such as the global Covid-19 dataset curated by Our World in Data, a similar dataset made available through the R
package covidregionaldata (Palmer et al. 2021), or data on outbreaks of other infections made available in outbreaks.
In an outbreak response scenario, such data may also be compiled and shared locally. See the vignette on working with data from incidence2, which covers a common format of incidence data and can
help with interoperability with other formats.
The cfr package requires only a data frame with three columns, “date”, “cases”, and “deaths”, giving the daily number of reported cases and deaths.
Here, we use some data from the first Ebola outbreak, in the Democratic Republic of the Congo in 1976, that is included with this package (Camacho et al. 2014).
Obtaining data on reporting delays
We obtain the disease’s onset-to-death distribution from a more recent Ebola outbreak, reported in Barry et al. (2018). The onset-to-death distribution is considered to be Gamma distributed, with a
shape \(k\) = 2.40 and a scale of \(\theta\) = 3.33.
Note that while we use a continuous distribution here, it is more appropriate to use a discrete distribution instead as we are working with daily data.
Note also that we use the central estimates for each distribution parameter, and by ignoring uncertainty in these parameters the uncertainty in the resulting CFR is likely to be underestimated.
The forthcoming epiparameter package aims to be a library of epidemiological delay distributions, which can be accessed easily from within workflows. See the vignette on using delay distributions for
more information on how to use this and other distribution objects supported by R to prepare delay density functions.
Estimate disease severity
We use the function cfr_static() to calculate overall disease severity at the latest date of the outbreak.
The cfr_static() function is well suited to small outbreaks where there are relatively few events and the time period under consideration is relatively brief, so the severity is unlikely to have
changed over time.
To understand how severity has changed over time (e.g. following vaccination or pathogen evolution), use the function cfr_time_varying(). This function is however not well suited to small outbreaks
because it requires sufficiently many cases over time to estimate how CFR changes. More on this can be found on the vignette on estimating how disease severity varies over the course of an outbreak.
Estimate ascertainment ratio
It is important to know what proportion of cases in an outbreak are being ascertained to muster the appropriate response, and to estimate the overall burden of the outbreak.
Note that the ascertainment ratio may be affected by a number of factors. When the main factor in low ascertainment is the lack of (access to) testing capacity, we refer to this as under-reporting.
The estimate_ascertainment() function estimates the ascertainment ratio using daily case and death data, the known severity of the disease from previous outbreaks, and optionally a delay distribution
of onset-to-death.
Here, we estimate reporting in the 1976 Ebola outbreak in the Congo, assuming that Ebola virus disease (at that time) had a baseline severity of about 0.7 (70% of cases result in deaths), based on
CFR values estimated in later, larger datasets. We use the onset-to-death distribution from Barry et al. (2018).
This analysis suggests that between 70% and 83% of cases were reported in this outbreak.
More details can be found in the vignette on estimating the proportion of cases that are reported during an outbreak.
Concept: How reporting delays bias CFR estimates
Simply dividing the number of deaths by the number of cases yields a naive estimator of the true CFR, which is biased during an ongoing outbreak.
Suppose 10 people start showing symptoms of a disease on a given day, and at the end of that day all remain alive. Suppose that for the next 5 days the number of new cases continues to rise, until
it reaches 100 new cases on day 5, and that by day 5 all infected individuals still remain alive.
The naive estimate of the CFR calculated at the end of the first 5 days would be zero, because there would have been zero deaths in total at that point. That is to say, the outcomes of the cases
(deaths) would not yet be known.
Even after deaths begin to occur, this lag between the ascertainment of a case or hospitalisation and outcome leads to a consistently biased estimate. Hence, adjusting for such delays using an
appropriate delay distribution is essential for accurate estimates of severity.
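The correction can be sketched numerically. This is a toy illustration of the Nishiura et al. (2009) style adjustment, not the cfr package's code; the function name and the made-up delay CDF are my own:

```c
#include <stddef.h>

/* Delay-adjusted CFR sketch (in the spirit of Nishiura et al. 2009,
 * not the cfr package's code): the denominator replaces "cases so
 * far" with the number of cases whose outcome is expected to be
 * known by day t, sum_j cases[j] * F(t - j), where F is the
 * onset-to-death CDF.  Recent cases, whose outcomes are mostly
 * still unknown, therefore contribute little to the denominator. */
static double corrected_cfr(const double *cases, const double *deaths,
                            size_t t, const double *cdf, size_t cdf_len)
{
    double d = 0.0, known = 0.0;
    for (size_t j = 0; j <= t; j++) {
        d += deaths[j];
        size_t lag = t - j;
        /* F saturates at 1 beyond the tabulated delay range */
        double f = lag < cdf_len ? cdf[lag] : 1.0;
        known += cases[j] * f;
    }
    return known > 0.0 ? d / known : 0.0;
}
```

With 10 cases on day 0 and 10 on day 1, 5 deaths on day 1, and a toy CDF F(0)=0, F(1)=0.5, the naive estimate at day 1 is 5/20 = 0.25, while the delay-adjusted estimate divides by the 5 cases whose outcomes should be known, which is much closer to the underlying risk.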
Barry, Ahmadou, Steve Ahuka-Mundeke, Yahaya Ali Ahmed, Yokouide Allarangar, Julienne Anoko, Brett Nicholas Archer, Aaron Aruna Abedi, et al. 2018. “Outbreak of Ebola virus disease in the Democratic
Republic of the Congo, April–May, 2018: an epidemiological study.” The Lancet 392 (10143): 213–21.
Camacho, A., A. J. Kucharski, S. Funk, J. Breman, P. Piot, and W. J. Edmunds. 2014. “Potential for Large Outbreaks of Ebola Virus Disease.” Epidemics 9 (December): 70–78.
Nishiura, Hiroshi, Don Klinkenberg, Mick Roberts, and Johan A. P. Heesterbeek. 2009. “Early Epidemiological Assessment of the Virulence of Emerging Infectious Diseases: A Case Study of an Influenza
Pandemic.” PLOS ONE 4 (8): e6852.
Palmer, Joseph, Katharine Sherratt, Richard Martin-Nielsen, Jonnie Bevan, Hamish Gibbs, CMMID Group, Sebastian Funk, and Sam Abbott. 2021. “Covidregionaldata: Subnational Data for COVID-19
Epidemiology.” Journal of Open Source Software 6 (63): 3290.
RISC-V: Fast Scalar strchrnul() in Assembly
What is strchrnul()?
char* strchrnul(const char* s, int c); is a FreeBSD libc string function that was first implemented as a GNU extension in glibc 2.1.1. This function is an extension of the strchr() function. The
difference between the two functions can be found in the RETURN VALUES section of the manpage:
The functions strchr() and strrchr() return a pointer to the located character, or NULL if the character does not appear in the string.
strchrnul() returns a pointer to the terminating '\0' if the character does not appear in the string.
This makes it really easy to use strchrnul() to implement the regular strchr(), by checking the input character c against the return value of strchrnul():
char* strchr(const char* s, int c) {
char* ptr = strchrnul(s, c);
return *ptr == (char)c ? ptr : NULL;
In fact, this is how strchr() is implemented in FreeBSD libc.
Writing a fast strchrnul() leads to a fast implementation of strchr() without duplicating code. To completely understand how this implementation works, I suggest reading the strlen() article.
Splitting the work
Similarly to what we did for the strlen() implementation, we want to utilize the ld instruction to load up to 8 characters at a time and check for c and '\0' using has_zero(). As was previously
discussed, a ld instruction behaves consistently on all platforms only when the address of the load is aligned to an eightbyte (address % 8 == 0).
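As a reminder, the has_zero() building block from the strlen() article can be written in C as:

```c
#include <stdint.h>

/* has_zero() from the strlen() article: the result is non-zero iff
 * at least one byte of x is 0x00.  For the first (lowest) zero byte
 * the expression leaves an 0x80 at that byte position, which is
 * what .Lfind_idx later turns into a byte index. */
static uint64_t has_zero(uint64_t x)
{
    return (x - 0x0101010101010101ULL) & ~x & 0x8080808080808080ULL;
}
```

Checking for c is the same operation applied to `x ^ cccccccc`, since XOR turns every byte equal to c into a zero byte.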
* a0 - const char* s
* a1 - int c;
.globl strchrnul
.type strchrnul, @function
* a0 - const char* ptr;
* a1 - char cccccccc[8];
* a2 - char iter[8];
* a3 - char iter_end[8]
/* ... */
According to the RISC-V calling convention, we accept the s pointer through the a0 register and c through the a1 register. We will additionally use the a2 register for the calculations needed to
find c, and the a3 register for the calculations needed to find '\0' (the end of the string).
The first thing we need to take care of is the type of c. Because the C declaration of strchrnul() declares c as an int and not a char, the calling code will pass a 32-bit value through a1. FreeBSD
works with 8-bit characters, so we will first mask off everything that doesn't belong to the first byte.
/* ... */
/* int -> char */
andi a1, a1, 0xFF
/* t0 = 0x0101010101010101 */
li t0, 0x01010101
slli t1, t0, 32
or t0, t0, t1
/* t1 = 0x8080808080808080 */
slli t1, t0, 7
/* spread char across bytes */
mul a1, a1, t0
/* ... */
If we take a careful look at the declaration of strchrnul() we see that c is declared as an int. This causes the C code calling our function to pass a 32-bit (platform dependent) value in
a1. Because of this, we "cast" that value to a char, an 8-bit (platform dependent) value, trimming the rest of the bits.
After this we load the 0x0101010101010101 and 0x8080808080808080 constants, which are going to be used for has_zero().
The mul instruction will clone c eight times, filling the register.
Now we need to handle the alignment of the start of the string.
/* ... */
/* 1. align_offset */
andi t2, a0, 0b111
/* 2. align pointer */
andi a0, a0, ~0b111
/* 3. if pointer is aligned skip to loop */
beqz t2, .Lloop
/* 4. iter = *ptr */
ld a2, (a0)
/* 5. mask_start calculation */
slli t2, t2, 3
neg t2, t2
srl t2, t0, t2
/* 6. fill bytes before start with non-zero */
or a3, a2, t2
/* 7. find c setup */
xor a2, a2, a1
or a2, a2, t2
/* 8. has_zero for \0 */
not t3, a3
sub a3, a3, t0
and a3, a3, t3
and a3, a3, t1
/* 9. has_zero for c */
not t2, a2
sub a2, a2, t0
and a2, a2, t2
and a2, a2, t1
/* 10. if \0 or c was found, exit */
or a2, a2, a3
bnez a2, .Lfind_idx
/* 11. move to next eightbyte */
addi a0, a0, 8
/* ... */
1. Calculate offset of s from the previous aligned location.
□ align_offset = s % 8 <=> align_offset = s & 0b111.
2. Set pointer to previous aligned location.
□ ptr = s - (s % 8) <=> ptr = s - (s & 0b111) <=> ptr = s & ~0b111
3. In case align_offset == 0, ptr = s is already aligned, and we can skip to the loop, which doesn't need to consider alignment.
4. Now that ptr is guaranteed to be aligned, we can load data from ptr into iter
5. Because we aligned ptr backwards before loading into iter, it now contains a few bytes that don’t belong to the start of the string in the lower bytes of the register. To prevent has_zero from
finding c or '\0' in these bytes, we will mask them with a non-zero value. The mask is created by shifting the t0 register containing 0x01...01 to the right by the appropriate number of bits. The
calculation is:
□ bits = align_offset * 8 <=> bits = align_offset << 3 <=> slli t2, t2, 3
☆ Convert offset from number of bytes to number of bits.
□ shift = 64 - bits <=> shift = (-bits) & 0b111111 <=> neg t2, t2
☆ the srl instruction considers only the low 6 bits of rs2 for the shift amount, removing the need for & 0b111111 in the assembly, resulting in a single neg instruction.
□ mask = 0x0101010101010101 >> shift <=> srl t2, t0, t2
6. Write the masked value to iter_end.
7. We want to find the first c in iter. To do this, we first compute iter = iter ^ cccccccc, which turns every byte that contains c into '\0'. Afterwards we mask the bytes before the start of
the string, to prevent finding c there.
8. Use has_zero to check for '\0'.
9. Use has_zero to check for c.
10. If either of the has_zero calculations returned a non-zero value, we want to find the first set bit, because it corresponds to the first appearance of '\0' or c. We or the two registers
together and run find_idx on that value.
11. Neither '\0' nor c was found; prepare for the loop by moving ptr to the next eightbyte.
Removing data hazards
Generally, pipelined processors execute instructions faster when we avoid data hazards. One example of getting better performance by avoiding data hazards is the SiFive FU740:
The pipeline implements a flexible dual-instruction-issue scheme. Provided there are no data hazards between a pair of instructions, the two instructions may issue in the same cycle, provided the
following constraints are met: …
In the case of the FU740 core, we can get up to double the performance out of the same code if we reorder it to avoid data hazards. The two iterations of has_zero we wrote above have data hazards
between every instruction, which will lead to the core executing one instruction at a time. We can fix this by interleaving the two iterations of has_zero together, resulting in the following code:
/* ... */
/* 1. align_offset */
andi t2, a0, 0b111
/* 2. align pointer */
andi a0, a0, ~0b111
/* 3. if pointer is aligned skip to loop */
beqz t2, .Lloop
/* 4. iter = *ptr */
ld a2, (a0)
/* 5. mask_start calculation */
slli t2, t2, 3
neg t2, t2
srl t2, t0, t2
/* 6. fill bytes before start with non-zero */
or a3, a2, t2
/* 7. find c setup */
xor a2, a2, a1
or a2, a2, t2
/* 8. and 9. has_zero for both \0 and c */
not t3, a3
not t2, a2
sub a3, a3, t0
sub a2, a2, t0
and a3, a3, t3
and a2, a2, t2
and a3, a3, t1
and a2, a2, t1
or a2, a2, a3
addi a0, a0, 8
bnez a2, .Lfind_idx
/* ... */
We also avoided the data hazard between the or a2, a2, a3 and bnez a2, .Lfind_idx instructions by moving the addi a0, a0, 8 in between them. We just need to remember to first addi a0, a0, -8 inside
.Lfind_idx. These data hazards make a small difference here, because this code runs at most once per strchrnul() call, but they will make a lot of difference inside .Lloop.
The loop⌗
The loop is pretty much the same code as the start, except we can remove all the extra computations for mask_start, leading to the following code:
/* ... */
ld a2, (a0)
/* has_zero for both \0 or c */
xor a3, a2, a1
not t2, a2
not t3, a3
sub a2, a2, t0
sub a3, a3, t0
and a2, a2, t2
and a3, a3, t3
and a2, a2, t1
and a3, a3, t1
/* if \0 or c was found, exit */
or a2, a2, a3
addi a0, a0, 8
beqz a2, .Lloop
/* ... */
Finding the index of the first character⌗
The code for finding the index of the first character ('\0' or c) solves a problem we already addressed in strlen(), so we are just going to adapt that code. First we need to add an addi a0, a0, -8 instruction to move ptr back, because it is pointing at the eightbyte for the next iteration, not the one we terminated on. Then, after finding the byte index inside iter, we add that index to ptr and return the resulting pointer through a0.
/* ... */
addi a0, a0, -8
/* isolate lowest set bit */
neg t0, a2
and a2, a2, t0
li t0, 0x0001020304050607
srli a2, a2, 7
/* lowest set bit is 2^(8*k)
* multiplying by it shifts the idx array in t0 by k bytes to the left */
mul a2, a2, t0
/* highest byte contains idx of first zero */
srli a2, a2, 56
add a0, a0, a2
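The same extraction can be sketched as a hypothetical C helper, mirroring the neg/and/srli/mul/srli sequence above:

```c
#include <stdint.h>

/* Given a has_zero-style mask whose lowest set bit is the 0x80 bit of
 * byte k (little-endian), return k. neg/and isolates the lowest set
 * bit 2^(8k+7), the shift by 7 turns it into 2^(8k), and the multiply
 * shifts the byte array 0x00..0x07 left by k bytes, so the top byte
 * of the product equals k. */
static inline unsigned find_idx(uint64_t mask) {
    uint64_t lowest = mask & (~mask + 1); /* isolate lowest set bit */
    lowest >>= 7;                         /* 2^(8k+7) -> 2^(8k) */
    return (unsigned)((lowest * 0x0001020304050607ULL) >> 56);
}
```

Any false-positive bits above the first real match are shifted out by the final right shift, so only the lowest set bit matters.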
Now that we understand the algorithm we can complete the implementation.
Final code⌗
/* arguments:
 * a0 - const char* s
 * a1 - int c;
 */
.globl strchrnul
.type strchrnul, @function
strchrnul:
/* register usage:
 * a0 - const char* ptr;
 * a1 - char cccccccc[8];
 * a2 - char iter[8];
 * a3 - char iter_end[8];
 */
/* int -> char */
andi a1, a1, 0xFF
/* t0 = 0x0101010101010101 */
li t0, 0x01010101
slli t1, t0, 32
or t0, t0, t1
/* t1 = 0x8080808080808080 */
slli t1, t0, 7
/* spread char across bytes */
mul a1, a1, t0
/* align_offset */
andi t2, a0, 0b111
/* align pointer */
andi a0, a0, ~0b111
/* if pointer is aligned skip to loop */
beqz t2, .Lloop
/* iter = *ptr */
ld a2, (a0)
/* mask_start calculation */
slli t2, t2, 3
neg t2, t2
srl t2, t0, t2
/* fill bytes before start with non-zero */
or a3, a2, t2
/* find c setup */
xor a2, a2, a1
or a2, a2, t2
/* has_zero for both \0 and c */
not t3, a3
not t2, a2
sub a3, a3, t0
sub a2, a2, t0
and a3, a3, t3
and a2, a2, t2
and a3, a3, t1
and a2, a2, t1
or a2, a2, a3
addi a0, a0, 8
bnez a2, .Lfind_idx
.Lloop:
ld a2, (a0)
/* has_zero for both \0 or c */
xor a3, a2, a1
not t2, a2
not t3, a3
sub a2, a2, t0
sub a3, a3, t0
and a2, a2, t2
and a3, a3, t3
and a2, a2, t1
and a3, a3, t1
/* if \0 or c was found, exit */
or a2, a2, a3
addi a0, a0, 8
beqz a2, .Lloop
.Lfind_idx:
addi a0, a0, -8
/* isolate lowest set bit */
neg t0, a2
and a2, a2, t0
li t0, 0x0001020304050607
srli a2, a2, 7
/* lowest set bit is 2^(8*k)
* multiplying by it shifts the idx array in t0 by k bytes to the left */
mul a2, a2, t0
/* highest byte contains idx of first zero */
srli a2, a2, 56
add a0, a0, a2
ret
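As a cross-check of the algorithm, here is a hypothetical C rendering of the same word-at-a-time strchrnul (little-endian, 8-byte words as on RV64). Note that the aligned out-of-bounds reads before the start of the string are fine in hand-written assembly but formally undefined behavior in portable C, so treat this purely as a sketch:

```c
#include <stdint.h>

#define ONES  0x0101010101010101ULL
#define HIGHS 0x8080808080808080ULL

/* Non-zero iff some byte of x is 0x00; the lowest set bit is exact. */
static inline uint64_t has_zero(uint64_t x) {
    return (x - ONES) & ~x & HIGHS;
}

char *strchrnul_ref(const char *s, int c) {
    const uint64_t cc = (uint64_t)(c & 0xFF) * ONES; /* spread c */
    const uint64_t *p = (const uint64_t *)((uintptr_t)s & ~(uintptr_t)7);
    unsigned off = (uintptr_t)s & 7;
    uint64_t iter, found;

    if (off) {
        /* Partial first eightbyte: OR 0x01 into the bytes before the
         * start of the string so they match neither '\0' nor c. */
        uint64_t mask = ONES >> (64 - 8 * off);
        iter = *p++;
        found = has_zero(iter | mask) | has_zero((iter ^ cc) | mask);
        if (found)
            goto done;
    }
    for (;;) {
        iter = *p++;
        found = has_zero(iter) | has_zero(iter ^ cc);
        if (found)
            break;
    }
done:
    /* Byte index of first match = bit position of lowest set bit / 8. */
    return (char *)(p - 1) + __builtin_ctzll(found) / 8;
}
```

__builtin_ctzll stands in for the neg/and/mul index extraction used in the assembly; both find the lowest set bit.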
Performance metrics⌗
Performance was measured on a SiFive Unmatched development board using strperf.
• The FreeBSD C base implementation is the multiplatform implementation. It's used for the kernel, and on machines where a machine-dependent implementation doesn't exist.
• RISC-V optimized implementation is the one from this article.
| In B/s | FreeBSD base impl | RISC-V optimized impl |
|---|---|---|
| Short | 183.8Mi ± 5% | 287.2Mi ± 0% (+56.27%, p=0.000, n=20) |
| Mid | 397.3Mi ± 3% | 564.6Mi ± 0% (+42.12%, p=0.000, n=20) |
| Long | 820.5Mi ± 0% | 902.5Mi ± 0% (+9.99%, p=0.000, n=20) |
| geomean | 391.3Mi | 527.0Mi (+34.68%) |
Further reading⌗
Michael Tiemann
Mar 01, 2023
Abstract: Modeling an unknown dynamical system is crucial in order to predict the future behavior of the system. A standard approach is training recurrent models on measurement data. While these
models typically provide exact short-term predictions, accumulating errors yield deteriorated long-term behavior. In contrast, models with reliable long-term predictions can often be obtained, either
by training a robust but less detailed model, or by leveraging physics-based simulations. In both cases, inaccuracies in the models yield a lack of short-time details. Thus, different models with
contrastive properties on different time horizons are available. This observation immediately raises the question: Can we obtain predictions that combine the best of both worlds? Inspired by sensor
fusion tasks, we interpret the problem in the frequency domain and leverage classical methods from signal processing, in particular complementary filters. This filtering technique combines two
signals by applying a high-pass filter to one signal, and low-pass filtering the other. Essentially, the high-pass filter extracts high-frequencies, whereas the low-pass filter extracts low
frequencies. Applying this concept to dynamics model learning enables the construction of models that yield accurate long- and short-term predictions. Here, we propose two methods, one being purely
learning-based and the other one being a hybrid model that requires an additional physics-based simulator.
• An all-speed relaxation scheme for gases and compressible materials. Journal of Computational Physics.
• Fluid–solid Floquet stability analysis of self-propelled heaving foils. Journal of Fluid Mechanics.
• Enablers for robust POD models. Journal of Computational Physics.
• An accurate Cartesian method for incompressible flows with moving boundaries. Communications in Computational Physics.
• Bioinspired swimming simulations. Journal of Computational Physics.
• Modeling and simulation of fish-like swimming. Journal of Computational Physics.
• Accurate Asymptotic Preserving Boundary Conditions for Kinetic Equations on Cartesian Grids. Journal of Scientific Computing.
• Numerical solution of the Monge–Kantorovich problem by density lift-up continuation. ESAIM: Mathematical Modelling and Numerical Analysis.
• A Cartesian Scheme for Compressible Multimaterial Models in 3D. Journal of Computational Physics.
• Enablers for high-order level set methods in fluid mechanics. International Journal for Numerical Methods in Fluids.
• Second Order ADER Scheme for Unsteady Advection-Diffusion on Moving Overset Grids with a Compact Transmission Condition. SIAM Journal on Scientific Computing.
• ADER scheme for incompressible Navier-Stokes equations on overset grids with a compact transmission condition. Journal of Computational Physics.
• Editorial: Data-driven modeling and optimization in fluid dynamics: From physics-based to machine learning approaches. Frontiers in Physics.
• An Eulerian finite-volume approach of fluid-structure interaction problems on quadtree meshes. Journal of Computational Physics.
• Numerical modeling of a self propelled dolphin jump out of water. Bioinspiration and Biomimetics.
• On the influence of multidirectional irregular waves on the PeWEC device. Frontiers in Energy Research.
• Impact of physical model error on state estimation for neutronics applications. ESAIM: Proceedings and Surveys.
• Registration-based model reduction of parameterized two-dimensional conservation laws. Journal of Computational Physics.
• A one-shot overlapping Schwarz method for component-based model reduction: application to nonlinear elasticity. Computer Methods in Applied Mechanics and Engineering.
• A projection-based model reduction method for nonlinear mechanics with internal variables: application to thermo-hydro-mechanical systems. International Journal for Numerical Methods in Engineering.
• Mapping of coherent structures in parameterized flows by learning optimal transportation with Gaussian models. Journal of Computational Physics.
• Inferring characteristics of bacterial swimming in biofilm matrix from time-lapse confocal laser scanning microscopy. eLife.
• Automatic branch detection of the arterial system from abdominal aortic segmentation. Medical and Biological Engineering and Computing.
• Data-driven wall models for Reynolds Averaged Navier-Stokes simulations. International Journal of Heat and Fluid Flow.
• Réduction de modèles de problèmes paramétriques en mécanique non linéaire à l'aide de Code Aster et Mordicus. 15ème colloque national en calcul des structures.
• Influence of hydrodynamic interactions on the productivity of PeWEC wave energy converter array. 2022 International Conference on Electrical, Computer, Communications and Mechatronics Engineering (ICECCME).
• Deep learning-based wall models for aerodynamic simulations: A new approach inspired by classical wall laws. ODAS 2022 - Onera-DLR Aerospace Symposium.
• Multi-fidelity modelling of wave energy converter farms.
• A Component-Based Data Assimilation Strategy with Applications to Vascular Flows.
• A projection-based reduced-order model for parametric quasi-static nonlinear mechanics using an open-source industrial code.
• A kinetic discontinuous Galerkin method for the nonconservative bitemperature Euler model.
• Wasserstein model reduction approach for parametrized flow problems in porous media.
• Automatic Rigid Registration of Aortic Aneurysm Arterial System.
• Localized model reduction for nonlinear elliptic partial differential equations: localized training, partition of unity, and adaptive enrichment.
• An optimization-based registration approach to geometry reduction.
• A penalization method to take into account obstacles in an incompressible flow. Numerische Mathematik.
• Exact and approximate solutions of Riemann problems in non-linear elasticity. Journal of Computational Physics.
• A Cartesian scheme for compressible multimaterial models in 3D. Journal of Computational Physics.
• An experimental study of entrainment and transport in the turbulent near wake of a circular cylinder. Journal of Fluid Mechanics.
• Modelling wave dynamics of compressible elastic materials. Journal of Computational Physics.
• Elements of continuum mechanics.
• Construction d'une chaîne d'outils numériques pour la conception aérodynamique de pales d'éoliennes.
• A Conservative Three-Dimensional Eulerian Method for Coupled Solid-Fluid Shock Capturing. Journal of Computational Physics.
• Level Set Methods and Fast Marching Methods.
• A registration method for model order reduction: data compression and geometry reduction. SIAM Journal on Scientific Computing.
• Registration-based model reduction in complex two-dimensional geometries. Submitted to Journal of Scientific Computing.
• Space-time registration-based model reduction of parameterized one-dimensional hyperbolic PDEs. ESAIM: Mathematical Modelling and Numerical Analysis (accepted).
Research program
In line with our investigative approach, we start from real-world applications to identify key methodological problems; then we study those problems and develop new methods to address them; finally, we implement these methods for representative test cases to demonstrate their practical relevance.
Numerical models
We aim to further develop automated model-order reduction (MOR) procedures for large-scale systems in computational mechanics — here, automated refers to the ability to complete the analysis with
minimal user intervention. First, we wish to combine nonlinear MOR with mesh adaptation to simultaneously learn rapid and reliable ROMs and effective high-fidelity discretizations over a range of
parameters. Second, we wish to develop component-based MOR procedures to build inter-operable components for steady and unsteady nonlinear PDEs: towards this end, we should develop efficient
localized training procedures to build local ROMs for each archetype component, and also domain decomposition techniques to glue together the local models for prediction. We also wish to develop and
analyze hybrid approaches that combine and merge first-principle models with data-fit models, and also full-order and reduced-order models for prediction of global engineering quantities of interest.
We envision that several methods that are currently developed in the team can be complemented by available tools from machine learning: representative examples include — but are not limited to —
solution clustering, optimal sampling, classification. In this respect, a leap forward in industrial applications that we will pursue is without doubt the possibility of capitalizing on previous
experience drawn from already acquired simulations to build non-intrusive models that combine non-linear interpolations and non-linear regression. New perspectives in this direction are offered by
the Chair Onera-Nouvelle Aquitaine (cf. Highlights).
As regards the work on numerical discretization of PDEs, compared to the previous evaluation, we focus on the representation of the solution in each computational cell by adopting a DG/ADER approach to improve the resolution of the solution's discontinuities. This approach is complemented with a Chimera grid at the boundaries, improving accuracy through a body-fitted mesh while avoiding grid-generation complexity for a general, possibly varying, geometrical topology. The thesis of Alexis Tardieu, which started in October 2021 and is funded by the University of Bordeaux, studies this approach. Still in this direction, we will continue to study asymptotic schemes for multi-material applications: our aim is to devise a unified approach to handle both compressible and incompressible materials.
In parallel, we will continue our exploration of schemes that circumvent the problem of accuracy and time stepping in the asymptotic regimes such as low- and high Mach numbers for multi-material
flows: the ultimate goal is to devise asymptotic-preserving schemes that are able to capture phenomena at the time scale of the fast waves and of the material waves with the same accuracy,
exclusively choosing the appropriate time-scale.
For energy applications, we will continue our investigations on wave energy converters and wind turbines. For wave energy converters, we are developing multifidelity models that couple the incompressible Navier-Stokes equations (NSE) around the floater with a Proper Orthogonal Decomposition (POD) ROM or a simplified-physics model elsewhere.
In October 2021, Nishant Kumar started a two-year postdoctoral fellowship in the team, funded by the Inria-IfpEN program; the aim is to couple a high-fidelity model and a POD model based on the LES Navier-Stokes equations; the coupling is implemented in the SOWFA framework of OpenFOAM.
In December 2021, Caroline Le Guern started her PhD in the team, in the framework of the Inria-IfpEN program; Caroline works on the modeling and simulation of the fluid-structure interaction of next-generation wind turbines with rotors of up to 250 meters; the numerical implementation is based on the software Deeplines, which is co-developed by IfpEN.
In April 2022, Umberto Bosi started a two-year postdoctoral fellowship in the team, in collaboration with CARDAMOM: the project of Umberto, which was funded by Inria and the Région Nouvelle-Aquitaine, focuses on the coupling between a high-fidelity (e.g., Navier-Stokes) model and an asymptotic (e.g., shallow water or Boussinesq) model.
We are also collaborating with EDF to devise effective ROMs for parametric studies. In this collaboration, we emphasize the implementation of projection-based ROMs for real-world applications
exploiting industrial codes.
In April 2021, Eki Agouzal started an industrial thesis to develop projection-based ROMs for nonlinear structural mechanics problems in Code Aster, with emphasis on thermo-hydro-mechanical (THM) problems.
A PhD thesis is expected to start in October 2023 on the development of MOR techniques for the shallow water equations in the EDF code Telemac-Mascaret.
Within the ARIA project, in collaboration with Nurea and the bio-mechanics lab of the Politecnico di Torino, we investigate the idea of data augmentation starting from a given aneurysm database. We
will construct statistically relevant synthetic aneurysms that can provide both heterogeneity and closeness to reality to test new bio-markers for aneurysm rupture. The thesis of Ludovica Saccaro
funded by Inria is dedicated to this subject.
In the framework of the ANR DRAGON, we also increase our interactions with researchers in biology and physical science at the Centre d'études biologiques de Chizé. The ANR funds the PhD thesis of Karl Maroun at the University of Poitiers.
Software development will also continue. We will pursue the development of the NEOS library: NEOS will be distributed as open source under the LGPL-3.0 license. The HIWIND software will be rewritten based on NEOS.
Application domains
Energy conversion
We apply the methods developed in our team to the domain of wind engineering and sea-wave converters. In Figure 1, we show results of a numerical model for a sea-wave energy converter. We here rely
on a monolithic model to describe the interaction between the rigid floater, air and water; material properties such as densities, viscosities and rigidity vary across the domain. The appropriate
boundary conditions are imposed at interfaces that arbitrarily cross the grid, using adapted schemes built thanks to geometrical information computed via level set functions 46. The background method for the fluid-structure interface is the volume penalization method 38, where the level set function is used to improve the degree of accuracy of the method 4 and also to follow the object. The underlying mathematical model is unsteady and three dimensional; numerical simulations based on a grid with $\mathcal{O}(10^{8})$ degrees of freedom are executed in parallel using 512 CPUs.
In the context of the Aerogust (Aeroelastic gust modelling) European project, together with Valorem, we investigated the behavior of wind turbine blades under gust loading. The aim of the project was
to optimize the design of wind turbine blades to maximize the power extracted. A meteorological mast (Figure 2(a)) has been installed in March 2017 in Brittany to measure wind on-site: data provided
by the mast have been exploited to initialize the mathematical model. Due to the large cost of the full-order mathematical model, we relied on a simplified model 44 to optimize the global twist.
Then, we validated the optimal configuration using the full-order Cartesian model based on the NaSCar solver. Figure 2(b) shows the flow around the optimized wind turbine rotor.
Schemes for turbulent flow simulations using Octrees
We have initially developed and tested a 3D first-order Octree code for unsteady incompressible Navier-Stokes equations for full windmill simulations with an LES model and wall laws. We have
validated this code on Occigen for complex flows at increasing Reynolds numbers. This step implied identifying stable and feasible schemes compatible with the parallel linear Octree structure. The
validation has been conducted with respect to the results of a fully Cartesian code (NaSCAR) that we run on Turing (with significantly more degrees of freedom) and with respect to experimental data.
Subsequently, we have developed a second-order Octree scheme that has been validated on Occigen for a sphere at a moderate Reynolds number ($\mathrm{Re}=500$), see Table 1. Then, for a cylinder at $\mathrm{Re}=140000$ (Figures 3(a) and 3(b)), close to real applications, we have preliminary validation results for the second-order scheme with respect to the experimental drag coefficient (Table 2). Additional resources will be requested on Occigen to complete the study.
Vascular flows
A new research direction pursued by the team is the mathematical modelling of vascular blood flows in arteries. Together with the start-up Nurea and the surgeon Eric Ducasse, we aim at developing
reliable and automatic procedures for aneurysm segmentation and for the prediction of aneurysm rupture risk. Our approach exploits two sources of information: (i) numerical simulations of blood flows
in complex geometries, based on an octree discretization, and (ii) computed tomography angiography (CTA) data. Figure 4 shows the force distribution on the walls of the abdominal aorta in presence of
an aneurysm; results are obtained using a parallelized hierarchical Cartesian scheme based on octrees.
Fluid-structure interactions using Eulerian non-linear elasticity models
Mathematical and numerical modeling of continuum systems undergoing extreme regimes is challenging due to the presence of large deformations and displacements of the solid part, and due to the
strongly non-linear behavior of the fluid part. At the same time, proper experiments of impact phenomena are particularly dangerous and require expensive facilities, which make them largely
impractical. For this reason, there is a growing interest in the development of predictive models for impact phenomena.
In MEMPHIS, we rely on a fully Eulerian approach based on conservation laws, where the different materials are characterized by their specific constitutive laws, to address these tasks. This approach
was introduced in 43 and subsequently pursued and extended in 45, 42, 39, 40 and 9. In Figure 5, we show the results of the numerical simulation of the impact of a copper projectile immersed in air
over a copper shield. Results are obtained using a fully parallel monolithic Cartesian method, based on a ${4000}^{2}$ fixed Cartesian grid. Simulations are performed on a cluster of 512 processors,
and benefits from the isomorphism between grid partitioning and processor topology.
In figure 6, we show the results of a three dimensional simulation of a cardiac pump (LVAD, left ventricular assist device).
Other examples are given in the sections dedicated to the new results.
Social and environmental responsibility
As discussed in the previous section, we are particularly interested in the development of mathematical models and numerical methods to study problems related to renewable energies, and ultimately
contribute to next-generation sustainable solutions for energy extraction.
Impact of research results
We are studying two types of green energy extractors: wave energy converters (WECs) and wind energy.
As regards WECs, we are working with PoliTO (Torino, Italy) to model the behavior of inertial sea wave energy converters (ISWEC), and we are also starting to work with a Bordeaux-based start-up
for another device to extract energy from waves via an Inria-Tech project and a Nouvelle-Aquitaine Regional Project submitted by Memphis in collaboration with the CARDAMOM team.
As regards wind energy, we focus on the analysis of wind turbines. In the past, we have supervised two PhD CIFRE theses with VALOREM-Valeol, and are currently working with them in a European RISE
ARIA project led by Memphis. We also work with IFPEN on the aeroelastic modeling of large wind turbines and the study and optimization of turbines farms in the framework of the joint laboratory
Inria-IFPEN with a thesis funded by IFPEN and a post-doc funded by Inria (which started in October 2021).
In conjunction with these activities, in collaboration with ANDRA (the national agency for storage of nuclear waste), we investigate the development of reduced-order models to allow efficient and
accurate simulations for deep geological storage planning. This activity is the subject of the PhD thesis of Giulia Sambataro.
New results
Component-based model order reduction for radioactive waste management
Angelo Iollo, Giulia Sambataro, Tommaso Taddei
At the end of their cycle, radioactive materials are placed in arrays of cylindrical boreholes (dubbed alveoli) deep underground; due to the large temperatures of the radioactive waste, the thermal
flux generated by the alveoli drives a complex time-dependent phenomenon which involves the thermal, hydraulic and mechanical (THM) response of the medium. The role of simulations is to predict the
long-term system response and ultimately assess the impact of the repository site to the surrounding areas: Figure 7(a) shows a typical system configuration considered for numerical investigations.
Due to the complex nature of the equations (a system of five coupled nonlinear time-dependent three-dimensional equations) and due to the uncertainty in several parameters of the model and on
boundary conditions, MOR techniques are important to reduce the computational burden associated with thorough parametric studies. In particular, it is important to study the system behavior for
different numbers of alveoli: it is possible to show that changing the number of alveoli induces a change in the topology of the problem and thus prevents the application of standard monolithic MOR
techniques developed for fixed domains or diffeomorphic families of parametric domains. We should thus devise component-based MOR procedures that are compatible with topology changes.
The PhD project of Giulia Sambataro aimed to devise a rapid and reliable component-based MOR technique for THM systems, for radioactive waste management applications. During the first year of her
PhD, Giulia developed a monolithic MOR technique for THM systems which relies on a POD-Greedy algorithm to sample the parameter domain and to hyper-reduction based on empirical quadrature to reduce
online prediction costs. During the second and third year, Giulia developed a component-based MOR formulation, dubbed the one-shot overlapping Schwarz (OS2) method, for nonlinear steady PDEs, and finally she extended the approach to THM systems with varying numbers of alveoli.
Giulia successfully defended her PhD thesis in December 2022; her work led to the publication of two articles in peer-reviewed journals, 20 and 33. Figure 7(b) shows the temporal behavior of the HF
and predicted pressure and temperature in a select point in the proximity of one alveolus for an out-of-sample configuration. We observe that the ROM is able to adequately predict the solution
behavior; in our numerical experiments, we experienced an average 20x speed-up over the range of configurations.
Registration methods for advection-dominated PDEs
Angelo Iollo, Tommaso Taddei
A major issue of state-of-the-art MOR techniques based on linear approximation spaces is the inability to deal with parameter-dependent sharp gradients, which characterize the solutions to
advection-dominated problems. To address this issue, we propose a registration technique to align local features in a fixed reference domain. In computer vision and pattern recognition, registration
refers to the process of finding a transformation that aligns two datasets; here, registration refers to the process of finding a parametric spatio-temporal transformation that improves the linear
compressibility of the solution manifold.
A registration procedure has been proposed in 47 and then further developed in 49, 48, 18. In particular, in 49, we considered the application to one-dimensional applications in hydraulics; in an
ongoing collaboration with EDF, we aim to extend the approach to two-dimensional steady and unsteady problems. Figure 8 shows results for a Saint-Venant problem (flow past a bump): Figures 8(a) and 8
(b) show the free surface $z$ for two different parameters and two time instants, while Figure 8(c) shows the behavior of the out-of-sample projection error associated with a snapshot-based POD space
with and without registration. We observe that registration is key to improve performance of linear compression strategies such as POD.
In 21, Iollo and Taddei proposed a general (i.e., independent of the underlying PDE) nonlinear interpolation technique based on optimal transportation of Gaussian models of coherent structures of the flow. Given the domain $\Omega$ and the states $U_0, U_1 : \Omega \to \mathbb{R}$, we aim to determine an interpolation $\hat{U} : [0,1] \times \Omega \to \mathbb{R}$ such that $\hat{U}(0,\cdot) = U_0$ and $\hat{U}(1,\cdot) = U_1$. The key features of the approach are (i) a scalar testing function that selects relevant features of the flow; (ii) an explicit mapping procedure that exploits explicit formulas valid for Gaussian distributions; (iii) a nonlinear interpolation dubbed "convex displacement interpolation" to define $\hat{U}$. The mapping built at step (ii) might not satisfy the bijectivity constraint in $\Omega$: to address this issue, a nonlinear projection procedure over a space of admissible maps based on registration is proposed.
Figure 9 illustrates the performance of our procedure for a compressible inviscid flow past a NACA0012 profile at an angle of attack of 4° for free-stream Mach numbers varying between $Ma=0.77$ and $Ma=0.83$. Figures 9(a) and 9(b) show the fluid density for $Ma=0.77$ and $Ma=0.83$, while Figure 9(c) shows an interpolation for an intermediate Mach number: we observe that the nonlinear interpolation smoothly deforms the shock attached to the airfoil. Figure 9(d) compares the performance of the nonlinear interpolation with the linear convex interpolation $\hat{U}^{\mathrm{co}}(s) = (1-s)U_0 + sU_1$: we observe that the proposed nonlinear interpolation is significantly more accurate than linear interpolation, for the same amount of high-fidelity information.
Fluid-structure interactions on AMR-enabled quadtree grids
Michel Bergmann, Antoine Fontaneche, Angelo Iollo
A versatile fully Eulerian method has been developed for the simulation of fluid-structure interaction problems in two dimensions, involving stiff hyper-elastic materials 14. The unified single
continuum model is solved in a monolithic way using a quadtree-based Finite Volume scheme, built on very compact discretizations. In the context of fictitious domain methods, the geometry of a
structure is captured through a level-set formalism, which makes it possible to define a diffuse fluid-structure interface.
The numerical method has been validated with respect to the literature, and the benefits obtained in terms of computational costs through the use of dynamic adaptive meshes have been highlighted. The low impact of coarsening on the structure deformation has been emphasized, and the results suggest that the numerical method offers a valuable compromise between accuracy and feasibility of the
simulation. As depicted in Figure 10, the simulation of a two-dimensional axi-symmetric flow in a cardiac assist device (LVAD geometry) has finally been proposed as a biomedical application. One
paper is submitted.
Aortic aneurysms: automatic segmentation and registration
Angelo Iollo, Gwladys Ravon, Sebastien Riffaud, Ludovica Saccaro
In 35, we developed a new artificial neural network to automatically segment aortic aneurysms. The main idea of this approach was to consider each pixel of the image individually and to see if a model could learn to categorize it as lumen using only its own intensity and the intensity of its 26 neighbors. We tested different inputs (values, means, variances...) and architectures: a sequential model was retained. For the input, each sample is a vector of 27 intensity values. Only pixels whose intensity is between 100 and 700 are kept for training and prediction.
The second axis of development concerned registration. When a patient has several scans taken at different times, the segmentations are not in the same frame, so any comparison would be complicated. The objective was to bring the second segmentation into the frame of the first one. We tested different point-based approaches: registering the centerline of the segmentation or the geometry; considering only the lumen or the entire aneurysm. The best results were obtained with the surface of the aneurysm and the iterative closest point algorithm. Once the registration is performed (Figure 11) we can visualize how the aneurysm evolved.
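A minimal point-to-point iterative closest point loop, with brute-force nearest neighbors and the Kabsch/SVD rigid update, can be sketched as below. This is a generic textbook sketch, not the paper's pipeline (which registers actual aneurysm surfaces and likely uses an accelerated nearest-neighbor search).

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Kabsch: least-squares rotation R and translation t with dst ~= src @ R.T + t."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # flip one axis to avoid a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, n_iter=20):
    """Iterative closest point: match each source point to its nearest target
    point, fit the best rigid transform, apply it, repeat."""
    cur = src.copy()
    for _ in range(n_iter):
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(1)]          # nearest target point for each source point
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t
    return cur
```

For surfaces that start reasonably close (as two scans of the same patient typically do), a few iterations suffice; the brute-force distance matrix is only viable for small point sets and would be replaced by a k-d tree in practice.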
In her PhD, Ludovica Saccaro is developing a data augmentation procedure that takes as input a dataset of patient-specific geometries of aortic aneurysms and returns a larger dataset of in-silico geometries: the objective is to generate large datasets of simulations for statistical analyses. The key elements of the approach are twofold: first, a registration technique based on the identification of the vessel's centerline and of the aortic wall, to determine a rigorous parameterization of the geometries; second, a machine learning technique for data augmentation based on Gaussian mixture models.
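The augmentation step amounts to fitting a Gaussian mixture to the (low-dimensional) shape parameters produced by the registration, then sampling new parameter vectors from it. The sketch below hard-codes a hypothetical 2-component mixture over a 2-dimensional parameter space purely for illustration; in the real procedure the mixture would be fitted to patient data and the parameters would come from the centerline/wall parameterization.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical fitted mixture over 2 shape parameters (stand-in values)
weights = np.array([0.6, 0.4])
means   = np.array([[1.0, 0.5], [2.0, 1.5]])
covs    = np.array([[[0.02, 0.00], [0.00, 0.01]],
                    [[0.03, 0.01], [0.01, 0.02]]])

def sample_gmm(n):
    """Draw n synthetic parameter vectors: pick a component, then sample it."""
    comps = rng.choice(len(weights), size=n, p=weights)
    return np.array([rng.multivariate_normal(means[c], covs[c]) for c in comps])

synthetic = sample_gmm(500)   # augmented in-silico parameter set
```

Each synthetic parameter vector is then mapped back through the parameterization to an in-silico geometry on which simulations can be run.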
Deep learning wall laws for aerodynamic simulations (Michel Bergmann, Thomas Philibert, Angelo Iollo, Michele Romanelli)
The availability of reliable and accurate wall laws is one of the main challenges of modern CFD. Considering the wide range of phenomena that can be modeled thanks to the flexibility of neural networks, they show an undeniable potential for the modeling of wall flows. Our goal is, therefore, to propose a wall law based on deep learning algorithms whose input and output variables conform to classical wall models. Neural networks are thus trained on wall-resolved data in order to reconstruct the dimensionless velocity evolution within the boundary layer. Our first methodological approach consists of a new wall law based on deep learning, which shows good performance in modeling the near-equilibrium boundary layer. Near-separation boundary layers are found to be more challenging for deep learning and are hence the subject of ongoing investigation.
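For reference, the classical equilibrium wall law that the networks' inputs and outputs conform to is the dimensionless velocity profile u+(y+). A minimal evaluation, with the commonly used constants κ = 0.41 and B = 5.2 and a crude sublayer/log-layer switch at y+ ≈ 11 (both assumptions for illustration, not the paper's model), looks like:

```python
import numpy as np

KAPPA, B = 0.41, 5.2   # standard log-law constants (assumed values)

def u_plus_log_law(y_plus):
    """Classical two-layer equilibrium wall law: u+ = y+ in the viscous
    sublayer, u+ = (1/kappa) ln(y+) + B in the logarithmic layer."""
    y_plus = np.asarray(y_plus, dtype=float)
    log_part = np.log(np.maximum(y_plus, 1e-12)) / KAPPA + B
    return np.where(y_plus < 11.0, y_plus, log_part)   # crude matching point
```

A learned wall law replaces this closed-form map with a network trained on wall-resolved data, which is what allows it to extend beyond the near-equilibrium regime where the log law holds.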
The SST k-omega model is one of the most widely used models in industry but suffers from limitations, especially in the presence of separations or transitions. Figure 13 shows the error in the Reynolds tensor obtained with the SST k-omega model (left) and the error of the same model with a correction added by a neural network (right), both measured against the tensor obtained by DNS. The correction model is based on the strain and stress tensors to build a tensor basis starting from algebraic invariants. This basis ensures Galilean invariance. The neural network was trained on 23 different geometries. We present here the results for 2 geometries belonging to the test set (not used for training). The results show a clear reduction of the error with respect to the DNS data.
Projection-based model order reduction for parametric quasi-static nonlinear mechanics using an open-source industrial code (Eki Agouzal, Michel Bergmann, Tommaso Taddei)
In [30], we proposed a projection-based model order reduction procedure for a general class of parametric quasi-static problems in nonlinear mechanics with internal variables; the methodology is integrated in the industrial finite element code Code Aster. We developed an adaptive algorithm based on a POD-Greedy strategy, and a hyper-reduction strategy based on an element-wise empirical quadrature, in order to speed up the assembly costs of the ROM by building an appropriate reduced mesh. We introduced a cost-efficient error indicator which relies on the reconstruction of the stress field by a Gappy-POD strategy. We presented numerical results for a three-dimensional elastoplastic system in order to illustrate and validate the methodology.
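The core idea of Gappy-POD reconstruction, fitting POD basis coefficients by least squares from a few sampled entries and then evaluating the full field, can be sketched in a few lines. The data here is synthetic; the actual method works on finite element stress fields and chooses the sample locations more carefully.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic snapshot matrix: each column is one "stress field" over 200 dofs
snapshots = np.array([np.sin((k + 1) * np.linspace(0, np.pi, 200))
                      for k in range(5)]).T
Phi, _, _ = np.linalg.svd(snapshots, full_matrices=False)   # POD basis (200 x 5)

u_true = snapshots @ rng.normal(size=5)         # a field in the span of the basis
mask = rng.choice(200, size=30, replace=False)  # the few dofs actually sampled

# Gappy POD: fit the basis coefficients from the sampled entries only,
# then reconstruct the full field from the basis
coef, *_ = np.linalg.lstsq(Phi[mask], u_true[mask], rcond=None)
u_rec = Phi @ coef
```

Because the true field lies in the span of the basis in this toy example, 30 samples out of 200 recover it essentially exactly; in the ROM setting the reconstructed stress feeds the error indicator at a fraction of the full assembly cost.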
Clock Angles Worksheet
Angles (from clocks) worksheet and answers. The first worksheet uses only hour and half hour.
Clock Angles Worksheets from www.unmisravle.com
In this packet you will find four worksheets which can help students find the measure of the angle between the hands of a clock. These worksheets can be used with a. Each hour mark on the clock face is 30° from the next; this is because 360° divided by 12 equals 30.
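The computation behind these worksheets follows from the rates of the two hands: the minute hand moves 6° per minute, and the hour hand moves 30° per hour plus 0.5° per minute. A short sketch:

```python
def clock_angle(hour, minute):
    """Smaller angle in degrees between the hour and minute hands."""
    hour_angle = (hour % 12) * 30 + minute * 0.5   # hour hand drifts 0.5 deg/min
    minute_angle = minute * 6                      # minute hand moves 6 deg/min
    diff = abs(hour_angle - minute_angle)
    return min(diff, 360 - diff)                   # report the smaller of the two angles
```

For example, at 3:30 the hour hand sits at 105° and the minute hand at 180°, giving an angle of 75°.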
Blog: Logs for the Monero Research Lab Meeting Held on 2019-04-22
Logs for the Monero Research Lab Meeting Held on 2019-04-22
April 22, 2019
<sarang> OK, let's begin. Logs of this meeting will be posted to the GitHub link afterward
<sarang> 1. GREETINGS
<sarang> hello
<sgp_> hello!
<sarang> I assume suraeNoether is also here
<sarang> I suppose we can move to 2. ROUNDTABLE
<sarang> The new output selection algorithm was put into a PR by moneromooo with some additional tests added; many thanks to moneromooo for that work
<sarang> and it's now merged
<sarang> This helps to mitigate the block weighting issues and provide better selection
<sarang> The attempted CLSAG proof reduction to LSAG was not successful because of the way key images are computed, unfortunately
<sarang> However, I've been working on applying the MLSAG proof techniques more directly
<suraeNoether> howdy, sorry
<suraeNoether> distracted
<suraeNoether> I'm here
<sarang> I'm still working with the Lelantus paper author, a Zcoin cryptographer, to offer some collaborative insight on efficiency gains
<sarang> Just about finished with test code refactoring, and some additional fixes
<suraeNoether> what sort of gains are you talking about there?
<suraeNoether> what is lelantus, and why is it interesting?
<sarang> It's a transaction protocol produced by a Zcoin researcher that uses some of the techniques that StringCT also used
<sarang> but with a more direct balance proof
<sarang> The paper suggested some batching speedups, but I observed that you could apply them much more broadly to the entire set of transaction proofs (there are several)
<suraeNoether> neat! are you two narrowing down on some concrete numbers for comparison in efficiency?
<suraeNoether> what sort of work remains, in that regard?
<sarang> A lot of the batching gains depend heavily on the anonymity set used
<sarang> and there are plenty of open questions about that
<sarang> But, for example, the bulk of a spend proof (ignoring balance proof and range proofs) for a 1024-size anonymity set is probably about 100 ms using Monero timing operations
<suraeNoether> wow
<suraeNoether> wowow
<sarang> This is purely back-of-the-envelope
<suraeNoether> for those in the audience, i tend to think of 50 ms is on the border of too slow
<suraeNoether> does 512 anon set take 50 ms?
<suraeNoether> is it logarithmic?
<sarang> mostly linear
<sarang> keep in mind that batching verification reduces the cost per proof
<suraeNoether> oh man, we should talk about the space-time tradeoff
<sarang> I don't have good numbers for that yet
<suraeNoether> asymptotically
<sarang> Yeah, I'll be working up a set of charts for this
<suraeNoether> fantastic.
<sarang> For reference, an MLSAG verification with 1024 ringsize takes 1.2 s
<sarang> Also the kind of batching that I'm thinking of as being most useful requires the batch to use the same decoy anonymity set
<suraeNoether> oh that's sort of an omniring property
<sarang> You still get some gains without this assumption, but not nearly to the same degree
<suraeNoether> which we'll be learning about at the konferenco, apparently :) real_or_random
<sarang> Using a common set means your multiexp operation uses the same generators across proofs (mostly)
<suraeNoether> of course, i haven't seen omniring yet…
<sarang> Anyway, other questions for me?
<suraeNoether> Good recap, sarang, thanks for describing it to us
<suraeNoether> *claps*
<sarang> Over to you suraeNoether
<suraeNoether> well, the past week has been busy for surae
<sarang> https://www.youtube.com/watch?v=0TeFrpLL4-E
<suraeNoether> MRL 11 update: I finally found the problem with my simulation code, which had to do with passing around an identity for objects like nodes and edges that can be used to retrieve the object, versus passing around the object itself. it's a silly and embarrassing mistake, and it took me way too much time to figure out why things were going wrong. :P
<suraeNoether> I made a push this morning to my mrl-skunkworks powerpuff branch after finally realizing the problem
<suraeNoether> that includes the new experimenter class that is actually generating confusion tables… the data it spits out (information for a confusion table) is junk and incorrect, but it doesn't
break anything, so I pushed it…. The overall infrastructure is at the point where it may be of interest to #noncesense-research-lab and Isthmus for independent work.
<suraeNoether> or anyone else who is interested
<suraeNoether> https://github.com/b-g-goodell/mrl-skunkworks/tree/matching-powerpuff/Matching
<sarang> What are your next steps for this (jumping ahead a bit)?
<suraeNoether> the main idea is this
<suraeNoether> how accurate is the matching approach as ring size scales up? how accurate is the matching approach as churn number increases? how can we use the answers to these questions to
formulate best practices for monero churners?
<sarang> for sure
<suraeNoether> is there a way we can define some concrete threshold we want to attain?
<sarang> Understanding how ring size and some specified churn behaviors affect these matching heuristics can give a much clearer picture of what it would take to hit certain thresholds
<suraeNoether> anyway, that's my progress update on MRL11: soon^TM. I'm actually getting results without breaking anything, and now it's a matter of debugging the code and writing new tests to ensure
that the results I'm getting are consistent
<suraeNoether> but I also have a collaborative update, as described in the agenda
<suraeNoether> long story short: Clemson University's School of Mathematical and Statistical Sciences is interested in starting a general center for blockchain and cryptocurrency studies.
<suraeNoether> and they are interested in involving Monero Research Lab in their efforts.
<endogenic> coolio
<suraeNoether> we have a few interesting research collaboration possibilities with clemson just stand-alone, new shiny blockchain center notwithstanding
<suraeNoether> mainly: Professor Shuhong Gao is in the middle of writing several papers that promise to be rather groundbreaking
<suraeNoether> one of these is reporting a purported attack upon two of the post-quantum nist candidate encryption algorithms
<suraeNoether> one of these is a new approach to fully homomorphic encryption
<sarang> sounds very interesting
<suraeNoether> previous attempts at FHE suffer weird problems. if you want to add two ciphertexts together, it's easy to retain the number of bits. but to do something like multiplication, you need a
larger number of bits than either ciphertext… and so previous approaches sort of use this expanding scratchpad of bits and take up lots of space to perform a computation
<suraeNoether> gao has developed a way that improves the space efficiency by several orders of magnitude, bringing *practical* FHE into reality
<suraeNoether> his approaches use the RLWE cryptographic setting, which I've been looking into recently due to its speed (big keys but very fast algorithms)
<sarang> sounds suspiciously interesting
<suraeNoether> yeah, no kidding
<suraeNoether> he has four visiting scholars interested in blockchain and a handful of students, and the next thing on their plate is RLWE-based STARKs efficient enough for use in something like
<suraeNoether> so, basically: I'm super excited about the possibility of collaborating with these folks!
<sarang> Nice
<suraeNoether> conflict of interest disclosure: Clemson flew me out to South Carolina last week and put me up in a hotel and fed me. I gave a talk. I received a per diem for food. This is all rather
ordinary in that regard.
<suraeNoether> so, i'm encouraging that Clemson have a presence at the Monero Konferenco, to come meet members of the monero community in person
<suraeNoether> and I'm encouraging their graduate students to jump in on our research meetings
<endogenic> awesome!
<sarang> Totally; getting more researchers involved is great for the project
<suraeNoether> I want them to come to the Konferenco, meet some of the folks in the Monero community face to face, and contribute to Monero's development. I think this is a good thing both for Monero
and Clemson University, and I think a more formal academic collaboration with Monero Research Lab is long overdue.
<suraeNoether> also, for what it's worth
<suraeNoether> the last time I was at clemson, speaking with people about cryptocurrency or privacy as a human right was a hard conversation to have
<suraeNoether> this time, the conversations went… very… very …. differently.
<suraeNoether> a lot changes in 3 years.
<suraeNoether> people are excited about this.
<endogenic> what would you say is the origin of the change in their reaction?
<endogenic> fungibility?
<suraeNoether> uhm
<endogenic> cause the Snowden disclosures etc came out a long time ago
<suraeNoether> actually, i think it's structural
<suraeNoether> meaning: the right people are in control right now for this to move, if that makes sense
<endogenic> gotcha
<sarang> Any other new work to share suraeNoether ?
<suraeNoether> different people standing between the department and their goals than last time, like deans and provosts…
<suraeNoether> uhm, also, on a totally wild and weird research note
<suraeNoether> it turns out a homological algebra construction called Ext that Sarang and I studied in grad school together may be the key to forcing my silly signature scheme using commutative/
cartesian squares from last year to work
<suraeNoether> so I'm discussing a paper with another clemson professor
<sarang> very clever
<suraeNoether> that's pretty close to the "pure math" end of the spectrum for this room, so i'm not sure whether i should talk about it before we have some more results
<suraeNoether> it turns out that Ext can be used to parameterize the zero function between two modules (like, for example, elliptic curve groups)
<suraeNoether> so we are trying to use that to hide information in a function from one to the other
<suraeNoether> it's… bizarre, and it might just work!
<suraeNoether> and that's all I have other than action items
<sarang> neato
<suraeNoether> but this is round-table and perhaps andytoshi or real_or_random or ArticMine have some thoughts on stuff they've been working on.
<sarang> Does anyone else have research work of interest to share?
<suraeNoether> or anyone else, for that matter
<sarang> righto
<sarang> I guess we can move to 3. QUESTIONS and/or 4. ACTION ITEMS
<endogenic> eyyyyy
<Isthmus> *late
<suraeNoether> oooh isthmus
<suraeNoether> do you have an update before we move onto 3?
<Isthmus> Been working on playing around with camel emission curves, though that window has probably shut for Monero
<Isthmus> Also, @n3ptune and I looked at single-transaction outputs in recentish history. There are O(1000) of them
<Isthmus> https://github.com/monero-project/monero/issues/5399
<Isthmus> https://usercontent.irccloud-cdn.com/file/YyM3h9KG/image.png
<Isthmus> TL;DR:
<Isthmus> There have been over 2500+ single-output transactions since 2017
<Isthmus> Single-output transactions (1OTXs) are a persistent intermittent phenomen
<Isthmus> *phenomena
<Isthmus> There was a surge of 1OTXs around height 1562000
<Isthmus> 1OTXs are observed to this day (data includes 2019)
<suraeNoether> hm
<sarang> Could be made consensus, which has been brought up before without any movement
<Isthmus> They're also linked to a lot of other nasty heuristics - odd ring sizes, fees that stick out by an order of magnitude, etc.
<sarang> that spike is crazy
<Isthmus> Yeah, epic churn event. Should be pretty easy to dissect and trace, but I left that as an exercise for the reader.
<endogenic> bet that was sgp
<sarang> All right, let's go to action items
<sarang> suraeNoether: ?
<sgp_> This should be consensus. Isthmus opened a GitHub issue, and I don't think anyone has voiced opposition to it
<luigi1111w> 2 output min is good
<sarang> absolutely
<Isthmus> Oh yea, if you have thoughts, leave them here: https://github.com/monero-project/monero/issues/5399
<Isthmus> Otherwise, good to move on
<charuto> could the spike on 1output transactions be somehow related to monero classic/original ? date seems to almost coincide.
<sarang> OK, in the interest of hitting our 1-hour target, I'll work up numbers for batch Lelantus verification at varying anonymity set sizes, and finish up some example code refactoring to complete
that project
<sarang> I'm also looking into how a new transaction type could be used to transition RingCT outputs to this
<sarang> Submission of the DLSAG paper is finally happening
<sarang> suraeNoether: ?
<suraeNoether> woops, sorry, my action items all revolve around getting my simulations done and some confusion tables pushed out
<sarang> great, thanks
<suraeNoether> after that i can go do other things
<sarang> Thanks to Isthmus for opening that issue and getting the conversation started again on 1-out txns
<Isthmus> 👍
<sarang> Any other final thoughts before we adjourn and return to general discussion?
<Isthmus> Also, interesting hypothesis @charuto - might be connected. XMC forked off at 1546000 and the 1OTX spike is around 1560000
<sarang> OK, we are now adjourned! Logs will be posted to the GitHub issue shortly. Thanks to everyone for attending; let the discussions continue
Post tags : Dev Diaries, Cryptography, Monero Research Lab
Stacks Project Blog
Since the last update we have added the following material:
1. A chapter on crystalline cohomology. This is based on a course I gave here at Columbia University. The idea, following a preprint by Bhargav and myself, is to develop crystalline cohomology
avoiding stratifications and linearizations. This was discussed here, here, and here. I’m rather happy with the discussion, at the end of the chapter, of the Frobenius action on cohomology. On
the other hand, some more work needs to be done to earlier parts of the chapter.
2. An example showing the category of p-adically complete abelian groups has kernels and cokernels but isn’t abelian, see Section Tag 07JQ.
3. Strong lifting property of smooth ring maps, see Lemma Tag 07K4.
4. Compact = perfect in D(R), see Proposition Tag 07LT.
5. Lifting perfect complexes through thickenings, see Lemma Tag 07LU.
6. A section on lifting algebra constructions from A/I to A, culminating in
□ Elkik’s result (as improved by others) that a smooth algebra over A/I can be lifted to a smooth algebra over A, see Proposition Tag 07M8.
□ Given B smooth over A and a section σ : B/IB —> A/I then there exists an etale ring map A —>A’ with A/I = A’/IA’ and a lift of σ to a section B ⊗ A’ —> A’, see Lemma Tag 07M7.
7. We added some more advanced material on Noetherian rings; in particular we added the following sections of the chapter More on Algebra:
8. You’re going to laugh, but we now finally have a proof of Nakayama’s lemma.
9. We started a chapter on Artin’s Axioms but it is currently almost empty.
10. We made some changes to the results produced by a tag lookup. This change is a big improvement, but I’m hoping for further improvements later this summer. Stay tuned!
11. We added some material on pushouts; for the moment we only look at pushouts where one of the morphisms is affine and the other is a thickening, see Section Tag 07RS for the case of schemes and
see Section Tag 07SW for the case of algebraic spaces.
12. Some quotients of schemes by etale equivalence relations are schemes, see Obtaining a scheme.
13. We added a chapter on limits of algebraic spaces. It contains absolute Noetherian approximation of quasi-compact and quasi-separated algebraic spaces due to David Rydh and independently
Conrad-Lieblich-Olsson, see Proposition Tag 07SU.
The last result mentioned will allow us to replicate many results for quasi-compact and quasi-separated algebraic spaces that we’ve already proven for schemes. Most of the results I am thinking of
are contained in David Rydh’s papers, where they are proven actually for algebraic stacks. I think there is some merit in the choice we’ve made to work through the material in algebraic spaces first,
namely, it becomes very clear as we work through this material how very close (qc + qs) algebraic spaces really are to (qc + qs) schemes.
Thanks to the Swedish Chef
Hopefully this doesn't violate any copyrights! Bork Bork Bork!
Given a class of algebraic varieties, it is reasonable to ask if there are only finitely many members defined over a given finite field. While this is clearly the case when the appropriate moduli functor is bounded, matters are often not so simple. For example, consider the case of abelian varieties of a given dimension g. There is no single moduli space parameterizing them; rather, for each integer d ≥ 1 there is a moduli space parameterizing abelian varieties of dimension g with a polarization of degree d. It is nevertheless possible to show (see [Z, Theorem 4.1], [Mi, Corollary 13.13]) that there are only finitely many abelian varieties over a given finite field, up to isomorphism. Another natural class of varieties where this difficulty arises is the case of K3 surfaces. As with abelian varieties, there is not a single moduli space but rather a moduli space for each even integer d ≥ 2, parametrizing K3 surfaces with a polarization of degree d.
Really, I should use German!
A property of the structure sheaf
Furthering the quest of making this the most technical blog with the highest abstraction level, I offer the following for your perusal.
Let (X, O_X) be a ringed space. For any open U and any f ∈ O_X(U) there is a largest open U_f where f is invertible. Then X is a locally ringed space if and only if
U = U_f ∪ U_{1-f}
for all U and f. Denoting j : U_f —> U the inclusion map, there is a natural map
O_U[1/f] —> j_*O_{U_f}
where the left hand side is the sheafification of the naive thing. If X is a scheme, then this map is an isomorphism of sheaves of rings on U. Furthermore, we can ask if every point of X has a
neighbourhood U such that
U is quasi-compact and a basis for the topology on U is given by the U_f
If X is a scheme this is true because we can take an affine neighbourhood. If U is quasi-affine (quasi-compact open in affine), then U also has this property, however, so this condition does not
characterize affine opens.
We ask the question: Do these three properties characterize schemes among ringed spaces? The answer is no, for example because we can take a Jacobson scheme (e.g., affine n-space over a field) and
throw out the nonclosed points. We can get around this issue by asking the question: Is the ringed topos of such an X equivalent to the ringed topos of a scheme? I think the answer is yes, but I
haven’t worked out all the details.
You can formulate each of the three properties in the setting of a ringed topos. (There are several variants of the third condition; we choose the strongest one.) An example would be the big Zariski
topos of a scheme.
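Incidentally, the forward direction of the first equivalence is a stalkwise observation: for x ∈ U the stalk O_{X,x} is local, and a section whose stalk is a unit is invertible on a neighbourhood of x, so

```latex
f_x \notin \mathfrak{m}_x \;\Longrightarrow\; f_x \in \mathcal{O}_{X,x}^{\times}
  \;\Longrightarrow\; x \in U_f,
\qquad
f_x \in \mathfrak{m}_x \;\Longrightarrow\; 1 - f_x \notin \mathfrak{m}_x
  \;\Longrightarrow\; x \in U_{1-f}.
```

Hence every point of U lies in U_f ∪ U_{1-f}.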
Flat is not enough
The title of this blog post is the opposite of this post. But don’t click through yet, because it may be more fun to read this one first.
I claim there exists a functor F on the category of schemes such that
1. F is a sheaf for the etale topology,
2. the diagonal of F is representable by schemes, and
3. there exists a scheme U and a surjective, finitely presented, flat morphism U —> F
but F is not an algebraic space. Namely, let k be a field of characteristic p > 0 and let k ⊂ k’ be a nontrivial finite purely inseparable extension. Define
F(S) = {f : S —> Spec(k), f factors through Spec(k’) etale locally on S}
It is easy to see that F satisfies (1). It satisfies (2) as F —> Spec(k) is a monomorphism. It satisfies (3) because U = Spec(k’) —> F works. But F is not an algebraic space, because if it were, then
F would be isomorphic to Spec(k) by Lemma Tag 06MG.
Ok, now go back and read the other blog post I linked to above. Conclusion: to get Artin’s result as stated in that blog post you definitively need to work with the fppf topology.
(Thanks to Bhargav for a discussion.)
Non zero-divisors
Following the example of xkcd I sometimes try to figure out what is the correct terminology by searching different spellings and observing the number of hits. I did this for variants on the phrase in
the title but I didn’t find the results convincing. (Google thinks of “-” and ” ” both as whitespace.)
“non zero divisor” 8,630 results
“non zero-divisor” 7,490 results
“non zerodivisor” 9,190 results
“nonzero divisor” 1,900 results
“nonzerodivisor” 9,560 results
“non zero divisor” 5K results
“non zero-divisor” 4K results
“non zerodivisor” 5K results
“nonzero divisor” 2K results
“nonzerodivisor” 61 results
“non zero divisor” 75,300 results
“non zero-divisor” 75,300 results
“non zerodivisor” no results found
“nonzero divisor” 6,490 results
“nonzerodivisor” 4,120 results
Yuhao Huang emailed to say he prefers “non zero-divisor”. I guess that is better and I’ll probably make a global change in the stacks project later today. Any objections or suggestions?
Update (3PM): I’ve decided to go with Jason’s suggestion, see here for changes.
Universal homeomorphisms
To set the stage, I first state a well known result. Namely, suppose that A ⊂ B is a ring extension such that \Spec(B) —> \Spec(A) is universally closed. Then A —> B is integral, i.e., every element
b of B satisfies a monic polynomial over A.
Now suppose that A ⊂ B is a ring extension such that Spec(B) —> Spec(A) is a universal homeomorphism. Then what kind of equation does every element b of B satisfy? The answer seems to be: there exist
p > 0 and elements a_1, a_2, … in A such that for each n > p we have
b^n + \sum_{i = 1}^{n} (-1)^i \binom{n}{i} a_i b^{n - i} = 0
This is a result of Reid, Roberts, Singh, see [1, equation 5.1]. These authors use weakly subintegral extension to indicate a A ⊂ B which is (a) integral, (b) induces a bijection on spectra, and (c)
purely inseparable extensions of residue fields. By the characterization of universal homeomorphisms of Lemma Tag 04DF this means that \Spec(B) —> \Spec(A) is a universal homeomorphism. By the same
token, if φ : A —> B is a ring map inducing a universal homeomorphism on spectra, then φ(A) ⊂ B is weakly subintegral.
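As a quick sanity check (a trivial case, not a claim from the paper): when b already lies in A, the displayed equations hold with a_i = b^i, by the binomial theorem:

```latex
b^n + \sum_{i=1}^{n} (-1)^i \binom{n}{i} b^i \, b^{n-i}
  = \sum_{i=0}^{n} \binom{n}{i} (-b)^i \, b^{n-i}
  = (b - b)^n = 0 \qquad (n \geq 1).
```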
[1] Reid, Les; Roberts, Leslie G., Singh, Balwant, On weak subintegrality, J. Pure Appl. Algebra 114 (1996), no. 1, 93–109.
This is a post with ideas I’ve considered over time regarding a comment system for the stacks project. The most important thing is that I don’t know what will work. I think initially we want
something where it is relatively easy to leave comments, where comments can be tracked (maybe by an rss feed), and where it isn’t too hard to work the comments back into the stacks project.
Structure that exists now:
• lots of tex files coded in a neurotic way so it is (relatively) easy to parse them
• the tags system (Cathy came up with this) which tracks mathematical results as they move around the tex files
• a query page for tags where looking up a tag 0123 returns the results page
• the results page contains links to the mathematical result in the corresponding pdf and moreover (a more recent feature) the latex code of the environment.
Cathy made some suggestions for what a comment system could look like: Besides the “results page” for each tag have a “comments page” where
• the address of the comments page is something like http://commentson0123 for direct access,
• on the query page you can choose to end up on the comments page or the results page for the tag,
• links between comments page and results page,
• Cathy’s ideas about the comments page are:
□ at the top of the page a link to the result in one of the pdfs
□ have the statement of the lemma/proposition/theorem/remark/exercise there
□ under this a small strip where if you click it expands to show you the proof
□ under this comments by visitors
□ to the right of the statement a small column with two lists:
1. the results that rely on this tag and
2. the results that are used in the proof of this tag.
I have the following thoughts on this:
• Go for functionality over looks,
• statements of theorems, etc are updated over time and the comments page should always have the current version.
• It may seem that we don’t really need both the results page and the comments page. But I think we do. I want the web addresses of the “results pages” to remain the same forever. These addresses
are supposed to be used if you want to give a direct url to a result in the stacks project, and they should not be used for more than a link to the result in the pdf and the statement. I think
that in the future all of the stacks project will be directly online (i.e., not in pdf form anymore) and then the “results page” may become a redirect (?) to the result in the project — so we
will need another page with the comments.
• I would like there to be a way for the maintainer of the project to check (or be notified) if there is a comment.
• I want there to be a very low threshold for leaving comments, so have minimal protection to comment spam
• Math rendering issues: This is a real problem and I don’t think it has been solved by mathjax. But eventually some software package will take care of this. We can in any case generate png from
latex code and put that up on the “comments page”.
Different ideas I have toyed and experimented with in the past
• Have a bug tracking system. What is good about this is that there is a standard system that works. My feeling is that it won't work because the average mathematician has never used or even looked
at a bug tracking system.
• Sign off system: Try to get people like Brian Conrad to sign off on tags: “I, Brian Conrad, declare this theorem is correct”. Again I think that this won’t work, yet, but I think it would be a
really cool thing to have in the future.
• Layers: Have different “layers” of comments, some historical, some references, some sign-off, some bug, etc. This could probably be managed simply by having different types of comments.
• Mailing list. I actually think this isn’t needed until I convince more people to be active on the stacks project
• Use blogging software with one page for each tag. I actually kind of like this idea, but I do not see how to make it work.
• Use wiki software with one page for each tag. I think this could actually work and be easier to implement than Cathy's above suggestion provided somebody knows how to set up wikis. My preference
would be a wiki which is file based, or a wiki which uses git to keep track of files, etc.
• Online latex editor for the stacks project
Common feature of all of these ideas: Use already existing, open source, software and just write scripts to interface the stacks project with this. One of my problems with this is that most of the
wiki and blog software I looked at will not allow automatic page generation/updating as far as I could tell.
If you have any ideas about this, please leave a comment or email me.
Summer projects
The spring semester has just ended here at Columbia. I assume for most of you similarly the summer will start soonish. Besides running an REU, supervising undergraduates, talking to graduate
students, going to conferences, and doing research, I plan to write about Artin’s axioms for algebraic spaces and algebraic stacks this summer.
What are your summer plans? Maybe you intend to work through some commutative algebra or algebraic geometry topic over the summer. In that case take a look at the exposition of this material in the
stacks project (if it is in there) and see if you can improve it. Or, if it isn’t there, write it up and send it over. If you want to coordinate with me, feel free to send me an email.
Another (more nerdy?) project is to devise an online comment system for the stacks project. It could be something simple (like a comment box on the lookup page) or something more serious such as a
wiki, blog, or bug-tracker, etc. If you are interested in creating something (and you have the skills to do it), please contact me about it.
Finally, I’ve wondered about having mirrors of the stacks project in (very) different locations. If you know what this entails and you are interested in running one, please contact me.
Revamped tag lookup
Before I get into the actual topic of this post, a plea. Please reference results in the stacks project by their tags (which are stable over time) and not in any other way. The tags system is
explained here and here and latex instructions can be found here.
Just today I changed the output of the result returned if you look up tags. See this sample output. You’ll see that currently the result of looking up a tag gives you the corresponding latex code.
Moreover, cross references in the proofs and statements are now hyperlinks to the corresponding lookup page. This means that you can quickly follow all the references in proofs of lemmas etc down to
the simplest possible lemmas.
Of course this isn’t perfect. In many cases the lemmas need more context and you’ll have to open the pdf to read more. Another complaint is that it is cumbersome to have to parse the latex. I think
having the latex code available for quick inspection is good in that people can edit locally and send their changes here. In the future I expect to have a variant page where the latex code is parsed
(ideally something like a png file with embedded links for cross references). A requirement is that lookup is fast! (Lookup is already not instantaneous, but I’m sure that is due to poor coding
skills of yours truly. I’m not convinced mathjax or anything like it is the answer: I don’t like the way it looks and I don’t like how long it takes to load.)
Please let me know suggestions, bugs (things that don’t work), etc.
For a lark I compiled a version of the stacks project with the XITS font package. (No, Max, not the zits font package!) You can do this for any of your papers by following the instructions in the user
guide that comes with the package. It kind of looks nice. For a sample take a look at the chapter stacks-introduction or the whole project.
Let me know what you think!
Unions of Function Return Types
Let's look at how unions of function return types behave.
Consider two functions idToUppercase and idToInt. The former accepts an object with a string id and converts it to uppercase, while the latter converts it into a number. Both functions are then added
to an array called funcs:
00:00 Let's take a look at how unions of function return types behave. We have an idToUppercase function, which takes in an id string and returns a string. idToInt basically takes in an id string
and returns a number. And then we have a set of functions, one of which returns a string, one of which returns a number. Then we have a resolveAll function,
00:19 which if we look at it, returns an array of either string or number by mapping over each function and then calling that function. If we look at the function here, you can see that the function
is resolved to an object, which has an id string on it, fantastic, and then returns string or number.
00:37 So finally, the result after we resolve all of this is an array of string or number. So if you think about it, when you have unions of functions, the parameters get
intersected together and the return types get unioned together.
00:55 So we're not getting never at the end of this function, we're getting a union of string or number. And of course, this makes sense, right? So if we were to just return, let's say, idToUppercase
or idToInt of obj and then return idToUppercase of obj, then this of course checks out
01:14 because the result would still be the same. It doesn't matter whether we're using a union or not, we still get the same behavior. And if I go back to all of this and I remove one of
those, if I remove idToInt from the functions, then of course this is just an array of strings because that's the only thing that can be produced from these functions.
01:33 So that's your mental model: when you have a union of functions, the parameters get intersected and the return types get unioned.
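The behavior described in the transcript can be sketched as follows. The lesson's original code is not shown here, so treat this as a reconstruction using the names mentioned (idToUppercase, idToInt, funcs, resolveAll):

```typescript
// Two functions with the same parameter shape but different return types.
const idToUppercase = (obj: { id: string }): string => obj.id.toUpperCase();
const idToInt = (obj: { id: string }): number => parseInt(obj.id, 10);

// An array whose element type is a union of the two function types.
const funcs: Array<typeof idToUppercase | typeof idToInt> = [idToUppercase, idToInt];

// Calling a member of the union: parameters get intersected (both accept
// { id: string }), return types get unioned -- so each call yields
// string | number, not never.
const resolveAll = (obj: { id: string }): Array<string | number> =>
  funcs.map((func) => func(obj));

console.log(resolveAll({ id: "7a" })); // one string and one number
```

Removing idToInt from funcs narrows the result to an array of strings, matching the last point in the transcript.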
@Generated("org.ejml.dense.row.decomposition.qr.QrUpdate_DDRM") public class QrUpdate_FDRM extends Object
The effects of adding and removing rows from the A matrix in a QR decomposition can be computed much faster than simply recomputing the whole decomposition. There are many real world situations where
this is useful. For example, when computing a rolling solution to the most recent N measurements.
Definitions: A ∈ ℜ^(m×n), m ≥ n, rank(A) = n, and A = QR, where Q ∈ ℜ^(m×m) is orthogonal and R ∈ ℜ^(m×n) is upper triangular.
** IMPORTANT USAGE NOTE ** If auto grow is set to true then the internal data structures will grow automatically to accommodate the matrices passed in. When adding elements to the decomposition the
matrices must have enough data elements to grow beforehand.
For more information see David S. Watkins, "Fundamentals of Matrix Computations" 2nd edition, pages 249-259. It is also possible to add and remove columns efficiently, but this is less common and is
not supported at this time.
• Constructor Summary
Does not predeclare data and it will autogrow.
Creates an update which can decompose matrices up to the specified size.
Creates an update which can decompose matrices up to the specified size.
• Method Summary
Adjusts the values of the Q and R matrices to take into account the effects of inserting a row to the 'A' matrix at the specified location.
Declares the internal data structures so that it can process matrices up to the specified size.
Adjusts the values of the Q and R matrices to take into account the effects of removing a row from the 'A' matrix at the specified location.
• Constructor Details
□ QrUpdate_FDRM
public QrUpdate_FDRM(int maxRows, int maxCols)
Creates an update which can decompose matrices up to the specified size. Autogrow is set to false.
□ QrUpdate_FDRM
public QrUpdate_FDRM(int maxRows, int maxCols, boolean autoGrow)
Creates an update which can decompose matrices up to the specified size. Autogrow is configurable.
□ QrUpdate_FDRM
public QrUpdate_FDRM()
Does not predeclare data and it will autogrow.
• Method Details
□ declareInternalData
public void declareInternalData(int maxRows, int maxCols)
Declares the internal data structures so that it can process matrices up to the specified size.
□ addRow
Adjusts the values of the Q and R matrices to take into account the effects of inserting a row to the 'A' matrix at the specified location. This operation requires about 6mn + O(n) flops.
If Q and/or R does not have enough data elements to grow then an exception is thrown.
The adjustment done is by computing a series of planar Givens rotations that make the adjusted R matrix upper triangular again. This is then used to modify the Q matrix.
Q - The Q matrix which is to be modified, must be big enough to grow. Must be n by n. Is modified.
R - The R matrix which is to be modified, must be big enough to grow. Must be m by n. Is modified.
row - The row being inserted. Not modified.
rowIndex - Which row index it is to be inserted at.
resizeR - Should the number of rows in R be changed? The additional rows are all zero.
□ deleteRow
Adjusts the values of the Q and R matrices to take into account the effects of removing a row from the 'A' matrix at the specified location. This operation requires about 6mn + O(n) flops.
The adjustment is done by computing a series of planar Givens rotations that make the removed row in Q equal to [1 0 ... 0].
Q - The Q matrix. Is modified.
R - The R matrix. Is modified.
rowIndex - Which index of the row that is being removed.
resizeR - should the shape of R be adjusted?
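Both addRow and deleteRow work by applying a series of planar Givens rotations. The following is not the EJML API; it is a self-contained sketch of the single building block those methods rely on: choosing rotation coefficients c and s so that rotating the pair (a, b) zeroes out b, which is how R is restored to upper triangular form one entry at a time.

```java
// Minimal illustration of one planar Givens rotation, the building block
// the addRow/deleteRow updates are made of. NOT the EJML API itself.
public class GivensDemo {
    // Rotate the 2-vector (a, b) so the second component becomes zero.
    // Returns {rotated first component, rotated second component}.
    static double[] rotate(double a, double b) {
        double r = Math.hypot(a, b);   // length of (a, b)
        double c = a / r;              // cos(theta)
        double s = b / r;              // sin(theta)
        return new double[] { c * a + s * b, -s * a + c * b };
    }

    public static void main(String[] args) {
        double[] out = rotate(3.0, 4.0);
        System.out.println(out[0]); // close to 5.0 (the norm of (3, 4))
        System.out.println(out[1]); // close to 0.0 (the eliminated entry)
    }
}
```

Inserting a row into A perturbs R by one row; sweeping rotations like this across the perturbed entries makes R upper triangular again, and the same rotations are accumulated into Q, which is why the update costs only about 6mn flops instead of a full redecomposition.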
[U|Ao100|84%|YAVP] Level cap so close, yet so far away
Finished this run a few days ago, might as well make a forum thread now.
I fully admit that I used the wiki to tell me the assembly for nanomanufacture ammo, along with a bunch of other info. Without two infinite-ammo guns this run would have been a lot harder.
Most notable bit of good RNG was getting the Enviroboots on level 20. Total fluid immunity takes a lot of the sting out of Ao100, though I kept my modded plasteel boots for speed. None of the other
uniques I found were even worth carrying. The most amusing RNG was when I found the Nuclear BFG on two levels in a row, long after it would be useful.
I was really frustrated when I did the math around level 80 and realized it was impossible to hit level 25 and get the medal for that; the most optimistic estimate put me midway through level 24
despite killing almost everything before that point. So I said screw it and just stairdived for the last 15 levels or so. I guess if I want that medal I have to either do it on nightmare or play 666,
neither of which sound like a good use of time.
DoomRL (0.9.9.7) roguelike post-mortem character dump
Dave, level 23 Hell Baron Colonel Scout,
completed 100 levels of torture on level 100 of Hell.
He survived 306053 turns and scored 1883646 points.
He played for 17 hours, 45 minutes and 38 seconds.
He was a man of Ultra-Violence!
He killed 3335 out of 3954 hellspawn. (84%)
He was an Angel of 100!
He saved himself 9 times.
-- Special levels --------------------------------------------
Levels generated : 0
Levels visited : 0
Levels completed : 0
-- Awards ----------------------------------------------------
UAC Star (gold cluster)
Experience Medal
Centurial Platinum Badge
-- Graveyard -------------------------------------------------
-- Statistics ------------------------------------------------
Health 143/100 Experience 638392/23
ToHit Ranged +4 ToHit Melee +4 ToDmg Ranged +2 ToDmg Melee +2
-- Traits ----------------------------------------------------
Class : Scout
Ironman (Level 5)
Finesse (Level 3)
Hellrunner (Level 3)
Son of a bitch (Level 2)
Eagle Eye (Level 2)
Juggler (Level 1)
Dodgemaster (Level 1)
Intuition (Level 2)
Whizkid (Level 2)
Triggerhappy (Level 1)
Cateye (Level 1)
-- Equipment -------------------------------------------------
[a] [ Armor ] red armor [6/6] (68%) (AP)
[b] [ Weapon ] nanomachic rocket launcher (6d6) (T1)
[c] [ Boots ] Enviroboots [0]
[d] [ Prepared ] shell box (x100)
-- Inventory -------------------------------------------------
[a] nanomachic plasma rifle (1d7)x6 (T1)
[b] hyperblaster (2d4)x3 [40/40] (T1)
[c] missile launcher (6d8) [4/4] (P2T3)
[d] nuclear BFG 9000 (10d6) [40/40] (P2)
[e] nuclear BFG 9000 (8d6) [40/40]
[f] Jackhammer (8d3)x3 [9/9]
[g] cerberus phaseshift armor [2/2] (100%) (P)
[h] energy shield [0/0] (100%)
[i] large med-pack
[j] large med-pack
[k] large med-pack
[l] large med-pack
[m] homing phase device
[n] homing phase device
[o] shockwave pack
[p] plasteel boots [4/4] (100%) (APT)
[q] power battery (x120)
-- Resistances -----------------------------------------------
Acid - internal 0% torso 0% feet 100%
Fire - internal 0% torso 25% feet 100%
-- Kills -----------------------------------------------------
233 former humans
299 former sergeants
357 former captains
154 imps
53 demons
446 lost souls
93 cacodemons
112 hell knights
331 barons of hell
183 arachnotrons
12 former commandos
48 pain elementals
249 revenants
200 mancubi
264 arch-viles
41 nightmare imps
84 nightmare cacodemons
73 nightmare demons
33 nightmare arachnotrons
3 nightmare arch-viles
6 elite former humans
7 elite former sergeants
4 elite former captains
3 elite former commandos
31 bruiser brothers
5 shamblers
3 lava elemental
6 agony elementals
2 Cyberdemons
-- History ---------------------------------------------------
On level 9 he assembled a tactical shotgun!
On level 15 he assembled a hyperblaster!
He left level 20 as soon as possible.
On level 21 he found the Enviroboots!
On level 24 he was targeted for extermination!
On level 24 he found the Cybernetic Armor!
He sounded the alarm on level 25!
On level 27 he found the Butcher's Cleaver!
On level 30 he found the Hellwave Pack!
On level 34 he encountered an armed nuke!
On level 38 he ran for his life from lava!
On level 41 he assembled a nanomanufacture ammo!
On level 41 he found the Trigun!
On level 43 he found the Anti-Freak Jackal!
On level 44 he was targeted for extermination!
On level 48 he found the Necroarmor!
On level 49 he stumbled into a nightmare arachnotron cave!
On level 50 he assembled a environmental boots!
On level 52 he was targeted for extermination!
On level 53 he ran for his life from lava!
On level 54 he assembled a cerberus armor!
On level 56 he found the Acid Spitter!
On level 59 he stumbled into a nightmare demon cave!
On level 61 he assembled a nanomanufacture ammo!
On level 62 he stumbled into a complex full of revenants!
On level 64 he assembled a assault rifle!
On level 66 he encountered an armed nuke!
On level 67 he found the Jackhammer!
On level 70 he stumbled into a nightmare cacodemon cave!
Level 72 was a hard nut to crack!
Level 80 was a hard nut to crack!
On level 84 he was targeted for extermination!
He left level 84 as soon as possible.
Level 85 was a hard nut to crack!
On level 86 he stumbled into a complex full of revenants!
On level 87 he found the Medical Powerarmor!
He left level 90 as soon as possible.
He left level 91 as soon as possible.
Level 94 was a hard nut to crack!
He left level 95 as soon as possible.
He left level 97 as soon as possible.
He left level 98 as soon as possible.
On level 100 he finally completed 100 levels of torture.
-- Messages --------------------------------------------------
You hear the scream of a freed soul! You hear the scream of a freed soul! You
hear the scream of a freed soul! There are stairs leading downward here.
Fire -- Choose target...
You see : out of vision
The missile hits the hell knight. There are stairs leading downward here.
Fire -- Choose target...
You see : out of vision
You hear the scream of a freed soul! You hear the scream of a freed soul!
There are stairs leading downward here.
Fire -- Choose target...
You see : out of vision
There are stairs leading downward here.
You did it! You completed 100 levels of DoomRL! You're the champion! Press
-- General ---------------------------------------------------
104 brave souls have ventured into Phobos:
67 of those were killed.
2 didn't read the thermonuclear bomb manual.
And 2 couldn't handle the stress and committed a stupid suicide.
33 souls destroyed the Mastermind...
1 sacrificed itself for the good of mankind.
26 killed the bitch and survived.
6 showed that it can outsmart Hell itself.
inverse trigonometric functions
Inverse Trigonometric Functions
Equations involving inverse trigonometric functions
Evaluate Inverse Trigonometric Functions
Inverse trigonometric functions
Graphs of Inverse Trigonometric Functions
AS Math 5.5 Inverse Trigonometric functions
Unit 2 Quiz 1: Inverse Trigonometric Functions
DEMO integration of inverse trigonometric functions
Domain Restrictions of Inverse Trigonometric Functions
Inverse Trigonometric Functions (G10 - 11.9)
Properties of the Graphs of Inverse Trigonometric Functions
Inverse Trigonometric functions (G10 - 11.9 - 2)
Unit 3| Lesson 30| Inverse Trigonometric Functions|Exit Ticket
Right Triangle Trig (solving for an angle)
Inverse Trig Function (HW)
Explore inverse trigonometric functions Worksheets by Grades
Explore Other Subject Worksheets for class 10
Explore printable inverse trigonometric functions worksheets for 10th Class
Inverse trigonometric functions worksheets for Class 10 are an essential resource for teachers looking to enhance their students' understanding of trigonometry in Math. These worksheets provide a
variety of problems that challenge students to apply their knowledge of inverse trigonometric functions, such as arcsine, arccosine, and arctangent, to solve real-world problems. With a focus on
Class 10 Math curriculum, these worksheets are designed to help students build a strong foundation in trigonometry, preparing them for more advanced mathematical concepts in the future. Teachers can
utilize these worksheets in their lesson plans, as homework assignments, or as supplementary material for students who need additional practice. Inverse trigonometric functions worksheets for Class
10 are a valuable tool for educators striving to improve their students' trigonometry skills.
Quizizz is an excellent platform that offers a wide range of resources, including inverse trigonometric functions worksheets for Class 10, to help teachers create engaging and interactive learning
experiences for their students. In addition to worksheets, Quizizz provides teachers with an extensive library of quizzes, games, and other educational content that can be easily customized to align
with their specific Class 10 Math curriculum. The platform also offers valuable analytics and reporting tools, allowing teachers to track their students' progress and identify areas where they may
need additional support. By incorporating Quizizz into their lesson plans, teachers can create a dynamic and engaging learning environment that fosters a deeper understanding of trigonometry and
other essential Math concepts for their Class 10 students.
How to Work With Lists In Haskell?
In Haskell, lists are a fundamental data structure used to store sequences of elements of the same type. The elements of a list are enclosed within square brackets and separated by commas. Lists are
immutable, meaning once created, their elements cannot be modified. There are various functions and operators available in Haskell to work with lists:
1. Creating Lists: An empty list: []. A list with elements: [1, 2, 3]. Using a range: [1..10] creates a list from 1 to 10.
2. Concatenating Lists: Using the ++ operator: [1, 2, 3] ++ [4, 5, 6] results in [1, 2, 3, 4, 5, 6].
3. Accessing Elements: Using indexing: [1, 2, 3] !! 0 refers to the element at index 0, which is 1.
4. List Operations: Adding an element to the beginning (cons operator): 1:[2, 3, 4] results in [1, 2, 3, 4]. Getting the length of a list: length [1, 2, 3] returns 3. Reversing a list: reverse [1,
2, 3] gives [3, 2, 1]. Checking if a list is empty: null [1, 2, 3] returns False.
5. List Comprehensions: List comprehensions provide a concise way to create or transform lists using generators and optional filters. For example: [x | x <- [1..10], even x] generates a list of even
numbers from 1 to 10, resulting in [2, 4, 6, 8, 10].
6. Pattern Matching on Lists: Using pattern matching, you can destructure and manipulate lists easily. For example:
sumList :: [Int] -> Int
sumList [] = 0
sumList (x:xs) = x + sumList xs
The function sumList recursively sums the elements of a list by pattern matching on its structure.
These are just some of the operations that can be performed on lists in Haskell. Working with lists is an essential skill for any Haskell programmer, and it allows for powerful list processing and
functional programming techniques.
How to check if an element is present in a list in Haskell?
In Haskell, you can check if an element is present in a list using the elem function. The elem function takes two arguments: the element you want to check for and the list to search in.
Here is an example:
isElementPresent :: (Eq a) => a -> [a] -> Bool
isElementPresent x xs = elem x xs
In this example, x is the element you want to check for, and xs is the list you want to search in. The function isElementPresent simply calls the elem function with these arguments.
You can use this function as follows:
main = do
    let myList = [1, 2, 3, 4, 5]
    print (isElementPresent 3 myList) -- Output: True
    print (isElementPresent 6 myList) -- Output: False
In this example, we have a list called myList. We call the isElementPresent function with 3 and myList as arguments to check if 3 is present in myList. The result will be True. We also check if 6 is
present in myList, and the result will be False.
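For intuition, elem itself can be written in the same pattern-matching style shown earlier in this article. This is a sketch for illustration, not the actual Prelude implementation:

```haskell
-- A hand-rolled version of elem, for intuition only.
myElem :: (Eq a) => a -> [a] -> Bool
myElem _ [] = False
myElem x (y:ys) = x == y || myElem x ys

main :: IO ()
main = do
  print (myElem 3 [1, 2, 3, 4, 5]) -- True
  print (myElem 6 [1, 2, 3, 4, 5]) -- False
```

An empty list contains nothing, and a non-empty list contains x if its head equals x or its tail contains x.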
What is the maximum function used for with lists in Haskell?
The maximum function in Haskell is used to find the largest element in a list. It takes a list of ordered items as input and returns the maximum element.
Here's the type signature of the maximum function:
maximum :: (Ord a) => [a] -> a
It takes a list of elements [a] and returns a single element a. The Ord a constraint ensures that the elements of the list can be compared in terms of ordering.
For example, if we have a list [4, 7, 2, 9, 1], the maximum function will return 9, as it is the largest element in the list.
How to find the minimum value in a list in Haskell?
In Haskell, you can use the minimum function to find the minimum value in a list. Here's an example:
minValue :: (Ord a) => [a] -> a
minValue = minimum
You can call this function by passing a list of values and it will return the minimum value. For example:
main :: IO ()
main = do
    let list = [5, 3, 8, 1, 6]
    putStrLn $ "Minimum value: " ++ show (minValue list)

-- Output: Minimum value: 1
Note that the Ord type constraint is required because the minimum function requires the values in the list to be orderable.
What is the tail function used for with lists in Haskell?
The tail function in Haskell is used to extract all but the first element from a list. It takes a list as input and returns a new list containing all the elements from the second element onwards.
The type signature of the tail function is as follows:
tail :: [a] -> [a]
For example, consider the following list:
list = [1, 2, 3, 4, 5]
Using the tail function on this list would yield:
tail list
-- Output: [2, 3, 4, 5]
Note that if the input list is empty or contains only one element, the tail function will return an empty list. For example:
tail [1]
-- Output: []

tail []
-- Output: []
How to drop elements from a list in Haskell?
In Haskell, we can drop elements from a list using the drop function. The drop function takes two arguments: the number of elements to drop and the list from which to drop the elements. Here's an
dropElements :: Int -> [a] -> [a]
dropElements n xs = drop n xs
In this example, n represents the number of elements to drop and xs is the list. The drop function will return a new list with the specified number of elements dropped from the front. If n is greater
than the length of the list, an empty list will be returned.
Here's how you can use the dropElements function:
main :: IO ()
main = do
    let myList = [1, 2, 3, 4, 5]
    let droppedList = dropElements 2 myList
    putStrLn $ show droppedList -- Output: [3, 4, 5]
In this example, we drop 2 elements from the list [1, 2, 3, 4, 5], resulting in the list [3, 4, 5].
What is the replicate function used for with lists in Haskell?
The replicate function in Haskell is used to create a new list by replicating a given element a specified number of times.
Its type signature is:
replicate :: Int -> a -> [a]
The Int argument specifies the number of times the element should be replicated, and the a argument represents the element itself. The function returns a list of a elements, where each element is a
copy of the given element.
For example:
replicate 3 'a'
This will return the list ['a', 'a', 'a'].
Calculate derivatives of vector of data — calc_deriv
Provided a vector of y values, this function returns either the plain or per-capita difference or derivative between sequential values
calc_deriv(
  y,
  x = NULL,
  return = "derivative",
  percapita = FALSE,
  x_scale = 1,
  blank = NULL,
  subset_by = NULL,
  window_width = NULL,
  window_width_n = NULL,
  window_width_frac = NULL,
  window_width_n_frac = NULL,
  trans_y = "linear",
  na.rm = TRUE,
  warn_ungrouped = TRUE,
  warn_logtransform_warnings = TRUE,
  warn_logtransform_infinite = TRUE,
  warn_window_toosmall = TRUE
)
y
Data to calculate difference or derivative of
x
Vector of x values provided as a simple numeric.
return
One of c("difference", "derivative") for whether the differences in y should be returned, or the derivative of y with respect to x
percapita
When percapita = TRUE, the per-capita difference or derivative is returned
x_scale
Numeric to scale x by in derivative calculation.
Set x_scale to the ratio of the units of x to the desired units. E.g. if x is in seconds, but the desired derivative is in units of /minute, set x_scale = 60 (since there are 60 seconds in 1 minute).
blank
y-value associated with a "blank" where the density is 0. Is required when percapita = TRUE.
If a vector of blank values is specified, blank values are assumed to be in the same order as unique(subset_by)
subset_by
An optional vector as long as y. y will be split by the unique values of this vector and the derivative for each group will be calculated independently of the others.
This provides an internally-implemented approach similar to group_by and mutate
window_width, window_width_n, window_width_frac, window_width_n_frac
Set how many data points are used to determine the slope at each point.
When all are NULL, calc_deriv calculates the difference or derivative of each point with the next point, appending NA at the end.
When one or multiple are specified, a linear regression is fit to all points in the window to determine the slope.
window_width_n specifies the width of the window in number of data points. window_width specifies the width of the window in units of x. window_width_n_frac specifies the width of the window as a
fraction of the total number of data points.
When using multiple window specifications at the same time, windows are conservative. Points included in each window will meet all of the window_width, window_width_n, and window_width_n_frac.
A value of window_width_n = 3 or window_width_n = 5 is often a good default.
trans_y
One of c("linear", "log") specifying the transformation of y-values.
'log' is only available when calculating per-capita derivatives using a fitting approach (when non-default values are specified for window_width or window_width_n).
For per-capita growth expected to be exponential or nearly-exponential, "log" is recommended, since exponential growth is linear when log-transformed. However, log-transformations must be used
with care, since y-values at or below 0 will become undefined and results will be more sensitive to incorrect values of blank.
na.rm
logical whether NA's should be removed before analyzing
warn_ungrouped
logical whether warning should be issued when calc_deriv is being called on ungrouped data and subset_by = NULL.
warn_logtransform_warnings
logical whether warning should be issued when log(y) produced warnings.
warn_logtransform_infinite
logical whether warning should be issued when log(y) produced infinite values that will be treated as NA.
warn_window_toosmall
logical whether warning should be issued when only one data point is in the window set by window_width_n, window_width, or window_width_n_frac, and so NA will be returned.
A vector of values for the plain (if percapita = FALSE) or per-capita (if percapita = TRUE) difference (if return = "difference") or derivative (if return = "derivative") between y values. Vector
will be the same length as y, with NA values at the ends
For per-capita derivatives, trans_y = 'linear' and trans_y = 'log' approach the same value as time resolution increases.
For instance, let's assume exponential growth \(N = e^{rt}\) with per-capita growth rate \(r\).
With trans_y = 'linear', note that \(dN/dt = r e^{rt} = r N\). So we can calculate per-capita growth rate as \(r = dN/dt * 1/N\).
With trans_y = 'log', note that \(log(N) = log(e^{rt}) = rt\). So we can calculate per-capita growth rate as the slope of a linear fit of \(log(N)\) against time, \(r = log(N)/t\).
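The convergence claim in the Details section can be checked numerically. The Python below is illustrative only (it mimics the two per-capita options described above with simple forward differences; it is not the gcplyr R implementation):

```python
import math

# Per-capita derivative, 'linear' style: (dN/dt) / N via forward differences.
def percap_linear(y, x):
    return [(y[i+1] - y[i]) / (x[i+1] - x[i]) / y[i] for i in range(len(y) - 1)]

# Per-capita derivative, 'log' style: slope of log(N) against t.
def percap_log(y, x):
    return [(math.log(y[i+1]) - math.log(y[i])) / (x[i+1] - x[i])
            for i in range(len(y) - 1)]

# Exponential growth N = e^(r t) with r = 0.3, sampled at fine time resolution.
r = 0.3
x = [i * 0.01 for i in range(200)]
y = [math.exp(r * t) for t in x]

print(percap_log(y, x)[0])     # ~0.3: exact for exponential data
print(percap_linear(y, x)[0])  # ~0.3: approaches r as the time step shrinks
```

For exponential data the log-transformed slope recovers r exactly, while the linear version overshoots slightly by a factor of order (r * dt / 2), vanishing as the time resolution increases.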
Doubly Linked List Introduction and Insertion | Linked List | Prepbytes
Last Updated on June 15, 2022 by Ria Pathak
The linked list is one of the most important concepts and data structures to learn while preparing for interviews. Having a good grasp of Linked Lists can be a huge plus point in a coding interview.
A doubly linked list is a list that contains links to the next and previous nodes. We know that in a singly linked list, we can only traverse forward. But, in a doubly linked list, we can traverse
both in a forward and backward manner.
Doubly Linked List Representation
Advantages over singly linked list
• A doubly linked list can be traversed both in the forward and backward directions.
• If the pointer to the node to be deleted is given, then the delete operation in a doubly-linked list is more efficient.
• Insertion of a new node before a given node is more efficient.
Disadvantages over singly linked list
• Extra space is required to store the previous pointer.
• All operations of a doubly-linked list require an extra previous pointer to be maintained. We have to modify the previous pointers together with the next pointer.
Insertion in a doubly-linked list
Insert at the beginning.
In this approach, the new node is always added at the start of the given doubly linked list, and the newly added node becomes the new head.
The approach is going to be very simple. We’ll make the next of the new node point to the head. Then, we will make the previous pointer of head point to the new node. Lastly, we will make the new
node the new head.
• Allocate the new node and put in the data
• Make the next of the new node as head and previous as NULL.
• If the head is not pointing to NULL, then change the previous pointer of the head node to the new node.
• Make the new node the new head.
Dry Run
Code Implementation
C:
void push(struct Node** head_ref, int new_data)
{
    struct Node* NewNode = (struct Node*)malloc(sizeof(struct Node));
    NewNode->data = new_data;
    NewNode->next = (*head_ref);
    NewNode->prev = NULL;
    if ((*head_ref) != NULL)
        (*head_ref)->prev = NewNode;
    (*head_ref) = NewNode;
}
Java:
public void push(int new_data)
{
    Node new_Node = new Node(new_data);
    new_Node.next = head;
    new_Node.prev = null;
    if (head != null)
        head.prev = new_Node;
    head = new_Node;
}
Python:
def push(self, new_data):
    new_node = Node(data=new_data)
    new_node.next = self.head
    new_node.prev = None
    if self.head is not None:
        self.head.prev = new_node
    self.head = new_node
Time Complexity: O(1), as no traversal is needed.
Space Complexity: O(1), as no extra space is required.
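The snippets above are fragments of a larger class. A minimal self-contained Python sketch (the Node and DoublyLinkedList class names are assumptions, since the article omits them) shows push in action:

```python
# Minimal self-contained version of the push() operation described above.
class Node:
    def __init__(self, data):
        self.data = data
        self.prev = None
        self.next = None

class DoublyLinkedList:
    def __init__(self):
        self.head = None

    def push(self, new_data):
        # insert at the beginning: link forward, fix old head's prev, move head
        new_node = Node(new_data)
        new_node.next = self.head
        new_node.prev = None
        if self.head is not None:
            self.head.prev = new_node
        self.head = new_node

    def to_list(self):
        out, cur = [], self.head
        while cur:
            out.append(cur.data)
            cur = cur.next
        return out

dll = DoublyLinkedList()
for x in (3, 2, 1):
    dll.push(x)
print(dll.to_list())  # each push lands at the front: [1, 2, 3]
```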
Insert at the end
In this approach, the new node will always be added after the last node of the given doubly linked list.
The approach is going to be very simple. Firstly, we have to traverse till the end of the list. After reaching the last node, we will make it point to the new node. After this step, we will make the
previous pointer of the new node point to the last node.
• Allocate the new node and put in the data.
• As the new node is going to be the last, so the next of this new node will be NULL.
• If the linked list is empty, then simply make the new node as the head node and return.
• Else, traverse till the end of the doubly linked list and make the next of the last node point to the new node.
• Lastly, make the prev of the new node point to the last node.
Code Implementation
C:
void append(struct Node** head_ref, int new_data)
{
    struct Node* NewNode = (struct Node*)malloc(sizeof(struct Node));
    struct Node* last = *head_ref;
    NewNode->data = new_data;
    NewNode->next = NULL;
    /* if the list is empty, the new node becomes the head */
    if (*head_ref == NULL) {
        NewNode->prev = NULL;
        *head_ref = NewNode;
        return;
    }
    while (last->next != NULL)
        last = last->next;
    last->next = NewNode;
    NewNode->prev = last;
}
Java:
void append(int new_data)
{
    Node new_node = new Node(new_data);
    Node last = head;
    new_node.next = null;
    // if the list is empty, the new node becomes the head
    if (head == null) {
        new_node.prev = null;
        head = new_node;
        return;
    }
    while (last.next != null)
        last = last.next;
    last.next = new_node;
    new_node.prev = last;
}
Python:
def append(self, new_data):
    new_node = Node(data=new_data)
    last = self.head
    new_node.next = None
    # if the list is empty, the new node becomes the head
    if self.head is None:
        new_node.prev = None
        self.head = new_node
        return
    while last.next is not None:
        last = last.next
    last.next = new_node
    new_node.prev = last
Time Complexity: O(n), as one traversal is needed.
Space Complexity: O(1), as no extra space is required apart from the newly created node itself.
Insert after a given node
In this approach, the new node will always be added after a given node.
The approach is going to be very simple. We are given a pointer to a node as prev_node. The new node is to be inserted after prev_node.
As we have the pointer to that node (prev_node), we can perform this operation in O(1) time. Firstly, we will create the new node. Now, we will make the new node point to the next of prev_node. By
doing this, we are making the new node point to the next node of prev_node.
Now, prev_node will point to the new node. We then just have to change the remaining links: we make prev_node the previous of the new node. Finally, if the new node is not the last node, we make the prev pointer of the new node's next node point to the new node.
• Allocate the new node and put in the data.
• Make the new node point to the next of prev_node.
• Now, make the prev_node point to the new node.
• The previous of the new node will point to the prev_node.
• If the new node is not the last node, make the previous pointer of its next node point to the new node.
• If the linked list is empty, then simply make the new node as the head node and return.
Dry Run
Code Implementation
C:
void insertAfter(struct Node* prev_node, int new_data)
{
    if (prev_node == NULL) {
        printf("the given previous node cannot be NULL");
        return;
    }
    struct Node* new_node = (struct Node*)malloc(sizeof(struct Node));
    new_node->data = new_data;
    new_node->next = prev_node->next;
    prev_node->next = new_node;
    new_node->prev = prev_node;
    if (new_node->next != NULL)
        new_node->next->prev = new_node;
}
Java:
public void InsertAfter(Node prev_Node, int new_data)
{
    if (prev_Node == null) {
        System.out.println("The given previous node cannot be NULL");
        return;
    }
    Node new_node = new Node(new_data);
    new_node.next = prev_Node.next;
    prev_Node.next = new_node;
    new_node.prev = prev_Node;
    if (new_node.next != null)
        new_node.next.prev = new_node;
}
Python:
def insertAfter(self, prev_node, new_data):
    if prev_node is None:
        print("the given previous node cannot be NULL")
        return
    new_node = Node(data=new_data)
    new_node.next = prev_node.next
    prev_node.next = new_node
    new_node.prev = prev_node
    if new_node.next is not None:
        new_node.next.prev = new_node
Time Complexity: O(1), as no traversal is needed.
Space Complexity: O(1), as no extra space is required.
Insert before a given node
In this approach, the new node will always be added before a given node.
The approach is going to be very simple. We are given a pointer to a node as next_node. The new node is to be inserted before next_node.
As we have the pointer to that node (next_node), we can perform this operation in O(1) time. Firstly, we will create the new node. Now, we will make the previous of the new node as the previous of
the next_node. By doing this, we are trying to add the new node in between the next_node and the previous node of the next node.
Now, the previous of next_node will point to the new node, and the next of the new node will point to next_node. Then, if the previous of the new node is not NULL, the next pointer of that previous node will point to the new node. By doing this, we are fixing the appropriate links.
If the previous of the new node is NULL, it means that the new node is the first node in the list, hence it will become our new head.
• Allocate the new node and put in the data.
• Make the previous of the new node point to the previous of the next node.
• The previous of the next_node will point to the new node.
• The new node will point to next_node.
• If the new node is not the head of the list, then the next of the previous of the new node will point to the new node. Else, the new node will become the head of the list.
Dry Run
Code Implementation
C:
void insertBefore(struct Node** head_ref, struct Node* next_node, int new_data)
{
    if (next_node == NULL) {
        printf("the given next node cannot be NULL");
        return;
    }
    struct Node* new_node = (struct Node*)malloc(sizeof(struct Node));
    new_node->data = new_data;
    new_node->prev = next_node->prev;
    next_node->prev = new_node;
    new_node->next = next_node;
    if (new_node->prev != NULL)
        new_node->prev->next = new_node;
    else
        (*head_ref) = new_node;
}
Java:
void insertBefore(Node next_node, int new_data)
{
    if (next_node == null) {
        System.out.println("the given next node cannot be null");
        return;
    }
    Node new_node = new Node(new_data);
    new_node.prev = next_node.prev;
    next_node.prev = new_node;
    new_node.next = next_node;
    if (new_node.prev != null)
        new_node.prev.next = new_node;
    else
        head = new_node;
}
Python:
def insertBefore(self, next_node, new_data):
    if next_node is None:
        print("the given next node cannot be NULL")
        return
    new_node = Node(new_data)
    new_node.prev = next_node.prev
    next_node.prev = new_node
    new_node.next = next_node
    if new_node.prev:
        new_node.prev.next = new_node
    else:
        self.head = new_node
Time Complexity: O(1), as no traversal is needed.
Space Complexity: O(1), as no extra space is required.
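Taken together, the three insertion routines can be exercised in one self-contained Python sketch. The Node and DoublyLinkedList class names and the snake_case method names are assumptions here, since the article's snippets are fragments of a larger class:

```python
# Combined check of append / insert-after / insert-before on one list.
class Node:
    def __init__(self, data):
        self.data = data
        self.prev = None
        self.next = None

class DoublyLinkedList:
    def __init__(self):
        self.head = None

    def append(self, new_data):
        new_node = Node(new_data)
        new_node.next = None
        if self.head is None:          # empty list: new node is the head
            new_node.prev = None
            self.head = new_node
            return
        last = self.head
        while last.next is not None:   # walk to the tail
            last = last.next
        last.next = new_node
        new_node.prev = last

    def insert_after(self, prev_node, new_data):
        if prev_node is None:
            return
        new_node = Node(new_data)
        new_node.next = prev_node.next
        prev_node.next = new_node
        new_node.prev = prev_node
        if new_node.next is not None:
            new_node.next.prev = new_node

    def insert_before(self, next_node, new_data):
        if next_node is None:
            return
        new_node = Node(new_data)
        new_node.prev = next_node.prev
        next_node.prev = new_node
        new_node.next = next_node
        if new_node.prev:
            new_node.prev.next = new_node
        else:                          # inserted before the head
            self.head = new_node

    def to_list(self):
        out, cur = [], self.head
        while cur:
            out.append(cur.data)
            cur = cur.next
        return out

dll = DoublyLinkedList()
dll.append(2)                        # [2]
dll.insert_before(dll.head, 1)       # [1, 2]
dll.insert_after(dll.head.next, 3)   # [1, 2, 3]
print(dll.to_list())  # [1, 2, 3]
```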
So, in this article, we have tried to explain what a doubly-linked list is, and its insertion operations. Doubly Linked List is a very important topic when it comes to Coding interviews. If you want
to solve more questions on Linked List, which are curated by our expert mentors at PrepBytes, you can follow this link Linked List.
| {"url":"https://www.prepbytes.com/blog/linked-list/doubly-linked-list-introduction-and-insertion/","timestamp":"2024-11-07T03:15:57Z","content_type":"text/html","content_length":"171690","record_id":"<urn:uuid:0a18cdf1-d59a-4017-b67f-b845aa2e8bfb>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00567.warc.gz"}
Principle of Increase of Entropy » Scienceeureka.com
Energy can be transformed from one form to another in a closed system. But this type of process is irreversible. The direction of an irreversible process is determined by a change in a special
property called the entropy of the system.
The analysis of a reversible process, using the second law of thermodynamics, leads to the concept of entropy. It is denoted by the letter S and is a property of all thermodynamic systems. It is
defined by the relation.
dS = dQ/T
Where, dQ = heat exchange of a system in a reversible process at temperature T and dS = corresponding change in entropy of the system.
From this definition, dQ = TdS. Using this relation in the first law of thermodynamics, we get,
dQ = dU + dW
or, TdS = dU + pdV
Now, we note that dQ and dW are quantities exchanged between the system and the surroundings in a process. So, they depend on whether the process is reversible or irreversible.
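As a concrete illustration (the ideal gas and all numerical values below are assumptions added here, not taken from the text): for an ideal gas held at constant temperature, dU = 0, so TdS = dU + pdV reduces to dS = p dV / T = nR dV / V, which integrates to ΔS = nR ln(V2/V1). A quick numerical check:

```python
import math

# Illustrative values (not from the text)
n, R, T = 1.0, 8.314, 300.0     # mol, J/(mol K), K
V1, V2 = 1.0, 2.0               # arbitrary initial and final volumes

# Integrate dS = p dV / T with p = nRT/V (ideal gas) by the midpoint rule
steps = 100000
dV = (V2 - V1) / steps
S = 0.0
for i in range(steps):
    V = V1 + (i + 0.5) * dV
    p = n * R * T / V
    S += p * dV / T

print(S, n * R * math.log(V2 / V1))  # both are approximately nR ln 2
```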
Principle of Increase of Entropy:
Thermodynamic analysis shows that in every real process in nature, the sum of the entropies of a system and its surroundings always increases. The opposite process, in which the sum of the entropies
decreases isn’t allowed in nature.
We may compare the situation with the law of conservation of energy (the first law of thermodynamics). This law states that the total energy of the universe is constant. It can never increase or
decrease. In analogy, the second law of thermodynamics states that the total entropy of the universe increases in every process; it can never decrease. This is known as the principle of increase of entropy.
This principle states that every process in nature occurs in such a direction that the total entropy of the universe increases. Alternatively, no process, in which the total entropy of the universe
decrease can occur in nature. | {"url":"https://scienceeureka.com/principle-of-increase-of-entropy/","timestamp":"2024-11-07T02:31:20Z","content_type":"text/html","content_length":"71547","record_id":"<urn:uuid:ceafe857-6286-4283-96fd-2d8e4e97876f>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00051.warc.gz"} |
Topics: Bundles
In General
* Idea: A generalization of fiber bundles, in which the condition of a local product structure is dropped.
$ Def: A triple (E, B, π), with E, B ∈ Top and π: E → B continuous and surjective; B is called the base space, and π the projection map.
$ Cross section: Given a bundle (E, B, π), a cross section is a map f : B → E, such that π \(\circ\) f = id[B].
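The section condition π ∘ f = id_B can be checked concretely on a toy trivial (product) bundle E = B × F with projection π(b, f) = b; the finite sets below are arbitrary illustrative choices, not part of the definition:

```python
# Toy trivial bundle: E = B x F, projection onto the base.
B = ["b1", "b2", "b3"]          # base space (arbitrary finite set)
F = [0, 1]                      # fibre
E = [(b, f) for b in B for f in F]

def pi(e):
    # projection map: forget the fibre coordinate
    return e[0]

def section(b):
    # a cross section picks one point of the fibre over each base point
    return (b, 0)

# pi composed with the section is the identity on B
print(all(pi(section(b)) == b for b in B))  # True
```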
> Other special types: see Wikipedia page.
Special Types
* Examples: The most common ones are fiber bundles (> see fiber bundles).
> Other special types: see Path [bundles over path spaces]; posets [bundles over posets]; sheaves.
Related Concepts
$ Bundle map: A continuous map f : E → F, where E and F are two bundles, which carries each fiber of E isomorphically onto a fiber of F.
> Other related concepts: see Fibrations.
Bundle Gerbe > s.a. Gerbe.
* Idea: Every bundle gerbe gives rise to a gerbe, and most of the well-known examples of gerbes are bundle gerbes.
@ General references: Murray JLMS(96)dg/94; Murray & Stevenson JLMS(00)m.DG/99; Bouwknegt et al CMP(02)ht/01 [K-theory]; Gawedzki & Reis JGP(04) [over connected compact simple Lie groups]; Murray
a0712-fs [intro].
@ In field theory: Carey et al RVMP(00)ht/97; Ekstrand & Mickelsson CMP(00)ht/99; Gomi ht/01 [Chern-Simons theory]; Carey et al CMP(05)m.DG/04 [Chern-Simons and Wess-Zumino-Witten theories]; Bunk
a2102 [in geometry, field theory, and quantisation, rev].
@ Geometry: Stevenson PhD(00)m.DG.
main page – abbreviations – journals – comments – other sites – acknowledgements
send feedback and suggestions to bombelli at olemiss.edu – modified 24 feb 2021 | {"url":"https://www.phy.olemiss.edu/~luca/Topics/b/bundle.html","timestamp":"2024-11-02T14:04:55Z","content_type":"text/html","content_length":"5735","record_id":"<urn:uuid:3ed1c7b1-908a-4fa2-887f-98866bd6859c>","cc-path":"CC-MAIN-2024-46/segments/1730477027714.37/warc/CC-MAIN-20241102133748-20241102163748-00408.warc.gz"} |
SciPost Submission Page
Universal Entanglement Dynamics following a Local Quench
by Romain Vasseur, Hubert Saleur
This is not the latest submitted version.
This Submission thread is now published as
Submission summary
Authors (as registered SciPost users): Romain Vasseur
Submission information
Preprint Link: http://arxiv.org/abs/1701.08866v2 (pdf)
Date submitted: 2017-05-09 02:00
Submitted by: Vasseur, Romain
Submitted to: SciPost Physics
Ontological classification
Academic field: Physics
• Condensed Matter Physics - Theory
Specialties: • Quantum Physics
Approach: Theoretical
We study the time dependence of the entanglement between two quantum wires after suddenly connecting them via tunneling through an impurity. The result at large times is given by the well known
formula $S(t) \approx {1\over 3}\ln {t}$. We show that the intermediate time regime can be described by a universal cross-over formula $S=F(tT_K)$, where $T_K$ is the crossover (Kondo) temperature:
the function $F$ describes the dynamical ``healing'' of the system at large times. We discuss how to obtain analytic information about $F$ in the case of an integrable quantum impurity problem using
the massless Form-Factors formalism for twist and boundary condition changing operators. Our results are confirmed by density matrix renormalization group calculations and exact free fermion
Author comments upon resubmission
Dear Editor,
We apologize for the delay in getting back to you: both authors were traveling until recently.
We are now happy to submit a modified version of our manuscript which, we hope, will be acceptable for publication.
First, we thank the referees for their careful reading of the manuscript and their constructive criticism.
Since the three referees seemed essentially in agreement about the qualities and weaknesses of our paper, we will take the liberty to paraphrase their comments rather than cite each of them in turn.
In a nutshell, the referees found the paper interesting and timely, found that it contained interesting results, but complained about the ``brutal'' way we obtained our analytical curve by performing
a seemingly arbitrary renormalization by a numerical factor ($4/3$) of the result of a form-factor calculation.
Before discussing this factor in detail, we would like to emphasize that we did not consider that the analytical calculation was the main point of the paper. Rather, we felt the most interesting
result was the existence of a well defined scaling function describing the crossover of the entanglement entropy from $S=0$ to the conformal asymptotic behavior $S\sim {c\over 3}\ln t$ after a local
quench involving a weakly coupled impurity. The existence of this scaling function is argued on general grounds in our paper, and amply verified by high quality computer simulations.
Coming to the analytical calculation presented in the paper, we also would like to emphasize that it is the {\bf only} calculation we are aware of that can produce any usable information about the
scaling curve for ${dS\over d\ln L}$. Perturbative approaches have been shown, in this kind of problem, to either converge extremely slowly (and thus to be unable to produce useful results), or to be
plagued with uncontrollable divergences. See for instance the paper arXiv:1305.1482 where some of these aspects are discussed in the related context of crossovers involving sizes instead of time.
Meanwhile, particle propagation pictures ``\`a la Cardy Calabrese'' do not seem to be able to recover the kind of fine structure present in the crossover either. Hence, we believe our calculation,
however unsatisfactory, has the merit of existing, and, by a strange coincidence which may well {\sl not be} a coincidence, does provide remarkably accurate results. This is why we decided to publish
it, despite the shortcomings mentioned by the referees.
Now the main shortcoming is that, at the order of form-factors expansion we are working, we get a satisfactory result only if we multiply the result of our calculation, by a numerical factor $4/3$.
While this may seem horribly ad hoc, there is in fact a rationale behind this. It originates from early calculations performed by F. Lesage and H. Saleur (J. Phys. A30 (1997) L457), themselves
inspired by calculations of F. Smirnov. In a nutshell, what happens in many problems involving ``massless form-factors'' or form-factors in the UV limit, is that a) the integrals over rapidities are
diverging and b) once these integrals are properly regulated, the expansion itself is divergent. The way to cure this second divergence is to focus on the {\sl ratio} of two quantities, for instance,
in the case of the paper by Lesage and Saleur, the ratio of a correlation function evaluated at finite value of the impurity coupling and the same correlation function evaluated in the conformal
limit. To put it schematically:
\[ R(T_B)\equiv \frac{\text{FF expansion of } \langle ...\rangle (T_B)}{\text{FF expansion of } \langle ...\rangle (\text{CFT limit})} = \text{well defined and convergent} \]
The trick used in the paper of Lesage and Saleur is thus to calculate $R(T_B)$ using form-factors, and then multiply the result by the (known) result in the conformal limit to obtain the searched for
result at finite $T_B$.
There are, to our knowledge, no strong results as to why this should work, especially because the form-factors involved in this kind of calculation are extremely complicated.
What we did in our paper is, in spirit at least, identical. Instead of multiplying by the numerical factor $4/3$, we could say that what we have done is perform a calculation of the ratio
\[ R(T_B)\equiv \frac{\text{FF expansion of } S(T_B)}{\text{FF expansion of } S(\text{CFT limit})} \]
and then multiplied by the known CFT result. In this case, the form-factors expansion of the entanglement is itself in fact well defined (at the price of taking a derivative wrt t). The numerator
goes as $S_{FF}\sim {1\over 4}\ln t$ at leading order, while $S_{CFT} \sim {1\over 3}\ln t$. Hence the overall ``renormalization'' by a factor $4/3$.
It would certainly be better to investigate in more detail the form-factors expansion, in order to see whether higher orders render this renormalization unnecessary indeed (by correcting the
denominator into ${1\over 4}\ln t$). But this is an extremely technical endeavor, and, as we pointed out already, we felt it was not the main point of the paper.
We have, in the present modified version, explained the ``historical'' origin of the renormalization, and toned down our claim of doing an analytical calculation some more, so as not to confuse the
reader into believing we accomplished more that we did. We have also, to answer the additional criticism of one referee, discussed a little more the behavior of the extra terms we had initially
discarded. We have added comments emphasizing how more difficult the time dependent case is, compared with the equilibrium cases we had studied so far. Despite one of the referees' suggestion, we
have preferred not to put the form-factors calculation in the main text, since a) it is not our main point and b) it is not that satisfactory anyhow. We have otherwise been happy to follow all the
minor changes suggested by the referees.
We hope our manuscript will be accepted for publication.
H. Saleur and R. Vasseur.
List of changes
1) We updated the abstract
2) We have explained the origin of the renormalization in detail in the main text and in appendix, and commented on the discarded terms
3) We made many minor changes and added references following the referees' recommendations
Current status:
Has been resubmitted
Reports on this Submission
Report #3 by Anonymous (Referee 6) on 2017-5-24 (Invited Report)
• Cite as: Anonymous, Report on arXiv:1701.08866v2, delivered 2017-05-24, doi: 10.21468/SciPost.Report.147
Interesting problem and attempt of an analytical calculation of entropy
4/3 deficiency in the form-factor calculation
The authors have provided an explanation for the 4/3-renormalization, based on earlier work, where similar behaviour was observed. The argument is essentially that the form-factor calculation could
only yield the ratio w.r.t the result in the conformal limit. Although there is no clear analytical justification behind this trick, it has also been used in earlier studies and miraculously yields
rather accurate results. Clearly, it would be nice to obtain a better understanding of the mechanism behind this trick. However, the authors have now given an honest account of the state of affairs
in the text, without overclaiming the utility of their method for the calculation of entropy. I believe that, despite the deficiency of the form-factor method, this is still a decent work with an
interesting result and I recommend the publication of the manuscript.
Requested changes
Typo in the definition of Renyi entropy in the beginning of paragraph before Eq. (4) is still there and should be corrected
answer to question
We thank the Referee for their useful comments and for recommending our work for publication in SciPost. Concerning the "typo in the definition of Renyi entropy in the beginning of paragraph before
Eq. (4)", we do not see a typo in this definition: would the referee prefer we define the Renyi entropy in a more standard way with a logarithm? We would be very grateful to the referee if he/she
could clarify what specific aspect of this definition is a "typo".
Report #2 by Anonymous (Referee 5) on 2017-5-21 (Invited Report)
• Cite as: Anonymous, Report on arXiv:1701.08866v2, delivered 2017-05-21, doi: 10.21468/SciPost.Report.142
The strengths of the paper are exactly the same as in my original report. The subject is timely, and interesting partial analytical results are provided on a difficult problem.
1) Several not so small terms are dropped to make the agreement with the numerics better, and the authors fail to mention that important fact in the main text. This problem should be fixed.
2) The "renormalization" by a factor 4/3 is still a weakness, but it is discussed clearly and honestly now.
In my previous report, which is quoted below, I requested the following changes:
1) Show in Figure 2 the full first order result, not just equation (7).
2) State more clearly in the main text what is shown in figure (2).
It is crucial that those two points be successfully addressed. Even though the manuscript is interesting, it lacks clarity in the present form.
Neither of the two changes has been made, I cannot recommend publication in the present form.
We are not talking about a minor detail here, but about the main statement of the paper. Equation (7), which is shown in figure 2, is the result of clear cherry-picking. The result to lowest order,
as defined in the last paragraph on page 4, is a sum of five terms. Four (!) of them are simply discarded, as admitted to shortly after equation (38), (39), (40) at the end of the appendix on page
15. The procedure, designed to get better agreement with the numerical data, is not acknowledged in the main text. Hiding this crucial piece of information in the appendix is of course unacceptable,
especially since the authors mention in the new version of the appendix that the discarded terms are not so small after all. [This is a separate issue from the "renormalization" by 4/3, the
discussion of this prescription is now satisfactory.]
As a result, several statements made in the main text are misleading. For example, the authors explain the various physical processes contributing to lowest order on page 4, name the next paragraph
"the result at lowest order", but show a result which is not the result at lowest order. At the beginning of the discussion several statements are also misleading, including again the use of "lowest
order" which does not mention the discarded terms.
I also have a few comments on the part of the appendix mentioned above:
"The only process which remains non-zero in the conformal limit is the sin term in (38): it is the
contribution we have used to obtain the curve on Fig. 2. The other contributions are extremely
tedious to evaluate numerically, because of the less favorable behavior of the integrals involved
that are naively divergent without additional regularization. We have checked however that,
while they do not add up to zero any longer, they remain small ($\lesssim$ 10%) and can be essentially
neglected for our purpose."
1) The integrals are convergent, and do not require additional regularization. Please clarify.
2) If the authors can guarantee the contributions are of the order of 10%, it means they are able to evaluate them numerically, at least to some reasonable degree of approximation. Therefore,
incorporating them in figure 2 as I requested in my previous report is possible. Otherwise please clarify, and mention these contributions in the main text.
3) 10% is not small, it is more than enough to significantly worsen the agreement shown in figure 2. Especially if these contributions happen to move the location of the maximum.
4) Is it 10% before, or after renormalization by 4/3?
Requested changes
The list of requested changes is exactly the same as in my previous report.
1) Show in figure 2 the full lowest order result, not just equation (7).
2) State clearly in the main text what is equation (7), and what is shown in figure 2. Failing to mention the discarded terms is not acceptable.
answer to question
reply to objection
Referee 2 is concerned about the "leading contribution" plotted in the main text being "cherry-picked" among other terms. We do agree with the referee that the proliferation of "subleading" first
order terms is one of the unsatisfactory aspects of this non-equilibrium massless Form Factor approach which would have to be clarified in future works. However, it is important to notice that the
massless Form Factor expansion is not controlled by a small parameter anyway (in sharp contrast with massive theories for which the FF program is much more controlled). In that sense, it is equally
(un)satisfactory to discard terms at first or second order in the expansion. Moreover, there is a clear sense in which the contribution we plot is the "leading" one, even though it is not the only
first-order term: it is the only term that does not vanish in the IR limit (and is clearly "dominant" in that limit). We also checked that the contribution of the other terms is roughly one order of
magnitude smaller. To be more accurate, we changed "lowest order" to "leading contribution" in the main text.
Incidentally, this contribution is the only integral that we were able to evaluate numerically in a satisfactory way (the other ones are less well-behaved in the IR, and/or are highly oscillatory),
which is why we did not include the other contributions in Fig. 2. For the few points where we were able to compute these other contributions, we checked that they are relatively small as mentioned
above, but are large enough to worsen the agreement with the numerical results. Although this is obviously unpleasant, the only way to clarify the situation will be to compute higher order
contributions, and to find a stable way to evaluate these contributions numerically. Given that the first order calculations were already quite involved, we defer these investigations to future work.
Contrary to what Referee 2 seems to imply, we feel we have been especially honest about this point from the first version of our paper. We commented on these issues only in the appendix as we thought
this would only be of interest to Form Factor experts, but following the referee’s recommendations, we were happy to add a few sentences in the main text as well. We also improved and clarified the
discussion in the appendix.
Regarding the specific comments:
“1) The integrals are convergent, and do not require additional regularization. Please clarify.”
Some of these integrals are divergent in the IR. We clarified this sentence and added the explicit form of the IR integrals to illustrate this.
“2) If the authors can guarantee the contributions are of the order of 10%, it means they are able to evaluate them numerically, at least to some reasonable degree of approximation. Therefore,
incorporating them in figure 2 as I requested in my previous report is possible. Otherwise please clarify, and mention these contributions in the main text.”
As we mentioned above, we were not able to evaluate these integrals in a satisfactory way, which is why we decided not to plot them. We were happy to clarify this point in the main text and appendix.
"3) 10% is not small, it is more than enough to significantly worsen the agreement shown in figure 2. Especially if these contributions happen to move the location of the maximum."
We agree, and as we clearly stated in the first version of our draft, these contributions do seem to worsen the agreement with the numerical results — even though we reiterate we could not find a way
to evaluate these integrals accurately. We added a sentence to emphasize this point in the main text. However, we feel that one order of magnitude is enough to justify calling eq 7 the ``leading'' contribution.
“4) Is it 10% before, or after renormalization by 4/3?”
10% refers to a relative error and is the same after and before renormalization.
Report #1 by Olalla Castro-Alvaredo (Referee 1) on 2017-5-10 (Invited Report)
• Cite as: Olalla Castro-Alvaredo, Report on arXiv:1701.08866v2, delivered 2017-05-10, doi: 10.21468/SciPost.Report.135
The strengths of the paper are the same as I had already pointed out in my original report. The results are timely and interesting and there is very good agreement between the non-trivial analytical
calculations and the numerics performed by the authors.
In my original report I had found one main weakness relating the re-normalisation factor that was introduced seemingly ad hoc to achieve good agreement between a form factor calculation, the CFT
prediction and the numerics. This weakness remains but the authors have made an effort to better explain where this prescription comes from and why it is expected to work.
The authors have engaged constructively with all my original comments and they have made an effort to explain the origin of their form factor calculation normalisation, while being honest about the
lack of a strong rational for this choice. I find both their answer and the changes they have introduced satisfactory. I consider that they have successfully addressed the points I had raised.
answer to question
We thank the Referee for their useful comments and for recommending our work for publication in SciPost. | {"url":"https://scipost.org/submissions/1701.08866v2/","timestamp":"2024-11-13T09:24:15Z","content_type":"text/html","content_length":"58902","record_id":"<urn:uuid:98c3f592-a62f-48d0-816a-d69e0375a7f4>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00014.warc.gz"} |
A Non-Linear Analysis of the Saturated MOSFET
Parent Category: 2018 HFE
By Dr. Alfred Grayzel
Abstract: The maximum power and the maximum efficiency at the maximum power have been derived for the MOSFET; when the MOSFET is in saturation over the entire cycle, using the square law theory.
Results are presented for the following three cases: the gate voltage is a sinusoid, a half sinusoid and a square wave. Results using the square law theory and bulk-charge theory for analysis of the
MOSFET are compared.
When a MOSFET is saturated, the drain current is not a function of the drain voltage; it is only a function of the gate voltage. The load impedance thus has no effect on the drain current. The MOSFET
amplifier can therefore be easily analyzed if the MOSFET is in saturation over the complete cycle. In this paper the n-type MOSFET amplifier is analyzed utilizing the non-linear equations that
describe the physics of the MOSFET, for the case where the MOSFET is in saturation for the entire cycle. A similar analysis is given in [1] for the JFET. For a given input excitation the drain
current can be calculated as a function of time and the Fourier components can be calculated. The output power and the efficiency can then be calculated for a given load resistance and DC bias
voltage. For the n-type MOSFET the maximum power and the maximum efficiency at the maximum power are presented for a given load resistance, when the gate voltage is a sinusoid, a half sinusoid and a
square wave. The maximum power and efficiency are compared for these three excitations.
The MOSFET: Sinusoidal Input Voltage Excitation -- Square Law Theory
There are two theories for obtaining a relationship for the drain current as a function of the gate voltage, when the MOSFET is in saturation; the square law theory and the bulk-charge theory. In the
following analysis the square law theory will be used as it is simpler and gives good insight into the performance of the MOSFET. The results will then be compared to the results using the
bulk-charge theory. I[dsat] is given by Eq. 1 when using the square law theory [2].
I[dsat] = (Zµ'[n]C[0]/2L)(V[g] - V[T])^2    (1)
where I[dsat] is the drain current when the MOSFET is in saturation, µ'[n] is the average mobility of the inversion layer carriers, C[0] is the oxide capacitance, Z is the width of the channel and L is its length.
The first case considered is the sinusoidal case where V[g] is given by Eq. 2.
V[g] = (V[gmax] + V[T])/2 + [(V[gmax] - V[T])/2]Cos(ωt)    (2)
V[g] has a maximum value of V[gmax] and a minimum value of V[T]. Substituting Eq. 2 into Eq. 1 yields:
I[dsat] = (Zµ'[n]C[0]/2L)[(V[gmax] - V[T])(1 + Cos(ωt))/2]^2    (3)
I[dss], the maximum saturated drain current, is given by Eq. 1 with V[g] set equal to V[gmax].
I[dss] = (Zµ'[n]C[0]/2L)(V[gmax] - V[T])^2    (4)
Dividing Eq. 3 by Eq. 4 yields Eq. 5.
I[dsat]/I[dss] = [(1 + Cos(ωt))/2]^2
             = 3/8 + (1/2)Cos(ωt) + (1/8)Cos(2ωt)    (5)
Eq. 5 yields a DC current I[0]I[dss], equal to (3/8)I[dss], and a current at the fundamental frequency I[1]I[dss], equal to I[dss]/2. The DC power is given by Eq. 6.
P[0] = I[0]I[dss]V[0] = (3/8)I[dss]V[0]    (6)
where V[0] is the DC bias voltage. The output power at the fundamental frequency, P[1], is given by Eq. 7.
P[1] = (I[dss]I[1])^2 R[L]/2    (7)
Figure 1 • I[dsat]/I[dss] as a function of theta computed using Eq. 5 and twelve cases analyzed by computer program.
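The Fourier coefficients quoted for Eq. 5 can be verified numerically; the short script below (an editorial sketch, not part of the original analysis; the grid size is an arbitrary choice) computes them by averaging over one cycle:

```python
import numpy as np

# Numerical check of Eq. 5: Idsat/Idss = [(1 + cos(wt))/2]^2 should have
# DC component I0 = 3/8, fundamental amplitude I1 = 1/2, second harmonic 1/8.
theta = np.linspace(0.0, 2.0 * np.pi, 100_000, endpoint=False)
ratio = ((1.0 + np.cos(theta)) / 2.0) ** 2

I0 = np.mean(ratio)                            # DC (average) component
I1 = 2.0 * np.mean(ratio * np.cos(theta))      # fundamental cosine amplitude
I2 = 2.0 * np.mean(ratio * np.cos(2 * theta))  # second-harmonic amplitude
print(I0, I1, I2)  # ≈ 0.375, 0.5, 0.125
```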
The drain current is not a function of the drain voltage when the MOSFET is in saturation; it is only a function of the gate voltage and it therefore, is not affected by the load impedance. Since
only the fundamental frequency is desired at the output, the load impedance should present a short-circuit at all of the harmonic frequencies. Only voltage at the fundamental frequency will then
appear across the drain and across the load resistor. Since for a MOSFET, V[dsat] is equal to (V[g] – V[T]) [2], the constraint that the MOSFET is in saturation over the entire cycle and the harmonics are short-circuited to ground requires that the amplitude of the fundamental frequency can be no greater than V[0] – (V[gmax] - V[T]). The minimum value of the drain voltage is then equal to (V[gmax] - V[T]), which is the saturation voltage when the gate voltage is equal to V[gmax], and the maximum value is equal to 2V[0] – (V[gmax] - V[T]). Since the drain current flowing through the load resistor is sinusoidal and has a value I[1]I[dss], the load resistance is given by Eq. 8.
R[L] = (V[0] - (V[gmax] - V[T]))/(I[1]I[dss])    (8)
Solving Eq. 8 for V[0] yields Eq. 9.
V[0] = (I[1]I[dss])R[L] + (V[gmax] - V[T])    (9)
P[1] = (I[1]I[dss])^2 R[L]/2 = (I[1]I[dss])(V[0] - (V[gmax] - V[T]))/2    (10)
The output power for the sinusoidal case is thus given by Eq. 11.
P[1] = .25(I[dss])(V[0] - (V[gmax] - V[T]))    (11)
The efficiency is given by Eq. 12:
Eff = P[1]/P[0] = EFF0(V[0] – (V[gmax] - V[T]))/V[0] = (I[1]/2I[0])(V[0] – (V[gmax] - V[T]))/V[0]    (12)
where EFF0 is equal to I[1]/2I[0], which is equal to 2/3 for the sinusoidal case.
Computer Analysis
To compare the accuracy of the square law theory with that of the bulk–charge theory a simple program was written where the following instructions were performed.
1. Read from an input file V[gmax], X0, NA, Z/L and µ'[n], where V[gmax] is the maximum value of the gate voltage, X0 is the thickness of the oxide layer, NA is the doping concentration of the p-type semiconductor, Z/L is the ratio of the width to the length of the channel and µ'[n] is the effective mobility in the channel.
2. Using Eq. 13, find V[dsat], the drain voltage at pinchoff when the gate voltage V[g] is equal to V[gmax]. The equations and definitions for the functions appearing in Eq. 13 are given in [3].
V[dsat] = V[g] – V[T] – V[W]{[(V[g] – V[T])/2φ[F] + (1 + V[W]/4φ[F])^2]^(1/2) - (1 + V[W]/4φ[F])}    (13)
3. Using Eq. 14, find I[dss] by setting the gate voltage V[g] equal to V[gmax] [4].
I[dsat] = (Z/L)(µ'[n])C[0]{(V[g] – V[T])V[dsat] - V[dsat]^2/2 - (4V[W]φ[F]/3)[(1 + V[dsat]/2φ[F])^(3/2) – (1 + 3V[dsat]/4φ[F])]}    (14)
4. Define 1,440 points by incrementing theta from 0 to 360 degrees in increments of 360/1440 degrees.
5. At each of these points calculate V[g], given by Eq. 2, and using these values of V[g], calculate I[dsat] at these 1,440 points using Eq. 14.
6. Calculate I[dsat]/I[dss] at these 1,440 points and plot I[dsat]/I[dss] in Figure 1.
7. Calculate the Fourier coefficients I[0] and I[1] using Simpson's rule.
8. Calculate EFF0 = I[1]/2I[0].
The results are shown for twelve cases in Table 1. The values of EFF0 in Table 1 are within 0.5% of the value calculated using Eq. 12. Figure 1 shows plots of I[dsat]/I[dss] as a function of THETA for all twelve cases. I[dsat]/I[dss] calculated using Eq. 5 is also plotted in Figure 1. The curves are so close to one another that they appear as a thickened curve. Thus, the square law theory is in very good agreement with the bulk-charge theory for the calculation of I[dsat]/I[dss]. The two theories differ, however, in the values of I[dss] and V[dsat] [5].
Table 1.
For a given value of R[L] increasing V[0] does not increase the output power since the drain current is saturated. It will however increase the D.C. power and hence reduce the efficiency. Decreasing
V[0] on the other hand will decrease the output power since for part of the cycle the drain current will be less than the saturation current. For a given value of R[L], Eqs. 9, 10 and 12 give the
maximum power and the efficiency at the maximum power. The power and efficiency will be increased by increasing R[L] and therefore the load resistance should be as large as possible without exceeding
the maximum allowed drain voltage.
The MOSFET: The Half Sinusoidal Input Voltage Excitation
The second case to be considered is the half sinusoidal case, where for half of the cycle V[g] is a sinusoid with a maximum value of V[gmax] and a minimum of V[T], and for the other half of the cycle V[g] is equal to V[T]. V[g] is given by Eq. 15.
V[g] = V[T] + (V[gmax] - V[T])Cos(ωt),    -90° < ωt < 90°
V[g] = V[T],    otherwise    (15)
Substituting V[g] into Eq. 1 and dividing by Eq. 4 yields
I[dsat]/I[dss] = Cos^2(ωt),    -90° < ωt < 90°
I[dsat]/I[dss] = 0,    otherwise    (16)
I[dsat]/I[dss] is then equal to the product of Cos^2(ωt) and a square wave of unit magnitude centered at ωt = 0. Using the Fourier expansion of the square wave and the trigonometric identity for Cos^2(ωt), I[dsat]/I[dss] can be written as:
I[dsat]/I[dss] = .5(1 + Cos(2ωt))(.5 + (2/π)(Cos(ωt) – Cos(3ωt)/3 + Cos(5ωt)/5 - ...))    (17)
Multiplying the two factors and using the trigonometric identity for the product of two cosines yields Eq. 18.
I[dsat]/I[dss] = .25 + (4/3π)Cos(ωt) + (.25)Cos(2ωt) + (4/15π)Cos(3ωt) + …    (18)
It can be seen from Eq. 18 that I[1] is equal to 4/3π and I[0] is equal to .25. The DC power is given by Eq. 19.
P[0] = I[0]I[dss]V[0] = I[dss]V[0]/4    (19)
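As an editorial numerical cross-check (not part of the original analysis; the grid size is an arbitrary choice), the Fourier coefficients of the half-sinusoid waveform can be computed directly by averaging over one cycle:

```python
import numpy as np

# Half-sinusoid drive of Eq. 16: Idsat/Idss = cos^2(wt) for |wt| < 90°, else 0.
theta = np.linspace(-np.pi, np.pi, 200_000, endpoint=False)
ratio = np.where(np.abs(theta) < np.pi / 2, np.cos(theta) ** 2, 0.0)

I0 = np.mean(ratio)                            # DC component
a1 = 2.0 * np.mean(ratio * np.cos(theta))      # fundamental amplitude
a2 = 2.0 * np.mean(ratio * np.cos(2 * theta))  # second harmonic
a3 = 2.0 * np.mean(ratio * np.cos(3 * theta))  # third harmonic
print(I0, a1, a2, a3)  # ≈ 0.25, 0.4244 (= 4/3π), 0.25, 0.0849 (= 4/15π)
```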
The drain current is not a function of the drain voltage when the MOSFET is in saturation; it is only a function of the gate voltage and it therefore is not affected by the load impedance. Since only the fundamental frequency is desired at the output, the load impedance should present a short-circuit at all of the harmonic frequencies. Only voltage at the fundamental frequency will then appear across the drain and the load resistor. For a given value of V[0] the load resistance R[L] is given by Eq. 8. The output power is given by Eq. 10, which for I[1] = 4/3π is given by Eq. 20.
P[1] = I[dss](4/3π)(V[0] - (V[gmax] - V[T]))/2 = I[dss](.212)(V[0] - (V[gmax] - V[T]))    (20)
The efficiency, equal to P[1]/P[0] where I[1] = 4/3π and I[0] = .25, is given by Eq. 21.
Eff = EFF0(V[0] - (V[gmax] - V[T]))/V[0] = (.849)(V[0] - (V[gmax] - V[T]))/V[0]    (21)
Comparison of Eqs. 20 and 21 with Eqs. 11 and 12 shows that the half sine wave input excitation will give greater efficiency than the sine wave excitation but less output power.
The MOSFET: Square Wave Input Voltage Excitation
The third case to be considered is a square wave excitation, where for half of the cycle V[g] is equal to V[gmax] and for the other half of the cycle V[g] is equal to V[T]. V[g] is given by Eq. 22.
V[g] = V[gmax],    -90° < ωt < 90°
V[g] = V[T],    otherwise    (22)
Substituting V[g] into Eq. 1 and dividing by Eq. 4 yields:
I[dsat]/I[dss] = 1,    -90° < ωt < 90°
I[dsat]/I[dss] = 0,    otherwise    (23)
I[dsat]/I[dss] is a square wave of unity magnitude, centered at ωt equal to zero, whose Fourier series is given by Eq. 24.
I[dsat]/I[dss] = .5 + (2/π)Cos(ωt) – (2/3π)Cos(3ωt) + (2/5π)Cos(5ωt) - …    (24)
It can be seen from Eq. 24 that I[1] is equal to 2/π and I[0 ]is equal to .5.
The drain current is not a function of the drain voltage when the MOSFET is in saturation; it is only a function of the gate voltage and it therefore is not affected by the load impedance. Since only the fundamental frequency is desired at the output, the load impedance should present a short-circuit at all of the harmonic frequencies. Only voltage at the fundamental frequency will then appear across the load resistor. For a given value of V[0] the load resistance R[L] is given by Eq. 8. The output power is given by Eq. 10, which for I[1] = 2/π is given by Eq. 25.
P[1] = I[dss](2/π)(V[0] - (V[gmax] - V[T]))/2 = (.318)I[dss](V[0] - (V[gmax] - V[T]))    (25)
The efficiency, equal to P[1]/P[0], is given by Eq. 26 for I[1] = 2/π and I[0] = .5.
Eff = [I[1]/(2I[0])][(V[0] - (V[gmax] - V[T]))/V[0]] = (.636)(V[0] - (V[gmax] - V[T]))/V[0]    (26)
Comparison of the values of EFF0 and P[1 ]for the sine wave, the half sine wave and square wave cases show that the square wave input excitation gives the greatest output power while the half sine
wave input excitation gives the greatest efficiency.
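The comparison above can be reproduced numerically; the sketch below (editorial, with an arbitrary grid size) computes I[0], I[1] and EFF0 for all three excitations:

```python
import numpy as np

def fourier_I0_I1(ratio, theta):
    """DC component I0 and fundamental cosine amplitude I1 of Idsat/Idss."""
    return np.mean(ratio), 2.0 * np.mean(ratio * np.cos(theta))

theta = np.linspace(-np.pi, np.pi, 200_000, endpoint=False)
gate = np.abs(theta) < np.pi / 2  # half of the cycle where the device conducts

cases = {
    "sine":      ((1.0 + np.cos(theta)) / 2.0) ** 2,
    "half sine": np.where(gate, np.cos(theta) ** 2, 0.0),
    "square":    np.where(gate, 1.0, 0.0),
}
for name, ratio in cases.items():
    I0, I1 = fourier_I0_I1(ratio, theta)
    print(f"{name:9s}  I0 = {I0:.3f}  I1 = {I1:.3f}  EFF0 = {I1 / (2 * I0):.3f}")
# sine: EFF0 ≈ 0.667, half sine: EFF0 ≈ 0.849, square: EFF0 ≈ 0.637
```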
The maximum power and the maximum efficiency at the maximum power have been derived for the MOSFET amplifier when the MOSFET is in saturation over the entire cycle. Three cases were considered: the gate voltage is a sinusoid, a half sinusoid and a square wave. The MOSFET was analyzed using the square law theory. The results were compared to the results using the bulk-charge theory, and it was found that the square law theory is in very good agreement with the bulk-charge theory for I[dsat]/I[dss]. The two theories differ in the value of I[dss] and in the value of V[dsat]. Equations are given for the maximum output power and for the efficiency, load resistance and DC voltage at the maximum output power.
About the Author
Dr. Alfred Grayzel is a consultant to Planar Monolithic Industries, Inc.
The author wishes to acknowledge the support given by Dr. Ashok Gorwara, CEO of Planar Monolithics Industries, Inc. (PMI), and the support provided by the staff of PMI.
[1] A. I. Grayzel, “Analyze RF JFETS for Large-Signal Behavior,” Microwaves&RF, February 2017, pp. 50-55. (Available at: pmi-rf.com/tech-papers.htm)
[2] R. F. Pierret, Semiconductor Device Fundamentals, Pearson Education Noida, India, 2006, page 623.
[3] Ibid pp. 573, 579, 587, 617, 626.
[4] Ibid page 626.
[5] Ibid page 627
Estimating Pi…
We will be exploring a very interesting, and simple for that matter, application of statistics to help us estimate the value of Pi.
For this method, we will be imagining a simple scenario. Imagine for a moment that we have the unit circle inside a square.
Unit circle inside a square
By having the unit circle, we immediately figure out that the area of the square will be four, since the radius of the circle is defined as one, which means that our square will have sides with a value of two. Now here's where things get interesting: if we take the ratio of the two areas, we end up getting the following:
\(\frac{{{A_c}}}{{{A_s}}} = \frac{{\pi {r^2}}}{{{{\left( {2r} \right)}^2}}} = \frac{\pi }{4}\)
Both of these geometric figures end up having a ratio of pi over four between them, which is an important value for our next step in which we use a bit of imagination.
from “Ordine e disordine” by Luciano De Crescenzo
For a moment, imagine that you have a circle inside a square on the ground; suppose it starts raining. Some drops will most likely fall inside the circle and others will likely fall inside the square but outside the circle. Using this concept is how we will code our pi estimator: we throw random points (x, y) and check whether each lands inside the circle, i.e. whether it satisfies the unit circle equation x^2 + y^2 < 1.
Furthermore, taking the ratio of the number of throws that landed inside our circle to the total number of throws, we can then formulate the following:
\(\frac{N_{\text{throws inside circle}}}{N_{\text{total throws}}} = \frac{N_C}{N_T}\)
And by combining our ratio between the unit circle with the square with this new equation, we can assume the next equation
\(\frac{{{N_C}}}{{{N_T}}} \simeq \frac{\pi }{4} \Leftrightarrow \frac{{4{N_C}}}{{{N_T}}} \simeq \pi \)
With this equation, we can finally start coding our estimator and see how close we can get to pi’s actual value.
The code
import matplotlib.pyplot as plt
import numpy as np
import time as t
from progress.bar import Bar
tin = t.time()
n = 100_000  # number of points in the simulation
# x and y coordinate vectors of the random points
x = np.random.rand(n)
y = np.random.rand(n)
Pi_Greco = np.zeros(n)
# Distance-from-origin vector
d = (x**2 + y**2)**(1/2)
Ps = 0  # points that landed inside the circle
Pq = 0  # total points thrown
bar = Bar('Processing', max=n)
for i in range(n):
    Pq = Pq + 1
    if d[i] < 1:
        Ps = Ps + 1
    Pi_Greco[i] = 4*(Ps/Pq)
    bar.next()
bar.finish()
Pi_Greco_Reale = np.ones(n)*np.pi
tfin = t.time()
print('Estimated value of Pi:', 4*Ps/Pq)
print('elapsed time:', round(tfin - tin, 3))
plt.plot(Pi_Greco, 'red', label='estimate of Pi')
plt.plot(Pi_Greco_Reale, 'green', label='Exact value')
plt.title('Monte Carlo simulation')
plt.legend()
plt.show()
Results for our pi estimation. Pi is represented by the green line, while red represents our estimations.
It's quite interesting to see our estimation start with low accuracy, but as we increase our attempts, we start to get convergence on the value of pi, as shown by our green line. Statistical methods like this, and other more complex versions, are nice tools to understand and experiment with in the world of physics. I highly recommend taking a look into some Statistical Mechanics concepts to see the beauty behind the application of statistics and probability in physics, and maybe take some time to play with these concepts in Python!
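As a side note, the same estimator can be written without the explicit loop, which NumPy makes both shorter and much faster (a sketch; the seed and sample count are arbitrary choices):

```python
import numpy as np

# Vectorized version of the estimator above: no Python loop needed.
rng = np.random.default_rng(0)  # fixed seed for reproducibility
n = 1_000_000
x, y = rng.random(n), rng.random(n)
pi_estimate = 4.0 * np.mean(x**2 + y**2 < 1.0)
print(pi_estimate)  # a value close to 3.1416; the error shrinks like 1/sqrt(n)
```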
Gamma Matrices – Properties
Gamma matrices have a number of important mathematical properties that are used in the study of quantum field theory. Here are some of the most important:
1. Anticommutation relation: The product of any two gamma matrices is equal to the negative of the product of the same two matrices in the opposite order:
$\{\gamma^{\mu},\gamma^{\nu}\} = 2g^{\mu\nu}I_4$
where $g^{\mu\nu}$ is the metric tensor and $I_4$ is the 4×4 identity matrix.
2. Hermiticity: In the Dirac representation, $\gamma^0$ is Hermitian while the spatial gamma matrices are anti-Hermitian; compactly,
$\gamma^{\mu \dagger} = \gamma^0 \gamma^{\mu} \gamma^0$
3. Chirality: The chirality matrix $\gamma^5$ is defined as the imaginary unit times the product of all four gamma matrices:
$\gamma^5 = i\gamma^0\gamma^1\gamma^2\gamma^3$, which satisfies $(\gamma^5)^2 = I_4$ and anticommutes with each $\gamma^{\mu}$.
4. Trace: The trace of any product of an odd number of gamma matrices is zero; in particular each gamma matrix is traceless, $tr(\gamma^{\mu}) = 0$, while for two matrices
$tr(\gamma^{\mu} \gamma^{\nu}) = 4g^{\mu\nu}$
These properties of gamma matrices are important in understanding the mathematical structure of quantum field theory and spin-1/2 particles such as electrons.
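As a concreteness check, the properties above can be verified numerically in the Dirac representation (a sketch using NumPy; the choice of representation is ours, since the list above does not fix one):

```python
import numpy as np

# Dirac-representation gamma matrices built from the Pauli matrices.
I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def block(a, b, c, d):
    return np.block([[a, b], [c, d]])

g = [block(I2, 0 * I2, 0 * I2, -I2)]                       # gamma^0
g += [block(0 * I2, s, -s, 0 * I2) for s in (sx, sy, sz)]  # gamma^1..3
eta = np.diag([1, -1, -1, -1]).astype(complex)             # metric (+,-,-,-)

# Anticommutation relation: {gamma^mu, gamma^nu} = 2 g^{mu nu} I_4
for mu in range(4):
    for nu in range(4):
        anti = g[mu] @ g[nu] + g[nu] @ g[mu]
        assert np.allclose(anti, 2 * eta[mu, nu] * np.eye(4))

# Chirality: gamma^5 = i gamma^0 gamma^1 gamma^2 gamma^3 squares to the identity
g5 = 1j * g[0] @ g[1] @ g[2] @ g[3]
assert np.allclose(g5 @ g5, np.eye(4))

# Traces: each gamma matrix is traceless, and tr(gamma^mu gamma^nu) = 4 g^{mu nu}
assert all(abs(np.trace(m)) < 1e-12 for m in g)
traces = [[np.trace(g[m] @ g[n]) for n in range(4)] for m in range(4)]
assert np.allclose(traces, 4 * eta)
print("all identities verified")
```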
What is electronic band structure?
I'm gonna do it! I'm going to attempt the impossible: to explain electronic band structure for a lay audience.
Why should you care about electronic band structure? Condensed matter physics
is one of the largest fields of physics with some of the biggest practical applications. Pretty much all of our electronics are built on it. Now, this knowledge is neither necessary, nor helpful to
our everyday use of electronics. But doesn't it bother you that you don't know the first thing about it? That is, you don't have any idea what electronic band structure is.
There is a reason you've never heard of it. Unless you've taken courses in quantum mechanics, electronic band structure is a very inaccessible concept. And yet, it's one of the most basic concepts in
condensed matter physics. So let's do this!
First, a familiar concept
E = 1/2 mv^2
Does this look familiar? This is the equation for E, the kinetic energy of an object, given v, its velocity. We're going to analyze the heck out of this equation. Look, a graph!
In the above graph, v is simply a number, positive or negative. But in reality, v is a vector, containing components in x direction, y direction, and z direction. If we were to include two
directions, this is what the graph would look like:
But that's complicated, so we'll just consider v in a single direction.
But now we have to add in quantum mechanics. Even if you understand nothing about quantum mechanics, you probably know it has to do with combining the concepts of particles and waves. It turns out
that the momentum of a particle is proportional to how much the wave goes up and down per unit length. We usually call this quantity k, the wavenumber, which is defined as 2 pi times the number of cycles per meter.
E is proportional to v^2, which is proportional to k^2. And so, if we were to graph E vs k, it would look like this:
Congratulations! We've just constructed an electronic band structure!
Band structure in a crystal
The band structure we constructed is the band structure in a vacuum. That is, if we have electrons in a vacuum, then each electron will fall somewhere along that line. But most electrons are not
wandering freely in vacuums, they're trapped in solid objects. For simplicity's sake, I will only consider the simplest of solid objects, a crystal. A crystal is a solid which has a repeating structure.
Now I'm going to wave my hands around wildly. Woooo! Math omitted! A repeating crystal structure leads to a repeating band structure. (Mind you, the crystal repeats in space, while the band
structure repeats in k. k is measured in units of inverse meters, so it's more like the reciprocal of space.)
But if the band structure is just repeating itself, then we might as well keep only the first copy. In other words, we'll limit k to the "first Brillouin Zone".
Okay, but we forgot something. The electrons are attracted to the atomic nuclei, and repelled by each other. This changes the energy of the electrons in ways that are difficult to calculate. But
qualitatively, the effect is most noticeable whenever those lines cross each other. That's because when the lines cross, it's easy for electrons to exist in a superposition of those two lines (and
that's all the explanation you'll get out of me).
The dashed lines represent the original band structure, and the solid lines are our correction.
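For readers who like to poke at this numerically, here is a minimal toy sketch (my own illustration, not from the math I omitted) of how a gap opens where the two lines cross. Near the crossing, the two free-electron waves mix, and diagonalizing a 2x2 matrix with the free-electron energies on the diagonal and an assumed crystal-potential coupling V off-diagonal splits the levels:

```python
import numpy as np

# Toy nearly-free-electron model in 1D (units chosen so that hbar^2/2m = 1,
# lattice constant 1, reciprocal-lattice vector G = 2*pi).
G = 2 * np.pi
V = 1.0  # assumed strength of the crystal potential's coupling

def bands(k):
    """Two lowest bands: diagonalize the 2x2 Hamiltonian that couples the
    free-electron plane waves with wavenumbers k and k - G."""
    h = np.array([[k**2, V],
                  [V, (k - G)**2]])
    return np.linalg.eigvalsh(h)  # eigenvalues in ascending order

# At the Brillouin Zone boundary k = G/2 the two free-electron parabolas cross,
# and the coupling V splits them: the energy gap is exactly 2V.
lo, hi = bands(G / 2)
print(hi - lo)  # ≈ 2.0, i.e. a gap of 2V opens where the lines crossed
```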
Okay. So far, pretty simple (I see people in the audience shaking their heads saying, "Um... not simple, no."). But as we add more dimensions, we can get even stranger-looking band structures. For
example, this is the band structure of graphene:
The horizontal axes are k in the x and y directions. The vertical axis is E. The black hexagon represents the first Brillouin Zone.
My point in showing this is to demonstrate that the electronic band structure can be quite complicated, and look very different for different kinds of solids.
And now I'm going to connect the band structure with another concept which you might find familiar. In an atom, electrons have discrete energy states. It's almost as if electrons are only allowed to
be in certain orbits around the nucleus.
Of course, this is not an accurate picture of electrons (which are waves, not just particles), but it's still true that electrons have discrete energy levels. This is true of the electronic band
structure as well. I drew a continuous line, but in reality it is a set of discrete points.
And so, electrons are only allowed to have certain values of E and k.
How many points are there? Well, how many atoms are there in the crystal? The answer: millions of billions of billions. So I might as well draw the band structure as continuous.
And yet, the fact that E and k are discrete has an important consequence. According to the Pauli Exclusion Principle, no two electrons* may have the same values of E and k. And we also know that
there are millions of billions of billions of electrons. As a result, the electrons fill up all those energy levels, starting with the lowest energies, going upwards, until we run out of electrons.
*ignoring spin
Note that if we ignore k, we actually get bands of allowed energies. And sometimes there are gaps between these bands, where no electrons are allowed to exist. Physicists call these energy bands, and energy gaps. It's not uncommon for electrons to exactly fill up an entire energy band, right up
to the energy gap.
The electronic band structure is the set of allowed energies and k-values of electrons. In a vacuum, E just has to be proportional to k^2 (due to kinetic energy), but in a solid object, we have to consider the energy of attraction to nuclei and repulsion from other electrons. This results in distortions in the relationship between E
and k, and may even create energy gaps. Energy gaps are values of energy which are forbidden to electrons.
Since there are lots of electrons, and they obey the Pauli Exclusion Principle, they fill up the electronic band structure, from the lowest energies upwards.
Now that I've explained the electronic band structure, you may look back at my post on
, which may make a bit more sense. I also hope to write a few more essays explaining other things that should now make sense. In the meantime, are there any questions?
All images, except those credited, were created by me. They may be used if they are attributed. I think 11 images in one post is some kind of record for me.
3 comments:
thank you for your excellent explanation!
Very well explained. Any chance you could explain the mexican hat potential in relation to the Higgs field. I've spent untold hours trying to understand this.
miller said...
No, I am not a particle physicist, and anyway I'd need a more specific question.
Beginning Algebra: Proportions
Lesson: Introduction to Setting Up and Solving Proportions
Essential Questions for students (objectives):
Suggested Books (paid link)
Can you solve for an unknown using proportions?
Can you apply a proportion to a real-life situation?
Common Core Standard: 7.RP.1-2
Book: If You Hopped Like a Frog
Facts on animals handout
White paper, measuring tapes, rulers, glue sticks
Instructional Format:
Teacher lead instruction, cooperative groups
Vocabulary for a Word Wall:
Proportion, fraction
1) Introduction (20 minutes): Teacher starts the lesson by introducing a basic proportion. For this lesson, I would focus on converting measurements (i.e. miles to feet) and solve for an unknown.
Allow students to practice sample problems individually or in pairs. For example, 48 in/x = 12 in/1 ft.
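For reference, here is one way the warm-up example can be worked on the board (using the conversion fact 1 ft = 12 in):

48 in / x = 12 in / 1 ft
48 in × 1 ft = 12 in × x    (cross-multiply)
x = (48 in × 1 ft) / (12 in) = 4 ft

Cross-multiplying and then dividing isolates the unknown, and keeping like units in the same position of each fraction gives students an easy way to check that the proportion was set up correctly.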
2) (10 minutes) Read If You Hopped Like a Frog. Ask the students how the author knows that his assumptions are true? Share with them how he solved the problems illustrated and explained at the end of
the story. Work one of the situations the author created by setting up the proportion for them.
3) (6-10 minutes) Have the students check a different problem that is given in the book. Ask them to work with a partner to generate some information that they would need in order to solve the
problem. Open the discussion up as a class to address what information is needed. Give any information they request or let them know that it might not be important, then have them solve the
proportion to check.
4) (20 minutes) Break students up into groups of four. Each student needs a piece of white paper, a measuring tape, and glue stick. Assign each group an animal from the "facts on animals" sheet (you
should have these cut up in advance). Each group will get the facts on the animal and one cut-out of the animal to glue on their own paper. As a group, they need to work together for everyone to
create their own page in If You Hopped Like A Frog. They will have to decide how to use the information given, set-up a problem on their paper, solve the problem on their paper, and have each group
member ready to explain their own work.
5) (15 minutes) As a class, move to an open area (hallway, courtyard) taking the students' problems and measuring tapes. Each group will present their problem, show how the problem is set-up, how it
was solved, and they will need to demonstrate their answer for the group. For example, if they were solving about being able to run as fast as a spider, they would need to mark off 50 yards to show
everyone that they would be able to run that far in ONE second. I would ask each group member to do a different part of the presentation so no individual student does all the talking. [tip: assign
the parts of the presentation a number, then have the group members number off, that is the role they will have to present]
6) Optional assignment: Have students research animal facts on their own and write and illustrate their own page from the book.
7) Extension: Present students with the Proportion Problems worksheet. All of these problems can be solved using proportions. In groups, they might want to see if they can set-up the proportions
(solving may not need to be the focus at this time). After examining the problems, it is important to have a class discussion/brainstorming session on what traits they notice about the word problems
that could help distinguish them as proportion problems.
Prior Knowledge/ Possible Warm-up Activities:
Basic Multiplication, one step equation solving
Time needed:
1 hour and 20 minutes
Assessment (Acceptable Evidence):
Students are to create their own page from If You Hopped Like a Frog, showing all calculations. Also, they must present to their peers and show a measurement.
Attached worksheets or documents:
Facts on Animals
Proportion Problems
Cautionary notes/ misconceptions:
Setting up proportions is much more difficult for students that solving them. I would practice quite a bit at the beginning staying with basic measurement conversions. I wouldn't make it too
difficult at the onset, so that the students can delve into the topic with this activity and assignment. Then I would move on to more difficult conversions. It is very important to stress to students
that similar parts (units) should be in the same part of the fraction before they do this lesson. However, the lesson is designed to encourage the students to struggle and think about important
information as well as if their answer makes sense.
I would caution a teacher to set up groups homogeneously on the creating a page of the book assignment. The hippo and the leopard are the easiest animals to create a problem from the information
Optional Assignment: Be careful to explain to students that they need to research more than one piece of information on their animal/insect (that was the point of brainstorming needed information)
and they will need to use one piece of related information on themselves. I always warn students that it is not acceptable to say, a deer has large antlers, so if I were a deer, I would have large
Extension: These problems are from released copies of the Nevada State High School Proficiency Exam. Most students miss these types of problems because they don't recognize them as proportion | {"url":"https://makingmathematicians.com/index.php/lessons-categories/35-grades-9-12/18-beginning-algebra-proportions","timestamp":"2024-11-12T17:04:31Z","content_type":"text/html","content_length":"43028","record_id":"<urn:uuid:8d8a85b0-914c-4468-a8da-e018d9915dd4>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00675.warc.gz"} |
An inductor is a passive electronic component that works by storing energy in a magnetic field when an electric current passes through it and releasing this stored energy back into the circuit when the current changes. It consists of a coil of wire wound around a
core made of a ferromagnetic material such as iron or ferrite. When current flows through the coil, a magnetic field is generated around it, and energy is stored in this magnetic field.
Inductors are used in electronic circuits for various purposes, including energy storage, filtering, signal processing, and impedance matching. They are particularly useful in applications where
changes in current need to be regulated or smoothed out over time. The inductance of an inductor depends on factors such as the number of turns in the coil, the shape of the coil, and the magnetic
properties of the core material. Inductors can also exhibit other characteristics, such as resistance and capacitance, which can affect their performance in a circuit. The resistance of an inductor,
known as its series resistance or DC resistance, dissipates energy in the form of heat and can limit the efficiency of the inductor. Additionally, inductors can have parasitic capacitance, which can
affect their behavior at high frequencies.
How Inductors Work
Generating Magnetic Field - When current flows through the wire coil of an inductor, it generates a magnetic field around the coil according to Ampere's law. The strength of this magnetic field is
directly proportional to the current flowing through the coil.
Inducing Voltage - According to Faraday's law of electromagnetic induction, when the current flowing through the coil changes, it induces a voltage across the coil. This induced voltage opposes the
change in current, in accordance with Lenz's law.
Storing Energy - The induced voltage creates an opposing electromotive force (back EMF) that resists the change in current flow. As a result, energy is stored in the magnetic field surrounding the coil. This
stored energy can be released back into the circuit when the current flowing through the inductor changes.
Behavior with Alternating Current - In AC circuits, where the current continuously alternates direction, inductors have unique behavior. They resist changes in the current flow, effectively slowing
down the changes in current. This property is utilized in various applications such as filters, transformers, and impedance matching.
Inductive Reactance - Inductors also exhibit a property known as inductive reactance, which is the opposition to the flow of alternating current. It is proportional to both the frequency of the
alternating current and the inductance of the coil.
Applications - Inductors find applications in various electrical circuits, such as filters to block certain frequencies, in power supplies to smooth out voltage fluctuations, in transformers to
change voltage levels, and in electronic oscillators to generate oscillations at specific frequencies.
Inductor Circuit formula
\(L \;=\; V_L \;/\; (dI \;/\; dt) \) (Inductor)
\(L_t \;=\; L_1 + L_2 + L_3 \; + ... + \; L_n \) (Series Circuit)
\((1\;/\;L_t) \;=\; (1\;/\;L_1) + (1\;/\;L_2) + (1\;/\;L_3) \; + ... +\; (1\;/\;L_n) \) (Parallel Circuit)
Symbol English Metric
\(L\) = Inductance \(H\) \(kg-m^2\;/\;s^2-A^2\)
\(V_L\) = Electromotive Force \(V\) \(kg-m^2 \;/\; s^3-A\)
\(dI\) = Current Change \(A\) \(C \;/\; s\)
\(dt\) = Time Change \(sec\) \(s\)
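As a quick sketch, the formulas above translate directly into code. The following Python is my own illustration (the component values are arbitrary examples); it computes the induced voltage and the total inductance of series and parallel combinations:

```python
def inductor_voltage(L, dI, dt):
    """V_L = L * (dI/dt): voltage induced across an inductor."""
    return L * dI / dt

def series_inductance(inductors):
    """L_t = L_1 + L_2 + ... + L_n for inductors in series."""
    return sum(inductors)

def parallel_inductance(inductors):
    """1/L_t = 1/L_1 + 1/L_2 + ... + 1/L_n for inductors in parallel."""
    return 1.0 / sum(1.0 / L for L in inductors)

# Example: three inductors of 10 mH, 20 mH and 40 mH
L_list = [10e-3, 20e-3, 40e-3]
print(round(series_inductance(L_list), 4))    # 0.07 (H)
print(round(parallel_inductance(L_list), 5))  # 0.00571 (H)
# A 50 mH inductor with the current rising by 2 A over 10 ms:
print(inductor_voltage(50e-3, 2.0, 10e-3))    # 10.0 (V)
```

Note that, as the text says, the parallel combination is always smaller than the smallest individual inductance, while the series combination simply adds.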
Inductor Types
Inductors are passive electronic components that store energy in the form of a magnetic field when current flows through them. They are widely used in various electrical and electronic circuits for
purposes such as filtering, energy storage, impedance matching, and signal processing. There are several types of inductors, each with specific characteristics suited for different applications. The
choice of inductor type depends on factors such as the desired inductance value, frequency of operation, current and voltage ratings, size constraints, and cost considerations.
Air-Core Inductor - These consist of a coil of wire wound around a non-magnetic core, typically made of plastic, ceramic, or another non-conductive material. They have no magnetic core material,
which minimizes magnetic interference and distortion. Air-core inductors are used in applications where high-frequency operation, low magnetic coupling, and low core losses are required, such as in
radio frequency circuits, high-frequency filters, and tuning circuits.
Ferrite-Core Inductor - These inductors use a core made of ferrite, a type of ceramic material containing iron oxide. Ferrite cores provide high magnetic permeability, allowing for high inductance
values in a compact size. They are commonly used in power supply circuits, noise suppression filters, RF chokes, and various other applications requiring high inductance and high-frequency operation.
Iron-Core Inductor - These use a core made of laminated iron or iron powder. Iron-core inductors offer high magnetic permeability and are capable of handling higher power levels compared to air-core
and ferrite-core inductors. They are commonly used in power supply circuits, transformers, filters, and various electromagnetic devices.
Toroidal Inductor - These inductors consist of a coil of wire wound around a toroidal (doughnut-shaped) core. Toroidal cores provide high magnetic coupling and lower electromagnetic interference
compared to other core shapes. Toroidal inductors are used in applications where compact size, low electromagnetic radiation, and high efficiency are required, such as in audio equipment, power
supplies, and RF circuits.
Multilayer Chip Inductor - These are surface-mount inductors that consist of multiple layers of coil windings sandwiched between ceramic layers. They are compact, lightweight, and suitable for
high-frequency applications. Multilayer chip inductors are widely used in mobile devices, wireless communication devices, and other compact electronic devices.
Variable Inductor - Also called adjustable inductors or variometers, these allow for adjustment of inductance within a certain range. They are used in applications where tunable or variable inductance is
required, such as in radio tuning circuits, impedance matching networks, and analog filters.
Tags: Electrical
354. Quantum coherence
I usually speak about the structure or blueprint of creation, but this template, like a fractal, fills all space and describes the all, and the all describes you. What science calls quantum
coherence, we assign to God's omnipresence. It's only "spooky action at a distance" because it wasn't known how it was all connected.
As with the snake in the story of Adam and Eve, we are so conditioned to see the fruit she took as a real fruit, and the snake as a real snake, that we are nearly unable to see them in any other way.
When you search the internet you'll find all sorts of discussions as to what type of tree or fruit it was, from fig to apple, never seeing them as symbolic; we are so blinded by our conditioning.
Just take the Fibonacci sequence and its spiral. We recognize it in flowers, trees and shells, and we see it in galaxies, in the small and the very large; wouldn't it then be logical that it is all
in some way connected, a kind of universal law? We see spheres throughout the universe, we see Pi expressing itself in all sorts of ways, just like the golden ratio, and there are many more, and they
must in some way be interwoven. We know that the torus is part of it too, just as the electromagnetic field. But both religious organizations and scientists are unwilling to look at other fields of
research, or other religions, to see if there is something linking them. Perhaps out of fear that theirs is the least or less important, and that they would then lose their grant or fame.
They have tried to create life in laboratories, but did they ever include those laws that are in every living thing or being? Or see if those other religions are just different in a symbolic way? No,
all they do is point out that the others are wrong and theirs is the only true one. Even science and religion were once one, and because of it the ancients had an understanding that historians and
others now wonder how they came by.
Now there was a video about the Ark of the Covenant in which some so-called specialists were talking about its contents, and the question they could not answer was whether both sets of round stone
tablets, the first pair that was broken and the second pair, were put in it or not. As long as they haven't found it they will never be able to tell, unless of course you know where to find it.
So let’s see if we can find it.
If you have studied the articles you will know that these round tablets represent the tree, and the tree is in the garden with the animals, or in other words the zodiac. Now let's look at the
measurements given for the ark of covenant: 1.5 by 1.5 by 2.5 cubits, and as you know the royal cubit is 52.36, so the sum is 78.54 × 78.54 × 130.9 = 2945.25. Now in one of the articles you'll have
noticed that the zodiac is not 25920 but actually 25918.2 (1.8 missing, but that is a story for another time). What I can tell you is that the circumference of the circle turns 1.5 times in the
bigger zodiac circle and the other circle too. So there you have your 1.5 by 1.5. See if you can figure out the 2.5. But let’s move on, the zodiac 25918.2 divided by the sum of the ark 2945.25 = 88.
And the 8 is the tree made up of two circular stones so 8 8 means two sets. Which of course is correct. Just as there are two trees in the middle of the garden.
Perhaps it’s better to give you a little hint as to the third 8, that is also in an ark, but this time it’s in the ark of Noah. And perhaps it starts to dawn on you that the reason why Christ as
mentioned in the new testament is related to 888.
So let’s look at the ark of Noah, it was 300 x 50 x 30 = 45000, but they were cubits so we multiply it by 0,5236 so the ark’s real measure is 235.62.
Now the galactic ocean that the ark travelled on and collected the animals from is of course the zodiac of 25920 but again we need to convert it to cubits 25920 x 0.5236 = 13571712. So let’s see the
ark being 235.62 and divide the zodiac 13571712 by the ark, the answer is 576.
So now let’s look at the other tree, with its circumference of 480. It fits 54 times in the zodiac and if we divide it by the 8 we get 3.1416 Pi. Now without revealing too much I want you to recall
when I said that a bendy river's length against a direct line between the start and finish has a ratio of 3.1416. Now recall the curling snake and the staff.
In other words there is a science to free will and enlightenment.
Moshiya van den Broek | {"url":"https://www.truth-revelations.org/?page_id=2440","timestamp":"2024-11-09T17:46:39Z","content_type":"text/html","content_length":"31046","record_id":"<urn:uuid:c680de5b-7fd3-4fff-bb2b-195da2bb34d5>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00525.warc.gz"} |
The Christmas Tree
Every year, Roberto likes to choose his Christmas tree himself; he doesn't let anyone choose for him, because he thinks that, to be beautiful, the tree must meet some conditions, such as height,
thickness and number of branches, so he can hang many Christmas decorations on it.
Roberto wants his tree to be at least 200 centimeters tall, but he doesn't want it to be larger than 300 centimeters, or the tree won't fit in his house. As for thickness, he wants his tree to have a
trunk that is 50 centimeters in diameter or more. The tree must have 150 branches or more.
The first line of input contains an integer N (0 < N ≤ 10000), the number of test cases. The next N lines contain 3 integers each, h, d and g (0 < h, d, g ≤ 5000): the height of the tree in
centimeters, its diameter in centimeters, and the number of tree branches.
Your task is, for each tree, to print Sim, if it is a tree that Roberto can choose, or Nao, if it is a tree he should not choose.
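Putting the conditions together, one possible Python solution sketch (the function names are my own; all bounds are read as inclusive, per the statement):

```python
def choose(h, d, g):
    """True if the tree meets Roberto's conditions."""
    return 200 <= h <= 300 and d >= 50 and g >= 150

def solve(text):
    data = text.split()
    n = int(data[0])
    lines = []
    for i in range(n):
        h, d, g = map(int, data[1 + 3 * i : 4 + 3 * i])
        lines.append("Sim" if choose(h, d, g) else "Nao")
    return "\n".join(lines)

print(solve("2\n250 60 200\n199 60 200"))  # Sim, then Nao
```

The first sample tree satisfies all three conditions; the second is rejected for being under 200 cm.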
Reasoning and Making Sense of Algebra
Kentucky Center for Mathematics Conference
The Standards for Mathematical Practice can be used as a foundation to help educators bring some desperately needed mathematical coherence to school algebra. This talk will look at how a small set of
general-purpose habits, like abstracting regularity from repeated calculation, can help students see algebra as a web of interconnected results and methods that makes sense. Seeing algebra in this
way makes it more understandable, useful, and enjoyable for students. It makes teaching algebra more satisfying and effective for teachers. And, it connects school algebra to algebra as it is
practiced outside of school, both as a scientific discipline and as a central tool in every modern field of scientific inquiry. We'll work through some low-threshold, high-ceiling examples that show
how seemingly different topics in high school mathematics can be understood more effectively when they are viewed through the lens of mathematical practice.
Tutorial: Universal Approximator and NN
A single-layer neural network is trained on a simple set of mappings:
http://www.wolfram.com/cdf-player/ <---- download this free plugin to see the calculations
The emphasis was to make the NN training more accessible for programmers without requiring a PhD.
The code was actually implemented in Mathematica 8.0 but the pseudo C code is quite accurate.
The algorithm was checked against a number of papers, and a training run of 8000 loops was experimented with.
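Since the Mathematica / pseudo-C code itself is not reproduced in this post, the following is only a minimal Python sketch of the same idea: a single sigmoid neuron trained by plain gradient descent, using an 8000-iteration training loop like the one mentioned above, on a simple set of mappings (logical OR):

```python
import math
import random

def train_single_layer(samples, epochs=8000, lr=0.5, seed=0):
    """Train one sigmoid neuron on (inputs, target) pairs by gradient descent."""
    rng = random.Random(seed)
    n = len(samples[0][0])
    w = [rng.uniform(-1, 1) for _ in range(n)]
    b = rng.uniform(-1, 1)
    for _ in range(epochs):
        for x, t in samples:
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            y = 1.0 / (1.0 + math.exp(-z))   # sigmoid activation
            err = (y - t) * y * (1.0 - y)    # d(squared error)/dz
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

# Simple linearly separable mapping: logical OR
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
w, b = train_single_layer(data)
print([round(predict(w, b, x)) for x, _ in data])  # [0, 1, 1, 1]
```

This is the simplest case; the universal-approximator results discussed in the tutorial apply to networks with a hidden layer of such neurons.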
Ok now I have the quad copter simulator, primitive but mathematically accurate and programmable. Also have the training algorithm above for single layer NN. These are Mathematica coded, but I can
easily convert them to python or C if required.
So now we can experiment with NN control systems for adrucopter by off-line and in-flight training.
My next effort is to come up with NN training for vertical lift-off of the arducopter autonomously.
critical speed of a ball mill depends on
Oct 19, 2006 · For instance, if your jar had an inside diameter of 90 mm and your milling media was mm diameter lead balls, the optimum rotation would be 98 RPM. Optimum RPM = 0.65 × critical speed
(the speed at which the cascading action of the media stops). Critical speed = 265.45 / sqrt(Jar I.D. − Media diameter), with dimensions in inches.
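As a sketch in Python: 265.45 is the commonly cited constant for this inch-based rule of thumb, and the 12.7 mm (1/2") media diameter below is an assumed value for illustration, since the excerpt omits it.

```python
import math

def critical_speed_rpm(jar_id_in, media_d_in):
    """Rule-of-thumb ball-mill critical speed; dimensions in inches."""
    return 265.45 / math.sqrt(jar_id_in - media_d_in)

def optimum_rpm(jar_id_in, media_d_in):
    """Optimum rotation, taken as 65% of critical speed."""
    return 0.65 * critical_speed_rpm(jar_id_in, media_d_in)

jar = 90 / 25.4      # 90 mm jar inside diameter, converted to inches
media = 12.7 / 25.4  # assumed 12.7 mm lead balls, converted to inches
print(round(optimum_rpm(jar, media)))  # about 99 RPM, near the 98 RPM quoted above
```

With the assumed 1/2" media, the formula lands within about 1 RPM of the figure quoted in the excerpt.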
uranium Archives - Lars Boelen
Reposted blog by Luis Baram
Introduction: With the IPCC's latest report it has become 100% clear that our fossil society is totally unsustainable. It is force majeure; we need to act now or else…. I've been reluctant to think about
the nuclear option, but I think our problems are so dire that we have to look at every option to reduce our GHG emissions. The following blogpost shows that the nuclear option doesn't look
good, though:
Nuclear power has many advantages about which we have written extensively in the past but there is a question we need to ask today.
Do we have enough uranium reserves to power a “nuclear renaissance?” Let’s run the math.
Today, our global total primary energy supply (TPES) is equivalent to 12,717 Mtoe (million tons of oil equivalent).
For the year 2035 the IEA (International Energy Agency) predicts two scenarios, one at 16,961 and the other at 14,870 Mtoe. For simplicity let’s use the mathematical average of the above: 15,916.
Today, nuclear energy provides 5.7% of our TPES.
In order for nuclear energy to be a very significant energy source that would help us drastically reduce our carbon emissions, let's target 25% of our TPES in the year 2035 to be uranium-based
nuclear power.
How much uranium would we need per year and, most importantly, what are our current known reserves?
According to MIT (below), 200 tons of natural uranium are required to produce one gigawatt of electricity for a full year. That means we currently use close to 65,000 tons per year**.
For the 2035 scenario, we would need grossly (25% / 5.7%) x (15,916 Mtoe / 12,717Mtoe) * 65,000 tons = 356,802 tons every year.
According to Wikipedia (below) the current uranium reserves are around 5.5 million tons, so this would turn out to be around 15 years of supply. Not very encouraging. Sure, more reserves will be
found, but still…
On the other hand, today we have close to 430 nuclear power reactors. Assuming the same average power from new reactors as we have right now, we would need an additional 1,930 reactors, in other
words, commissioning an average of 88 new reactors EVERY year for 22 consecutive years (and this without decommissioning any of the current ones).
Sorry, but this ain’t going to happen.
With respect to the uranium shortage, thorium looks, on paper, quite promising, but even if it did go mainstream soon, the thorium build out would have to be of monumental proportions (see above).
Conclusion: moving to a low carbon economy is MUCH more difficult than is generally realized.
**Annual nuclear electricity production: 2,765 TWh * 1,000 = 2,765,000 GWh
2,765,000 / (1GW x 24 hrs. x 365 days) x 200 tons = 63,128 tons.
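The arithmetic in the post can be reproduced in a few lines of Python (a sketch using exactly the figures quoted above):

```python
TONS_PER_GW_YEAR = 200                      # tons of natural uranium per GW-year
nuclear_twh = 2765                          # annual nuclear electricity, TWh
gw_years = nuclear_twh * 1000 / (24 * 365)  # TWh -> GW-years
current_use = gw_years * TONS_PER_GW_YEAR
print(round(current_use))                   # ~63,000 tons per year

tpes_now = 12717                            # TPES today, Mtoe
tpes_2035 = (16961 + 14870) / 2             # average of the two IEA scenarios
share_now, share_target = 0.057, 0.25
future_use = (share_target / share_now) * (tpes_2035 / tpes_now) * 65000
print(round(future_use))                    # ~356,800 tons per year

reserves = 5.5e6                            # known uranium reserves, tons
print(round(reserves / future_use, 1))      # ~15 years of supply
```

The small rounding differences with the post come from whether the scenario average is rounded before or after the ratio is taken.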
Labels: electricity, energy, nuclear, thorium, TPES, uranium
A part of monthly hostel charge is fixed and the remaining depends on the number of days one has taken food in the mess. When Swati takes food for 20 days, she has to pay Rs 3,000 as hostel charges
whereas Mansi who takes food for 25 days has to pay Rs 3,500 as hostel charges. Find the fixed charges and the cost of food per day.
To solve the problem, we need to set up a system of linear equations based on the information given.
Step 1: Define the variables
- x = fixed charges (in Rs)
- y = cost of food per day (in Rs)
Step 2: Set up the equations
From the information provided:
1. When Swati takes food for 20 days, she pays Rs 3000:
x + 20y = 3000   (Equation 1)
2. When Mansi takes food for 25 days, she pays Rs 3500:
x + 25y = 3500   (Equation 2)
Step 3: Solve the equations
We can solve these equations using the elimination method.
First, let's subtract Equation 1 from Equation 2:
(x + 25y) − (x + 20y) = 3500 − 3000
This simplifies to:
5y = 500
Now, divide both sides by 5:
y = 100
Step 4: Substitute y back into one of the equations
Now that we have y, we can substitute it back into Equation 1 to find x:
x + 20(100) = 3000
This simplifies to:
x + 2000 = 3000
Now, subtract 2000 from both sides:
x = 1000
The fixed charges x are Rs 1000, and the cost of food per day y is Rs 100.
Final Answer:
- Fixed charges: Rs 1000
- Cost of food per day: Rs 100
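The same system can also be solved programmatically. A small Python sketch using Cramer's rule (equivalent to the elimination above; the function name is my own):

```python
from fractions import Fraction

def solve_2x2(a1, b1, c1, a2, b2, c2):
    """Solve a1*x + b1*y = c1 and a2*x + b2*y = c2 by Cramer's rule."""
    det = a1 * b2 - a2 * b1
    x = Fraction(c1 * b2 - c2 * b1, det)
    y = Fraction(a1 * c2 - a2 * c1, det)
    return x, y

# x + 20y = 3000 (Equation 1) and x + 25y = 3500 (Equation 2)
fixed, per_day = solve_2x2(1, 20, 3000, 1, 25, 3500)
print(fixed, per_day)  # 1000 100
```

Using exact fractions avoids any floating-point error, so the result matches the hand calculation exactly.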
Epsilon Readiness Assessment
Students who are ready for Epsilon:
• can comfortably add or subtract multiple-digit numbers, using regrouping as necessary
• can demonstrate a division problem by drawing a diagram
• can give the quotients for single-digit division facts from memory
• can comfortably multiply multiple-digit numbers, regrouping as necessary
• can comfortably divide multiple-digit numbers with remainders
• can determine and apply the operation(s) needed to solve a word problem
Preparing for the Assessment:
This readiness assessment will take approximately 40 total minutes to complete. Do not begin unless you are sure you and your student can work together without distraction for this amount of time.
You will need the following items:
• pencil with eraser
• extra paper for calculation, if desired
Calculators are not permitted. Gather these materials before you begin.
Student Portion:
Print out a copy of the Epsilon Readiness Assessment. The printed part of the assessment is to be completed independently by the student and should take approximately 20 minutes. Be sure to keep
track of the actual time your student spends on this
part of the assessment. You may attempt to clarify the wording of a question if your student does not understand, but you should not answer specific questions asking how to solve a particular problem.
When your student has completed his work on paper, come back to the computer to complete the rest of the assessment.
Instructor Portion:
Use this tool as an opportunity to help you determine your student’s understanding of the concepts.
For each problem, first check to see if the student answered it correctly. Then ask your student to explain to you how he arrived at each of his answers, or “teach back” the solution. Based on your
student’s response, choose the statement(s) that most accurately describe how your student solved the problem. (IMPORTANT: Several of the questions require multiple responses. Be sure to mark ALL
appropriate responses.)
ZK Research Highlights (July 2023)
Again, in this issue, I intentionally excluded a huge number of important works that will appear in Crypto 2023. There will be a special issue to present those works.
by Tim Dokchitser and Alexandr Bulkin
• provide a source of introductory information into building a zero knowledge proof system for general computation.
□ review how to build such a system with a polynomial commitment scheme, and how to implement a fully functional command set in terms of zero knowledge primitives.
by Liam Eagen and Ariel Gabizon
• Building on ideas from ProtoStar [BC23] we construct a folding scheme where the recursive verifier's ``marginal work'', beyond linearly combining witness commitments, consists only of a
logarithmic number of field operations and a constant number of hashes.
• performs well when folding multiple instances at one step, in which case the marginal number of verifier field operations per instance becomes constant, assuming constant degree gates.
by Mathias Hall-Andersen and Mark Simkin and Benedikt Wagner
• define data availability sampling precisely as a clean cryptographic primitive.
• show how data availability sampling relates to erasure codes.
• defining a new type of commitment schemes which naturally generalizes vector commitments and polynomial commitments.
• analyze existing constructions and prove them secure.
• give new constructions which are based on weaker assumptions, computationally more efficient, and do not rely on a trusted setup, at the cost of slightly larger communication complexity.
by Foteini Baldimtsi and Ioanna Karantaidou and Srinivasan Raghuraman
• define oblivious accumulators, a set commitment with concise membership proofs that hides the elements and the set size from every entity
□ two properties: element hiding and add-delete indistinguishability.
• define almost-oblivious accumulators, that only achieve add-delete unlinkability, hide the elements but not the set size.
• give a generic construction of an oblivious accumulator based on key-value commitments (KVC).
• show a generic way to construct KVCs from an accumulator and a vector commitment scheme.
• give lower bounds on the communication (size of update messages) required for oblivious accumulators and almost-oblivious accumulators.
by Enrique Larraia and Owen Vaughan (IEEE IST-Africa 2023)
• suggest using on-chain Pedersen commitments and off-chain zero knowledge proofs (ZKP) for designated verifiers to prove the link between the data and the on-chain commitment.
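For context on the Pedersen commitments mentioned above, here is a toy Python sketch over a small prime-order subgroup. Illustration only: these parameters are far too small to be secure, and a real scheme uses a large (typically elliptic-curve) group with an h whose discrete logarithm with respect to g is unknown.

```python
# Subgroup of squares mod p = 23, which has prime order q = 11
p, q = 23, 11
g, h = 4, 9  # both generate the order-11 subgroup

def commit(m, r):
    """Pedersen commitment C = g^m * h^r (mod p); m is the message, r the blinder."""
    return (pow(g, m % q, p) * pow(h, r % q, p)) % p

def open_check(c, m, r):
    """Verify an opening (m, r) of commitment c."""
    return c == commit(m, r)

c = commit(7, 3)
print(open_check(c, 7, 3))  # True
print(open_check(c, 8, 3))  # False
# The scheme is additively homomorphic:
print((commit(3, 5) * commit(4, 2)) % p == commit(3 + 4, 5 + 2))  # True
```

The hiding property comes from the random blinder r; the additive homomorphism is what makes Pedersen commitments convenient for on-chain use as in the paper above.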
by Logan Allen and Brian Klatt and Philip Quirk and Yaseen Shaikh
• Nock is a minimal, homoiconic combinator function, a Turing-complete instruction set that is practical for general computation, and is notable for its use in Urbit.
• introduce Eden, an Efficient Dyck Encoding of Nock that serves as a practical, SNARK-friendly combinator function and instruction set architecture.
• describe arithmetization techniques and polynomial equations used to represent the Eden ISA in an Interactive Oracle Proof.
• present the Eden zkVM, a particular instantiation of Eden as a zk-STARK.
by Collin Zhang and Zachary DeStefano and Arasu Arun and Joseph Bonneau and Paul Grubbs and Michael Walfish
• Zombie, the first system built using the Zero-knowledge middleboxes (ZKMB) paradigm.
□ preprocessing (to move the bulk of proof generation to idle times between requests),
□ asynchrony (to remove proving and verifying costs from the critical path),
□ batching (to amortize some of the verification work).
• introduces a portfolio of techniques to efficiently encode regular expressions in probabilistic (and zero knowledge) proofs;
□ to support policies based on regular expressions, such as data loss prevention.
by Lorenzo Grassi and Dmitry Khovratovich and Reinhard Lüftenegger and Christian Rechberger and Markus Schofnegger and Roman Walch
• propose a new 2-to-1 compression function and a SAFE hash function, instantiated by the Monolith permutation.
□ The permutation is significantly more efficient than its competitors, Reinforced Concrete and Tip5.
• instantiate the lookup tables as functions defined over F2 while ensuring that the outputs are still elements in Fp.
• performance comparable to SHA3-256
by Tomer Ashur and Al Kindi and Mohammad Mahzoun
• propose two new Arithmetization-Oriented(AO) hash functions, XHash8 and XHash12 which are designed based on improving the bottlenecks in RPO [ePrint 2022/1577].
□ XHash8 performs ≈2.75 times faster than RPO, and XHash12 performs ≈2 times faster than RPO
□ inheriting the security and robustness of the Marvellous design strategy.
by Ramiro Martínez and Paz Morillo and Sergi Rovira
• provide the implementation details and performance analysis of the lattice-based post-quantum commitment scheme introduced by Martínez and Morillo in their work titled «RLWE-Based Zero-Knowledge
Proofs for Linear and Multiplicative Relations»
□ obtain tight conditions that allow us to find the best sets of parameters for actual instantiations of the commitment scheme and its companion ZKPoK.
• very flexible and its parameters can be adjusted to obtain a trade-off between speed and memory usage
• further extends the literature of exact Zero-Knowledge proofs, providing ZKPoK of committed elements without any soundness slack.
by Fatemeh Heidari Soureshjani and Mathias Hall-Andersen and MohammadMahdi Jahanara and Jeffrey Kam and Jan Gorzny and Mohsen Ahmadvand; Satisfiability Modulo Theories 2023 (SMT 2023)
• describe methods for checking Halo2.
□ use abstract interpretation and an SMT solver to check various properties of Halo2 circuits.
☆ abstract interpretation can detect unused gates, unconstrained cells, and unused columns.
☆ SMT solver can detect under-constrained (in the sense that for the same public input they have two efficiently computable satisfying assignments) circuits.
by Lennart Braun and Cyprien Delpech de Saint Guilhem and Robin Jadoul and Emmanuela Orsini and Nigel P. Smart and Titouan Tanguy
• extend the MPC-in-the-head framework, used in recent efficient zero-knowledge protocols, to work over the ring which is the primary operating domain for modern CPUs.
• explore various batching methodologies, leveraging Shamir's secret sharing schemes and Galois ring extensions, and show the applicability of our approach in RAM program verification.
• analyse different options for instantiating the resulting ZK scheme over rings and compare their communication costs.
by Gal Arnon and Alessandro Chiesa and Eylon Yogev , FOCS 2023
• show that every language in NP has an Interactive Oracle Proof (IOP) with inverse polynomial soundness error and small query complexity.
by Jieyi Long
• for the batch satisfiability problem, provide a construction of a succinct interactive argument of knowledge for generic log-space uniform circuits, based on the bilinear pairing and common
reference string assumptions.
• For the evaluation problem, construct statistically sound interactive proofs for various special yet highly important types of circuits, including linear circuits, and circuits representing sum
of polynomials.
• describe protocols optimized specifically for batch FFT and batch matrix multiplication which achieve desirable properties, including lower prover time and better composability.
by Markulf Kohlweiss and Mahak Pancholi and Akira Takahashi
• Most SNARKs are initially only proven knowledge sound (KS). show that the commonly employed compilation strategy from polynomial interactive oracle proofs (PIOP) via polynomial commitments to
knowledge sound SNARKS actually also achieves other desirable properties:
□ weak unique response (WUR)
□ and trapdoorless zero-knowledge (TLZK);
□ and that together they imply simulation extractability (SIM-EXT).
• The factoring of SIM-EXT into KS + WUR + TLZK is becoming a cornerstone of the analysis of non-malleable SNARK systems.
by Alexander R. Block and Albert Garreta and Jonathan Katz and Justin Thaler and Pratyush Ranjan Tiwari and Michał Zając
• prove the FS security of the FRI and batched FRI protocols;
□ by analyzing the round-by-round (RBR) soundness and RBR knowledge soundness of FRI.
• analyze a general class of protocols, which we call \delta-correlated, that use low-degree proximity testing as a subroutine (this includes many "Plonk-like" protocols (e.g., Plonky2 and
Redshift), ethSTARK, RISC Zero, etc.);
□ prove that if a \delta-correlated protocol is RBR (knowledge) sound under the assumption that adversaries always send low-degree polynomials, then it is RBR (knowledge) sound in general.
• prove FS security of the aforementioned "Plonk-like" protocols
by Michael Brand and Benoit Poletti
• present a protocol extending the standard Bulletproof protocol, allowing the Verifier to choose the set of equations after the Prover has already committed to portions of the solution.
• Verifier-chosen (or stochastically-chosen) equation sets can be used to design smaller equation sets with less variables that are orders of magnitude faster both in proof generation and in proof
verification, and even reduce the size
by Yanyi Liu and Rafael Pass
• present the first “OWF-complete” promise problem---a promise problem whose worst-case hardness w.r.t. BPP (resp. P/poly) is equivalent to the existence of OWFs secure against PPT (resp. nuPPT)
algorithms. The problem is a variant of the Minimum Time-bounded Kolmogorov Complexity problem.
by Erik Rybakken and Leona Hioki and Mario Yaksetig
• present Intmax2, a novel stateless ZK-rollup protocol with client-side validation. All data availability and computational costs are shifted to the client side.
by Lilya Budaghyan and Mohit Pal
• investigate arithmetization-oriented APN functions: APN permutations in the CCZ-classes of known families of APN power functions over the prime field Fp.
• present a new class of APN binomials over Fq obtained by modifying the planar function x^2 over Fq.
• present a class of binomials having differential uniformity at most 5 defined via the quadratic character over finite fields of odd characteristic.
by Yibin Yang and David Heath
• present an optimized arithmetic-circuit-based read/write memory that uses only 4 input gates and 6 multiplication gates per memory access.
• implement their memory in the context of ZK proofs based on vector oblivious linear evaluation (VOLE).
by Gora Adj, Luis Rivera-Zamarripa and Javier Verbel
• introduce the first MinRank-based digital signature scheme that uses the MPC-in-the-head paradigm, enabling it to achieve small signature sizes and running times.
by Matteo Campanelli, Chaya Ganesh, Hamidreza Khoshakhlagh and Janno Siim
• formalize a folklore lower bound on the proof size of black-box extractable arguments based on the hardness of the language. This separates knowledge-sound SNARGs (SNARKs) in the random oracle
model (which can have black-box extraction) from those in the standard model.
• show that, under the existence of non-adaptively sound SNARGs (without extractability) and standard assumptions, it is possible to build SNARKs with black-box extractability for a non-trivial
subset of NP.
• show that (under some mild assumptions) not all NP languages can have SNARKs with black-box extractability, even in the non-adaptive setting.
• observe that the Gentry-Wichs result does not account for the preprocessing model, under which several efficient constructions fall.
by Lorenzo Grassi, Dmitry Khovratovich and Markus Schofnegger
• propose an optimized version of Poseidon, called Poseidon2.
□ Poseidon is a sponge hash function, while Poseidon2 can be either a sponge or a compression function depending on the use case.
□ Poseidon2 uses new and more efficient linear layers compared to Poseidon.
Journal of Automated Reasoning, 68(24), 2024.
We introduce and elaborate a novel formalism for the manipulation and analysis of proofs as objects in a global manner. In this first approach the formalism is restricted to first-order problems
characterized by condensed detachment. It is applied in an exemplary manner to a coherent and comprehensive formal reconstruction and analysis of historical proofs of a widely-studied problem due
to Łukasiewicz. The underlying approach opens the door towards new systematic ways of generating lemmas in the course of proof search to the effects of reducing the search effort and finding
shorter proofs. Among the numerous reported experiments along this line, a proof of Łukasiewicz's problem was automatically discovered that is much shorter than any proof found before by man or machine.
In Michael R. Douglas, Thomas C. Hales, Cezary Kaliszyk, Stephan Schulz, and Josef Urban, editors, 9th Conference on Artificial Intelligence and Theorem Proving, AITP 2024 (Informal Book of
Abstracts), 2024.
In Michael R. Douglas, Thomas C. Hales, Cezary Kaliszyk, Stephan Schulz, and Josef Urban, editors, 9th Conference on Artificial Intelligence and Theorem Proving, AITP 2024 (Informal Book of
Abstracts), 2024.
Synthesizing Nested Relational Queries from Implicit Specifications: Via Model Theory and via Proof Theory
Logical Methods in Computer Science, 20(3), 2024.
Derived datasets can be defined implicitly or explicitly. An implicit definition (of dataset O in terms of datasets I) is a logical specification involving two distinguished sets of relational
symbols. One set of relations is for the “source data” I, and the other is for the “interface data” O. Such a specification is a valid definition of O in terms of I, if any two models of the
specification agreeing on I agree on O. In contrast, an explicit definition is a transformation (or “query” below) that produces O from I. Variants of Beth's theorem state that one can convert
implicit definitions to explicit ones. Further, this conversion can be done effectively given a proof witnessing implicit definability in a suitable proof system. We prove the analogous
implicit-to-explicit result for nested relations: implicit definitions, given in the natural logic for nested relations, can be converted to explicit definitions in the nested relational calculus
(NRC). We first provide a model-theoretic argument for this result, which makes some additional connections that may be of independent interest between NRC queries and interpretations, a standard
mechanism for defining structure-to-structure translation in logic, and between interpretations and implicit definability “up to unique isomorphism”. The latter connection uses a variation of
a result of Gaifman concerning “relatively categorical” theories. We also provide a proof-theoretic result that provides an effective argument: from a proof witnessing implicit definability, we
can efficiently produce an NRC definition. This will involve introducing the appropriate proof system for reasoning with nested sets, along with some auxiliary Beth-type results for this system.
As a consequence, we can effectively extract rewritings of NRC queries in terms of NRC views, given a proof witnessing that the query is determined by the views.
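The implicit-to-explicit conversion at the heart of this work can be illustrated in a propositional toy setting (an illustrative sketch of the Beth-style idea only; the paper works with nested relations and NRC, and all names below are mine): a specification implicitly defines an output variable o in terms of input variables when no two of its models agree on the inputs but differ on o, and in that case an explicit definition can be read off by projecting the models onto the inputs.

```python
from itertools import product

def implicitly_defines(spec, inputs, output):
    """Check whether `spec` (a Boolean predicate over assignments)
    implicitly defines `output` in terms of `inputs`: no two models
    of the spec may agree on the inputs yet differ on the output."""
    models = [m for m in product([0, 1], repeat=len(inputs) + 1)
              if spec(dict(zip(inputs + [output], m)))]
    seen = {}
    for m in models:
        key, val = m[:-1], m[-1]
        if seen.setdefault(key, val) != val:
            return None  # not implicitly defined
    # Explicit definition: output is true exactly on these input rows.
    return {key for key, val in seen.items() if val == 1}

# Spec (o <-> i1 and i2) written as two clauses: it implicitly defines o.
spec = lambda a: (not a["o"] or (a["i1"] and a["i2"])) and \
                 (not (a["i1"] and a["i2"]) or a["o"])
print(implicitly_defines(spec, ["i1", "i2"], "o"))  # {(1, 1)}
```

The returned set of input rows is the (propositional analogue of an) explicit definition; returning None corresponds to failure of implicit definability.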
Synthesizing Strongly Equivalent Logic Programs: Beth Definability for Answer Set Programs via Craig Interpolation in First-Order Logic
In Chris Benzmüller, Marjin Heule, and Renate Schmidt, editors, International Joint Conference on Automated Reasoning, IJCAR 2024, volume 14739 of LNCS (LNAI), pages 172-193. Springer, 2024.
We show a projective Beth definability theorem for logic programs under the stable model semantics: For given programs P and Q and vocabulary V (set of predicates) the existence of a program R in
V such that P ∪ R and P ∪ Q are strongly equivalent can be expressed as a first-order entailment. Moreover, our result is effective: A program R can be constructed from a Craig interpolant for this
entailment, using a known first-order encoding for testing strong equivalence, which we apply in reverse to extract programs from formulas. As a further perspective, this allows transforming
logic programs via transforming their first-order encodings. In a prototypical implementation, the Craig interpolation is performed by first-order provers based on clausal tableaux or resolution
calculi. Our work shows how definability and interpolation, which underlie modern logic-based approaches to advanced tasks in knowledge representation, transfer to answer set programming.
In Jens Otten and Wolfgang Bibel, editors, AReCCa 2023 - Automated Reasoning with Connection Calculi, International Workshop, volume 3613 of CEUR Workshop Proceedings, pages 64-83. CEUR-WS.org, 2024.
Provers based on the connection method can become much more powerful than currently believed. We substantiate this thesis with certain generalizations of known techniques. In particular, we
generalize proof structure enumeration interwoven with unification - the proceeding of goal-driven connection and clausal tableaux provers - to an interplay of goal- and axiom-driven processing.
It permits lemma re-use and heuristic restrictions known from saturating provers. Proof structure terms, proof objects that allow specifying and implementing various ways of building proofs, are
central for our approach. Meredith's condensed detachment represents the prototypical base case of such proof structure terms. Hence, we focus on condensed detachment problems, first-order Horn
problems of a specific form. Experiments show that for this problem class the approach keeps up with state-of-the-art first-order provers, leads to remarkably short proofs, solves an ATP
challenge problem, and is useful in machine learning for ATP. A general aim is to make ATP more accessible to systematic investigations in the space between calculi and implementation aspects.
In Revantha Ramanayake and Josef Urban, editors, Automated Reasoning with Analytic Tableaux and Related Methods: 32nd International Conference, TABLEAUX 2023, volume 14278 of LNCS (LNAI), pages 3-23.
Springer, 2023.
We show how variations of range-restriction and also the Horn property can be passed from inputs to outputs of Craig interpolation in first-order logic. The proof system is clausal tableaux,
which stems from first-order ATP. Our results are induced by a restriction of the clausal tableau structure, which can be achieved in general by a proof transformation, also if the source proof
is by resolution/paramodulation. Primarily addressed applications are query synthesis and reformulation with interpolation. Our methodical approach combines operations on proof structures with
the immediate perspective of feasible implementation through incorporating highly optimized first-order provers.
In Revantha Ramanayake and Josef Urban, editors, Automated Reasoning with Analytic Tableaux and Related Methods: 32nd International Conference, TABLEAUX 2023, volume 14278 of LNCS (LNAI), pages
153-174. Springer, 2023.
Noting that lemmas are a key feature of mathematics, we engage in an investigation of the role of lemmas in automated theorem proving. The paper describes experiments with a combined system
involving learning technology that generates useful lemmas for automated theorem provers, demonstrating improvement for several representative systems and solving a hard problem not solved by any
system for twenty years. By focusing on condensed detachment problems we simplify the setting considerably, allowing us to get at the essence of lemmas and their role in proof search.
In Michael R. Douglas, Thomas C. Hales, Cezary Kaliszyk, Stephan Schulz, and Josef Urban, editors, 8th Conference on Artificial Intelligence and Theorem Proving, AITP 2023 (Informal Book of
Abstracts), 2023.
In Principles of Database Systems, PODS 23, pages 33-45, 2023.
Derived datasets can be defined implicitly or explicitly. An implicit definition (of dataset O in terms of datasets I) is a logical specification involving the source data I and the interface
data O. It is a valid definition of O in terms of I, if two models of the specification agreeing on I agree on O. In contrast, an explicit definition is a query that produces O from I. Variants
of Beth's theorem state that one can convert implicit definitions to explicit ones. Further, this conversion can be done effectively given a proof witnessing implicit definability in a suitable
proof system. We prove the analogous effective implicit-to-explicit result for nested relations: implicit definitions, given in the natural logic for nested relations, can be effectively
converted to explicit definitions in the nested relational calculus (NRC). As a consequence, we can effectively extract rewritings of NRC queries in terms of NRC views, given a proof witnessing
that the query is determined by the views.
Compressed Combinatory Proof Structures and Blending Goal- with Axiom-Driven Reasoning: Perspectives for First-Order ATP with Condensed Detachment and Clausal Tableaux
In Michael R. Douglas, Thomas C. Hales, Cezary Kaliszyk, Stephan Schulz, and Josef Urban, editors, 7th Conference on Artificial Intelligence and Theorem Proving, AITP 2022 (Informal Book of
Abstracts), 2022.
Generating Compressed Combinatory Proof Structures - An Approach to Automated First-Order Theorem Proving
In Boris Konev, Claudia Schon, and Alexander Steen, editors, 8th Workshop on Practical Aspects of Automated Reasoning, PAAR 2022, volume 3201 of CEUR Workshop Proceedings. CEUR-WS.org, 2022.
Representing a proof tree by a combinator term that reduces to the tree lets subtle forms of duplication within the tree materialize as duplicated subterms of the combinator term. In a DAG
representation of the combinator term these straightforwardly factor into shared subgraphs. To search for proofs, combinator terms can be enumerated, like clausal tableaux, interwoven with
unification of formulas that are associated with nodes of the enumerated structures. To restrict the search space, the enumeration can be based on proof schemas defined as parameterized
combinator terms. We introduce here this “combinator term as proof structure” approach to automated first-order proving, present an implementation and first experimental results. The approach
builds on a term view of proof structures rooted in condensed detachment and the connection method. It realizes features known from the connection structure calculus, which has not been
implemented so far.
Technical report, 2022.
CD Tools is a Prolog library for experimenting with condensed detachment in first-order ATP, which puts a recent formal view centered around proof structures into practice. From the viewpoint of
first-order ATP, condensed detachment offers a setting that is relatively simple but with essential features and serious applications, making it attractive as a basis for developing and
evaluating novel techniques. CD Tools includes specialized provers based on the enumeration of proof structures. We focus here on one of these, SGCD, which permits blending goal- and axiom-driven
proof search in particularly flexible ways. In purely goal-driven configurations it acts similarly to a prover of the clausal tableaux or connection method family. In blended configurations its
performance is much stronger, close to state-of-the-art provers, while emitting relatively short proofs. Experiments show characteristics and application possibilities of the structure generating
approach realized by that prover. For a historic problem often studied in ATP it produced a new proof that is much shorter than any known one.
Proceedings of the Second Workshop on Second-Order Quantifier Elimination and Related Topics (SOQE 2021)
Volume 3009 of CEUR Workshop Proceedings, Aachen, 2021. CEUR-WS.org.
In Renate A. Schmidt, Christoph Wernhard, and Yizheng Zhao, editors, Proceedings of the Second Workshop on Second-Order Quantifier Elimination and Related Topics (SOQE 2021), volume 3009 of CEUR Workshop
Proceedings, pages 98-111, 2021.
In recent years, Gödel's ontological proof and variations of it were formalized and analyzed with automated tools in various ways. We supplement these analyses with a modeling in an automated
environment based on first-order logic extended by predicate quantification. Formula macros are used to structure complex formulas and tasks. The analysis is presented as a generated typeset
document where informal explanations are interspersed with pretty-printed formulas and outputs of reasoners for first-order theorem proving and second-order quantifier elimination. Previously
unnoticed or obscured aspects and details of Gödel's proof become apparent. Practical application possibilities of second-order quantifier elimination are shown and the encountered elimination
tasks may serve as benchmarks.
In André Platzer and Geoff Sutcliffe, editors, Automated Deduction: CADE 2021, volume 12699 of LNCS (LNAI), pages 58-75. Springer, 2021.
The material presented in this paper contributes to establishing a basis deemed essential for substantial progress in Automated Deduction. It identifies and studies global features in selected
problems and their proofs which offer the potential of guiding proof search in a more direct way. The studied problems are of the widespread form of “axiom(s) and rule(s) imply goal(s)”. The
features include the well-known concept of lemmas. For their elaboration both human and automated proofs of selected theorems are taken into a close comparative consideration. The study at the
same time accounts for a coherent and comprehensive formal reconstruction of historical work by Łukasiewicz, Meredith and others. First experiments resulting from the study indicate novel ways of
lemma generation to supplement automated first-order provers of various families, strengthening in particular their ability to find short proofs.
Journal of Automated Reasoning, 65(5):647-690, 2021.
We develop foundations for computing Craig-Lyndon interpolants of two given formulas with first-order theorem provers that construct clausal tableaux. Provers that can be understood in this way
include efficient machine-oriented systems based on calculi of two families: goal-oriented such as model elimination and the connection method, and bottom-up such as the hypertableau calculus. We
present the first interpolation method for first-order proofs represented by closed tableaux that proceeds in two stages, similar to known interpolation methods for resolution proofs. The first
stage is an induction on the tableau structure, which is sufficient to compute propositional interpolants. We show that this can linearly simulate different prominent propositional interpolation
methods that operate by an induction on a resolution deduction tree. In the second stage, interpolant lifting, quantified variables that replace certain terms (constants and compound terms) are
introduced. We justify the correctness of interpolant lifting (for the case without built-in equality) abstractly on the basis of Herbrand's theorem and for a different
characterization of the formulas to be lifted than in the literature. In addition, we discuss various subtle aspects that are relevant for the investigation and practical realization of
first-order interpolation based on clausal tableaux.
Facets of the PIE Environment for Proving, Interpolating and Eliminating on the Basis of First-Order Logic
In Petra Hofstedt, Salvador Abreu, Ulrich John, Herbert Kuchen, and Dietmar Seipel, editors, Declarative Programming and Knowledge Management (DECLARE 2019), Revised Selected Papers, volume 12057 of
LNCS (LNAI), pages 160-177. Springer, 2020.
PIE is a Prolog-embedded environment for automated reasoning on the basis of first-order logic. Its main focus is on formulas, as constituents of complex formalizations that are structured
through formula macros, and as outputs of reasoning tasks such as second-order quantifier elimination and Craig interpolation. It supports a workflow based on documents that intersperse macro
definitions, invocations of reasoners, and LaTeX-formatted natural language text. Starting from various examples, the paper discusses features and application possibilities of PIE along with
current limitations and issues for future research.
KBSET - Knowledge-Based Support for Scholarly Editing and Text Processing with Declarative LaTeX Markup and a Core Written in SWI-Prolog
In Petra Hofstedt, Salvador Abreu, Ulrich John, Herbert Kuchen, and Dietmar Seipel, editors, Declarative Programming and Knowledge Management (DECLARE 2019), Revised Selected Papers, volume 12057 of
LNCS (LNAI), pages 178-196. Springer, 2020.
KBSET is an environment that provides support for scholarly editing in two flavors: First, as a practical tool KBSET/Letters that accompanies the development of editions of correspondences (in
particular from the 18th and 19th century), completely from source documents to PDF and HTML presentations. Second, as a prototypical tool KBSET/NER for experimentally investigating novel forms
of working on editions that are centered around automated named entity recognition. KBSET can process declarative application-specific markup that is expressed in LaTeX notation and incorporate
large external fact bases that are typically provided in RDF. KBSET includes specially developed LaTeX styles and a core system that is written in SWI-Prolog, which is used there in many roles,
utilizing that it realizes the potential of Prolog as a unifying language.
Von der Transkription zur Wissensbasis. Zum Zusammenspiel von digitalen Editionstechniken und Formen der Wissensrepräsentation am Beispiel von Korrespondenzen Johann Georg Sulzers
In Jana Kittelmann and Anne Purschwitz, editors, Aufklärungsforschung digital. Konzepte, Methoden, Perspektiven, volume 10/2019 of IZEA - Kleine Schriften, pages 84-114. Mitteldeutscher Verlag, 2019.
In Salvador Abreu, Petra Hofstedt, Ulrich John, Herbert Kuchen, and Dietmar Seipel, editors, Pre-proceedings of the DECLARE 2019 Conference, volume abs/1909.04870 of CoRR, 2019.
PIE is a Prolog-embedded environment for automated reasoning on the basis of first-order logic. It includes a versatile formula macro system and supports the creation of documents that
intersperse macro definitions, reasoner invocations and LaTeX-formatted natural language text. Invocation of various reasoners is supported: External provers as well as sub-systems of PIE, which
include preprocessors, a Prolog-based first-order prover, methods for Craig interpolation and methods for second-order quantifier elimination.
In Salvador Abreu, Petra Hofstedt, Ulrich John, Herbert Kuchen, and Dietmar Seipel, editors, Pre-proceedings of the DECLARE 2019 Conference, volume abs/1909.04870 of CoRR, 2019.
KBSET supports a practical workflow for scholarly editing, based on using LaTeX with dedicated commands for semantics-oriented markup and a Prolog-implemented core system. Prolog plays there
various roles: as query language and access mechanism for large Semantic Web fact bases, as data representation of structured documents and as a workflow model for advanced application tasks. The
core system includes a LaTeX parser and a facility for the identification of named entities. We also sketch future perspectives of this approach to scholarly editing based on techniques of
computational logic.
Technical Report Knowledge Representation and Reasoning 18-01, Technische Universität Dresden, 2018.
We develop foundations for computing Craig interpolants and similar intermediates of two given formulas with first-order theorem provers that construct clausal tableaux. Provers that can be
understood in this way include efficient machine-oriented systems based on calculi of two families: goal-oriented like model elimination and the connection method, and bottom-up like the hyper
tableau calculus. The presented method for Craig-Lyndon interpolation involves a lifting step where terms are replaced by quantified variables, similar as known for resolution-based
interpolation, but applied to a differently characterized ground formula and proven correct more abstractly on the basis of Herbrand's theorem, independently of a particular calculus. Access
interpolation is a recent form of interpolation for database query reformulation that applies to first-order formulas with relativized quantifiers and constrains the quantification patterns of
predicate occurrences. It has been previously investigated in the framework of Smullyan's non-clausal tableaux. Here, in essence, we simulate these with the more machine-oriented clausal tableaux
through structural constraints that can be ensured either directly by bottom-up tableau construction methods or, for closed clausal tableaux constructed with arbitrary calculi, by postprocessing
with restructuring transformations.
Volume 2013 of CEUR Workshop Proceedings. CEUR-WS.org, 2017.
Approximating Resultants of Existential Second-Order Quantifier Elimination upon Universal Relational First-Order Formulas
In Patrick Koopmann, Sebastian Rudolph, Renate A. Schmidt, and Christoph Wernhard, editors, Proceedings of the Workshop on Second-Order Quantifier Elimination and Related Topics, SOQE 2017, volume
2013 of CEUR Workshop Proceedings, pages 82-98. CEUR-WS.org, 2017.
We investigate second-order quantifier elimination for a class of formulas characterized by a restriction on the quantifier prefix: existential predicate quantifiers followed by universal
individual quantifiers and a relational matrix. For a given second-order formula of this class a possibly infinite sequence of universal first-order formulas that have increasing strength and are
all entailed by the second-order formula can be constructed. Any first-order consequence of the second-order formula is a consequence of some member of the sequence. The sequence provides a
recursive base for the first-order theory of the second-order formula, in the sense investigated by Craig. The restricted formula class allows deriving further properties, for example that the
set of those members of the sequence that are equivalent to the second-order formula, or, more generally, have the same first-order consequences, is co-recursively enumerable. Also the set of
first-order formulas that entails the second-order formula is co-recursively enumerable. These properties are proven with formula-based tools used in automated deduction, such as domain closure
axioms, eliminating individual quantifiers by ground expansion, predicate quantifier elimination with Ackermann's Lemma, Craig interpolation and decidability of the Bernays-Schönfinkel-Ramsey class.
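The Ackermann's Lemma referred to above can be stated, in one common formulation (sign conventions vary across the literature; this is a sketch of the positive case):

```latex
% If the predicate P occurs only positively in F and the formula A
% does not contain P, then
\exists P \,\bigl(\forall \bar{x}\,(P(\bar{x}) \rightarrow A(\bar{x}))
  \,\wedge\, F[P]\bigr) \;\equiv\; F[P := A]
% A dual version, with A(\bar{x}) \rightarrow P(\bar{x}) as the premise,
% covers the case where P occurs only negatively in F.
```

Intuitively, the constraint P → A bounds P from above by A, and since F is monotone in a positively occurring P, the maximal candidate P = A suffices.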
Technical Report Knowledge Representation and Reasoning 17-01, Technische Universität Dresden, 2017.
Finding solution values for unknowns in Boolean equations was a principal reasoning mode in the Algebra of Logic of the 19th century. Schröder investigated it as Auflösungsproblem (solution
problem). It is closely related to the modern notion of Boolean unification. Today it is commonly presented in an algebraic setting, but seems potentially useful also in knowledge representation
based on predicate logic. We show that it can be modeled on the basis of first-order logic extended by second-order quantification. A wealth of classical results transfers, foundations for
algorithms unfold, and connections with second-order quantifier elimination and Craig interpolation show up. Although for first-order inputs the set of solutions is recursively enumerable, the
development of constructive methods remains a challenge. We identify some cases that allow constructions, most of them based on Craig interpolation, and show a method to take vocabulary
restrictions on solution components into account.
In Clare Dixon and Marcelo Finger, editors, 11th International Symposium on Frontiers of Combining Systems, FroCoS 2017, volume 10483 of LNCS (LNAI), pages 333-350. Springer, 2017.
Finding solution values for unknowns in Boolean equations was a principal reasoning mode in the Algebra of Logic of the 19th century. Schröder investigated it as Auflösungsproblem (solution
problem). It is closely related to the modern notion of Boolean unification. Today it is commonly presented in an algebraic setting, but seems potentially useful also in knowledge representation
based on predicate logic. We show that it can be modeled on the basis of first-order logic extended by second-order quantification. A wealth of classical results transfers, foundations for
algorithms unfold, and connections with second-order quantifier elimination and Craig interpolation show up.
Poster presentation at Automated Reasoning with Analytic Tableaux and Related Methods: 26th International Conference, TABLEAUX 2017, 2017.
In Pascal Fontaine, Stephan Schulz, and Josef Urban, editors, 5th Workshop on Practical Aspects of Automated Reasoning, PAAR 2016, volume 1635 of CEUR Workshop Proceedings, pages 125-138.
CEUR-WS.org, 2016.
The PIE system aims at providing an environment for creating complex applications of automated first-order theorem proving techniques. It is embedded in Prolog. Beyond actual proving tasks, also
interpolation and second-order quantifier elimination are supported. A macro feature and a LaTeX formula pretty-printer facilitate the construction of elaborate formalizations from small,
understandable and documented units. For use with interpolation and elimination, preprocessing operations allow preserving the semantics of chosen predicates. The system comes with a built-in
default prover that can compute interpolants.
In Thomas C. Hales, Cezary Kaliszyk, Stephan Schulz, and Josef Urban, editors, 1st Conference on Artificial Intelligence and Theorem Proving, AITP 2016 (Book of Abstracts), pages 29-31, 2016.
We investigate possibilities to utilize techniques of computational logic for scholarly editing. Semantic Web technology already makes relevant large knowledge bases available in form of logic
formulas. There are several further roles of logic-based reasoning in machine supported scholarly editing. KBSET, a free prototype system, provides a platform for experiments.
In DHd 2016 - Digital Humanities im deutschsprachigen Raum: Modellierung - Vernetzung - Visualisierung. Die Digital Humanities als fächerübergreifendes Forschungsparadigma. Konferenzabstracts, pages
178-181, Duisburg, 2016. nisaba verlag.
Heinrich Behmann's Contributions to Second-Order Quantifier Elimination from the View of Computational Logic
Technical Report Knowledge Representation and Reasoning 15-05, Technische Universität Dresden, 2015.
Revised 2017.
For relational monadic formulas (the Löwenheim class) second-order quantifier elimination, which is closely related to computation of uniform interpolants, projection and forgetting - operations
that currently receive much attention in knowledge processing - always succeeds. The decidability proof for this class by Heinrich Behmann from 1922 explicitly proceeds by elimination with
equivalence preserving formula rewriting. Here we reconstruct the results from Behmann's publication in detail and discuss related issues that are relevant in the context of modern approaches to
second-order quantifier elimination in computational logic. In addition, an extensive documentation of the letters and manuscripts in Behmann's bequest that concern second-order quantifier
elimination is given, including a commented register and English abstracts of the German sources with focus on technical material. In the late 1920s Behmann attempted to develop an
elimination-based decision method for formulas with predicates whose arity is larger than one. His manuscripts and the correspondence with Wilhelm Ackermann show technical aspects that are still
of interest today and give insight into the genesis of Ackermann's landmark paper “Untersuchungen über das Eliminationsproblem der mathematischen Logik” from 1935, which laid the foundation of
the two prevailing modern approaches to second-order quantifier elimination.
Some Fragments Towards Establishing Completeness Properties of Second-Order Quantifier Elimination Methods
Poster presentation at Jahrestreffen der GI Fachgruppe Deduktionssysteme, associated with CADE-25, 2015.
Second-Order Quantifier Elimination on Relational Monadic Formulas - A Basic Method and Some Less Expected Applications
In Hans de Nivelle, editor, Automated Reasoning with Analytic Tableaux and Related Methods: 24th International Conference, TABLEAUX 2015, volume 9323 of LNCS (LNAI), pages 249-265. Springer, 2015.
For relational monadic formulas (the Löwenheim class) second-order quantifier elimination, which is closely related to computation of uniform interpolants, forgetting and projection, always
succeeds. The decidability proof for this class by Behmann from 1922 explicitly proceeds by elimination with equivalence preserving formula rewriting. We reconstruct Behmann's method, relate it
to the modern DLS elimination algorithm and show some applications where the essential monadicity becomes apparent only at second sight. In particular, deciding ALCOQH knowledge bases,
elimination in DL-Lite knowledge bases, and the justification of the success of elimination methods for Sahlqvist formulas.
Second-Order Quantifier Elimination on Relational Monadic Formulas - A Basic Method and Some Less Expected Applications (Extended Version)
Technical Report Knowledge Representation and Reasoning 15-04, Technische Universität Dresden, 2015.
For relational monadic formulas (the Löwenheim class) second-order quantifier elimination, which is closely related to computation of uniform interpolants, forgetting and projection, always
succeeds. The decidability proof for this class by Behmann from 1922 explicitly proceeds by elimination with equivalence preserving formula rewriting. We reconstruct Behmann's method, relate it
to the modern DLS elimination algorithm and show some applications where the essential monadicity becomes apparent only at second sight. In particular, deciding ALCOQH knowledge bases,
elimination in DL-Lite knowledge bases, and the justification of the success of elimination methods for Sahlqvist formulas.
In Alexander Bolotov and Manfred Kerber, editors, Joint Automated Reasoning Workshop and Deduktionstreffen, ARW-DT 2014, pages 36-37, 2014.
Workshop presentation.
Technical Report Knowledge Representation and Reasoning 14-03, Technische Universität Dresden, 2014.
Predicate quantification can be applied to characterize definientia of a given formula that are in terms of a given set of predicates. Methods for second-order quantifier elimination and the
closely related computation of forgetting, projection and uniform interpolants can then be applied to compute such definientia. Here we address the question, whether this principle can be
transferred to definientia in given classes that allow efficient processing, such as Horn or Krom formulas. Indeed, if propositional logic is taken as basis, for the class of all formulas that
are equivalent to a conjunction of atoms and the class of all formulas that are equivalent to a Krom formula, the existence of definientia as well as representative definientia themselves can be
characterized in terms of predicate quantification. For the class of formulas that are equivalent to a Horn formula, this is possible with a special further operator. For first-order logic as
basis, we indicate guidelines and open issues.
Semantik, Linked Data, Web-Präsentation: Grundlagen der Nachlasserschließung im Portal www.pueckler-digital.de
In Anne Baillot and Anna Busch, editors, Workshop Datenmodellierung in digitalen Briefeditionen und ihre interpretatorische Leistung. Humboldt-Universität zu Berlin, 2014.
Technical Report Knowledge Representation and Reasoning 14-02, Technische Universität Dresden, 2014.
Journal of Applied Non-Classical Logics, 24(1-2):61-85, 2014.
In Hans Tompits, Salvador Abreu, Johannes Oetsch, Jörg Pührer, Dietmar Seipel, Masanobu Umeda, and Armin Wolf, editors, Applications of Declarative Programming and Knowledge Management, 19th
International Conference (INAP 2011) and 25th Workshop on Logic Programming (WLP 2011), Revised Selected Papers, volume 7773 of LNCS (LNAI), pages 289-296. Springer, 2013.
A prototype system is described whose core functionality is, based on propositional logic, the elimination of second-order operators, such as Boolean quantifiers and operators for projection,
forgetting and circumscription. This approach allows to express many representational and computational tasks in knowledge representation - for example computation of abductive explanations and
models with respect to logic programming semantics - in a uniform operational system, backed by a uniform classical semantic framework.
Semantik, Web, Metadaten und digitale Edition: Grundlagen und Ziele der Erschließung neuer Quellen des Branitzer Pückler-Archivs
In Irene Krebs et al., editors, Resonanzen. Pücklerforschung im Spannungsfeld zwischen Wissenschaft und Kunst. Ein Konferenzbericht., pages 179-202. trafo Verlag, Berlin, 2013.
In Fachtagung Semantische Technologien - Verwertungsstrategien und Konvergenz, Humboldt-Universität zu Berlin, Position Papers, 2013.
In Pascal Fontaine, Christophe Ringeissen, and Renate A. Schmidt, editors, 9th International Symposium on Frontiers of Combining Systems, FroCoS 2013, volume 8152 of LNCS (LNAI), pages 103-119.
Springer, 2013.
It is known that skeptical abductive explanations with respect to classical logic can be characterized semantically in a natural way as formulas with second-order quantifiers. Computing
explanations is then just elimination of the second-order quantifiers. By using application patterns and generalizations of second-order quantification, like literal projection, the globally
weakest sufficient condition and circumscription, we transfer these principles in a unifying framework to abduction with three non-classical semantics of logic programming: stable model, partial
stable model and well-founded semantics. New insights are revealed about abduction with the partial stable model semantics.
In Matti Järvisalo and Allen Van Gelder, editors, Theory and Applications of Satisfiability Testing, 16th International Conference, SAT 2013, volume 7962 of LNCS, pages 22-39. Springer, 2013.
Received the SAT 2013 Best Paper Award.
We present a formalism that models the computation of clause sharing portfolio solvers with inprocessing. The soundness of these solvers is not a straightforward property since shared clauses can
make a formula unsatisfiable. Therefore, we develop characterizations of simplification techniques and suggest various settings how clause sharing and inprocessing can be combined. Our
formalization models most of the recent implemented portfolio systems and we indicate possibilities to improve these. A particular improvement is a novel way to combine clause addition techniques
- like blocked clause addition - with clause deletion techniques - like blocked clause elimination or variable elimination.
In Thomas Barkowsky, Marco Ragni, and Frieder Stolzenburg, editors, Human Reasoning and Automated Deduction: KI 2012 Workshop Proceedings, volume SFB/TR 8 Report 032-09/2012 of Report Series of the
Transregional Collaborative Research Center SFB/TR 8 Spatial Cognition, pages 41-48. Universität Bremen / Universität Freiburg, Germany, 2012.
Stenning and van Lambalgen introduced an approach to model empirically studied human reasoning with nonmonotonic logics. Some of the research questions that have been brought up in this context
concern the interplay of the open- and closed-world assumption, the suitability of particular logic programming semantics for the modeling of human reasoning, and the role of three-valued logic
programming semantics and three-valued logics. We look into these questions from the view of a framework where logic programs that model human reasoning are represented declaratively and are
mechanizable by classical formulas extended with certain second-order operators.
Journal of Symbolic Computation, 47:1089-1108, 2012.
We develop a semantic framework that extends first-order logic by literal projection and a novel second semantically defined operator, “raising”, which is only slightly different from literal
projection and can be used to define a generalization of parallel circumscription with varied predicates in a straightforward and compact way. We call this variant of circumscription
“scope-determined”, since like literal projection and raising its effects are controlled by a so-called “scope”, that is, a set of literals, as parameter. We work out formally a toolkit of
propositions about projection, raising and circumscription and their interaction. It reveals some refinements of and new views on previously known properties. In particular, we apply it to show
that well-foundedness with respect to circumscription can be expressed in terms of projection, and that a characterization of the consequences of circumscribed propositional formulas in terms of
literal projection can be generalized to first-order logic and expressed compactly in terms of new variants of the strongest necessary and weakest sufficient condition.
Forward Human Reasoning Modeled by Logic Programming Modeled by Classical Logic with Circumscription and Projection
Technical Report Knowledge Representation and Reasoning 11-07, Technische Universität Dresden, 2011.
Recently an approach to model human reasoning as studied in cognitive science by logic programming, has been introduced by Stenning and van Lambalgen and exemplified with the suppression task. We
investigate this approach from the view of a framework where different logic programming semantics correspond to different translations of logic programs into formulas of classical two-valued
logic extended by two second-order operators, circumscription and literal projection. Based on combining and extending previously known such renderings of logic programming semantics, we take
semantics into account that have not yet been considered in the context of human reasoning, such as stable models and partial stable models. To model human reasoning, it is essential that only
some predicates can be subjected to closed world reasoning, while others are handled by open world reasoning. In our framework, variants of familiar logic programing semantics that are extended
with this feature are derived from a generic circumscription based representation. Further, we develop a two-valued representation of a three-valued logic that renders semantics considered for
human reasoning based on the Fitting operator.
In Proceedings of the 25th Workshop on Logic Programming, WLP 2011, Infsys Research Report 1843-11-06, pages 94-98. Technische Universität Wien, 2011.
A prototype system is described whose core functionality is, based on propositional logic, the elimination of second-order operators, such as Boolean quantifiers and operators for projection,
forgetting and circumscription. This approach allows to express many representational and computational tasks in knowledge representation - for example computation of abductive explanations and
models with respect to logic programming semantics - in a uniform operational system, backed by a uniform classical semantic framework.
In Logical Formalizations of Commonsense Reasoning, Papers from the AAAI 2011 Spring Symposium, AAAI Spring Symposium Series Technical Reports, pages 135-138. AAAI Press, 2011.
In this paper we contribute to bridging the gap between human reasoning as studied in Cognitive Science and commonsense reasoning based on formal logics and formal theories. In particular, the
suppression task studied in Cognitive Science provides an interesting challenge problem for human reasoning based on logic. The work presented in the paper is founded on the recent approach by
Stenning and van Lambalgen to model human reasoning by means of logic programs with a specific three-valued completion semantics and a semantic fixpoint operator that yields a least model, as
well as abduction. Their approach has been subsequently made more precise and technically accurate by switching to three-valued Łukasiewicz logic. In this paper, we extend this refined approach
by abduction. We show that the inclusion of abduction permits to adequately model additional empiric results reported from Cognitive Science. For the arising abductive reasoning tasks we give
complexity results. Finally, we outline several open research issues that emerge from the application of logic to model human reasoning.
In M. Hermenegildo and T. Schaub, editors, Technical Communications of the 26th International Conference on Logic Programming, ICLP'10, volume 7 of Leibniz International Proceedings in Informatics
(LIPIcs), pages 202-211, Dagstuhl, Germany, 2010. Schloss Dagstuhl-Leibniz-Zentrum für Informatik.
We pursue a representation of logic programs as classical first-order sentences. Different semantics for logic programs can then be expressed by the way in which they are wrapped into -
semantically defined - operators for circumscription and projection. (Projection is a generalization of second-order quantification.) We demonstrate this for the stable model semantics, Clark's
completion and a three-valued semantics based on the Fitting operator. To represent the latter, we utilize the polarity sensitiveness of projection, in contrast to second-order quantification,
and a variant of circumscription that allows to express predicate minimization in parallel with maximization. In accord with the aim of an integrated view on different logic-based representation
techniques, the material is worked out on the basis of first-order logic with a Herbrand semantics.
In Nicolas Peltier and Viorica Sofronie-Stokkermans, editors, Proceedings of the 7th International Workshop on First-Order Theorem Proving, FTP'09, volume 556 of CEUR Workshop Proceedings, pages
60-74. CEUR-WS.org, 2010.
We develop a formal framework intended as a preliminary step for a single knowledge representation system that provides different representation techniques in a unified way. In particular we
consider first-order logic extended by techniques for second-order quantifier elimination and non-monotonic reasoning. In this paper two independent results are developed. The background for the
first result is literal projection, a generalization of second-order quantification which permits, so to speak, to quantify upon an arbitrary set of ground literals, instead of just (all ground
literals with) a given predicate symbol. We introduce an operator raise that is only slightly different from literal projection and can be used to define a generalization of predicate
circumscription in a straightforward and compact way. We call this variant of circumscription scope-determined. Some properties of raise and scope-determined circumscription, also in combination
with literal projection, are then shown. A previously known characterization of consequences of circumscribed formulas in terms of literal projection is generalized from propositional to
first-order logic and proven on the basis of the introduced concepts. The second result developed in this paper is a characterization of stable models in terms of circumscription. Unlike traditional
characterizations, it does not recur onto syntactic notions like reduct and fixed-point construction. It essentially renders a recently proposed “circumscription-like” characterization in a
compact way, without involvement of a non-classically interpreted connective.
In Martin Giese and Arild Waaler, editors, Automated Reasoning with Analytic Tableaux and Related Methods: 18th International Conference, TABLEAUX 2009, volume 5607 of LNCS (LNAI), pages 325-340.
Springer, 2009.
Projection computation is a generalization of second-order quantifier elimination, which in turn is closely related to the computation of forgetting and of uniform interpolants. On the basis of a
unified view on projection computation and knowledge compilation, we develop a framework for applying tableau methods to these tasks. It takes refinements from performance oriented systems into
account. Formula simplifications are incorporated at the level of tableau structure modification, and at the level of simplifying encountered subformulas that are not yet fully compiled. In
particular, such simplifications can involve projection computation, where this is possible with low cost. We represent tableau construction by means of rewrite rules on formulas, extended with
some auxiliary functors, which is particularly convenient for formula transformation tasks. As instantiations of the framework, we discuss approaches to propositional knowledge compilation from
the literature, including adaptions of DPLL, and the hyper tableau calculus for first-order clauses.
Number 324 in Dissertations in Artificial Intelligence. AKA Verlag/IOS Press, Heidelberg, Amsterdam, 2009.
Projection is a logic operation which allows to express tasks in knowledge representation. These tasks involve extraction or removal of knowledge concerning a given sub-vocabulary. It is a
generalization of second-order quantification, permitting, so to speak, to `quantify' upon an arbitrary set of ground literals instead of just (all ground literals with) a given predicate symbol.
In Automated Deduction for Projection Elimination, a semantic characterization of projection for first-order logic is presented. On this basis, properties underlying applications and processing
methods are derived. The computational processing of projection, called projection elimination in analogy to quantifier elimination, can be performed by adapted theorem proving methods. This is
shown for resolvent generation and, more in depth, tableau construction. An abstract framework relates projection elimination with knowledge compilation and shows the adaption of key features of
high performance tableau systems. As a prototypical instance, an adaption of a modern DPLL method, such as underlying state-of-the-art SAT solvers, is worked out. It generalizes various recent
knowledge compilation methods and utilizes the interplay with projection elimination for efficiency improvements.
In Daniel Le Berre et al., editor, SAT 2009 Competitive Event Booklet, pages 15-16, 2009.
In Steffen Hölldobler, Carsten Lutz, and Heinrich Wansing, editors, Logics in Artificial Intelligence: 11th European Conference, JELIA 08, volume 5293 of LNCS (LNAI), pages 389-402. Springer, 2008.
The computation of literal projection generalizes predicate quantifier elimination by permitting, so to speak, quantifying upon an arbitrary set of ground literals, instead of just (all ground
literals with) a given predicate symbol. Literal projection allows, for example, to express predicate quantification upon a predicate just in positive or negative polarity. Occurrences of the
predicate in literals with the complementary polarity are then considered as unquantified predicate symbols. We present a formalization of literal projection and related concepts, such as literal
forgetting, for first-order logic with a Herbrand semantics, which makes these notions easy to access, since they are expressed there by means of straightforward relationships between sets of
literals. With this formalization, we show properties of literal projection which hold for formulas that are free of certain links, pairs of literals with complementary instances, each in a
different conjunct of a conjunction, both in the scope of a universal first-order quantifier, or one in a subformula and the other in its context formula. These properties can justify the
application of methods that construct formulas without such links to the computation of literal projection. Some tableau methods and direct methods for second-order quantifier elimination can be
understood in this way.
In Frank Pfenning, editor, Automated Deduction: CADE-21, volume 4603 of LNCS (LNAI), pages 503-513. Springer, 2007.
The E-KRHyper system is a model generator and theorem prover for first-order logic with equality. It implements the new E-hyper tableau calculus, which integrates a superposition-based handling
of equality into the hyper tableau calculus. E-KRHyper extends our previous KRHyper system, which has been used in a number of applications in the field of knowledge representation. In contrast
to most first order theorem provers, it supports features important for such applications, for example queries with predicate extensions as answers, handling of large sets of uniformly structured
input facts, arithmetic evaluation and stratified negation as failure. It is our goal to extend the range of application possibilities of KRHyper by adding equality reasoning.
Technical Report Arbeitsberichte aus dem Fachbereich Informatik 18/2007, Universität Koblenz-Landau, Institut für Informatik, Universitätsstr. 1, 56070 Koblenz, Germany, 2007.
Generalized methods for automated theorem proving can be used to compute formula transformations such as projection elimination and knowledge compilation. We present a framework based on clausal
tableaux suited for such tasks. These tableaux are characterized independently of particular construction methods, but important features of empirically successful methods are taken into account,
especially dependency directed backjumping and branch local operation. As an instance of that framework an adaption of DPLL is described. We show that knowledge compilation methods can be
essentially improved by weaving projection elimination partially into the compilation phase.
In José Júlio Alferes and João Leite, editors, Logics in Artificial Intelligence: 9th European Conference, JELIA 04, volume 3229 of LNCS (LNAI), pages 552-564. Springer, 2004.
Some operations to decompose a knowledge base (considered as a first order logic formula) in ways so that only its semantics determines the results are investigated. Intended uses include the
extraction of “parts” relevant to an application, the exploration and utilizing of implicit possibilities of structuring a knowledge base and the formulation of query answers in terms of a
signature demanded by an application. A semantic framework based on Herbrand interpretations is outlined. The notion of “model relative to a scope” is introduced. It underlies the partitioning
operations “projection” and “forgetting” and also provides a semantic account for certain formula simplification operations. An algorithmic approach which is based on resolution and may be
regarded as a variation of the SCAN algorithm is discussed.
In Ulrike Sattler, editor, Contributions to the Doctoral Programme of the Second International Joint Conference on Automated Reasoning, IJCAR 2004, volume 106 of CEUR Workshop Proceedings.
CEUR-WS.org, 2004.
In Proceedings of the CADE-19 workshop Challenges and Novel Applications for Automated Reasoning, pages 55-72, 2003.
Three real world applications are depicted which all have a full first order theorem prover based on the hyper tableau calculus as their core component. These applications concern information
retrieval in electronic publishing, the integration of description logics with other knowledge representation techniques and XML query processing.
Technical Report Fachberichte Informatik 14-2003, Universität Koblenz-Landau, Institut für Informatik, Universitätsstr. 1, 56070 Koblenz, Germany, 2003.
Presented at the CADE-19 workshop Model Computation: Principles, Algorithms, Applications.
KRHyper is a first order logic theorem proving and model generation system based on the hyper tableau calculus. It is targeted for use as an embedded system within knowledge based applications.
In contrast to most first order theorem provers, it supports features important for those applications, for example queries with predicate extensions as answers, handling of large sets of
uniformly structured input facts, arithmetic evaluation and stratified negation as failure.
In Proceedings of the CADE-15 Workshop on Integration of Deductive Systems, pages 36-43, 1998.
We describe a concept for the cooperation of a computer algebra system, several automated theorem provers and a mathematical library. The purpose of this cooperation is to enable intelligent
retrieval of theorems and definitions from the remote mathematical library. Automated theorem provers compete on remote machines to verify conjectures provided by the user in a local copy of
Mathematica. They make use of a remote knowledge base which contains parts of the Mizar Mathematical Library.
In Maria Paola Bonacina and Ulrich Furbach, editors, International Workshop on First-Order Theorem Proving, FTP'97, RISC-Linz Report Series No. 97-50, pages 58-62. Johannes Kepler Universität, Linz, 1997.
Over the years, interactive theorem provers have built a large body of verified computer mathematics. The ILF Mathematical Library aims to make this knowledge available to other systems. One of
the reasons for such a project is economy. Verification of software and hardware frequently requires the proof of purely mathematical theorems. It is obviously inefficient, to use the time of
experts in the design of software or hardware systems to prove such theorems. Another reason for presenting a collection of mathematical theorems in a unified framework is safety. It should
facilitate the verification of theorems in the library of one system by other systems. A third reason is dynamics of research. New interactive theorem provers should obtain the possibility to
show their usability for real-world problems without having to reprove elementary mathematical facts. Last but not least, it is hoped that reproving theorems in a uniform mathematical library
will be considered as a challenge to the development of automated theorem provers.
In W. Remmele, editor, Künstliche Intelligenz in der Praxis. Siemens AG, 1990.
The idea of InfraEngine is to help establish a Semantic Web infrastructure by a specially adapted AI planning engine. Inputs are distributed Web documents in Semantic Web formats proposed by
the World Wide Web Consortium. Outputs are also delivered in such formats. The user interface allows to browse Semantic Web data and to control the planning services from any Web browser. Also
other programs can make use of these services. A small and understandable, but general, mechanism that can be used for different kinds of applications should be provided.
Working paper for Persist AG, Teltow, Germany, 2001.
First steps towards an RDF format for exchanging proofs in the Semantic Web.
Working paper for Persist AG, Teltow, Germany, 2001.
We outline two application scenarios which might be in the realm of short term applications of the Semantic Web: a software packaging system and the organization of a business trip. Both of them
can be solved with today's technology to some degree, so they do not show the novel potential of the Semantic Web in full. However considering Semantic Web solutions for them is useful to get a
picture of the characteristics of the Semantic Web and become aware of some concrete technical issues.
Working paper for Persist AG, Teltow, Germany, 2000.
We propose resource oriented inference as one way of bringing the Semantic Web into action. It provides a framework for expressing and processing a variety of tasks from areas such as planning,
scheduling, manufacturing resource planning, product data management, configuration management, workflow management and simulation. Resource oriented inference as a part of the Semantic Web
should allow such tasks to be performed within the scope of the World Wide Web. A prototypical application is the purchase of a complex product with the help of the Web. The product consists
of multiple parts, some of them complex products by themselves. Several services are required to compose the product. Subparts and services can be provided by different companies all over the
world. Delivery time and cost of the product should be optimized.
Working paper for Persist AG, Teltow, Germany. Presented at KnowTech 2000, Leipzig, 2000.
Outline of a small Web embedded language for specifying types and reasoning about them. Its main features are: (1.) From primitive types, that require from their instances just that they
implement a single method, complex types can be constructed by a few operators, such as type intersection. This composition of type definitions maps to the composition of fragments of type
definitions in different Web documents. (2.) Types are compared by structural equivalence and not based on their names. This facilitates combination of independently developed models and implicit
association of types with given instances. (3.) The hyperlinking of expressions of the modeling language can be understood in terms of defining equations of a rewriting system. The left side of
such a rewrite rule is a URI, the right side an expression. Hyperlink dereferencing corresponds to a rewrite step with such a rule.
(Edited May 2003), 1999.
The performance of an implementation of the linear backward chaining planning algorithm is compared to other planning systems by means of the problem set of the first AI planning systems
competition (AIPS-98).
Magisterarbeit, Freie Universität Berlin, Berlin, Germany, 1992.
Scale Conversion Calculator & Scale Factor Calculator (2024)
Scale a measurement to a larger or smaller measurement, which is useful for architecture, modeling, and other projects. You can also add the real size and scaled size to find the scale factor.
On this page:
• Scale Conversion Calculator
• How to Scale a Measurement
• How to Find the Scale Factor
• How to Reduce the Scale Factor
• Architectural Scales
• Engineering Scales
• Common Model Scales
• Frequently Asked Questions
Joe Sexton
Joe is the creator of Inch Calculator and has over 20 years of experience in engineering and construction. He holds several degrees and certifications.
Reviewed by
Ethan Dederick, PhD
Ethan has a PhD in astrophysics and is currently a satellite imaging scientist. He specializes in math, science, and astrophysics.
Cite As:
Sexton, J. (n.d.). Scale Conversion Calculator & Scale Factor Calculator. Inch Calculator. Retrieved October 24, 2024, from https://www.inchcalculator.com/scale-calculator/
How to Scale a Measurement
Making a measurement smaller or larger, known as scale conversion, requires a common scale factor, which you can use to multiply or divide all measurements.
To scale a measurement down to a smaller value, for instance, when making a blueprint, simply divide the real measurement by the scale factor. The scale factor is commonly expressed as 1:n or 1/n,
where n is the factor.
For example, if the scale factor is 1:8 and the real measurement is 32, divide 32 ÷ 8 = 4 to convert.
To convert a smaller, scaled measurement up to the actual measurement, simply multiply the scaled measurement by the scale factor. For example, if the scale factor is 1:8 and the scaled length is 4,
multiply 4 × 8 = 32 to convert it to the larger actual size.
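The two directions of the conversion can be sketched as a pair of small helper functions (the function names are ours, chosen for illustration):

```python
def to_scaled(real_size, factor):
    """Divide a real measurement by the scale factor (1:factor) to scale it down."""
    return real_size / factor

def to_real(scaled_size, factor):
    """Multiply a scaled measurement by the scale factor (1:factor) to recover the real size."""
    return scaled_size * factor

# The examples above, using a 1:8 scale:
print(to_scaled(32, 8))  # 4.0
print(to_real(4, 8))     # 32
```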
A common tool used to scale a measurement from a real-world measurement is an architect’s scale rule. These are often used for scaling drawings and blueprints for buildings.
There are also engineering rules that are primarily used in civil engineering to scale measurements for roadways and land development. For example, when creating detail drawings, a scale of 1:10 is
often used, and when creating working plans, scales of 1:20 or 1:40 are the preferred choices.
How to Find the Scale Factor
A scale factor is a ratio of two corresponding measurements or lengths. You can use the factor to increase or decrease the dimensions of a geometric shape, drawing, or model to different sizes. You
can find the scale factor in a few easy steps.
Step One: Use the Scale Factor Formula
Since the scale factor is a ratio, the first step to finding it is to use the following formula:
scale factor = scaled size / real size
So, the scale factor is a ratio of the scaled size to the real size.
Step Two: Simplify the Fraction
The next step is to reduce or simplify the fraction.
If you’re scaling down, that is, if the scaled size is smaller than the actual size, then the ratio should be shown with a numerator of 1. If you’re scaling up, that is, if the scaled size is larger
than the actual size, then the ratio should be shown with a denominator of 1.
To find the final scale factor when you’re scaling up, reduce the ratio to a fraction with a denominator 1. To do this, divide both the numerator and the denominator by the denominator.
Note: by doing this, the numerator may become a decimal. This may or may not be desired, depending on your use case. If it’s not desired, then simply reduce the fraction like you would normally.
If you’re scaling down, then reduce the fraction so that the numerator is 1. You can do this by dividing both the numerator and the denominator by the numerator. Again, this may not result in whole
numbers, so adjust accordingly.
Our fraction simplifier can help with this step if needed.
Step Three: Rewrite the Fraction as a Ratio
Finally, rewrite the fraction as a ratio by replacing the fraction bar with a colon. For instance, a scale factor of 1/10 can be rewritten as 1:10.
For example, let’s find the scale factor used on an architectural drawing where ½” on the drawing represents 12″ on the final building.
Begin by replacing the values in the formula above.
scale factor = ½” / 12″
Since the drawing is scaled down, then the scale factor should be reduced to a fraction with a numerator of 1.
Multiply both the numerator and denominator by 2 to simplify.
scale factor = ½” × 2 / 12″ × 2 = 1 / 24
And finally, rewrite the fraction as a ratio.
scale factor = 1 / 24 = 1:24
Thus the scale factor for this drawing is 1:24.
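The three steps above can be sketched as a small helper. The function name and the 1:n / n:1 string output are my own choices; exact arithmetic with `fractions.Fraction` avoids floating-point surprises:

```python
from fractions import Fraction

def scale_factor(scaled_size, real_size):
    """Return a scale factor string: '1:n' when scaling down,
    'n:1' when scaling up (n may be a decimal)."""
    # Step one: the scale factor is the ratio of scaled size to real size.
    ratio = Fraction(scaled_size) / Fraction(real_size)
    # Step two: reduce so that one side of the ratio equals 1.
    n = 1 / ratio if ratio <= 1 else ratio
    n_str = str(n.numerator) if n.denominator == 1 else str(float(n))
    # Step three: write the fraction as a ratio with a colon.
    return f"1:{n_str}" if ratio <= 1 else f"{n_str}:1"

print(scale_factor(Fraction(1, 2), 12))  # 1:24  (the drawing example above)
print(scale_factor(2, 3))                # 1:1.5 (the 2:3 reduction below)
```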
How to Reduce the Scale Factor
If you already know the scale factor, but it is not in the form of 1:n or 1/n, then some additional work is needed to reduce or simplify it. If the ratio is 2:3, for example, then you’ll need to
reduce it so that the numerator is 1.
Use our ratio calculator to reduce a ratio. You can also reduce a ratio by dividing both the numerator and the denominator by the numerator.
For example: reduce 2/3 by dividing both numbers by 2, which would be 1/1.5 or 1:1.5.
2 ÷ 2 = 1
3 ÷ 2 = 1.5
scale factor = 1:1.5
Architectural Scales
Architectural scales often relate a measurement, in feet, of a building to inches on a drawing. You can quickly find the scale factor for an architectural scale by inverting the fraction, then
multiplying it by 12 (inches/foot).
For example, to find the scale factor on a drawing equaling 1′ on a building (1/16″ = 1′), start by inverting the fraction 1/16 so that it becomes 16/1. Then, multiply that by 12, which is 192. So,
the scale factor for 1/16″ = 1′ is 1:192.
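The invert-then-multiply-by-12 rule can be written directly (function name is mine; exact fractions keep the arithmetic clean):

```python
from fractions import Fraction

def architectural_scale_factor(inches_per_foot):
    """n such that the scale is 1:n, given a drawing scale of
    `inches_per_foot` inches on paper per 1 foot on the building."""
    # Invert the inch fraction, then convert the foot to inches (x 12).
    return (1 / Fraction(inches_per_foot)) * 12

print(architectural_scale_factor(Fraction(1, 16)))  # 192
print(architectural_scale_factor(Fraction(3, 32)))  # 128
print(architectural_scale_factor(Fraction(1, 4)))   # 48
```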
Scale factors for common
architectural scales
Drawing Scale Scale Factor
1/16″ = 1′ 1:192
3/32″ = 1′ 1:128
1/8″ = 1′ 1:96
3/16″ = 1′ 1:64
1/4″ = 1′ 1:48
3/8″ = 1′ 1:32
1/2″ = 1′ 1:24
3/4″ = 1′ 1:16
1″ = 1′ 1:12
1 1/2″ = 1′ 1:8
3″ = 1′ 1:4
Engineering Scales
Engineering scale factors are expressed in the same units for both the drawing and the actual measurement. So if the drawing scale relates inches on the drawing to feet in reality, you can quickly find
the scale factor for an engineering, or civil, scale by converting the feet portion to inches, that is, by multiplying it by 12 (inches/foot).
As an example, to find the scale factor for 1″ = 30′, multiply 30′ by 12, which is 360. So, the scale factor for 1″ = 30′ is 1:360.
Scale factors for common
engineering scales
Drawing Scale Scale Factor
1″ = 10′ 1:120
1″ = 20′ 1:240
1″ = 30′ 1:360
1″ = 40′ 1:480
1″ = 50′ 1:600
1″ = 60′ 1:720
1″ = 70′ 1:840
1″ = 80′ 1:960
1″ = 90′ 1:1080
1″ = 100′ 1:1200
Common Model Scales
This table lists some common scale factors you may come across when dealing with different types of models.
Common scale factors used for models and hobbies
Scale Factor Model Type
1:4 steam trains, RC planes
1:8 steam trains, cars
1:10 figures
1:12 cars, motorcycles, dollhouses
1:16 steam trains, cars, motorcycles, military vehicles, figures
1:18 diecast cars
1:20 formula one cars
1:22.5 G-gauge trains
1:24 cars, trucks, aircraft, dollhouses
1:25 cars, trucks
1:32 1 gauge trains, cars, aircraft, figures
1:35 military vehicles
1:43 O-gauge trains, cars, trucks
1:48 O-gauge trains, dollhouses, Lego minifig
1:64 S-gauge trains, diecast cars, Hotwheels/Matchbox
1:72 aircraft, military vehicles, boats, cars
1:76 aircraft, military vehicles
1:87 HO-gauge trains, military vehicles
1:96 ships, spacecraft
1:100 aircraft, spacecraft
1:120 TT-gauge trains
1:144 ships, rockets, spacecraft
1:160 N-gauge trains, wargaming
1:200 aircraft, ships
1:220 Z-gauge trains
1:285 wargaming
1:350 ships
1:700 ships
1:720 ships
Frequently Asked Questions
Is scale factor a fraction?
Yes, the scale factor can be represented as a fraction that describes the relative size between a model or drawing, and the actual object.
Is scale factor always greater than 1?
No. Written as a ratio 1:n, the number n is greater than one when the model is smaller than the actual object and less than one when the model is larger than the actual object. Written as a fraction
(scaled size over real size), the factor is less than one when scaling down. Typically we make models that are smaller than the object being modeled, hence n is usually greater than one.
How do you calculate scale from a drawing?
You can calculate scale from a drawing by measuring what length on the drawing corresponds to what length on the actual object. For example, if 1 inch on the drawing equals 3 inches on the actual
object, then the scale is 1:3.
What is an example of a scale factor?
An example of a scale factor is the ratio used on a model airplane to describe how much bigger the actual airplane is than the model. If the actual airplane is ten times bigger than the model, then
the scale factor is 1:10.
The ERF function calculates the Gauss error function of a given value. The error function is closely related to the probability of a normal variable with a mean of zero and a standard deviation of one
falling between two values: P(a ≤ Z ≤ b) = ½[erf(b/√2) − erf(a/√2)]. This function is commonly used in statistics and engineering applications.
Use the ERF formula with the syntax shown below; it has 1 required parameter and 1 optional parameter:
=ERF(lower_bound, [upper_bound])
1. lower_bound (required):
The lower bound of the integral used to calculate the error function.
2. upper_bound (optional):
The upper bound of the integral used to calculate the error function. If not provided, the function integrates from zero to lower_bound (that is, it returns erf(lower_bound)).
Here are a few example use cases that explain how to use the ERF formula in Google Sheets.
Calculating the probability of a normal variable falling between two values
By using the ERF function, you can calculate the probability of a normal variable with a mean of zero and a standard deviation of one falling between two values.
Calculating the cumulative distribution function
The ERF function can be used to calculate the cumulative distribution function of a normal variable with a mean of zero and a standard deviation of one.
Calculating the complementary error function
By subtracting the result of the ERF function from one, you can calculate the complementary error function.
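In Python, the same quantities are available from the standard library's `math.erf` and `math.erfc`. The `sheets_erf` helper below is my own mimic of the spreadsheet function's two-argument form, sketched under the assumption that ERF(lower, upper) integrates the Gaussian between the two bounds:

```python
import math

def sheets_erf(lower, upper=None):
    """Mimic of the spreadsheet ERF: erf(lower) with one argument,
    erf(upper) - erf(lower) with two (an assumption, not official API)."""
    if upper is None:
        return math.erf(lower)
    return math.erf(upper) - math.erf(lower)

def normal_prob_between(a, b):
    """P(a <= Z <= b) for a standard normal Z, via the error function."""
    return 0.5 * (math.erf(b / math.sqrt(2)) - math.erf(a / math.sqrt(2)))

print(round(normal_prob_between(-1, 1), 4))  # ~0.6827, the 68% rule
print(round(math.erfc(1), 4))                # complementary error function
```

Note that the complementary error function is simply 1 − erf(x), which `math.erfc` computes directly with better accuracy for large x.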
Common Mistakes
ERF not working? Here are some common mistakes people make when using the ERF Google Sheets Formula:
Missing or incorrect lower bound
One common mistake when using the ERF formula is to either forget to include the lower bound or to include an incorrect value for the lower bound. This can result in an inaccurate or incorrect
result. Make sure to double check that the lower bound is included and accurate.
Missing or incorrect upper bound
Another common mistake when using the ERF formula is to either forget to include the upper bound or to include an incorrect value for the upper bound. This can result in an inaccurate or incorrect
result. Make sure to double check that the upper bound is included and accurate.
Incorrect order of bounds
It's important to make sure that the lower bound is listed before the upper bound in the formula. If the order of the bounds is reversed, the result will be inaccurate or incorrect.
Incorrect data type
The ERF formula requires numeric input for both the lower and upper bounds. If non-numeric data is included, an error will occur. Make sure to include only numeric data.
Missing or incorrect arguments
If the ERF formula is missing its required argument, or if the argument syntax is incorrect, an error will occur. Double check the formula syntax and make sure that the lower bound (and the upper
bound, if used) is included.
Learn More
You can learn more about the ERF Google Sheets function on Google Support.
Change Due
Change is the money a customer receives back when they have made a purchase. Often the customer gives the merchant more money than the amount due because the customer may not have the exact coins
and bills that are needed. The merchant determines how much extra was paid and returns the excess which is called change.
How to find the least number of coins to give in change:
□ Determine the total amount of change due.
□ Start with the highest denomination of coins or bills
□ Use as many of this coin as possible without exceeding the amount of change due
□ Repeat this process with the next lower denomination of coin or bill
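The greedy procedure above can be sketched in code. The denominations, the choice to work in integer cents (to avoid floating-point errors), and the function name are my own:

```python
def make_change(amount_due, amount_paid):
    """Greedy change-making: use the largest denominations first."""
    denominations = [2000, 1000, 500, 100, 25, 10, 5, 1]  # values in cents
    names = ["$20", "$10", "$5", "$1", "quarter", "dime", "nickel", "penny"]
    change = round((amount_paid - amount_due) * 100)  # change due, in cents
    result = {}
    for value, name in zip(denominations, names):
        # Use as many of this denomination as possible without going over.
        count, change = divmod(change, value)
        if count:
            result[name] = count
    return result

print(make_change(2.11, 20.00))
# {'$10': 1, '$5': 1, '$1': 2, 'quarter': 3, 'dime': 1, 'penny': 4}
```

Running it on the worked example below ($2.11 paid with $20.00) reproduces the same bills and coins the article counts out by hand.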
Example of making change for purchase of $2.11 and customer paying with a $20.00 bill
□ Determine change due -- $20.00 - $2.11 = $17.89
□ Get one $10 bill but two would be too much -- have $10.00 for change
□ Get one $5 bill but two would be too much -- have $15.00 for change
□ Get two $1 bills but three would be too much -- have $17.00 for change
□ Get 3 quarters but four would be too much -- have $17.75 for change
□ Get 1 dime but two would be too much -- have $17.85 for change
□ Get 0 nickels - even one would be too much -- have $17.85 for change
□ Get 4 pennies - 5 would be too many -- have $17.89 for change
□ -------- NOW COUNT THE CHANGE OUT TO THE CUSTOMER --------
□ Say the original amount before giving any change
□ Count the change from the lowest denomination to the highest denomination
□ The final count should be the same as the amount the customer gave you
□ You should say: "Two eleven" -- before starting to give change back
□ "two twelve, two thirteen, two fourteen, two fifteen" -- as the pennies are given
□ "two twenty-five" -- as the dime is given
□ "two fifty, two seventy-five, three dollars" -- as the quarters are given
□ "four dollars, five dollars" -- as the one dollar bills are given
□ "ten dollars" -- as the five dollar bill is given
□ "And twenty dollars" -- as the ten dollar bill is given
This process accomplishes the following:
□ The customer has the least possible coins in their purse or pocket.
□ The amount of money returned is double checked, when it is gathered and when given to customer.
□ The amount of change due is double checked by counting change.
□ The possibility of a misunderstanding is eliminated.
□ The merchant and the customer are both treated fairly by making sure the change is exact.
N-of-1 Blinded Experiment: Will 210mg L-Theanine Reduce my Coffee-Induced Anxiety?
resolved Dec 7
Randomised trials have suggested that L-Theanine meaningfully reduces the anxiety-inducing effects of caffeine, while maintaining its cognitive benefits.
I will be undertaking a blinded trial with L-Theanine using the following protocol:
1. Take 210mg L-Theanine or a placebo (blinded) just before my morning small iced coffee.
2. 90 mins after coffee consumption, I will record my anxiety level as a subjective measurement between 0-10.
3. Repeat for 20 days during which I will engage in sustained periods of focused work.
(Protocol taken from here, which has sources: https://n1.tools/experiments/anxiety/lTheanine)
Resolution criteria: Resolves YES if a naive model (i.e. that doesn't include confounders etc.) suggests at least an 80% probability that consuming L-Theanine reduces anxiety.
In other words: "Compared to placebo, a difference in anxiety between -xx% and 0% is at least 80% likely".
For reference, my previous market on whether L-Tyrosine improves focus used 20 data points and resulted in mean focus (placebo) of 7.4 and mean focus (L-Tyrosine) of 7.0. The model suggested that
"Compared to placebo, a difference in focus between 0% and 30% is 19.4% likely". Image from that experiment below.
Extra notes:
- Starting this experiment today.
- I may visit different cafes, but I'll always order a small iced coffee and try my best to consume about the same amount of caffeine each day (without being too strict).
- I won't bet on this market.
- I am sensitive to caffeine and do experience quite pronounced caffeine highs and lows.
- Dosage is 210mg because the brand I use from Amazon provides 210mg capsules (they suggest two capsules per dose).
Model details (for those interested)
I'll model the posterior distribution of anxiety values with the placebo and with L-Theanine, and then merge those to create a posterior distribution for the % difference in anxiety between the two.
I'll be using a Bayesian model for this, which requires a prior to update. That prior will be the mean anxiety for the placebo and L-Theanine from the data itself (which is informative, and means the
data will basically define the posterior distribution).
Mean L-Theanine ~ Normal(mean_data, std_data)
Mean Placebo ~ Normal(mean_data, std_data)
StdDev L-Theanine ~ HalfNormal(5)
StdDev Placebo ~ HalfNormal(5)
Data L-Theanine ~ Normal(Mean L-Theanine, StdDev L-Theanine)
Data Placebo ~ Normal(Mean Placebo, StdDev Placebo)
Deterministic Transformation for Percentage Difference:
Percent Difference = ((Mean L-Theanine - Mean Placebo) / Mean Placebo) * 100
Posterior Inference:
Probability(Percent Difference ≤ 0) ≥ 0.80
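As a rough standard-library-only illustration of the kind of computation involved (this is not the author's actual model, which does full Bayesian inference; the anxiety scores below are made up, and the posterior of each mean is crudely approximated as Normal(sample mean, sample sd/√n)):

```python
import random
from statistics import mean, stdev

def prob_reduction(theanine, placebo, draws=20000, seed=0):
    """Monte Carlo estimate of P(mean anxiety on L-theanine is lower
    than mean anxiety on placebo)."""
    rng = random.Random(seed)

    def posterior_draw(data):
        # Crude normal approximation to the posterior of the group mean.
        return rng.gauss(mean(data), stdev(data) / len(data) ** 0.5)

    hits = 0
    for _ in range(draws):
        m_t, m_p = posterior_draw(theanine), posterior_draw(placebo)
        pct_diff = (m_t - m_p) / m_p * 100  # percent difference vs placebo
        hits += pct_diff <= 0
    return hits / draws

# Hypothetical anxiety scores (0-10 scale), NOT the experimenter's data:
theanine = [4, 5, 3, 4, 5, 4, 3, 5, 4, 5]
placebo = [5, 6, 4, 5, 6, 5, 4, 6, 5, 6]
print(prob_reduction(theanine, placebo))  # high probability of a reduction
```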
This question is managed and resolved by Manifold.
Seems like L-Theanine works to reduce post-coffee jitters!
Mean post-coffee anxiety with L-Theanine: 4.2
Mean post-coffee anxiety without L-Theanine: 5.0
82.6% chance of an anxiety reducing effect (honestly higher than I expected).
(Note: I updated my app to use Absolute Difference on the KDE plot x-axis but for the resolution of this market quickly switched the analysis back to Percent Difference, so ignore the x-axis label)
I bet NO because I thought 83% chance of effect was pretty high. I think there’s more than a 17% chance of no effect, let alone potentially negative effects or different kinds of anxiety. I’ve heard
anecdata about L-theanine giving a different sort of anxiety that is still edgy despite being less jittery than caffeine, so increased total anxiety also seems possible to me.
2.a. A utility company services a growing community. Its management is considering raising outside capital by issuing equity. It is exploring changing its energy mix, which will require various
one-time costs associated with its transition to low-carbon energy sources. It uses an internal cost of capital of 10% to make investment decisions. Last year, it paid dividends of $4. The investment
banker working with the company is trying to determine its share price to determine how to price additional equity, and is exploring 3 scenarios.
2.a. Historical dividend growth of 2%/year will continue, even after the energy mix changes.
2.b. Demand will grow as ratepayers begin to use more electricity as they switch to electric vehicles -- especially if they understand their electricity is fueled by renewables. What would an
additional 1% dividend growth mean for the share price?
2.c. An alternative scenario is put forth by a recent graduate of an MBA in Sustainability. She suggests that many of the people who buy electric cars will also buy solar panels for their homes,
irrespective of the utility's energy mix. She believes demand is likely to decrease, and expects dividend growth to decline to negative 1% a year. What would this mean for the share price?
We have to use the Gordon dividend discount model to determine the value of equity
The following information is given
· The last year dividend is $4
· The cost of capital is 10%
The formula to calculate value of equity
P[0]= D[1]/(k-g)
P[0]= value of stock
D[1] = Next year annual dividend per share
K = cost of capital
G= Expected dividend growth rate
2.a. Historical dividend growth of 2%/year will continue, even after the energy mix changes
The formula to calculate value of equity
P[0]= D[1]/(k-g)
P[0]= value of stock
D[1] = Next year annual dividend per share
K = cost of capital
G= Expected dividend growth rate
Calculate D[1]
D[1] = last year dividend x (1 + growth rate)
= $4 x (1 + 0.02)
= $4 x 1.02
= $4.08
K = 10% or 0.10
G = 2% or 0.02
Putting values in the formula
P[0]= D[1]/(k-g)
= $4.08 / (0.10 – 0.02)
= $4.08 / 0.08
= $51.00
The value of equity will be $51.00
2.b. Demand will grow as ratepayers begin to use more electricity as they switch to electric vehicles -- especially if they understand their electricity is fueled by renewables. What would an
additional 1% dividend growth mean for the share price?
The formula to calculate value of equity
P[0]= D[1]/(k-g)
P[0]= value of stock
D[1] = Next year annual dividend per share
K = cost of capital
G= Expected dividend growth rate
Calculate D[1]
D[1] = last year dividend x (1 + growth rate)
= $4 x (1 + 0.03)
= $4 x 1.03
= $4.12
K = 10% or 0.10
G = 3% or 0.03
Putting values in the formula
P[0]= D[1]/(k-g)
= $4.12 / (0.10 – 0.03)
= $4.12 / 0.07
= $58.86 (rounded)
The value of equity will be $58.86
2.c. An alternative scenario is put forth by a recent graduate of an MBA in Sustainability. She suggests that many of the people who buy electric cars will also buy solar panels for their homes,
irrespective of the utility's energy mix. She believes demand is likely to decrease, and expects dividend growth to decline to negative 1% a year. What would this mean for the share price?
The formula to calculate value of equity
P[0]= D[1]/(k-g)
P[0]= value of stock
D[1] = Next year annual dividend per share
K = cost of capital
G= Expected dividend growth rate
Calculate D[1]
D[1] = last year dividend x (1 + growth rate)
= $4 x (1 + (-0.01))
= $4 x 0.99
= $3.96
K = 10% or 0.10
G = -1% or -0.01
Putting values in the formula
P[0]= D[1]/(k-g)
= $3.96 / (0.10 – (-0.01))
= $3.96 / 0.11
= $36.00
The value of equity will be $36.00
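The three scenarios can be cross-checked in a few lines using the standard Gordon growth formula P0 = D1/(k − g) with D1 = D0 × (1 + g); for the final scenario this uses the negative growth rate g = −1% stated in the problem (the function name is mine):

```python
def gordon_share_price(d0, k, g):
    """Gordon growth model: P0 = D1 / (k - g), with D1 = D0 * (1 + g).
    Requires k > g for the perpetuity to converge."""
    d1 = d0 * (1 + g)
    return d1 / (k - g)

for label, g in [("2.a (g = 2%)", 0.02),
                 ("2.b (g = 3%)", 0.03),
                 ("2.c (g = -1%)", -0.01)]:
    print(label, round(gordon_share_price(4, 0.10, g), 2))
# 2.a (g = 2%) 51.0
# 2.b (g = 3%) 58.86
# 2.c (g = -1%) 36.0
```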
heur_rounding.h File Reference
Detailed Description
LP rounding heuristic that tries to recover from intermediate infeasibilities.
Tobias Achterberg
Rounding heuristic that starts from an LP-feasible point and reduces the number of fractional variables by one in each step. As long as no LP row is violated, the algorithm iterates over the
fractional variables and applies a rounding into the direction of fewer locks, updating the activities of the LP rows after each step. If there is a violated LP row, the heuristic will try to find a
fractional variable that can be rounded in a direction such that the violation of the constraint is decreased, using the number of up- and down-locks as a tie breaker. If no rounding can decrease the
violation of the constraint, the procedure is aborted.
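A highly simplified Python sketch of the procedure described above (illustrative only, not SCIP's actual C implementation; the real heuristic works on SCIP's internal LP data and variable lock counts, and the data structures and tolerances here are invented):

```python
import math

def rounding_heuristic(x, rows, eps=1e-9):
    """Toy sketch of the rounding heuristic.

    x    -- LP-feasible fractional solution, a list of values
    rows -- list of (coeffs, lhs, rhs); coeffs maps var index -> a_ij,
            each row demanding  lhs <= sum_j a_ij * x_j <= rhs
    Returns an integral solution, or None if the procedure aborts.
    """
    x = list(x)
    activity = [sum(a * x[j] for j, a in c.items()) for c, _, _ in rows]

    def violation(i):
        _, lhs, rhs = rows[i]
        return max(lhs - activity[i], activity[i] - rhs, 0.0)

    def set_value(j, v):  # update row activities after each rounding step
        for i, (c, _, _) in enumerate(rows):
            if j in c:
                activity[i] += c[j] * (v - x[j])
        x[j] = v

    while True:
        fracs = [j for j, v in enumerate(x) if abs(v - round(v)) > eps]
        if not fracs:
            ok = all(violation(i) <= eps for i in range(len(rows)))
            return x if ok else None
        violated = [i for i in range(len(rows)) if violation(i) > eps]
        if not violated:
            # No violated row: round toward the side with fewer "locks",
            # i.e. fewer rows whose slack this rounding direction uses up.
            j = fracs[0]
            down = sum(1 for c, lhs, rhs in rows if j in c and
                       ((c[j] > 0 and lhs > -math.inf) or
                        (c[j] < 0 and rhs < math.inf)))
            up = sum(1 for c, lhs, rhs in rows if j in c and
                     ((c[j] > 0 and rhs < math.inf) or
                      (c[j] < 0 and lhs > -math.inf)))
            set_value(j, math.floor(x[j]) if down <= up else math.ceil(x[j]))
        else:
            # Violated row: look for a rounding that decreases its violation
            # (lock-based tie-breaking omitted for brevity); else abort.
            i, old, improved = violated[0], violation(violated[0]), False
            for j in fracs:
                for v in (math.floor(x[j]), math.ceil(x[j])):
                    prev = x[j]
                    set_value(j, v)
                    if violation(i) < old - eps:
                        improved = True
                        break
                    set_value(j, prev)  # undo the trial rounding
                if improved:
                    break
            if not improved:
                return None

# Rounding x0 = x1 = 0.5 subject to the equality x0 + x1 = 1:
print(rounding_heuristic([0.5, 0.5], [({0: 1, 1: 1}, 1, 1)]))  # [0, 1]
```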
Definition in file heur_rounding.h.
Go to the source code of this file.
Need to Look at Entire Column for ISBLANK
This formula works for just the one row and returns either blank or the number inputted into the row where the formula resides:
=IF(ISBLANK(Rating@row), "", AVERAGEIF([Area of Review]:[Area of Review], "E-commerce", Rating:Rating))
But I need to look at the entire column called Rating to ignore those left blank and then average the entire row called Rating with the number inputted for the remaining items in the Rating column
for my Area of Review.
Any suggestion would be greatly appreciated.
Best Answer
• Try an AVG/COLLECT combo.
=AVG(COLLECT(Rating:Rating, Rating:Rating, @cell <> "", [Area of Review]:[Area of Review], @cell = "E-commerce"))
• Paul, thank you for your feedback, but your example is not averaging the entire column.
Row1: 9
Row2: (blank)
Row3: 3
In my quick example above, I want to look at the RATING column and average the numbers inputted in Row1:Row3, but since Row2 is blank I do not want to include it in my average. In this example the
average would be 6 since Row2 is excluded.
My original formula that I am using and works to average the Rating is =IFERROR(AVG(COLLECT(Rating:Rating, [Area of Review]:[Area of Review], "E-Commerce")), 0) but this formula is not ignoring
the " " rows and since they are being included, they are skewing my overall average.
Hope this added information helps.
• Paul, my mistake. Your process did work; I did not have enough information in my test sheet to confirm, but added more data and was able to confirm. Thank you again for your feedback.
Thermomechanical modeling for gaseous diffusion in elastic stress fields
A mechanical theory is formulated for the diffusion of a gas, behaving as an elastic fluid, through an isotropic elastic solid. It is assumed that each point of the mixture is occupied simultaneously
by both constituents in given proportions. The mechanical properties of each component are specified by means of constitutive equations for the stresses. Diffusion effects are accounted for by means
of a body force acting on each constituent which depends upon the composition, the elasticity of the solid, and the relative motion of the substances in the mixture. Coupled diffusion equations for
both constituents are derived. Uncoupling of the equations is attempted within a linearized theory by adopting particular motions for the mixture. The result is compared with the classical diffusion
equations derived by intuitive modifications to the empirical Fick's law.
Ph.D. Thesis
Pub Date:
December 1975
□ Elastic Media;
□ Gaseous Diffusion;
□ Stress Distribution;
□ Composition (Property);
□ Elastic Properties;
□ Linearity;
□ Fluid Mechanics and Heat Transfer
Properties of Sets | Learn and Solve Questions
Introduction to Set Properties
A set contains elements or members that can be mathematical objects of any kind, including numbers, symbols, points in space, lines, other geometric shapes, variables, or even other sets. Set
properties make it simple to execute many operations on sets. A set is a mathematical model for a collection of various things. Numerous properties, including commutative and associative qualities,
are comparable to those of real numbers.
With the help of examples and frequently asked questions, let's learn more about the properties of the union of sets, the intersection of sets, and the complement of sets.
Properties of Sets
The properties of real numbers apply to sets as well. The three most important properties of sets are the associative property, the commutative property, and the distributive property. The following
are these properties for three sets A, B, and C.
Commutative property: \[A \cup B = B \cup A\] and \[A \cap B = B \cap A\]
Associative property: \[(A \cup B) \cup C = A \cup (B \cup C)\] and \[(A \cap B) \cap C = A \cap (B \cap C)\]
Distributive property: \[A \cup (B \cap C) = (A \cup B) \cap (A \cup C)\] and \[A \cap (B \cup C) = (A \cap B) \cup (A \cap C)\]
Using the above-mentioned set attributes, it is simple to conduct the various operations of the union of sets, the intersection of sets, and the complement of sets for the supplied sets.
Properties of Set Operations
A collection of well-defined items is referred to as a set in mathematics. A set's elements remain unchanged if the order in which they are written is changed. A set remains the same if one or more
of its components are repeated. We shall discover the crucial characteristics of set operations in this essay.
1. Closure Property
2. Associative Property
3. Commutative Property
4. Distributive Property
5. Identity Property
Sets have the same qualities as real numbers. Sets have the same associative property, commutative property, and other properties as numbers. The six essential properties of sets are commutative
property, associative property, distributive property, identity property, complement property, and idempotent property.
Properties of Sets Problems
Example 1: Find the complement of a set A and demonstrate that it satisfies the double complement property of sets. Given \[A = \{1,2,4,5,7\}\] and \[\mu = \{1,2,3,4,5,6,7,8,9,10\}\].
Solution: \[A = \{1,2,4,5,7\}\] and \[\mu = \{1,2,3,4,5,6,7,8,9,10\}\] are the given sets.
The objective is to demonstrate the double complement property of sets, \[\left( A' \right)' = A\].
\[\begin{array}{l}A = \left\{ {1,2,4,5,7} \right\}\\A' = \mu - A = \left\{ {3,6,8,9,10} \right\}\\\left( {A'} \right)' = \mu - A' = \left\{ {1,2,4,5,7} \right\}\end{array}\]
Since the set \[\left( A' \right)'\] above is identical to the given set A, we can conclude that \[\left( A' \right)' = A\].
As a result, the given set satisfies the double complement property of sets.
Example 2: Find the union of sets A and B, then demonstrate that it satisfies the commutative property of the union of sets. Given \[A = \{1,2,3,4,5,6\}\] and \[B = \{2,3,5,7,8,9\}\].
Solution: The given sets are \[A = \{1,2,3,4,5,6\}\] and \[B = \{2,3,5,7,8,9\}\].
The commutative property of the union of sets states that \[A \cup B = B \cup A\].
\[A \cup B = \left\{ {1,2,3,4,5,6,7,8,9} \right\}\]
\[B \cup A = \left\{ {1,2,3,4,5,6,7,8,9} \right\}\]
We can see from the two sets above that \[A \cup B = B \cup A\].
As a result, the two given sets satisfy the commutative property of the union of sets.
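These properties are easy to verify with Python's built-in set type (the sets below are my own examples; U plays the role of the universal set μ):

```python
A = {1, 2, 3, 4, 5, 6}
B = {2, 3, 5, 7, 8, 9}
C = {1, 3, 5}
U = set(range(1, 11))  # universal set

assert A | B == B | A                    # commutative property of union
assert (A | B) | C == A | (B | C)        # associative property
assert A | (B & C) == (A | B) & (A | C)  # distributive property
assert U - (U - A) == A                  # double complement: (A')' = A
print(sorted(A | B))  # [1, 2, 3, 4, 5, 6, 7, 8, 9]
```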
FAQs on Properties of Sets
1. What are some common types of sets?
Some different kinds of sets include the finite set, the infinite set, the equivalent set, the subset, the universal set, and the superset.
2. What Purpose Do Set Properties Serve?
Numerous operations can be carried out across sets using the features of sets. The qualities of sets can be used to conveniently carry out the operations of union, intersection, and complement.
3. What Qualifies Sets Of Equal Sets As Sets?
The elements in both sets are identical, and both equal sets include the same number of elements. Equal sets have equal cardinality.
Fluctuation theory at low density
Università di Roma "La Sapienza"
Thursday, June 8, 2023 - 11:00 to 13:00
We review the state of the art in Grad's validity problem for a mathematical justification of fluid equations based on fundamental laws of classical mechanics. With the techniques currently
available, such problem can be faced in some simple case for perfect gases, using the kinetic theory of Boltzmann as an intermediate step. We will discuss a recent result establishing the connection
between microscopic and hydrodynamic scales, for perturbations of a global equilibrium. More precisely, we consider dynamical fluctuations of a system of hard spheres at low density (Boltzmann-Grad
limit), with random initial data distributed according to the corresponding Gibbs measure. The asymptotics of the fluctuations were conjectured by Spohn in the Eighties. Lanford and collaborators
proved partial results for short times. However, the small time restriction intrinsic to the perturbation theory can be lifted in this case. The main feature of the proof is an $L^2$ (entropy-type)
bound together with a suitable coupling of trajectories. The method allows one to reach diffusive time scales and obtain the fluctuating hydrodynamics.
Department of Mathematics
Mathematical Data Science Seminar
This seminar welcomes faculty and students from the Pikes Peak region who are interested in mathematical aspects of data science and machine learning, and more broadly, optimization, scientific
computation, modeling and simulations. Formerly known as the AaA seminar, it is intended to have a very informal format, with several seminars in the format of Math Clinic workshops rather than
lecture format, introducing the audience to topics that are of current interest in the field.
Please contact Dr. Radu Cascaval (radu@uccs.edu) if you are interested to join this seminar or need more info. Limited number of parking passes will be made available to non-UCCS individuals
attending this seminar.
Fridays 12:15-1:30pm, ENG 247 or ENG 101, UCCS campus
(refreshments available at 12:15pm, talks start at 12:30pm)
Fall 2024 (Date | Speaker | Title)
Sept 13 | Radu Cascaval (UCCS) | An Introduction to PINNs (Physics-Informed Neural Networks)
Sept 20 | Troy Butler (CU Denver and NSF) | Transforming Displacements to Distributions with a Machine-Learning Framework
Oct 4 | Mihno Kim (Colorado College) | An Introduction to Handling Missing Values
Oct 11 & Oct 18 | UCCS Math Clinic | Python Workshops
Nov 1 | Meng Li (Data Scientist, Booz Allen Hamilton) | Mathematics in Data Science: Key Concepts and Applications
Nov 15 | Jenny Russell (Director, UCCS Institutional Research) | How DATA Can Improve the Decision Making Process
Dec 6 | Seth Minor (Applied Math, CU Boulder) | Discovering Closed-Form Weather Models from Data
Fall 2024 Abstracts:
Dec 6, 2024
Speaker: Seth Minor, Applied Math Dept, CU Boulder
Title: Discovering Closed-Form Weather Models from Data
Abstract: The multiscale and turbulent nature of Earth's atmosphere has historically rendered accurate weather modeling a hard problem. Recently, there has been an explosion of interest surrounding data-driven approaches to weather modeling (e.g., GraphCast), which in many cases boast both improved forecast accuracy and computational efficiency when compared to traditional methods. However, many of the new data-driven approaches employ highly parameterized neural networks, which often result in uninterpretable models and, in turn, a limited gain in scientific understanding. In this talk, we discuss a current research direction that addresses the interpretability problem in data-driven weather modeling. In particular, we cover a data-driven approach for explicitly discovering the governing PDEs symbolically, thus identifying mathematical models with direct physical interpretations. Specifically, we use a weak-form sparse regression method dubbed the Weak Sparse Identification of Nonlinear Dynamics (WSINDy) algorithm to learn models from simulated and assimilated weather data.
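Setting the weak-form machinery aside, the sparse-regression core of SINDy-type methods fits in a few lines. The sketch below is an illustrative toy, not the WSINDy implementation: it recovers the model dx/dt = -2x from data using a candidate library and sequentially thresholded least squares.

```python
import numpy as np

# Simulated data for dx/dt = -2x (the weak-form machinery of WSINDy is
# omitted; this illustrates only the sparse-regression core).
t = np.linspace(0, 2, 200)
x = np.exp(-2 * t)
dxdt = np.gradient(x, t)              # numerical time derivative

# Candidate library of terms: [1, x, x^2, x^3]
Theta = np.column_stack([np.ones_like(x), x, x**2, x**3])

# Sequentially thresholded least squares (STLSQ)
xi = np.linalg.lstsq(Theta, dxdt, rcond=None)[0]
for _ in range(10):
    small = np.abs(xi) < 0.1          # zero out small coefficients...
    xi[small] = 0.0
    big = ~small                      # ...and refit on the survivors
    xi[big] = np.linalg.lstsq(Theta[:, big], dxdt, rcond=None)[0]

# xi now holds a sparse model: a single active term, xi[1] close to -2
```

The threshold 0.1 is a hand-picked hyperparameter; in practice it controls the sparsity/accuracy trade-off of the discovered model.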
Nov 15, 2024
Speaker: Jenny Russell, Director, UCCS Institutional Research
Title: How DATA Can Improve the Decision Making Process
Abstract: There is no shortage of data in today’s world – but data alone is not enough. The true value of data emerges when it is communicated in ways that resonate with diverse audiences, especially
those who may find data overwhelming or uninteresting. This presentation explores ways to transform data into compelling stories that engage stakeholders, facilitate understanding, and ultimately
influence decisions. By utilizing various visualization techniques, narrative structures, and impactful examples, we can break down complex data and provide actionable insights. We will discuss how
to make data accessible, relatable, and impactful, regardless of the industry or audience.
Nov 1, 2024
Speaker: Meng Li, Data Scientist, Booz Allen Hamilton
Title: Mathematics in Data Science: Key Concepts and Applications
Abstract: Mathematics forms the backbone of data science, providing the tools and frameworks necessary for creating predictive models, optimizing decision-making, and deriving actionable insights
from data. This talk offers a high-level overview of key mathematical methods integral to data science, such as linear algebra for dimensionality reduction, probability and statistics for model
assessment, optimization techniques for algorithm training, and calculus for understanding model dynamics. We will explore how these methods are applied across industries, including finance,
healthcare, and marketing, illustrating real-world applications from portfolio optimization to customer behavior prediction. This session is designed for both new and experienced data enthusiasts
looking to understand the mathematical principles that drive data science innovation.
Oct 4, 2024
Speaker: Mihno Kim, Colorado College
Title: An Introduction to Handling Missing Values
Abstract: Almost all real-life data contain missing values, yet most modeling techniques were developed assuming complete data. While accounting for missing values is a popular research area, it is a greatly overlooked problem in practice because it is a preliminary step before the primary analysis. This talk will address the importance of properly handling missing values. Different types of missing values will be introduced, along with several techniques that are easy to use.
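To make two of the simplest techniques concrete, here is a small numpy sketch (the toy array is invented for illustration) contrasting complete-case deletion with mean imputation:

```python
import numpy as np

# Toy dataset with missing entries (NaN); columns are two measured variables.
X = np.array([[1.0, 2.0],
              [2.0, np.nan],
              [3.0, 6.0],
              [np.nan, 8.0],
              [5.0, 10.0]])

# Listwise (complete-case) deletion: drop any row containing a NaN.
complete = X[~np.isnan(X).any(axis=1)]

# Mean imputation: replace each NaN with its column mean over observed values.
col_means = np.nanmean(X, axis=0)
imputed = np.where(np.isnan(X), col_means, X)
```

Deletion discards information (here two of five rows), while mean imputation keeps every row but shrinks variability; which is appropriate depends on the missingness mechanism, one of the distinctions the talk draws.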
Sept 20, 2024
Speaker: Troy Butler, CU Denver and NSF
Title: Transforming Displacements to Distributions with a Machine-Learning Framework
Abstract: In general, any uncertainty quantification (UQ) analysis for a computational model of an engineered or physical system governed by principles of mechanics seeks to use principles of
probability theory to identify, classify, and quantify sources of uncertainty between simulated predictions and observational data. A significant UQ challenge is that both aleatoric (i.e.,
irreducible) and epistemic (i.e., reducible) sources of uncertainty can plague the modeling, manufacturing, and observation of a system. Aleatoric uncertainty may arise from the intrinsic variability
of material properties represented as model parameters such as Young's modulus while epistemic uncertainty may arise from an inability to perfectly measure the true state of system responses such as
displacements when subjected to known forces. In this talk, we combine two novel frameworks to quantify both types of uncertainty in the modeling of engineered systems. The data-consistent (DC)
framework is utilized to quantify aleatoric uncertainties in system properties appearing as model parameters for a given Quantities of Interest (QoI) map. The Learning Uncertain Quantities (LUQ)
framework is a three-step machine-learning enabled process for transforming noisy spatio-temporal data into samples of a learned QoI map to enable DC-based inversion. We focus discussion primarily on
the LUQ framework to highlight key aspects of the mathematical foundations, the implications for learning QoI maps from a combination of data and simulations, and also to develop quantitative
sufficiency tests for the data. Illustrative examples are used throughout the presentation to provide intuition for each step while the final two examples demonstrate the full capabilities of the
methods for problems involving the manufacturing of shells of revolution motivated by real-world applications.
Sept 13, 2024
Speaker: Radu Cascaval, UCCS
Title: An Introduction to PINNs (Physics-Informed Neural Networks)
Abstract: PINNs have gained popularity in the field of Scientific Machine Learning due to the wide range of applications where they have been found effective, in particular solving PDEs, inverse problems, and optimization problems. In this seminar we will introduce a few such applications, together with a description of the DeepXDE package.
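DeepXDE itself is beyond a short snippet, but the physics-informed idea (penalize the equation residual at collocation points together with the initial data) can be sketched with a polynomial ansatz standing in for the neural network, which reduces "training" to linear least squares. Everything below is an illustrative assumption, not the package's API; the model problem is u'(t) + u(t) = 0 with u(0) = 1.

```python
import numpy as np

# Polynomial ansatz u(t) = sum_k c_k t^k; minimize the ODE residual
# u'(t) + u(t) at collocation points, plus the initial condition u(0) = 1.
deg = 8
t = np.linspace(0, 1, 50)

V = np.vander(t, deg + 1, increasing=True)             # columns t^k
dV = np.column_stack([k * t**(k - 1) if k > 0 else np.zeros_like(t)
                      for k in range(deg + 1)])        # columns d/dt t^k
A = dV + V                                             # residual is linear in c
b = np.zeros_like(t)

# Append the initial condition u(0) = 1 with a large weight.
w = 100.0
A = np.vstack([A, w * V[:1]])
b = np.append(b, w * 1.0)

c = np.linalg.lstsq(A, b, rcond=None)[0]
u = V @ c                                              # recovered solution
```

With a neural network in place of the polynomial the residual is no longer linear in the parameters, and the least-squares solve becomes the gradient-based training loop that PINN frameworks such as DeepXDE automate.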
│Spring 2024 │
│Mar 22│Radu C Cascaval │Mathematics in Data Science │
│ │UCCS │and the UCCS Math Clinic │
│Apr 5 │Justin Garrish │Quantitative Assessment of Metabolic Health: │
│ │Colorado School of Mines│Bayesian Hierarchical Models Uniting Dynamical Systems │
│ │ │and Gaussian Processes │
│Apr 12│Caroline Kellackey │Advanced Framework for Simulation, Integration and Modeling (AFSIM) │
│ │US Navy │ │
│Apr 19│Dustin Burchett │Network Resiliency and Optimization: │
│ │MITRE Corp. │The Graph Conductance Problem │
│Apr 26│Jonathan Thompson │Supervised and Unsupervised Learning │
│ │UCCS │via the Kernel Trick │
Spring 2024 Abstracts:
April 26, 2024
Speaker: Jonathan Thompson, UCCS Math
Title: Supervised and Unsupervised Learning via the Kernel Trick
The field of machine learning has garnered extraordinary interest over the last decade as a result of powerful hardware, improved algorithms, and new theoretical insights. In this talk, we offer an
elementary introduction to the study of kernel methods by way of extending supervised and unsupervised learning algorithms (such as support vector machines and principal component analysis) to
support non-linear predictive models embedded in an infinite-dimensional Hilbert space. In doing so, we utilize the so-called "kernel trick", which allows us to learn reduced-order decision
boundaries for high-dimensional non-linear data.
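As a minimal illustration of the trick itself (kernel ridge regression rather than the SVM and PCA settings of the talk), the sketch below works only with kernel evaluations and never forms the infinite-dimensional feature map of the Gaussian kernel; the data and parameters are invented for illustration.

```python
import numpy as np

# Kernel ridge regression: fit a nonlinear function using only the kernel
# matrix K -- the feature map of the RBF kernel is never constructed.
def rbf(A, B, sigma=0.5):
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-d2 / (2 * sigma**2))

x = np.linspace(0, 2 * np.pi, 40)[:, None]
y = np.sin(x).ravel()

lam = 1e-6                                             # ridge regularization
K = rbf(x, x)
alpha = np.linalg.solve(K + lam * np.eye(len(x)), y)   # dual coefficients

x_test = np.linspace(0, 2 * np.pi, 100)[:, None]
y_pred = rbf(x_test, x) @ alpha                        # predict via kernels only
```

The same pattern (replace every inner product by a kernel evaluation) is what turns linear SVMs and PCA into their nonlinear, Hilbert-space versions discussed in the talk.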
April 19, 2024
Speaker: Dustin Burchett, MITRE Corp.
Title: Network Resiliency and Optimization: The Graph Conductance Problem
This talk aims to provide an entry-level understanding of networks and topologies, with a specific focus on analyzing resiliency as a function of the nodes present in the network. A significant
portion of the talk will be dedicated to the “graph conductance problem”, one possible measurement of network resiliency. The conductance of a cut in a given topology can be easily computed. However,
the conductance of a graph is the minimum conductance over all possible cuts, which is an NP-complete nonlinear combinatorial optimization problem. An overview of optimization problems is provided, as well as candidate approaches to solving the conductance problem. Solution approximations are showcased, which highlight key nodes providing resiliency in the network.
This talk will provide attendees with a basic understanding of the relationships between network topologies and optimization problems, equipping them with the knowledge to measure and enhance network resiliency.
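To make the definitions concrete: the conductance of a cut S is the number of edges leaving S divided by the smaller of the two volumes (sums of degrees), and the graph conductance is the minimum over all nontrivial cuts. The brute-force sketch below (on an invented six-node graph) shows exactly why exhaustive search is only feasible for tiny graphs.

```python
import itertools
import numpy as np

# Two triangles {0,1,2} and {3,4,5} joined by the single edge 2-3.
A = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]])
deg = A.sum(axis=1)
n = len(A)

def conductance(S):
    """Cut edges leaving S over the smaller side's volume."""
    S = list(S)
    T = [v for v in range(n) if v not in S]
    cut = A[np.ix_(S, T)].sum()
    return cut / min(deg[S].sum(), deg[T].sum())

# Graph conductance: minimize over all cuts (complements give equal values,
# so sets of size up to n // 2 suffice). Exponential in n -- tiny graphs only.
best = min(conductance(S)
           for k in range(1, n // 2 + 1)
           for S in itertools.combinations(range(n), k))
```

Here the bottleneck cut is the bridge between the two triangles, giving conductance 1/7; it is this kind of weak link that the approximation methods in the talk aim to surface in large networks.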
April 12, 2024
Speaker: Caroline Kellackey, US Navy
Title: Advanced Framework for Simulation, Integration and Modeling (AFSIM) Survivability Study
In this talk, we will explore the AFSIM Survivability Study, which focuses on utilizing the Advanced Framework for Simulation, Integration and Modeling (AFSIM) software to analyze the survivability
of military systems. We will delve into the concept of survivability and how it is measured using the probability of a missile reaching its target. The Monte Carlo method will be discussed as a means
to obtain numerical results, and the calculation of the number of scenarios in a study will be explored. Additionally, we will walk through the process of designing a missile using AFSIM, considering
factors such as altitude, speed, flight path, and end-game maneuvers. Practical instructions on conducting a study using AFSIM will be provided, including scenario selection, input file generation,
running the experiment, and data analysis. Finally, we will discuss how to interpret and present the findings, examining the relationship between speed, altitude, and the probability of survival
through graphical representations.
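AFSIM itself is specialized simulation software, but the Monte Carlo principle the study relies on is easy to sketch: estimate a probability as the fraction of randomized trials in which the event occurs. The miss-distance model below is a made-up stand-in for a real engagement scenario, chosen only because it has a closed-form answer to check against.

```python
import numpy as np

# Monte Carlo estimate of a survival probability: in each trial, draw a
# (hypothetical) Rayleigh-distributed miss distance; the target "survives"
# when the miss exceeds 50 m.
rng = np.random.default_rng(42)
n_trials = 200_000
miss = rng.rayleigh(scale=40.0, size=n_trials)   # hypothetical miss model
p_survive = np.mean(miss > 50.0)

# Analytic check for this toy model: P(miss > 50) = exp(-50^2 / (2 * 40^2))
p_exact = np.exp(-50.0**2 / (2 * 40.0**2))
```

In an actual AFSIM study each "trial" is a full simulated engagement varying altitude, speed, and flight path, but the estimator is the same sample fraction, with error shrinking like one over the square root of the number of scenarios.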
April 5, 2024
Speaker: Justin Garrish, Colorado School of Mines
Title: Quantitative Assessment of Metabolic Health: Bayesian Hierarchical Models Uniting Dynamical Systems and Gaussian Processes
Diseases such as diabetes and cystic fibrosis disrupt the body's ability to regulate plasma glucose, resulting in chronic health issues across multiple body systems, often requiring long-term
management. Through data collected under controlled settings, clinicians and researchers can utilize differential equations (DE)-based models to analyze the physiological response to glucose and
generate indices of metabolic health. While such models have proven invaluable in clinical metabolism research, their efficacy is often limited outside of specific conditions. To address this
challenge, we propose integrating Gaussian processes into established DE-based models within a Bayesian framework to construct robust hybrid models capable of providing reliable indices of metabolic
health with associated uncertainty quantification. In this presentation, we first explore the necessary background in Gaussian processes and Bayesian statistics, emphasizing their connection to
positive definite kernel-based approximation methods. Then, through illustrative examples, we discuss mathematical and statistical modeling, numerical implementation, and clinical interpretation of
results in human metabolism studies.
March 22, 2024
Speaker: Dr. Radu Cascaval, UCCS
Title: Mathematics in Data Science and the UCCS Math Clinic
We will provide an overview of various mathematical tools (from the field of linear algebra and optimization) in machine learning and data science, including regularizations and kernel methods. We
then describe a few applications that have been investigated during the UCCS Math Clinic in recent years.
││Spring 2022 ││
││Feb 9, 2022 │Radu C Cascaval │Mathematical Analysis of Deep Learning and Kernel Methods ││
││ │UCCS │ ││
││Feb 23, 2022│Denis Silantyev │Obtaining Stokes wave with high-precision using conformal maps and spectral methods on non-uniform grids││
││ │UCCS │ ││
││Mar 30, 2022│John Villavert │Methods for superlinear elliptic problems ││
││ │UT Rio Grande Valley│ ││
││Apr 27, 2022│Cory B Scott │Machine Learning for Graphs ││
││ │Colorado College │ ││
Spring 2022 abstracts:
Feb 9, 2022
Speaker: Dr. Radu Cascaval, UCCS
Title: Mathematical Analysis of Deep Learning and Kernel Methods
Kernel methods have become an important tool in the realm of machine learning and found a wide applicability in classification tasks such as support vector machines and deep learning. This seminar
will provide an overview of such methods and how mathematical analysis can aid in understanding their success.
Feb 23, 2022
Speaker: Dr. Denis Silantyev, UCCS
Title: Obtaining Stokes wave with high-precision using conformal maps and spectral methods on non-uniform grids
Two-dimensional potential flow of an ideal incompressible fluid with a free surface and infinite depth has a class of solutions called Stokes waves, which are fully nonlinear periodic gravity waves propagating with constant velocity. We developed a new, highly efficient method for the computation of Stokes waves. The convergence rate of the numerical approximation by a Fourier series is
determined by the complex singularity of the travelling wave in the complex plane above the free surface. We study this singularity and use an auxiliary conformal mapping which moves it away from the
free surface thus dramatically speeding up Fourier series convergence of the solution. Three options for the auxiliary conformal map are described with their advantages and disadvantages for
numerics. Their efficiency is demonstrated for computing Stokes waves near the limiting Stokes wave (the wave of the greatest height) with 100-digit precision. Drastically improved convergence rate
significantly expands the family of numerically accessible solutions and allows us to study the oscillatory approach of these solutions to the limiting wave in great detail.
March 30, 2022
Speaker: Dr. John Villavert, Univ Texas Rio Grande Valley
Title: Methods for superlinear elliptic problems
We give an elementary overview of several nonlinear elliptic (and parabolic) PDEs that arise from well-known problems in analysis and geometry. We discuss existence, non-existence (including
Liouville theorems) and qualitative results for the equations and introduce some powerful geometric and topological techniques used to establish these results.
We shall attempt to highlight the underlying ideas in the techniques and illustrate how we can refine them to handle more general problems involving differential and integral equations.
April 27, 2022
Speaker: Dr. Cory B. Scott, Colorado College
Title: Machine Learning for Graphs
The recent rise in Deep Learning owes much of its success to a small handful of techniques which made machine learning models drastically more efficient on image and video input. These techniques are
directly responsible for the explosion of image filters, face recognition apps, deepfakes, etc. However, they all rely on the fact that image and video data lives on a grid of pixels (2D for images,
3D for video). If we want to analyze data that doesn't have a rigid grid-like structure - like molecules, social networks, biological food webs or traffic patterns - we need some more tricks. One of
these techniques is called a Graph Neural Network (GNN). In this talk, we will talk about GNNs in general, and demonstrate a couple of cool applications of these models.
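A single GNN layer is less exotic than it may sound. The numpy sketch below implements the basic graph-convolution (message-passing) update H' = ReLU(Â H W) on a toy path graph; the features and weights are random placeholders, not a trained model.

```python
import numpy as np

# A small path graph 0-1-2-3 as an adjacency matrix.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

# Standard GCN normalization: add self-loops, then symmetrically normalize
# by degree so each node averages over itself and its neighbors.
A_hat = A + np.eye(4)
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt

rng = np.random.default_rng(0)
H = rng.normal(size=(4, 8))                  # node features (8 per node)
W = rng.normal(size=(8, 16))                 # layer weights (untrained)
H_next = np.maximum(A_norm @ H @ W, 0.0)     # one GCN layer: ReLU(A_norm H W)
```

Stacking such layers lets information propagate further along the graph each step, which is how GNNs replace the pixel-grid convolutions that image models rely on.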
│Spring 2019 │
│Feb 20, 2019│Daniel Appelö │What’s new with the wave equation? │
│ │Applied Math, CU Boulder │ │
│Mar 6, 2019 │Richard Wellman │Scalable semi-supervised learning with operators in Hilbert space │
│ │Comp Sci, Colorado College │ │
│Mar 20, 2019│Radu Cascaval │The mathematics of (spatial) mobility │
│ │UCCS Math │ │
│Apr 17, 2019│Mahmoud Hussein │Exact dispersion relation for strongly nonlinear elastic wave propagation │
│ │Aerospace Eng, CU Boulder │ │
│May 8, 2019 │Michael Calvisi │The Curious Dynamics of Translating Bubbles: An Application of Perturbation Methods and Potential Flow Theory│
│ │Mechanical and Aerospace Eng, UCCS│ │
Spring 2019 abstracts:
Feb 20, 2019
Speaker: Dr. Daniel Appelo, CU Boulder
Title: What’s new with the wave equation?
The defining feature of waves is their ability to propagate over vast distances in space and time without changing shape. This unique property enables the transfer of information and constitutes the
foundation of today’s communication based society. To see that accurate propagation of waves requires high order accurate numerical methods, consider the problem of propagating a wave in three
dimensions for 100 wavelengths with 1% error. Using a second order method this requires 0.2 trillion space-time samples while a high order method requires many orders of magnitude fewer samples.
In the first part of this talk we present new arbitrary order dissipative and conservative Hermite methods for the scalar wave equation. The degrees-of-freedom of Hermite methods are tensor-product
Taylor polynomials of degree m in each coordinate centered at the nodes of Cartesian grids, staggered in time. The methods achieve space-time accuracy of order O(2m). Besides their high order of
accuracy in both space and time combined, they have the special feature that they are stable for CFL = 1, for all orders of accuracy. This is significantly better than standard high-order element
methods. Moreover, the large time steps are purely local to each cell, minimizing communication and storage requirements.
In the second part of the talk we present a spatial discontinuous Galerkin discretization of wave equations in second order form that relies on a new energy based strategy featuring a direct,
mesh-independent approach to defining interelement fluxes. Both energy-conserving and upwind discretizations can be devised. The method comes with optimal a priori error estimates in the energy norm
for certain fluxes, and we present numerical experiments demonstrating optimal convergence for these fluxes.
Mar 6, 2019
Speaker: Dr. Richard Wellman, Colorado College
Title: Scalable semi-supervised learning with operators in Hilbert space
There is a preponderance of semi-supervised learning problems in science and industry, but there is a dearth of applicable semi-supervised algorithms. The LaplaceSVM Semi-Supervised Support Vector Machine is a learning algorithm that demonstrates state-of-the-art performance on benchmark semi-supervised data sets. However, this algorithm does not scale. In this talk we'll discuss the mathematical foundations of the LaplaceSVM and show that the kernel is a solution of a non-homogeneous self-adjoint operator equation. It can be shown that certain Galerkin spectral approximations are themselves valid reproducing kernels that encode the underlying Riemannian geometry. The spectral kernels have excellent scalability metrics and interesting mathematical properties. We discuss both the mathematics and experimental results of the resultant semi-supervised algorithm.
Apr 17, 2019
Speaker: Dr. Mahmoud Hussein, CU Boulder
Title: Exact dispersion relation for strongly nonlinear elastic wave propagation
Wave motion lies at the heart of many disciplines in the physical sciences and engineering. For example, problems and applications involving light, sound, heat or fluid flow are all likely to involve
wave dynamics at some level. In this seminar, I will present our recent work on a class of problems involving intriguing nonlinear wave phenomena: large-deformation elastic waves in solids; that is, the “large-on-small” problem.
Specifically, we examine the propagation of a large-amplitude wave in an elastic one-dimensional medium that is undeformed at its nominal state. In this context, our focus is on the effects of
inherent nonlinearities on the dispersion relation. Considering a thin rod, where the thickness is small compared to the wavelength, I will present an exact formulation for the treatment of a
nonlinearity in the strain-displacement gradient relation. As examples, we consider Green Lagrange strain and Hencky strain. The ideas presented, however, apply generally to other types of
nonlinearities. The derivation starts with an implementation of Hamilton’s principle and terminates with an expression for the finite-strain dispersion relation in closed form. The derived relation
is then verified by direct time-domain simulations, examining both instantaneous dispersion (by direct observation) and short-term, pre-breaking dispersion (by Fourier transformations), as well as by
perturbation theory. The results establish a perfect match between theory and simulation and reveal that an otherwise linearly nondispersive elastic solid may exhibit dispersion solely due to the
presence of a nonlinearity. The same approach is also applied to flexural waves in an Euler Bernoulli beam, demonstrating qualitatively different nonlinear dispersive effects compared to longitudinal
waves. Finally, I will present a method for extending this analysis to a continuous thin rod with a periodic arrangement of material properties. The method, which is based on a standard transfer
matrix augmented with a nonlinear enrichment at the constitutive material level, yields an approximate band structure that accounts for the finite wave amplitude. Using this method, I will present an
analysis on the condition required for the existence of spatial invariance in the wave profile.
This work provides insights into the fundamentals of nonlinear wave propagation in solids, both natural and engineered, a problem relevant to a range of disciplines including dislocation and crack
dynamics, geophysical and seismic waves, material nondestructive evaluation, biomedical imaging, elastic metamaterial engineering, among others.
May 8, 2019
Speaker: Dr. Michael Calvisi, Mechanical and Aerospace Eng, UCCS
Title: The Curious Dynamics of Translating Bubbles: An Application of Perturbation Methods and Potential Flow Theory
When subject to an acoustic field, bubbles will translate and oscillate in interesting ways. This motion is highly nonlinear and its understanding is essential to the application of bubbles in
diagnostic ultrasound imaging, microbubble drug delivery, and acoustic cell sorting, among others. This talk will review some of the interesting physics that occur when bubbles translate in an
acoustic field, including Bjerknes forces, the added mass effect, and nonspherical shape oscillation. Such nonspherical shape modes strongly affect the stability and acoustic signature of
encapsulated microbubbles (EMBs) used for biomedical applications, and thus are an important factor to consider in the design and utilization of EMBs. The shape stability of an EMB subject to
translation is investigated through development of an axisymmetric model for the case of small deformations using perturbation analysis. The potential flow in the bulk volume of the external flow is
modeled using an asymptotic analysis. Viscous effects within the thin boundary layer at the interface are included, owing to the no-slip boundary condition. The results of numerical simulations of
the evolution equations for the shape and translation of the EMB demonstrate the counterintuitive result that, compared to a free gas bubble, the encapsulation actually promotes instability when a
microbubble translates due to an acoustic wave.
│Fall 2018 │
│Sep 12, 2018│Sarbarish Chakravarty│Beach waves and KP solitons │
│ │UCCS Math │ │
│Oct 3, 2018 │Robert Carlson │An elementary trip from the Gauss hypergeometric function to the Poschl-Teller potential in quantum mechanics │
│ │UCCS Math │ │
│Oct 17, 2018│Geraldo de Souza │Fourier series, Wavelets, Inequalities, Geometry and Optimization │
│ │Auburn University │ │
│Nov 14, 2018│Robert Jenkins │Semiclassical soliton ensembles │
│ │CSU Fort Collins │ │
│Dec 5, 2018 │Barbara Prinari │Discrete solitons for the focusing Ablowitz-Ladik equation with non-zero boundary conditions via inverse scattering transform│
│ │UCCS Math │ │
Fall 2018 abstracts:
Sep 12, 2018
Speaker: Dr. Sarbarish Chakravarty, UCCS
Title: Beach Waves and KP Solitons
Abstract: In this talk, I will give a brief overview of the soliton solutions of the KP equation, and discuss how these solutions can describe shallow water wave patterns on long flat beaches.
Oct 3, 2018
Speaker: Dr. Robert Carlson, UCCS
Title: An elementary trip from the Gauss hypergeometric function to the Poschl-Teller potential in quantum mechanics
Abstract: A simple transformation takes the (G) equation for the Gauss hypergeometric function to the (J) equation for Jacobi polynomials. J has an (unusual) adjoint equation (H) (of Heun type) with
an extra singular point. H has eigenfunctions that can be expressed in terms of the Gauss hypergeometric function. Another change of variables lets us rediscover a ‘solvable’ (Poschl-Teller)
Schrodinger equation. The methods use the kinds of techniques we often teach in Math 3400.
Oct 17, 2018
Speaker: Dr. Geraldo de Souza, Auburn University
Title: Fourier series, Wavelets, Inequalities, Geometry and Optimization
Abstract: This talk will have two parts. In the first part, I will start with motivation and comments on some important problems in Analysis. Each problem has led to important discoveries, such as wavelets and techniques for the convergence of Fourier series, among others. In the second part I will talk about inequalities. In general, I view the second part of this presentation as simple or perhaps an elementary
approach to the subject (even though it is a new idea). On the other hand, this talk will show some interesting observations that are part of the folklore of mathematics. I will go over some very
common and important inequalities in analysis that we see in the course of Analysis and even in Calculus. I will give some different views of different proofs, using Geometry, Graphing and some of
them “a new analytic proof” by using optimization of functions of two variables (this is very interesting).
Nov 14, 2018
Speaker: Dr. Robert Jenkins, Colorado State University - Fort Collins
Title: Semiclassical soliton ensembles
Abstract: Equations like the Korteweg-de Vries (KdV) and the nonlinear Schroedinger equation exhibit interesting and complicated dynamics when the dispersive length scales in the problem are small
compared to those of the initial wave profile; this is the relevant scaling regime for many problems in optical fibers. In this talk I'll discuss one way to analyze such problems for integrable PDEs
using the inverse scattering transform (IST) that approximates initial data by an increasingly large sum of solitons. I'll talk both about NLS and some more recent work of mine on the resonant three
wave interaction equations. There will be lots of pictures to help clear up the technical details!
Dec 5, 2018
Speaker: Dr. Barbara Prinari, UCCS
Title: Discrete solitons for the focusing Ablowitz-Ladik equation with non-zero boundary conditions via inverse scattering transform
Abstract: Soliton solutions of the focusing Ablowitz-Ladik (AL) equation with nonzero boundary conditions at infinity are derived within the framework of the inverse scattering transform (IST). After
reviewing the relevant aspects of the direct and inverse problems, explicit soliton solutions will be discussed which are the discrete analog of the Tajiri-Watanabe and Kuznetsov-Ma solutions to the
focusing NLS equation on a finite background. Then, by performing suitable limits of the above solutions, discrete analog of the celebrated Akhmediev and Peregrine solutions will also be presented.
These solutions, which had been recently derived by direct methods, are obtained for the first time within the framework of the IST, thus providing a spectral characterization of the solutions and a
description of the singular limit process.
│Spring 2018 │
│Apr 11, 2018│Greg Fasshauer │An Introduction to Kernel-Based Approximation Methods │
│ │Colorado School of Mines│ │
│Mar 14, 2018│Ethan Berkove │Short Paths and Long Titles: Travels through the Sierpinski carpet, Menger sponge, and beyond. │
│ │Lafayette College │ │
│Feb 28, 2018│Radu Cascaval │Traffic Flow Models. A Tutorial │
│ │UCCS Math │ │
Spring 2018 abstracts:
Apr 11, 2018
Speaker: Dr. Greg Fasshauer, Colorado School of Mines
Title: An Introduction to Kernel-Based Approximation Methods
Abstract: I will start with a few historical remarks, and then motivate the use of kernel-based approximation as a numerical approach that generalizes standard polynomial-based methods. Examples of
kernels and their use in data fitting problems will be provided along with an overview of some of the concerns and issues associated with the use of kernel methods.
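The basic data-fitting recipe can be sketched in a few lines: pick a positive definite kernel, solve the interpolation system K c = f at the data sites, and evaluate the resulting kernel expansion anywhere. The Gaussian kernel, shape parameter, and Runge test function below are conventional choices for illustration, not taken from the talk.

```python
import numpy as np

# Kernel-based interpolation in 1D: the interpolant s(x) = sum_j c_j K(x, x_j)
# reproduces the data exactly at the nodes.
def gauss_kernel(x, y, eps=4.0):
    """Gaussian kernel with shape parameter eps."""
    return np.exp(-(eps * (x[:, None] - y[None, :]))**2)

x = np.linspace(-1, 1, 15)                   # data sites
f = 1.0 / (1.0 + 25 * x**2)                  # Runge function, a classic test

c = np.linalg.solve(gauss_kernel(x, x), f)   # interpolation coefficients
xe = np.linspace(-1, 1, 200)                 # evaluation points
s = gauss_kernel(xe, x) @ c                  # the kernel interpolant
```

Shrinking the shape parameter makes the interpolant more accurate but the system increasingly ill-conditioned, one of the central concerns with kernel methods that the talk addresses.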
Mar 14, 2018
Speaker: Dr. Ethan Berkove, Lafayette College (Joint with Rings and Wings Seminar)
Title: Short Paths and Long Titles: Travels through the Sierpinski carpet, Menger sponge, and beyond.
Abstract: Sierpinski carpet and Menger sponge are fractals which can be thought of as two and three dimensional versions of the Cantor set. Like the Cantor set, each is formed by starting with a
shape (a square for the carpet, a cube for the sponge) and then recursively removing certain subsets of it. Unlike the Cantor set, what remains is connected in the following sense: given any two
points s and f in the carpet or sponge, there is a path from s to f that stays in the carpet or sponge. In this talk, we’ll discuss what we know about the shortest path from s to f in the carpet,
sponge, and even higher dimensional versions of these fractals. The proofs required a surprising (at least to us) breadth of techniques drawn from combinatorics, geometry, and even linear programming.
(Joint work with Derek Smith)
Feb 28, 2018
Speaker: Dr. Radu Cascaval, UCCS
Title: Traffic Flow Models. A Tutorial
Abstract: We present several traffic flow models, at both the micro- and macro-scale, including for multi-lane traffic. Problems of controlling traffic will be described, and numerical simulations will illustrate possible solutions.
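As a taste of the micro-scale models, here is a generic optimal-velocity car-following sketch: each car relaxes its speed toward an "optimal velocity" determined by the gap to the car ahead. The functional form, parameters, and scenario are illustrative assumptions, not a specific model from the talk.

```python
import numpy as np

def optimal_velocity(gap, v_max=30.0, d0=25.0):
    """Desired speed as a function of headway: 0 at zero gap, v_max as gap grows."""
    t1 = np.tanh(1.0)
    return v_max * (np.tanh(gap / d0 - 1.0) + t1) / (1.0 + t1)

n_cars, dt, tau = 10, 0.1, 2.0
x = np.arange(n_cars)[::-1] * 30.0        # positions; car 0 leads
v = np.full(n_cars, 10.0)                 # initial speeds

for _ in range(2000):                     # forward-Euler time stepping
    gap = np.empty(n_cars)
    gap[1:] = x[:-1] - x[1:]              # headway to the car ahead
    gap[0] = 1e9                          # lead car sees open road
    v += dt * (optimal_velocity(gap) - v) / tau
    x += dt * v
```

Even this toy exhibits the qualitative behavior macro-scale models aggregate: the platoon spreads out as the leader pulls away, speeds relax toward the free-flow value, and the ordering of cars (no collisions) is preserved.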
│Fall 2017 │
│Dec 8, 2017 │Barbara Prinari│Solitons and rogue waves for a square matrix nonlinear Schrodinger equation with nonzero boundary conditions │
│ │UCCS Math │ │
│Nov 17, 2017│Oksana Bihun │New properties of the zeros of Krall polynomials │
│ │UCCS Math │ │
│Oct 27, 2017│Radu Cascaval │What do Analysis and Scientific Computation have in common ... │
│ │UCCS Math │ │
│Sep 29, 2017│Fritz Gesztesy │The eigenvalue counting function for Krein-von Neumann extensions of elliptic operators │
│ │Baylor Univ. │ │
Fall 2017 abstracts:
Dec 8, 2017
Speaker: Dr. Barbara Prinari, UCCS
Title: Solitons and rogue waves for a square matrix nonlinear Schrodinger equation with nonzero boundary conditions
Abstract: In this talk we discuss the Inverse Scattering Transform (IST) under nonzero boundary conditions for a square matrix nonlinear Schrodinger equation which has been proposed as a model to
describe hyperfine spin F = 1 spinor Bose-Einstein condensates with either repulsive interatomic interactions and anti-ferromagnetic spin-exchange interactions, or attractive interatomic interactions
and ferromagnetic spin-exchange interactions. Emphasis will be given to a discussion of the soliton and rogue wave solutions one can obtain as a byproduct of the IST.
Nov 17, 2017 Seminar
Speaker: Dr. Oksana Bihun, UCCS
Title: New properties of the zeros of Krall polynomials
Abstract: We identify a class of remarkable algebraic relations satisfied by the zeros of the Krall orthogonal polynomials that are eigenfunctions of linear differential operators of order higher
than two. Given an orthogonal polynomial family p_n(x), we relate the zeros of the polynomial p_N with the zeros of p_m for each m <=N (the case m = N corresponding to the relations that involve the
zeros of p_N only). These identities are obtained by exploiting the similarity transformation that relates the spectral and the (interpolatory) pseudospectral matrix representations of linear
differential operators, while using the zeros of the polynomial p_N as the interpolation nodes. The proposed framework generalizes known properties of classical orthogonal polynomials to the case of
non-classical polynomial families of Krall type. We illustrate the general result by proving new remarkable identities satisfied by the Krall-Legendre, the Krall-Laguerre and the Krall-Jacobi
orthogonal polynomials.
Oct 27, 2017 Seminar
Speaker: Dr. Radu C. Cascaval, UCCS
Title: What do Analysis and Scientific Computation have in common ...
Abstract: Analysis, the world of the infinitesimally small, is thought to be one of the last standing outposts where humans can fight the computational invasion. In spite of this fact, computational
sciences continue to benefit greatly from advances in analysis. This talk will illustrate this relationship, in particular functional analysis connections to numerical spectral methods, meshless
methods, and their applications to numerical solutions to PDEs.
Sept 29, 2017 Seminar
Speaker: Dr. Fritz Gesztesy, Baylor University
Title: The eigenvalue counting function for Krein-von Neumann extensions of elliptic operators
Abstract: We start by providing a historical introduction into the subject of Weyl-asymptotics for Laplacians on bounded domains in n-dimensional Euclidean space, and a brief introduction into the
basic principles of self-adjoint extensions. Subsequently, we turn to bounds on eigenvalue counting functions and derive such a bound for Krein-von Neumann extensions corresponding to a class of
uniformly elliptic second order PDE operators (and their positive integer powers) on arbitrary open, bounded, n-dimensional subsets \Omega in R^n. (No assumptions on the boundary of \Omega are made;
the coefficients are supposed to satisfy certain regularity conditions.) Our technique relies on variational considerations exploiting the fundamental link between the Krein-von Neumann extension and
an underlying abstract buckling problem, and on the distorted Fourier transform defined in terms of the eigenfunction transform of the corresponding differential operator suitably extended to all of
R^n. We also consider the analogous bound for the eigenvalue counting function for the corresponding Friedrichs extension. This is based on joint work with M. Ashbaugh, A. Laptev, M. Mitrea, and S.
Ch. 26 Review Questions - Psychiatric-Mental Health Nursing | OpenStax
Review Questions
1.
What is a common barrier to recovery from mental illness?
a. increased social interaction
b. stigma and discrimination
c. availability of multiple treatment options
d. high levels of self-esteem
2.
What is an example of an adjunctive treatment in mental health care?
a. antipsychotic medication
b. hospitalization
c. psychotherapy
d. yoga
3.
What is a key role of nurses in the provision of adjunctive treatments for mental illness?
a. prescribing medication
b. conducting psychotherapy sessions
c. monitoring client treatment adherence
d. performing surgical procedures
4.
What statement describes a controversy associated with the practice of psychiatry?
a. the universal agreement on the efficacy of psychiatric medications across all populations
b. the use of involuntary treatment and the potential infringement on personal freedoms
c. the absence of any ethical dilemmas in psychiatric practices
d. the complete alignment of psychiatric diagnoses with physical health conditions
5.
What does the anti-psychiatry movement want to reform in psychiatric practices?
a. increasing the use of involuntary treatments to improve client outcomes
b. enhancing the transparency and client involvement in treatment decisions
c. reducing the emphasis on social determinants of mental health
d. eliminating the use of all medications in psychiatric treatment
6.
What is a nursing implication derived from the anti-psychiatry movement?
a. Nurses should solely rely on psychiatrists’ decisions without involving clients in the care process.
b. Nurses should disregard clients’ personal experiences and narratives about their mental health.
c. Nurses should adopt a client-centered approach that respects individuals’ rights and preferences in care.
d. Nurses should exclusively use involuntary treatment methods for managing all psychiatric clients.
7.
What statement best describes the development of user groups in the digital age?
a. User groups have decreased in number due to the advent of technology.
b. User groups are exclusively for professional networking and cannot be used for health-care purposes.
c. User groups have evolved to utilize digital platforms for broader reach and acquiring digital health information.
d. User groups are less important in the digital age as individuals prefer to seek information independently.
8.
What platform is commonly used for hosting online communities and support groups for clients and families?
a. video game forums
b. social media platforms
c. encrypted email services
d. corporate intranets
9.
What is a key resource for finding databases and evidence-based practice resources in nursing?
a. popular search engines like Google
b. nursing forums and professional organizations’ websites
c. personal blogs of health-care professionals
d. entertainment websites
10.
How can nurses use informatics and technology innovation in their practice?
a. by avoiding the use of electronic health records to protect client privacy
b. by utilizing telehealth services to provide care and consultation remotely
c. by relying solely on traditional paper records for client documentation
d. by ignoring new technology trends as they are often temporary and not useful
0026 Probability - Out of the Math Box!
Hints will display for most wrong answers; explanations for most right answers. You can attempt a question multiple times; it will only be scored correct if you get it right the first time.
I used the official objectives and sample test to construct these questions, but cannot promise that they accurately reflect what’s on the real test. Some of the sample questions were more
convoluted than I could bear to write. See terms of use. See the MTEL Practice Test main page to view random questions on a variety of topics or to download paper practice tests.
MTEL General Curriculum Mathematics Practice
What is the probability that two randomly selected people were born on the same day of the week? Assume that all days are equally probable.
\( \large \dfrac{1}{7}\)
It doesn't matter what day the first person was born on. The probability that the second person will match is 1/7 (just designate one person the first and the other the second). Another way to look
at it is that if you list the sample space of all possible pairs, e.g. (Wed, Sun), there are 49 such pairs, and 7 of them are repeats of the same day, and 7/49=1/7.
\( \large \dfrac{1}{14}\)
What would be the sample space here? Ie, how would you list 14 things that you pick one from?
\( \large \dfrac{1}{42}\)
If you wrote the seven days of the week on pieces of paper and put the papers in a jar, this would be the probability that the first person picked Sunday and the second picked Monday from the jar --
not the same situation.
\( \large \dfrac{1}{49}\)
This is the probability that they are both born on a particular day, e.g. Sunday.
Question 1 Explanation:
Topic: Calculate the probabilities of simple and compound events and of independent and dependent events (Objective 0026).
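A quick way to verify this count is to enumerate the 49-pair sample space directly (an illustrative Python sketch, not part of the quiz itself):

```python
from fractions import Fraction
from itertools import product

days = range(7)                                  # the seven days, equally likely
pairs = list(product(days, repeat=2))            # all 49 (person1, person2) outcomes
matches = sum(1 for a, b in pairs if a == b)     # the 7 same-day pairs
print(Fraction(matches, len(pairs)))             # 1/7
```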
There are six gumballs in a bag — two red and four green. Six children take turns picking a gumball out of the bag without looking. They do not return any gumballs to the bag. What is the
probability that the first two children to pick from the bag pick the red gumballs?
\( \large \dfrac{1}{3}\)
This is the probability that the first child picks a red gumball, but not that the first two children pick red gumballs.
\( \large \dfrac{1}{8}\)
Are you adding things that you should be multiplying?
\( \large \dfrac{1}{9}\)
This would be the probability if the gumballs were returned to the bag.
\( \large \dfrac{1}{15}\)
The probability that the first child picks red is 2/6 = 1/3. Then there are 5 gumballs in the bag, one red, so the probability that the second child picks red is 1/5. Thus 1/5 of the time, after the
first child picks red, the second does too, so the probability is 1/5 x 1/3 = 1/15.
Question 2 Explanation:
Topic: Calculate the probabilities of simple and compound events and of independent and dependent events (Objective 0026).
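The 1/3 × 1/5 reasoning can also be checked by listing every ordered pair of draws without replacement (an illustrative sketch; the gumballs are labeled by index so the two reds are distinguishable):

```python
from fractions import Fraction
from itertools import permutations

bag = ['R', 'R', 'G', 'G', 'G', 'G']
draws = list(permutations(range(6), 2))                       # 30 ordered picks, no replacement
both_red = sum(1 for i, j in draws if bag[i] == bag[j] == 'R')
print(Fraction(both_red, len(draws)))                         # 1/15
```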
The table below gives data from various years on how many young girls drank milk.
Based on the data given above, what was the probability that a randomly chosen girl in 1990 drank milk?
\( \large \dfrac{502}{1222}\)
This is the probability that a randomly chosen girl who drinks milk was in the 1989-1991 food survey.
\( \large \dfrac{502}{2149}\)
This is the probability that a randomly chosen girl from the whole survey drank milk and was also surveyed in 1989-1991.
\( \large \dfrac{502}{837}\)
\( \large \dfrac{1222}{2149}\)
This is the probability that a randomly chosen girl from any year of the survey drank milk.
Question 3 Explanation:
Topic: Recognize and apply the concept of conditional probability (Objective 0026).
The table below gives the result of a survey at a college, asking students whether they were residents or commuters:
Based on the above data, what is the probability that a randomly chosen commuter student is a junior or a senior?
\( \large \dfrac{34}{43}\)
\( \large \dfrac{34}{71}\)
This is the probability that a randomly chosen junior or senior is a commuter student.
\( \large \dfrac{34}{147}\)
This is the probability that a randomly chosen student is a junior or senior who is a commuter.
\( \large \dfrac{71}{147}\)
This is the probability that a randomly chosen student is a junior or a senior.
Question 4 Explanation:
Topic: Recognize and apply the concept of conditional probability (Objective 0026).
At a school fundraising event, people can buy a ticket to spin a spinner like the one below. The region that the spinner lands in tells which, if any, prize the person wins.
If 240 people buy tickets to spin the spinner, what is the best estimate of the number of keychains that will be given away?
"Keychain" appears on the spinner twice.
The probability of getting a keychain is 1/3, and so about 1/3 of the time the spinner will win.
What is the probability of winning a keychain?
That would be the answer for getting any prize, not a keychain specifically.
Question 5 Explanation:
Topic: I would call this topic expected value, which is not listed on the objectives. This question is very similar to one on the sample test. It's not a good question in that it's oversimplified (a
more difficult and interesting question would be something like, "The school bought 100 keychains for prizes, what is the probability that they will run out before 240 people play?"). In any case, I
believe the objective this is meant for is, "Recognize the difference between experimentally and theoretically determined probabilities in real-world situations. (Objective 0026)." This is not
something easily assessed with multiple choice.
A family has four children. What is the probability that two children are girls and two are boys? Assume the the probability of having a boy (or a girl) is 50%.
\( \large \dfrac{1}{2}\)
How many different configurations are there from oldest to youngest, e.g. BGGG? How many of them have 2 boys and 2 girls?
\( \large \dfrac{1}{4}\)
How many different configurations are there from oldest to youngest, e.g. BGGG? How many of them have 2 boys and 2 girls?
\( \large \dfrac{1}{5}\)
Some configurations are more probable than others -- i.e. it's more likely to have two boys and two girls than all boys. Be sure you are weighting properly.
\( \large \dfrac{3}{8}\)
There are two possibilities for each child, so there are \(2 \times 2 \times 2 \times 2 =16\) different configurations, e.g. from oldest to youngest BBBG, BGGB, GBBB, etc. Of these configurations,
there are 6 with two boys and two girls (this is the combination \(_{4}C_{2}\) or "4 choose 2"): BBGG, BGBG, BGGB, GGBB, GBGB, and GBBG. Thus the probability is 6/16=3/8.
Question 6 Explanation:
Topic: Apply knowledge of combinations and permutations to the computation of probabilities (Objective 0026).
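The 16 birth-order configurations in this explanation can be generated and counted mechanically (a short illustrative sketch):

```python
from fractions import Fraction
from itertools import product

families = list(product('BG', repeat=4))                  # 16 orders, e.g. ('B','G','G','B')
two_each = sum(1 for f in families if f.count('G') == 2)  # the 6 two-girl configurations
print(Fraction(two_each, len(families)))                  # 3/8
```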
If two fair coins are flipped, what is the probability that one will come up heads and the other tails?
\( \large \dfrac{1}{4}\)
Think of the coins as a penny and a dime, and list all possibilities.
\( \large \dfrac{1}{3} \)
This is a very common misconception. There are three possible outcomes -- both heads, both tails, and one of each -- but they are not equally likely. Think of the coins as a penny and a dime, and
list all possibilities.
\( \large \dfrac{1}{2}\)
The possibilities are HH, HT, TH, TT, and all are equally likely. Two of the four have one of each coin, so the probability is 2/4=1/2.
\( \large \dfrac{3}{4}\)
Think of the coins as a penny and a dime, and list all possibilities.
Question 7 Explanation:
Topic: Calculate the probabilities of simple and compound events and of independent and dependent events (Objective 0026).
Four children randomly line up, single file. What is the probability that they are in height order, with the shortest child in front? All of the children are different heights.
\( \large \dfrac{1}{4}\)
Try a simpler question with 3 children -- call them big, medium, and small -- and list all the ways they could line up. Then see how to extend your logic to the problem with 4 children.
\( \large \dfrac{1}{256} \)
Try a simpler question with 3 children -- call them big, medium, and small -- and list all the ways they could line up. Then see how to extend your logic to the problem with 4 children.
\( \large \dfrac{1}{16}\)
Try a simpler question with 3 children -- call them big, medium, and small -- and list all the ways they could line up. Then see how to extend your logic to the problem with 4 children.
\( \large \dfrac{1}{24}\)
The number of ways for the children to line up is \(4!=4 \times 3 \times 2 \times 1 =24\) -- there are 4 choices for who is first in line, then 3 for who is second, etc. Only one of these lines has
the children in the order specified.
Question 8 Explanation:
Topic: Apply knowledge of combinations and permutations to the computation of probabilities (Objective 0026).
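The 4! = 24 lineups can likewise be enumerated to confirm that exactly one is in height order (illustrative sketch; the height values are arbitrary, only their order matters):

```python
from fractions import Fraction
from itertools import permutations

heights = [100, 110, 120, 130]                        # four distinct heights
lines = list(permutations(heights))                   # all 24 possible lineups
ordered = sum(1 for line in lines if list(line) == sorted(line))
print(Fraction(ordered, len(lines)))                  # 1/24
```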
There are 8 questions to complete.
If you found a mistake or have comments on a particular question, please contact me (please copy and paste at least part of the question into the form, as the numbers change depending on how quizzes
are displayed). General comments can be left here.
Calculating the Day of the Week Without Code
This blog is inspired by a forum post I read where someone asked to display the day of the week in a CRM view. Unfortunately, I cannot find the request now but it is possible (and without code).
In the above, the Input Date is the date we want the day for and the Day of the Week shows the correct day.
The Calculation
For this one we need the following fields.
Working backwards:
• Day of the Week: An option set of the days of the week (set via a workflow)
• DOTW Number: A Whole Number which shows a number representing the correct day of the week
(a calculated field and, in this case, a number between –3 and 3)
• Dodgy DIV: Strictly speaking, not a mathematical DIV but close enough for our purposes. In our case, this represents the week our Input Date is in. It is a calculated field.
• Input Date: The date of interest, entered by the user.
• Reference Date: A date we know the day of the week for. In this case, 1/1/2015 which was a Thursday
To anyone outside of the USA, Belize, or the Federated States of Micronesia, I apologize for using “Middle Endian” formatting for my dates. It is a blatant play to the American market on my behalf
and does not affect the calculations.
In terms of the Reference Date, my preference was to incorporate this into the Dodgy DIV calculation but I could not figure out how to input a fixed date e.g. “01/01/2015” into the calculated field.
CRM either thought it was an integer or thought it was a string. If anyone knows how to do this, please leave a comment at the end.
Given I could not feed my reference date directly into the Dodgy DIV calculation, I had to automatically enter it into a field via a Business Rule. Here it is.
I tried using the Set Default Value action but struggled to get it to behave so I just used the Set Value action in the end. I also was forced to put in a condition, so I selected one which is always true.
With this value set, I could then calculate the Dodgy DIV.
A couple of issues with this formula, which is why it is dodgy. What I wanted it to do was calculate the difference in days between 1/1/2015 and our Input Date and then divide it by seven. It turns out that the DiffInDays function has a pretty huge bug in it, in that it gets the difference in days wrong. I found that if I put the same two dates into the DiffInDays formula, it returned –1 and not 0, as expected. This is why I add one.

The second issue is, in C (the language I used to code in a long time ago), if you divided two numbers and put them into an integer variable, it rounded down, effectively being a DIV operation. Dynamics CRM does not play this way and applies a normal rounding operation (x.5 or more goes up and less than x.5 goes down).
Using my Dodgy DIV, I can now calculate the day of the week. The DOTW Number (Day of the Week Number) is also a calculated field.
This is where the clever trick comes in (if I do say so myself). Here we calculate the difference in days again, add one to account for the bug in the DiffInDays formula and take away the Dodgy DIV
value, multiplied by 7. Because the Dodgy DIV field has applied a rounding, the difference is the Modulus (also known as the remainder) or it would be for a pure DIV function. In our case we generate
a number between –3 and 3, rather than a number between 0 and 6.
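The arithmetic is easy to sanity-check outside CRM. Below is a rough Python translation of the two calculated fields (illustrative only: the function and dictionary names are mine, and Python's `round` stands in for CRM's nearest-integer rounding):

```python
from datetime import date

REFERENCE = date(2015, 1, 1)               # a known Thursday

def dotw_number(input_date):
    diff = (input_date - REFERENCE).days   # true day difference (no DiffInDays bug here)
    dodgy_div = round(diff / 7)            # nearest-integer rounding, as CRM applies
    return diff - dodgy_div * 7            # remainder in the range -3..3

# 0 is the reference day (Thursday); the other offsets wrap around the week
DAYS = {-3: 'Monday', -2: 'Tuesday', -1: 'Wednesday', 0: 'Thursday',
        1: 'Friday', 2: 'Saturday', 3: 'Sunday'}

print(DAYS[dotw_number(date(2015, 11, 20))])   # Friday
```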
Finally, we have our days of the week. Unfortunately there is no simple way to set the Option Set Value with a formula using a number so I had to use a real time workflow.
The workflow is triggered on creation of the record and on the changing of the Input Date field. There are seven IF statements, one for each day of the week, linking the DOTW Number to the right day.
Once all of this is done it works like a treat, with the day value being set on the saving of the record. It is then a case of adding the Day of the Week field to the view you want to see it in.
I like this solution as it opens the way for quite a few other requests seen in CRM systems. For example, with the day of the week we can check whether a day is a work day when setting an appointment. Although I have not fully figured out how yet, I imagine we could also use a similar technique to work out the number of working days between two dates. So many possibilities thanks to calculated fields.
If you are not exploring how calculated fields can help you manage your business processes in Dynamics CRM, perhaps you should because, as you can see above, they open up a wealth of options,
previously inaccessible, without using code.
2 comments:
Unknown said...
Wouldn't the workflow produce the wrong DOTW if next year's Jan 1 (reference date) is not on a Thursday?
Leon Tribe said...
If we used a different reference date we'd need to adjust but my one will work for as long as we like.
How do you draw Miller indices?
For the given Miller indices, the plane can be drawn as follows:
1. Find the reciprocal of the given Miller indices.
2. Draw the cube and select a proper origin and show X, Y and Z axes respectively.
3. With respect to origin mark these intercepts and join through straight lines.
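The reverse computation, from intercepts to Miller indices (take the reciprocals of the fractional intercepts, clear the fractions, then reduce to the smallest integers), can be sketched in a few lines of Python. This is only an illustration, not crystallography software; here `None` marks an axis the plane never intersects:

```python
from fractions import Fraction
from functools import reduce
from math import gcd, lcm

def miller_indices(intercepts):
    # reciprocal of each fractional intercept; a plane parallel to an axis gives 0
    recips = [Fraction(0) if x is None else 1 / Fraction(x) for x in intercepts]
    common_denom = reduce(lcm, (r.denominator for r in recips))
    ints = [int(r * common_denom) for r in recips]       # clear the fractions
    divisor = reduce(gcd, (abs(i) for i in ints)) or 1   # reduce to smallest integers
    return tuple(i // divisor for i in ints)

print(miller_indices((1, 1, 1)))                  # (1, 1, 1)
print(miller_indices((Fraction(1, 2), 1, None)))  # (2, 1, 0)
```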
What is Miller planes and Miller directions?
Miller indices are used to specify directions and planes. These directions and planes could be in lattices or in crystals. The number of indices will match with the dimension of the lattice or the
crystal: in 1D there will be 1 index, in 2D there will be two indices, in 3D there will be 3 indices, etc.
What are the Miller indices of faces of a cubic lattice?
Miller indices are a notation to identify planes in a crystal. The three integers define directions orthogonal to the planes, thus constituting reciprocal basis vectors. Negative integers are usually written with an overbar (for example, an overbar over 1 denotes −1).
What is crystallographic direction?
i. Refers to directions in the various crystal systems that correspond with the growth of the mineral and often with the direction of one of the faces of the original crystal itself. ii. Vectors
referred to as crystallographic axes.
How the orientation of a plane is specified by Miller indices?
Miller Indices are a symbolic vector representation for the orientation of an atomic plane in a crystal lattice and are defined as the reciprocals of the fractional intercepts which the plane makes
with the crystallographic axes. In other words, how far along the unit cell lengths does the plane intersect the axis.
How do we find out directions and planes in crystals?
Indices of crystallographic points, directions, and planes are given in terms of the lattice constants of the unit cell. For points and directions, you can consider the indices to be coefficients of
the lattice constants. Remember that you only need to invert the indices for planes.
Relative Motion - S.B.A. Invent
Relative Motion
I have mainly talked about absolute motion (motion observed by a stationary observer) in the articles preceding this article. There can, however, be many cases where the motion of a particle is so complex that it would be difficult to analyze by observing it from one stationary point. Hence, to simplify complex problems you can observe the motion from multiple frames of reference. An example of this could be analyzing the tip of a ship's propeller. For instance, a fixed observer would see the ship itself moving, while another observer would analyze the propeller's rotation. As a result, to find the complete motion of the propeller you would superimpose the two observations vectorially.
Let’s consider two particles that are moving in a straight line in different directions. First, there will be a stationary observer at point O observing both particle A and particle B. Next, a translating observer will view the motion of particle B from particle A. Paths $s_A$ and $s_B$ are absolute paths, while path $s_{B/A}$ is a relative position. Finally, the three paths can be related to each other using vector addition.
(Eq 1) $s_B=s_A+s_{B/A}$
If you were to take the derivative of equation 1 you would be able to find the velocity of particle A and particle B. This would result in the equation below.
(Eq 2) $v_B=\frac{ds_B}{dt}$, $v_A=\frac{ds_A}{dt}$, $v_{B/A}=\frac{ds_{B/A}}{dt}$
For the above equation, velocities $v_B$ and $v_A$ are absolute velocities. The third velocity $v_{B/A}$ will be a relative velocity. In turn, the velocities can be related to each other through
vector addition as seen in the equation below.
(Eq 3) $v_B=v_A+v_{B/A}$

Finally, if you were to take the derivative of equation 3 (or the double derivative of equation 1) you will be able to find the acceleration of particle A and particle B.
Finally, if you were take the derivative of equation 1 or the double derivative of equation 3 you will be able to find the acceleration of particle A and particle B.
(Eq 4) $a_B=\frac{dv_B}{dt}$, $a_A=\frac{dv_A}{dt}$, $a_{B/A}=\frac{dv_{B/A}}{dt}$
In the above equation $a_B$ and $a_A$ represent the absolute acceleration of the two particles, while $a_{B/A}$ represents the relative acceleration between particle A and particle B. The absolute accelerations and relative acceleration can be related to each other through vector addition as seen in the equation below.
(Eq 5) $a_B=a_A+a_{B/A}$
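To make equation 3 concrete, here is a small numeric illustration (the velocity values are invented for this sketch, not taken from the worked example that follows):

```python
from math import hypot, atan2, degrees

v_A = (20.0, 0.0)                          # absolute velocity of A, m/s
v_B = (10.0, 15.0)                         # absolute velocity of B, m/s

# rearranging equation 3: v_{B/A} = v_B - v_A
v_BA = (v_B[0] - v_A[0], v_B[1] - v_A[1])

speed = hypot(*v_BA)                       # magnitude of the relative velocity
angle = degrees(atan2(v_BA[1], v_BA[0]))   # direction, measured from the +x axis
print(v_BA, round(speed, 2), round(angle, 1))   # (-10.0, 15.0) 18.03 123.7
```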
What is the magnitude and direction of the relative velocity of particle A with respect to particle B?
Step one, use equation 3 and write the velocity components out in x and y coordinates.
Solve for $v_{A/B}$
Find the magnitude of the velocity and angle.
$\tan(\theta)=\frac{-56.3}{-2.5}$, $\theta=87.5^\circ$
Understanding t-SNE: Visualizing High-Dimensional Data Easily
By RoX818 / October 7, 2024
Unlocking the Secrets of Complex Data
Ever feel like you’re drowning in data? High-dimensional datasets are everywhere, and they can be overwhelming to understand.
That’s where t-SNE comes in. Offering a lifeline for anyone trying to make sense of the seemingly incomprehensible. This algorithm takes massive amounts of information and simplifies it into
something you can actually see—revealing hidden patterns that were once locked away in a tangle of dimensions.
Curious to dive deeper into how it works and why it’s so powerful? Let’s break it down.
What Exactly is t-SNE?
t-SNE, or t-Distributed Stochastic Neighbor Embedding, is a machine learning algorithm used for dimensionality reduction. It takes large, complex datasets (usually with many dimensions) and reduces
them into two or three dimensions, which are easy to visualize. Think of it like a magician simplifying a complicated card trick so that you can finally understand the sleight of hand.
Unlike traditional methods such as PCA (Principal Component Analysis), t-SNE focuses on preserving local structures in data. That means if two data points are close in the high-dimensional space,
they’ll be close in the lower-dimensional representation too. It’s like creating a map that keeps the most important roads intact!
Why Do We Need t-SNE?
If you’ve ever worked with high-dimensional data, you know the struggle. You’re staring at rows upon rows of numbers, wondering where the meaningful patterns are hidden. The human brain is fantastic
at recognizing patterns, but only in a few dimensions—two or three at most.
When you increase that to 20 or 100 dimensions, it’s nearly impossible for the naked eye to make sense of the relationships.
That’s where t-SNE steps in. By reducing the data into a manageable number of dimensions while maintaining structural integrity, it allows you to see clusters, outliers, and patterns that would be
invisible otherwise. Whether you’re dealing with text embeddings, gene expression data, or image classification results, t-SNE can give you a much clearer picture.
How Does t-SNE Work?
At its core, t-SNE tries to balance two things: keeping similar points close together and keeping dissimilar points far apart. It does this by calculating the probability that two points are
neighbors in both the original high-dimensional space and the reduced space. t-SNE then adjusts the positions in the reduced space until these probabilities match as closely as possible.
There’s a clever trick here: t-SNE uses a technique called Stochastic Neighbor Embedding, where instead of hard-cut distances, it calculates probabilities. This allows the algorithm to focus more on
preserving small neighborhoods rather than large global structures, making it perfect for data that has clusters or groups.
t-SNE Process Step-by-Step
Tuning t-SNE: Perplexity and Learning Rate
t-SNE isn’t a plug-and-play algorithm—you need to tune it to get the best results. Two of the key hyperparameters you’ll need to tweak are perplexity and learning rate.
Perplexity is a measure of how well t-SNE captures the local density of data points. A lower perplexity means the algorithm focuses on very small, dense clusters, while a higher value leads to a
broader focus on larger structures. The right value depends on your data and goals, but typical values range between 5 and 50.
Meanwhile, the learning rate controls how much the positions of points change in each iteration of the optimization process. If it’s too high, you risk overshooting and getting a messy plot. If it’s
too low, the algorithm could get stuck and take too long to converge.
t-SNE Hyperparameter Tuning: Learning Rate
Pros of Using t-SNE in Data Science
The beauty of t-SNE lies in its ability to capture non-linear structures that other dimensionality reduction techniques may miss. For example, it’s excellent for visualizing clusters that represent
different classes in your data, such as different handwritten digits in the popular MNIST dataset. With t-SNE, you get an insightful, often stunning view of your data’s underlying patterns—something
that can guide further analysis or decision-making.
Challenges of t-SNE: It’s Not Always Smooth Sailing
While t-SNE is a powerful tool, it isn’t without its limitations. One of the main issues is that it can be computationally expensive. The algorithm doesn’t scale well to very large datasets, meaning
if you’re working with millions of data points, it could take an impractical amount of time to run.
Additionally, t-SNE visualizations can sometimes be tricky to interpret. The way the algorithm compresses distances can lead to plots where the distance between clusters is not always meaningful. You
might see clear separation between groups, but those gaps don’t necessarily represent the true relationship in the original high-dimensional space. This is important to remember, as interpreting
distances between clusters can mislead your analysis if you’re not careful.
Moreover, random initialization means that t-SNE results can vary from run to run. Each time you apply the algorithm, it may produce slightly different visualizations because it starts with random
positions for your data points. This isn’t usually a dealbreaker, but it does mean you should run t-SNE multiple times to ensure consistent insights.
Comparing t-SNE with Other Dimensionality Reduction Methods
You might be wondering how t-SNE stacks up against other techniques like PCA (Principal Component Analysis) or UMAP (Uniform Manifold Approximation and Projection). While t-SNE is highly effective at
visualizing local relationships and clusters, it differs from PCA in that it doesn’t attempt to preserve large-scale global structure. PCA, on the other hand, captures linear relationships in the
data and is much faster to compute, but it may miss non-linear patterns.
Distances in t-SNE: Preserving Local Neighborhoods
Then there’s UMAP, a newer algorithm that is considered a competitor to t-SNE. UMAP has the advantage of being faster and often producing more meaningful global structures, while still capturing
local relationships. The choice between these tools depends largely on your dataset and what you hope to achieve with your visualization.
Common Use Cases: Where t-SNE Shines
One of the most popular applications of t-SNE is in natural language processing (NLP). It’s often used to visualize word embeddings—high-dimensional vectors that represent the meaning of words based
on their context. With t-SNE, you can easily see how similar words cluster together, revealing interesting linguistic patterns. For instance, words like “king” and “queen” might appear near each
other in a t-SNE plot, as their embeddings are similar.
High-Dimensional Data to Low-Dimensional Projection
Another key use case is in genomics. Researchers often deal with massive datasets that record gene expressions across thousands of samples. t-SNE helps reduce this complex data into something more
understandable, showing which genes are similarly expressed and potentially helping to uncover biological insights.
Lastly, image processing is another field where t-SNE is widely used. When you apply t-SNE to high-dimensional image data, it can reveal patterns of similarity between different images, often helping
in tasks like classification or clustering.
Pitfalls to Avoid When Using t-SNE
While t-SNE is incredibly useful, there are some common mistakes that can lead to poor results. One of the most frequent pitfalls is overinterpreting the visualization. Just because two clusters are
far apart in a t-SNE plot doesn’t necessarily mean they’re that different in the high-dimensional space. t-SNE focuses on local neighborhoods and can distort global relationships, so be cautious
about reading too much into the distances between clusters.
Another issue is using t-SNE without enough preprocessing. Since t-SNE is sensitive to noise and outliers, it’s essential to clean and normalize your data before applying the algorithm. Failing to do
so could result in a messy, unclear visualization that doesn’t provide any real insight.
Lastly, if you’re working with extremely large datasets, you might want to consider subsampling before running t-SNE. Running the algorithm on millions of points isn’t just slow—it’s also likely to
produce a confusing plot. By taking a representative subset of your data, you can get clearer, more informative visualizations.
Interpreting t-SNE Results: What to Look For
When you run t-SNE and get that beautiful, colorful plot, it’s tempting to immediately start drawing conclusions. But before you do, there are a few key things to keep in mind.
First, focus on the clustering. Are there clear groups of points that seem to naturally fall together? These clusters might represent categories or classes within your data that you didn’t even know
existed. On the other hand, if your data doesn’t show any clear clusters, that might suggest it’s more homogeneous than you thought.
Next, watch out for outliers. t-SNE is great at highlighting points that don’t belong in any cluster—these outliers can be a signal that something interesting is happening in your data, or they might
simply represent noise that you need to clean up.
Finally, remember that t-SNE is a tool for exploration rather than definitive answers. The insights you gain from a t-SNE plot should guide further investigation, not serve as the final word on your analysis.
Best Practices for Running t-SNE Efficiently
Running t-SNE efficiently is key to avoiding unnecessary bottlenecks. Since the algorithm can be slow on large datasets, there are a few strategies to speed things up without sacrificing too much
detail. First, you can reduce the number of features beforehand by applying a technique like PCA to cut down on dimensions. This way, t-SNE has fewer variables to work with, resulting in faster computation.
Another helpful tip is to subsample your data. t-SNE often works well with a representative sample of your dataset rather than the entire thing. If your data has millions of points, selecting a few
thousand representative ones can provide nearly the same insights while drastically cutting down on processing time.
Using GPU acceleration is another powerful option. Some implementations of t-SNE, such as those in TensorFlow, offer GPU support, allowing for much faster performance than traditional CPU-only
approaches. If you’re working with large datasets regularly, taking advantage of a GPU can make a world of difference.
Interpreting Clusters: How to Make Sense of Your t-SNE Plot
Once you’ve generated a t-SNE plot, the next step is interpreting the clusters you see. But what exactly do these clusters represent? In most cases, the clusters signify similar data points that
share common features. For example, in a t-SNE plot of image data, each cluster might represent images of the same object, such as different views of a dog or cat.
But keep in mind that t-SNE’s focus is local—it aims to keep close points together, but the distances between different clusters might not be as meaningful. Instead of focusing on the absolute
distance between clusters, pay attention to the density of points within clusters. Tight clusters with little overlap may indicate distinct classes or groups, while more spread-out clusters could
suggest overlapping or ambiguous categories.
Also, examine whether the clusters are aligned with your expectations. If you’ve labeled data, do the clusters match known categories? If not, you may have uncovered something new—an opportunity to
delve deeper into why certain points are grouping together in unexpected ways.
t-SNE and Noise: Handling Messy Datasets
Real-world data is rarely clean, and noise can interfere with your t-SNE results. Noisy data can lead to messy plots where points that should belong to the same cluster get scattered across different
parts of the visualization. To mitigate this, it’s crucial to clean your data as much as possible before running t-SNE.
Standard preprocessing steps like scaling and normalization are essential when using t-SNE. Ensuring that all your features are on the same scale can help the algorithm better capture the true
relationships between data points. Additionally, removing outliers—data points that are extreme or don’t belong—can improve the clarity of your t-SNE plot by ensuring the algorithm focuses on the
important patterns.
Another way to reduce noise in your t-SNE results is by adjusting the perplexity setting. Lower perplexity values force t-SNE to focus more on small, tight clusters, which can help in identifying
core structures in a noisy dataset.
Limitations of t-SNE: What It Can and Can’t Do
While t-SNE is undoubtedly powerful, it’s not without limitations. One of the biggest challenges is its tendency to distort global relationships in the data. While it preserves local neighborhood
structures, t-SNE does not accurately reflect the overall distances between widely separated clusters. This means that while you can trust the grouping of nearby points, you shouldn’t rely too
heavily on the distances between different groups.
Another limitation is that t-SNE is primarily a visualization tool. It doesn’t provide a direct way to classify or predict based on the clusters it uncovers. Instead, you’ll need to use it in
conjunction with other methods to draw actionable conclusions from your data.
Finally, t-SNE’s results can be hard to reproduce. Since it uses random initialization, running t-SNE multiple times on the same dataset can produce slightly different visualizations. To address
this, you can set a random seed before running the algorithm, ensuring that your results are consistent across different runs.
Exploring Alternatives: When t-SNE Isn’t the Right Fit
t-SNE isn’t always the best option for every dataset. If you’re working with extremely large datasets or need faster performance, you might want to consider alternatives like UMAP. UMAP tends to be
faster than t-SNE and can often produce more meaningful global structures while still preserving local neighborhoods. It’s gaining popularity in fields like genomics and image processing, where data
size and complexity are significant concerns.
Another alternative is PCA, which, while less powerful at capturing non-linear relationships, is excellent for visualizing simpler datasets and has the advantage of being much faster. If your data
has a linear structure, PCA might be all you need to extract meaningful insights.
t-SNE vs PCA
Ultimately, the choice between t-SNE, UMAP, PCA, or other dimensionality reduction techniques comes down to the specifics of your data and your goals. For purely exploratory analysis or creating
beautiful visualizations, t-SNE is often the go-to choice. But for tasks that require faster computation or clearer interpretation of global structures, other methods might be better suited.
Real-World Examples of t-SNE in Action
To see t-SNE’s power in action, let’s explore a few real-world examples where it has made a significant impact. One well-known use of t-SNE is in the analysis of handwritten digit data, specifically
the MNIST dataset. This dataset consists of thousands of images of handwritten numbers, and using t-SNE to reduce the dimensionality reveals distinct clusters for each digit, showing how well the
algorithm can separate similar-looking but subtly different patterns.
Another striking example is in biological data analysis. In genomics, researchers frequently work with data on gene expression across different tissues or conditions. This type of data can have
thousands of dimensions, making it difficult to interpret. With t-SNE, researchers have been able to reduce the dimensionality of these datasets and identify distinct groups of genes that are
similarly expressed, often pointing to shared biological functions or disease markers.
In the field of natural language processing, t-SNE is used to visualize word embeddings—high-dimensional representations of words that capture their meaning based on context. Applying t-SNE to these
embeddings often produces fascinating results, with similar words clustering together in the reduced space. For example, in a t-SNE plot of word vectors, you might see “man” and “woman” in one
cluster, while “king” and “queen” form another, demonstrating how t-SNE helps reveal semantic relationships between words.
Practical Tips for Using t-SNE
If you’re ready to dive into using t-SNE on your own datasets, here are some practical tips to get you started. First, always remember to normalize your data. t-SNE is sensitive to differences in
scale, so ensuring that your features are on a common scale can make a big difference in the quality of the visualization.
Next, experiment with different values for the perplexity parameter. Perplexity controls the balance between local and global data structure, and while there are common default values (like 30), the
optimal value can vary depending on your dataset. Try running t-SNE with different perplexity settings to see how it affects the clustering.
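As a minimal sketch of that experiment — using scikit-learn's TSNE with random toy data standing in for a real dataset (the data shape, perplexity values, and variable names here are illustrative) — you can sweep a few perplexity settings and compare the resulting embeddings:

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 50))  # toy stand-in for your high-dimensional data

# Run t-SNE at several perplexity values and keep each 2-D embedding
embeddings = {}
for perp in (5, 30, 50):
    embeddings[perp] = TSNE(n_components=2, perplexity=perp,
                            random_state=0).fit_transform(X)

print(embeddings[30].shape)  # (300, 2)
```

Plotting each of the three embeddings side by side (e.g., with matplotlib scatter plots) is the usual way to judge which perplexity best reveals the clustering in your own data.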
Another tip is to be mindful of the size of your dataset. t-SNE doesn’t perform well on extremely large datasets, so if you’re working with a big dataset, consider subsampling to reduce the number of
data points. This allows t-SNE to run faster without losing the general trends and patterns you’re trying to uncover.
Finally, consider running t-SNE multiple times, especially if your results seem inconsistent. Since t-SNE’s random initialization can lead to slightly different results on each run, it’s often a good
idea to rerun the algorithm and compare the outputs, ensuring that the key patterns hold across different iterations.
The Future of t-SNE and Data Visualization
t-SNE has already transformed the way we visualize high-dimensional data, but what does the future hold for this powerful tool? As more researchers and data scientists look for ways to tackle
ever-larger datasets, the demand for faster, more scalable alternatives to t-SNE will continue to grow. Tools like UMAP are already offering faster, more robust options for visualizing large, complex
datasets, and we can expect further developments in this area as the field evolves.
That said, t-SNE will likely remain a go-to tool for certain use cases, especially when it comes to exploring local data structures and producing visually stunning representations of complex data.
With improvements in hardware, such as faster GPUs, and optimizations to the algorithm itself, t-SNE’s limitations may become less of a barrier over time.
As we continue to develop new ways of analyzing and interpreting data, t-SNE will remain a cornerstone of modern data science, offering invaluable insights into the hidden patterns and relationships
that shape our understanding of the world around us.
FAQ: Understanding t-SNE for Visualizing High-Dimensional Data
1. What is t-SNE?
t-SNE (t-Distributed Stochastic Neighbor Embedding) is a machine learning algorithm used for dimensionality reduction and data visualization. It takes high-dimensional data and reduces it to two or
three dimensions, making it easier to visualize complex patterns and structures in the data.
2. How does t-SNE work?
t-SNE works by calculating the probability that two points are neighbors in both the high-dimensional space and the reduced, low-dimensional space. It then adjusts the points in the lower dimension
to match these probabilities as closely as possible, preserving local relationships between data points. This means that data points that are close together in the original space remain close in the
reduced space, forming clusters that represent similar data.
3. What are the key advantages of using t-SNE?
t-SNE excels at:
• Visualizing clusters of similar data points.
• Revealing non-linear relationships that traditional methods like PCA might miss.
• Creating intuitive visual representations of complex, high-dimensional data.
It is particularly useful in fields like natural language processing, genomics, and image classification, where understanding local data patterns is crucial.
4. What are the limitations of t-SNE?
t-SNE has some limitations:
• Computationally expensive: It can be slow, especially with large datasets.
• Global structure distortion: While t-SNE preserves local neighborhoods, it can distort the global structure of the data, meaning the distance between clusters in the plot may not reflect
real-world differences.
• Non-reproducibility: Each time you run t-SNE, the result might be different due to random initialization unless you set a fixed seed.
5. What is perplexity in t-SNE, and how do I set it?
Perplexity is a key hyperparameter in t-SNE that controls the balance between focusing on local and global structures. It defines the number of close neighbors each point considers when reducing
dimensions. Typical values for perplexity range between 5 and 50, but the optimal value depends on the size and nature of your dataset. Larger perplexity values work better for bigger datasets, while
smaller values are more effective for smaller, denser data.
Perplexity and Local vs Global Structures
6. How is t-SNE different from PCA (Principal Component Analysis)?
While both are dimensionality reduction techniques, PCA captures linear relationships and is faster but might miss more complex, non-linear structures. t-SNE, on the other hand, preserves local
relationships and excels at identifying non-linear patterns, making it more effective for visualizing clusters but less useful for understanding large-scale global structure.
7. Can t-SNE be used for large datasets?
t-SNE doesn’t scale well with large datasets because of its computational complexity. For very large datasets (thousands to millions of points), it’s common to subsample the data or use alternatives
like UMAP (Uniform Manifold Approximation and Projection), which offers similar results but with faster performance.
8. How do I handle noise when using t-SNE?
Preprocessing your data is essential when using t-SNE. This includes scaling, normalizing, and removing outliers to reduce noise. Noise in the data can lead to cluttered, unclear visualizations,
making it harder to identify meaningful patterns. Proper data cleaning and choosing the right hyperparameters (like perplexity) can improve results significantly.
9. How do I interpret a t-SNE plot?
In a t-SNE plot:
• Clusters represent groups of similar data points.
• Tight clusters often indicate well-defined categories or classes.
• Overlapping clusters may suggest ambiguity or shared characteristics between groups.
However, remember that the distance between clusters doesn’t always carry meaning. t-SNE focuses on local relationships, so global distances might be distorted.
10. How can I reproduce the same t-SNE results?
Since t-SNE uses random initialization, results can vary each time it is run. To reproduce results consistently, you can set a random seed before running the algorithm. Many t-SNE implementations
allow you to do this by specifying the seed when you run the function.
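For example, in scikit-learn's implementation the seed is the random_state parameter. A small sketch with toy data (illustrative only) shows that two runs with the same seed produce identical embeddings:

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 20))  # toy stand-in for real data

# Fixing random_state makes repeated runs reproducible
emb1 = TSNE(n_components=2, perplexity=30, random_state=42).fit_transform(X)
emb2 = TSNE(n_components=2, perplexity=30, random_state=42).fit_transform(X)

print(np.allclose(emb1, emb2))  # identical embeddings
```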
11. What are some alternatives to t-SNE?
If t-SNE isn’t a perfect fit, consider:
• PCA: For faster dimensionality reduction, especially with linear data.
• UMAP: A newer algorithm that is faster than t-SNE and often better at preserving global structure while maintaining local clusters.
• LLE (Locally Linear Embedding): Another method focused on preserving local relationships in data but less popular than t-SNE.
12. How can I use t-SNE in a machine learning pipeline?
t-SNE can be used in exploratory data analysis to visualize high-dimensional data before applying machine learning models. It helps identify clusters or patterns that inform further analysis. After
running t-SNE, you might apply clustering algorithms like k-means on the reduced data to classify or group data points.
13. Can I combine t-SNE with other dimensionality reduction techniques?
Yes! A common approach is to first apply PCA to reduce the dimensionality of the dataset to a manageable level (e.g., 50 dimensions), and then apply t-SNE for further reduction and visualization.
This two-step process can improve t-SNE’s efficiency and clarity, especially with noisy or large datasets.
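A sketch of that two-step pipeline with scikit-learn — random data stands in for a real dataset, and the dimensions (500 features down to 50, then to 2) are illustrative:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 500))  # e.g. 200 samples with 500 features

# Step 1: PCA cuts the dimensionality to a manageable level
X_pca = PCA(n_components=50).fit_transform(X)

# Step 2: t-SNE reduces the PCA output to 2 dimensions for plotting
X_tsne = TSNE(n_components=2, perplexity=30,
              random_state=0).fit_transform(X_pca)

print(X_tsne.shape)  # (200, 2)
```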
14. Is t-SNE suitable for unsupervised learning?
Yes, t-SNE is widely used in unsupervised learning tasks where the goal is to uncover hidden structures in unlabeled data. It helps in visualizing and identifying clusters in the data, which can then
inform the use of clustering algorithms or further analysis.
Top Resources for Learning and Mastering t-SNE
1. Official Documentation and Tutorials
• Scikit-learn Documentation:
Scikit-learn provides an excellent implementation of t-SNE with a detailed explanation of the algorithm, usage examples, and hyperparameter settings.
Link: Scikit-learn t-SNE
• TensorFlow t-SNE Tutorial:
TensorFlow also offers t-SNE support and provides GPU acceleration for faster computation. Their tutorial walks through applying t-SNE for high-dimensional data visualization.
Link: TensorFlow t-SNE
2. Research Papers and Theoretical Background
• Original t-SNE Paper by Laurens van der Maaten and Geoffrey Hinton:
This foundational paper introduces the t-SNE algorithm, discussing its mathematical foundation and applications in various fields.
• Visualizing Data Using t-SNE:
This paper explores the intuition behind the algorithm and includes practical examples of how t-SNE can reveal insights from high-dimensional data.
Link: ResearchGate
3. Online Courses and Tutorials
• Coursera – Applied Machine Learning: Dimensionality Reduction:
This course covers a variety of dimensionality reduction techniques, including t-SNE, PCA, and UMAP, focusing on practical implementations.
Link: Coursera Applied Machine Learning
• Kaggle Notebooks:
Kaggle offers interactive t-SNE notebooks where you can practice coding and visualize t-SNE outputs on real datasets.
4. Blog Posts and Tutorials
• Distill.pub – Visualizing MNIST:
This interactive blog post explains how t-SNE can be applied to visualize the MNIST dataset (handwritten digits), offering visual step-by-step insights into how the algorithm works.
Link: Distill.pub MNIST
• Towards Data Science – A Visual Guide to t-SNE:
This beginner-friendly tutorial explains the core principles behind t-SNE using intuitive visuals and step-by-step examples.
5. Open-Source Libraries and Tools
• Scikit-learn:
A popular machine learning library in Python that includes an easy-to-use implementation of t-SNE.
Link: Scikit-learn GitHub
• TensorFlow:
TensorFlow offers a t-SNE implementation optimized for large datasets and supports GPU acceleration.
Link: TensorFlow GitHub
• t-SNE-CUDA:
A GPU-accelerated version of t-SNE for those working with very large datasets, significantly improving computation speed.
Link: t-SNE CUDA GitHub
6. Visual and Interactive Tools
• Projector by TensorFlow:
A powerful interactive tool for visualizing high-dimensional data using t-SNE and other dimensionality reduction algorithms like PCA and UMAP. Great for hands-on experimentation with word
embeddings and other data.
• OpenAI’s Embedding Projector:
An easy-to-use interface for exploring and visualizing embeddings using t-SNE. Perfect for getting started with your own datasets.
Link: OpenAI Embedding Projector
7. Books for In-Depth Learning
• “Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow” by Aurélien Géron:
This book offers comprehensive examples of t-SNE and other dimensionality reduction techniques, with a focus on practical implementations using popular Python libraries.
• “Python Data Science Handbook” by Jake VanderPlas:
A fantastic resource for learning data science and machine learning in Python, with examples of t-SNE implementations and visualizations.
8. GitHub Repositories with t-SNE Projects
• t-SNE by Laurens van der Maaten (Original):
The original t-SNE codebase written by the algorithm’s creator, providing an excellent reference point for understanding how t-SNE is implemented.
Link: t-SNE GitHub
• Kaggle Datasets and Kernels:
Kaggle is a great platform to find real-world datasets and explore t-SNE implementations on data such as MNIST, CIFAR-10, and more.
Link: Kaggle
Physics & Astronomy
Free Fermion Systems: Topological Classification and Real-Space Invariants
Abstract: One of the major progress of modern condensed matter physics is the discovery of topological phases beyond Landau's paradigm—phases that are characterized by topology besides symmetries. In
this thesis, we address topological phases of free fermionic systems by considering their topological classification and real-space invariants.
Although the theory for topological classification is fairly complete in momentum space, essentially based on the topological classification of fiber bundles, the theory in real space is more
difficult. In this thesis, we discuss a formula for the Z2 invariant of topological insulators. As a real-space formula, it is valid with or without translational invariance. Moreover, our formula is
a local expression, in the sense that the contributions mainly come from quantities near a point. It is the local nature of this invariant that guarantees the existence of gapless mode on the
boundary. Based on almost commuting matrices, we provide a method to approximate this invariant with local information. The validity of the formula and the approximation method is rigorously proved.
The topological classification problem can be extended to non-Hermitian systems, an effective theory for systems with loss and gain. In this thesis, we propose a novel framework towards the
topological classification of non-Hermitian band structures. Different from previous K-theoretical approaches, this approach is homotopical, which enables us to find more topological invariants. We
find that the whole classification set is decomposed into several sectors, based on the braiding of energy levels. Each sector can be further classified based on the topology of eigenstates (wave
functions). Due to the interplay between energy level braiding and eigenstates topology, we find some torsion invariants, which only appear in the non-Hermitian world. We further prove that these new
topological invariants are unstable, in the sense that adding more bands will trivialize these invariants.
Thesis Title
Free Fermion Systems: Topological Classification and Real-Space Invariants
Graduate Advisor
Roger Mong
Python Operators and Expressions - a2ztechie - Be a Programmer
Python Operators and Expressions
Python operators and expressions are the building blocks of the Python programming language. They allow you to perform various operations on data, manipulate values, and make decisions based on conditions. In this post, we will learn about all the different types of operators, with examples, and see how to leverage the power of expressions in Python. Let's learn about Python operators and expressions.
Arithmetic Operators
Arithmetic operators are used to perform arithmetic operations on numeric values. These are the same operators we use in most other programming languages for performing numeric operations.
• + (Addition): Adds two values together.
• - (Subtraction): Subtracts the second value from the first.
• * (Multiplication): Multiplies two values.
• / (Division): Divides the first value by the second.
• % (Modulus): Returns the remainder of the division.
• ** (Exponentiation): Raises the first value to the power of the second.
• // (Floor Division): Divides and rounds down to the nearest whole number.
Let’s see some examples of Arithmetic operators
a, b = 20, 3  # assign 20 to a and 3 to b
a + b   # perform addition of a and b (23)
a - b   # perform subtraction of b from a (17)
a * b   # perform multiplication of a and b (60)
a / b   # perform division of a by b (6.666...)
a % b   # perform modulus of a and b and return the remainder (2)
a ** b  # perform exponentiation: a raised to the power b (8000)
a // b  # perform floor division of a by b and return an integer result (6)
Comparison Operators
Comparison operators are used to compare two values and return a Boolean result (True or False). They are the building blocks of conditional logic, and most often appear in if and while statements or combined with logical operators.
• == (Equal to): Checks if two values are equal.
• != (Not equal to): Checks if two values are not equal.
• < (Less than): Checks if the first value is less than the second.
• > (Greater than): Checks if the first value is greater than the second.
• <= (Less than or equal to): Checks if the first value is less than or equal to the second.
• >= (Greater than or equal to): Checks if the first value is greater than or equal to the second.
Let’s see some examples of Comparison operators
a , b = 10, 20 # assign 10 to a and 20 to b
a == b # Check both values are equal. Here returns False
a != b # Check both values are not equal. Here returns True
a < b # Check first value is less than second. Here returns True
a > b # Check first value is greater than second. Here returns False
a <= b # Check first value is less than or equal to second. Here returns True
a >= b # Check first value is greater than or equal to second. Here returns False
Logical Operators
Logical operators combine multiple conditions into a single Boolean expression. There are three logical operators in Python.
• and : Returns True if both values are True .
• or : Returns True if at least one value is True .
• not : Returns the opposite of the Boolean value.
Let’s see some examples of Logical operators
a , b = 10, 20 # assign 10 to a and 20 to b
a != b and a<b # Check both conditions are True. Here returns True
a != b and a>b # Check both conditions are True. Here returns False
a != b or a<b # Check any condition is True. Here returns True
a == b or a>b # Check any condition is True. Here returns False
not (a == b) # Return negative value of result. Here returns True
not (a != b) # Return negative value of result. Here returns False
Assignment Operators
Assignment operators assign a value to a variable, optionally combining the assignment with an arithmetic operation (shorthand notation).
• = : Assigns the value on the right to the variable on the left.
• += (Add and assign): Adds the right value to the variable’s current value.
• -= (Subtract and assign): Subtracts the right value from the variable’s current value.
• *= (Multiply and assign): Multiplies the variable’s current value by the right value.
• /= (Divide and assign): Divides the variable’s current value by the right value.
Let’s see some examples of Assignment operators
a = 10 # Assign value 10 to variable a
b, c = 20, 30 # Assign value 20 to variable b and value 30 to variable c
c += b # Perform c = c + b
c -= b # Perform c = c - b
c *= b # Perform c = c * b
c/= b # Perform c = c / b
Bitwise Operators
Bitwise operators perform operations at the bit level.
• & (Bitwise AND): Performs bitwise AND operation.
• | (Bitwise OR): Performs bitwise OR operation.
• ^ (Bitwise XOR): Performs bitwise exclusive OR operation.
• ~ (Bitwise NOT): Inverts the bits.
• << (Left shift): Shifts bits to the left.
• >> (Right shift): Shifts bits to the right.
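Let's see some examples of bitwise operators, using 12 and 10, whose binary forms are 0b1100 and 0b1010:

```python
a, b = 12, 10  # binary: 0b1100 and 0b1010
a & b    # 8  (0b1000): bits set in both a and b
a | b    # 14 (0b1110): bits set in either a or b
a ^ b    # 6  (0b0110): bits set in exactly one of a and b
~a       # -13: bitwise NOT equals -(a + 1) in two's complement
a << 2   # 48: shift left by two bits, i.e. multiply by 4
a >> 2   # 3:  shift right by two bits, i.e. floor-divide by 4
```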
Conditional (Ternary) Operator
The conditional (ternary) operator is a compact way to write an if/else decision on a single line:
value_if_true if condition else value_if_false
Returns value_if_true if the condition is True , otherwise returns value_if_false .
a, b = 20, 10 # Assign value 20 to variable a and value 10 to variable b
c = 'True' if a > b else 'False'  # returns 'True' if the condition holds, otherwise 'False'
Membership Operators
Membership operators test whether a value is present in a sequence such as a list, tuple, string, or other container.
• in (Membership test)
• not in (Negated membership test)
a = [1, 2, 3, 4, 5] # Initialise list with 5 values
5 in a # Return True if 5 is present in a otherwise return False
6 not in a # Return True if 6 is not present in a otherwise return False
Identity Operators
In Python, identity operators compare the memory locations of two objects to determine whether they are the same object. These operators are useful for checking object identity, particularly when dealing with mutable objects like lists and dictionaries.
• is : This operator returns True if two variables reference the same object in memory and False otherwise.
• is not : This operator returns True if two variables do not reference the same object in memory and False if they do.
x = [1, 2, 3]
y = x
z = [1, 2, 3]
print(x is y) # True, x and y reference the same object
print(x is z) # False, x and z are different objects
print(x is not z) # True, x and z are not the same object
Expressions
An expression combines values, variables, and operators, and evaluates to a single result.
x = 5
y = 10
result = x + y * 2
In this expression, x + y * 2 , the addition and multiplication operators are used to calculate the result.
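Operator precedence determines the evaluation order: * binds tighter than +, parentheses override precedence, and ** is right-associative. A few illustrative lines:

```python
x, y = 5, 10
result = x + y * 2     # 25: multiplication happens before addition
grouped = (x + y) * 2  # 30: parentheses force the addition first
power = 2 ** 3 ** 2    # 512: ** is right-associative, i.e. 2 ** (3 ** 2)
negative = -3 ** 2     # -9: ** binds tighter than unary minus
```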
I hope the sections above have clarified the concepts of Python operators and expressions. There are a few more important operator-related concepts, which we will cover in the next session. To learn about SQL, follow SQL Tutorials. Happy offline learning… Let's meet in the next session. Check the quiz before the next session: Click here.
Final Exam Quiz Answer
In this article I am going to share the Coursera course Divide and Conquer, Sorting and Searching, and Randomized Algorithms Week 4 | Final Exam quiz answers with you.
Divide and Conquer, Sorting and Searching, and Randomized Algorithms
visit link: Problem Set #4 Quiz Answer
Question 1) Recall the Partition subroutine that we used in both QuickSort and RSelect. Suppose that the following array has just been partitioned around some pivot element: 3, 1, 2, 4, 5, 8, 7, 6, 9
Which of these elements could have been the pivot element? (Hint: Check all that apply, there could be more than one possibility!)
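One way to reason about Question 1 is that, after Partition, the pivot must be in its final sorted position: everything to its left is smaller and everything to its right is larger. The helper below is an illustrative sketch (not part of the course material) that applies this check:

```python
def possible_pivots(arr):
    """Return the elements that could have been the Partition pivot."""
    result = []
    for idx, v in enumerate(arr):
        left_ok = all(x < v for x in arr[:idx])       # everything before is smaller
        right_ok = all(x > v for x in arr[idx + 1:])  # everything after is larger
        if left_ok and right_ok:
            result.append(v)
    return result

print(possible_pivots([3, 1, 2, 4, 5, 8, 7, 6, 9]))  # prints: [4, 5, 9]
```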
Question 2) Here is an array of ten integers: 5 3 8 9 1 7 0 2 6 4
Suppose we run Merge Sort on this array. What is the number in the 7th position of the partially sorted array after the outermost two recursive calls have completed (i.e., just before the very last
Merge step)? (When we say “7th” position, we’re counting positions starting at 1; for example, the input array has a “0” in its 7th position.)
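To see the state just before the very last Merge step, note that the two outermost recursive calls each sort one half of the array (assuming the usual split at the midpoint). A quick sketch:

```python
arr = [5, 3, 8, 9, 1, 7, 0, 2, 6, 4]
mid = len(arr) // 2

# After the two outermost recursive calls, each half is sorted:
partial = sorted(arr[:mid]) + sorted(arr[mid:])

print(partial)     # prints: [1, 3, 5, 8, 9, 0, 2, 4, 6, 7]
print(partial[6])  # the 7th position (1-indexed) -> prints: 2
```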
Question 3) What is the asymptotic worst-case running time of MergeSort, as a function of the input array length n?
• θ(log n)
• θ(n log n)
• θ(n)
• θ(n²)
Question 4) What is the asymptotic running time of Randomized QuickSort on arrays of length n, in expectation (over the choice of random pivots) and in the worst case, respectively?
• Θ(n log n) [expected] and Θ(n log n) [worst case]
• Θ(n²) [expected] and Θ(n²) [worst case]
• Θ(n) [expected] and Θ(n log n) [worst case]
• Θ(n log n) [expected] and Θ(n²) [worst case]
Question 5) Let f and g be two increasing functions, defined on the natural numbers, with f(1), g(1) ≥ 1. Assume that f(n) = O(g(n)). Is 2^f(n) = O(2^g(n))? (Multiple answers may be correct, check all that apply.)
• Never
• Maybe, maybe not (depends on the functions f and g).
• Always
• Yes if f(n) ≤ g(n) for all sufficiently large n
Question 6) Let 0 < a < 0.5 be some constant. Consider running the Partition subroutine on an array with no duplicate elements and with the pivot element chosen uniformly at random (as in QuickSort and RSelect). What is the probability that, after partitioning, both subarrays (elements to the left of the pivot, and elements to the right of the pivot) have size at least a times that of the original array?
Question 7) Suppose that a randomized algorithm succeeds (e.g., correctly computes the minimum cut of a graph) with probability p (with 0 < p < 1). Let ε be a small positive number (less than 1).
How many independent times do you need to run the algorithm to ensure that, with probability at least 1 − ε, at least one trial succeeds?
Question 8) Suppose you are given k sorted arrays, each with n elements, and you want to combine them into a single array of kn elements. Consider the following approach. Divide the k arrays into k/2
pairs of arrays, and use the Merge subroutine taught in the Merge Sort lectures to combine each pair. Now you are left with k/2 sorted arrays, each with 2n elements. Repeat this approach until you
have a single sorted array with kn elements. What is the running time of this procedure, as a function of k and n?
• θ(n log k)
• θ(nk log n)
• θ(nk log k)
• θ(nk²)
Question 9) Running time of Strassen’s matrix multiplication algorithm: Suppose that the running time of an algorithm is governed by the recurrence T(n) = 7·T(n/2) + n². What’s the overall asymptotic
running time (i.e., the value of T(n))?
• θ(n² log n)
• θ(n^(log₂ 7))
• θ(n²)
• θ(n^(log 2 / log 7))
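The recurrence in Question 9 can also be checked numerically: its solution should grow so that T(2n)/T(n) approaches 7, the signature of growth proportional to n^(log₂ 7), since (2n)^(log₂ 7) = 7 · n^(log₂ 7). A sketch, assuming the base case T(1) = 1:

```python
# Evaluate T(n) = 7*T(n/2) + n**2 at n = 2**k for k = 0..20.
values = [1]  # T(1) = 1, an assumed base case
for k in range(1, 21):
    n = 2 ** k
    values.append(7 * values[-1] + n * n)

ratio = values[20] / values[19]  # T(2n) / T(n) at n = 2**19
print(round(ratio, 3))           # prints: 7.0
```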
Question 10) Recall the Master Method and its three parameters a, b, d. Which of the following is the best interpretation of b^d, in the context of divide-and-conquer algorithms?
• The rate at which the total work is growing (per level of recursion).
• The rate at which the number of subproblems is growing (per level of recursion).
• The rate at which the work-per-subproblem is shrinking (per level of recursion).
• The rate at which the subproblem size is shrinking (per level of recursion).
CSC165H1: Problem Set 4 solved
1. [4 marks] Binary representation and algorithm analysis. Consider the following algorithm, which
manually counts up to a given number n, using an array of 0’s and 1’s to mimic binary notation.1
from math import floor, log2

def count(n: int) -> None:
    # Precondition: n > 0.
    p = floor(log2(n)) + 1  # The number of bits required to represent n.
    bits = [0] * p  # Initialize an array of length p with all 0's.
    for i in range(n):  # i = 0, 1, ..., n-1
        # Increment the current count in the bits array. This adds 1 to
        # the current number, basically using the loop to act as a "carry" operation.
        j = p - 1
        while bits[j] == 1:
            bits[j] = 0
            j -= 1
        bits[j] = 1
For this question, assume each individual line of code in the above algorithm takes constant time, i.e.,
counts as a single step. (This includes the [0] * p line.)
(a) [3 marks] Prove that the running time of this algorithm is O(n log n).
(b) [1 mark] Prove that the running time of this algorithm is Ω(n).
1This is an extremely inefficient way of storing binary, and is certainly not how modern hardware does it. But it’s useful
as an interesting algorithm on which to perform runtime analysis.
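To build intuition for parts (a) and (b), the loop's work can be measured directly. The instrumented variant below (illustrative names, mirroring the algorithm above) counts one step per for-iteration and one per while-iteration; for this input the total lands between n and about 2n, comfortably within the Ω(n) and O(n log n) bounds to be proved:

```python
from math import floor, log2

def count_steps(n):
    # Same algorithm as count(n), but returns how many loop steps it takes.
    p = floor(log2(n)) + 1
    bits = [0] * p
    steps = 0
    for i in range(n):
        steps += 1          # one step for the for-loop iteration
        j = p - 1
        while bits[j] == 1:
            steps += 1      # one step per carry in the while-loop
            bits[j] = 0
            j -= 1
        bits[j] = 1
    return steps

print(count_steps(1000))  # between 1000 and 2000 for n = 1000
```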
2. [10 marks] Worst-case and best-case algorithm analysis. Consider the following function, which
takes in a list of integers.
from typing import List

def myprogram(L: List[int]) -> None:
    n = len(L)
    i = n - 1
    x = 1
    while i > 0:
        if L[i] % 2 == 0:
            i = i // 2  # integer division, rounds down
            x += 1
        i -= x
Let WC(n) and BC(n) be the worst-case and best-case runtime functions of myprogram, respectively,
where n represents the length of the input list L. You may take the runtime of myprogram on a given list
L to be equal to the number of executions of the while loop.
(a) [3 marks] Prove that WC(n) ∈ O(n).
(b) [2 marks] Prove that WC(n) ∈ Ω(n).
(c) [2 marks] Prove that BC(n) ∈ O(log n).
(d) [3 marks] Prove that BC(n) ∈ Ω(log n).
Note: this is actually the hardest question of this problem set. A correct proof here needs to argue
that the variable x cannot be too big, so that the line i -= x doesn’t cause i to decrease too quickly!
3. [14 marks] Graph algorithm. Let G = (V, E) be a graph, and let V = {0, 1, . . . , n − 1} be the vertices
of the graph. One common way to represent graphs in a computer program is with an adjacency matrix,
a two-dimensional n-by-n array2 M containing 0’s and 1’s. The entry M[i][j] equals 1 if {i, j} ∈ E, and
0 otherwise; that is, the entries of the adjacency matrix represent the edges of the graph.
Keep in mind that graphs in our course are symmetric (an edge {i, j} is equivalent to an edge {j, i}), and
that no vertex can ever be adjacent to itself. This means that for all i, j ∈ {0, 1, . . . , n − 1}, M[i][j] ==
M[j][i], and that M[i][i] = 0.
The following algorithm takes as input an adjacency matrix M and determines whether the graph contains
at least one isolated vertex, which is a vertex that has no neighbours. If such a vertex is found, it then
does a very large amount of printing!
def has_isolated(M):
    n = len(M)  # n is the number of vertices of the graph
    found_isolated = False
    for i in range(n):  # i = 0, 1, ..., n-1
        count = 0
        for j in range(n):  # j = 0, 1, ..., n-1
            count = count + M[i][j]
        if count == 0:
            found_isolated = True
    if found_isolated:
        for k in range(2 ** n):
            print('Degree too small')
(a) [3 marks] Prove that the worst-case running time of this algorithm is Θ(2^n).
(b) [3 marks] Prove that the best-case running time of this algorithm is Θ(n²).
(c) [1 mark] Let n ∈ N. Find a formula for the number of adjacency matrices of size n-by-n that
represent valid graphs. For example, a graph G = (V, E) with |V| = 4 has 64 possible adjacency matrices.
Note: a graph with the single edge (1, 2) is considered different from a graph with the single edge
(2, 3), and should be counted separately. (Even though these graphs have the same “shape”, the
vertices that are adjacent to each other are different for the two graphs.)
(d) [2 marks] Prove the formula that you derived in Part (c).
(e) [2 marks] Let n ∈ N. Prove that the number of n-by-n adjacency matrices that represent a graph
with at least one isolated vertex is at most n · 2^((n−1)(n−2)/2).
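The counts in parts (c) and (e) can be sanity-checked by brute force at n = 4: part (c) gives 2^(n(n−1)/2) = 64 matrices, and the union bound n · 2^((n−1)(n−2)/2) evaluates to 4 · 2^3 = 32 (that exponent is my reading of the bound; the enumeration below only assumes the definition of a valid graph):

```python
from itertools import product

n = 4
# The 6 possible edges of a graph on vertices {0, 1, 2, 3}.
edges = [(i, j) for i in range(n) for j in range(i + 1, n)]

total = 0
with_isolated = 0
for choice in product([0, 1], repeat=len(edges)):  # every subset of edges
    degree = [0] * n
    for present, (i, j) in zip(choice, edges):
        degree[i] += present
        degree[j] += present
    total += 1
    if any(d == 0 for d in degree):  # at least one isolated vertex
        with_isolated += 1

print(total, with_isolated)  # prints: 64 23
```

Here 23 ≤ 32 confirms the union bound holds (though it is not tight) at n = 4.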
(f) [3 marks] Finally, let AC(n) be the average-case runtime of the above algorithm, where the set of
inputs is simply all valid adjacency matrices (same as what you counted in part (c)).
Prove that AC(n) ∈ Θ(n²).
2In Python, this would be a list of length n, each of whose elements is itself a list of length n.
Calculate P-Value from F Statistic (2024)
Understanding the significance of your statistical results is crucial in research and data analysis. The p-value, derived from the F statistic in ANOVA tests, tells you whether your results are
statistically significant. Calculating the p-value from an F statistic involves understanding the degrees of freedom for both the numerator and the denominator, and the level of significance you’ve
set for your test. This guide aims to simplify these concepts and help you accurately perform this calculation.
Additionally, we will explore how Sourcetable lets you calculate the p-value from F statistic and more using its AI powered spreadsheet assistant.
How to Calculate P Value from F Statistic
To calculate the p value from an F statistic, an F Distribution Calculator is essential. This process involves entering the F statistic's value, along with the degrees of freedom for both the
numerator (between-treatments) and denominator (within-treatments), into the calculator.
Required Inputs
Begin by inputting the F statistic into the designated F-ratio value box of the calculator. Proceed to enter the numerator's degrees of freedom into the DF - numerator box, and the denominator's
degrees of freedom into the DF - denominator box.
Calculation Process
After inputting all required values, select the desired significance level for your analysis. The significance level impacts the interpretation of whether the observed differences are statistically
significant. Complete the process by pressing the "Calculate" button to obtain the p value.
Understanding the Output
The output p value is a crucial statistic in hypothesis testing, providing a probability between 0 and 1. This value helps to determine whether observed differences are statistically significant or
if they are likely due to random chance. A p value below the selected significance threshold indicates statistically significant differences.
This method and understanding are vital for researchers and statisticians in analyzing the reliability of their experimental results.
How to Calculate P Value from F Statistic
Calculating the p-value from an F statistic is a critical step in statistical analysis, particularly when comparing variances across multiple group means. The F statistic, part of the F-test in ANOVA
(Analysis of Variance), helps in understanding the variance within and between groups to determine if the observed differences are statistically significant.
Using an F Distribution Calculator
To calculate the p-value from an F statistic, start with an F Distribution Calculator. This tool streamlines the computation by automating the critical conversion from the F statistic value to the p-value.
Step-by-Step Calculation
First, input the F statistic into the calculator. Include the degrees of freedom for the numerator df_1 and the degrees of freedom for the denominator df_2. These values are necessary as they shape
the F-distribution curve, which is crucial for determining the right p-value.
After entering these values, press 'calculate' or a similar button depending on the interface of the F Distribution Calculator. The calculator will output the cumulative probability, representing the
area to the left of your F statistic on the F-distribution curve. The p-value is then calculated as 1 - cumulative probability, giving you the area to the right of the F statistic, which indicates
the probability of observing such an F statistic under the null hypothesis.
Understanding the p-value is essential as it helps in determining whether the differences between groups are due to a significant effect or just random chance. A low p-value (below the standard 0.05 threshold) suggests that the observed difference is unlikely to be due to chance, indicating a statistically significant result.
By following these steps, researchers and statisticians can efficiently calculate the p-value from the F statistic to draw meaningful conclusions from their data.
Calculate anything with Sourcetable AI. Tell Sourcetable what you want to calculate. Sourcetable does the rest.
Calculating P-Value from F-Statistic
Understanding how to calculate the p-value from an F-statistic is crucial for interpreting the results of an ANOVA test. Below are concise examples demonstrating this calculation using different
Example 1: Basic ANOVA Test
In a basic ANOVA setup, assume a calculated F-statistic of F = 5.2 and degrees of freedom as df1 = 3 (between groups) and df2 = 24 (within groups). To find the p-value, use an F-distribution
calculator entering these values.
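The calculator lookup in Example 1 can be reproduced in code. The sketch below is one possible implementation, not something the article prescribes: it integrates the F density with Simpson's rule using only the Python standard library (in practice scipy.stats.f.sf(5.2, 3, 24) computes the same tail probability):

```python
import math

def f_log_pdf(x, d1, d2):
    # Log-density of the F distribution with (d1, d2) degrees of freedom,
    # computed via lgamma for numerical stability.
    log_beta = math.lgamma(d1 / 2) + math.lgamma(d2 / 2) - math.lgamma((d1 + d2) / 2)
    return ((d1 / 2) * math.log(d1 / d2) + (d1 / 2 - 1) * math.log(x)
            - ((d1 + d2) / 2) * math.log1p(d1 * x / d2) - log_beta)

def f_p_value(f_stat, d1, d2, steps=20000):
    # p = P(F > f_stat) = 1 - CDF(f_stat); CDF by Simpson's rule on (0, f_stat].
    # The density at x = 0 is treated as 0 (exact for d1 > 2, negligible otherwise).
    h = f_stat / steps
    total = math.exp(f_log_pdf(f_stat, d1, d2))
    for i in range(1, steps):
        total += (4 if i % 2 else 2) * math.exp(f_log_pdf(i * h, d1, d2))
    return 1 - total * h / 3

print(round(f_p_value(5.2, 3, 24), 4))  # a p-value well below 0.05
```

For df1 = 2 the tail has the closed form (1 + 2x/df2)^(−df2/2), which makes a convenient correctness check for the integrator.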
Example 2: Two-Way ANOVA without Replication
For a two-way ANOVA without replication with an F-statistic of 8.5, and degrees of freedom df1 = 2 and df2 = 30, the p-value is again obtained by inputting these figures into an F-distribution
calculator or relevant statistical software. This reflects how the interaction between two factors affects the dependent variable.
Example 3: Two-Way ANOVA with Replication
Consider an F-statistic of 4.1 with degrees of freedom df1 = 3 for factor interaction and df2 = 45. Obtain the p-value by employing an F-distribution calculator. This method evaluates the impact of
replicated observations under different conditions.
Example 4: ANOVA for Regression Analysis
In regression analysis, calculating the p-value from an F-statistic such as 10.3 involves knowing the regression degrees of freedom df1 = 5 and residual degrees of freedom df2 = 50. Input these
details into an F-distribution calculator to determine how well the regression model fits the data.
Mastering Calculations with Sourcetable
Sourcetable, powered by advanced AI, revolutionizes the way you perform complex calculations. It's designed to handle any mathematical computation, including tasks like how to calculate p value from
f statistic. This makes Sourcetable an ideal solution for academic studies, professional work, and personal projects.
Efficient and Accurate Calculations
The AI assistant in Sourcetable ensures all calculations are accurate. For example, if you need to compute a p-value from an F-statistic, simply input your data and query, and the AI handles the
complex statistical processes. The power to transform statistical data analysis into a clear, simple task is now at your fingertips.
Instant Explanations and Results
What sets Sourcetable apart is not just the ability to calculate – it's also its capability to communicate. After performing a calculation, such as p = P(F > f) for an F-statistic, Sourcetable
displays the results clearly in a spreadsheet and explains in a conversational chat interface how it reached those results. This dual display of information enhances understanding and eases the
learning process.
Ideal for Varied Uses
Whether you're a student preparing for exams, a professional analyzing data, or simply exploring new knowledge, Sourcetable caters to all your computational needs. It simplifies complex statistical
calculations and offers insights into results, making it an indispensable tool across various fields.
Use Cases for Calculating P-Value from F-Statistic
1. Improving Investment Decisions: Calculating the P-value using the F-statistic enables comparison between different types of investments or portfolios. By determining the strength of the evidence against the null hypothesis, investors can assess the reliability of investment opportunities.
2. Enhancing Research in ANOVA: Researchers utilize the F-statistic in ANOVA to compare means across more than two groups. Calculating the P-value from the F-statistic provides insights into whether observed differences are statistically significant, guiding further analysis and interpretation.
3. Strengthening Hypothesis Testing in Scientific Studies: In scientific research, calculating the P-value from the F-statistic supports hypothesis testing by quantifying the likelihood that differences between groups are due to random chance. This helps in verifying or refuting scientific theories and models.
4. Optimizing Post-Hoc Testing in Statistical Analysis: After rejecting the null hypothesis in an ANOVA, calculating the P-value is crucial for conducting post-hoc tests. Tests such as Tukey, Bonferroni, and Scheffe rely on these calculations to determine specific group differences, enhancing the reliability of statistical conclusions.
Frequently Asked Questions
How do you calculate the p-value from an F statistic?
To calculate the p-value from an F statistic, use an F Distribution Calculator. Input the F statistic and the degrees of freedom for the numerator and denominator. The calculator will provide the
cumulative probability, which is the area to the left of the F statistic. The p-value is then equal to 1 minus the cumulative probability.
What does the p-value represent in relation to the F statistic?
The p-value represents how extreme the F statistic is within the context of the F distribution. It is a tail probability that indicates the likelihood of observing an F statistic as extreme as, or
more extreme than, the observed value under the null hypothesis.
How do the degrees of freedom affect the p-value calculation from an F statistic?
The degrees of freedom, both numerator and denominator, are crucial inputs for calculating the p-value from the F statistic. They help determine the shape of the F distribution used in finding the
cumulative probability for a given F statistic.
What does it mean if the p-value is greater than the alpha level?
If the p-value is greater than the alpha level, it suggests that there is not enough evidence to reject the null hypothesis at the chosen level of significance. This means the observed F statistic is
not sufficiently extreme to conclude a statistically significant difference between group means.
Why is understanding the F-statistic important for interpreting the p-value?
Understanding the F-statistic is important because it measures the ratio of variation between sample means to the variation within samples. A larger F-statistic indicates more evidence of a
difference between group means, which contextualizes the p-value in assessing statistical significance.
Understanding how to calculate the p value from F statistic is crucial for accurately interpreting statistical models, particularly in ANOVA tests. This computation helps in determining whether the
observed variances between group means are statistically significant.
Simplifying Calculations with Sourcetable
Sourcetable, an AI-powered spreadsheet, streamlines complex calculations like deriving p values from F statistics. Its intuitive interface is designed to enhance productivity and accuracy in
processing statistical data. Sourcetable also supports experimentation with AI-generated data, enabling users to test scenarios and hypotheses effortlessly.
Start exploring the robust capabilities of Sourcetable and enhance your statistical analysis efficiency. Try it for free at app.sourcetable.com/signup.
Explore in R packages | Data Science Dojo
Programming has an extremely vast package ecosystem. It provides robust tools to master all the core skill sets of data science.
For someone like me, who has only some programming experience in Python, the syntax of R programming felt alienating, initially. However, I believe it’s just a matter of time before you adapt to the
unique logicality of a new language. The grammar of R flows more naturally to me after having to practice for a while. I began to grasp its kind of remarkable beauty, a beauty that has captivated the
heart of countless statisticians throughout the years.
If you don’t know what R programming is, it’s essentially a programming language created for statisticians by statisticians. Hence, it easily becomes one of the most fluid and powerful tools in the
field of data science.
Here I’d like to walk through my study notes with the most explicit step-by-step directions to introduce you to the world of R.
Why learn R for data science?
Before diving in, you might want to know why should you learn R for Data Science. There are two major reasons:
1. Powerful analytic packages for data science
Firstly, R programming has an extremely vast package ecosystem. It provides robust tools to master all the core skill sets of Data Science, from data manipulation and data visualization to machine
learning. The vivid community keeps the R language’s functionalities growing and improving.
2. High industry popularity and demand
With its great analytical power, R programming is becoming the lingua franca for data science. It is widely used in the industry and is in heavy use at several of the best companies that are hiring
Data Scientists including Google and Facebook. It is one of the highly sought-after skills for a Data Science job.
You can also learn Python for data science.
Quickstart installation guide
To start programming with R on your computer, you need two things: R and RStudio.
Install R language
You have to first install the R language itself on your computer (It doesn’t come by default). To download R, go to CRAN, https://cloud.r-project.org/ (the comprehensive R archive network). Choose
your system and select the latest version to install.
Install RStudio
You also need a hefty tool to write and compile R code. RStudio is the most robust and popular IDE (integrated development environment) for R. It is available on http://www.rstudio.com/download
(open source and for free!).
Overview of RStudio
Now you have everything ready. Let’s have a brief overview at RStudio. Fire up RStudio, the interface looks as such:
Go to File > New File > R Script to open a new script file. You’ll see a new section appear at the top left side of your interface. A typical RStudio workspace is composed of the 4 panels you’re seeing
right now:
RStudio interface
Here’s a brief explanation of the use of the 4 panels in the RStudio interface:
Script: This is where your main R script is located.
Console: This area shows the output of the code you run from the script. You can also write code directly in the console.
Environment: This space displays the set of external elements added, including datasets, variables, vectors, functions, etc.
Plots: This space displays the graphs created during exploratory data analysis. You can also seek help with R’s embedded documentation here.
Running R codes
After knowing your IDE, the first thing you want to do is to write some codes.
Using the console panel
You can use the console panel directly to write your code. Hit Enter and the output will be returned and displayed immediately. However, code entered in the console cannot be traced later (i.e. you can’t save it). This is where the script comes in. Still, the console is good for quick experiments before formatting your code in the script.
Using the script panel
To write proper R code, you start with a new script by going to File > New File > R Script, or hit Shift + Ctrl + N. You can then write your code in the script panel. Select the line(s) to run and press Ctrl + Enter. The
output will be shown in the console section beneath. You can also click on little Run button located at the top right corner of this panel. Codes written in script can be saved for later review (File
> Save or Ctrl + S).
Basics of R programming
Finally, with all the set-ups, you can write your first piece of R script. The following paragraphs introduce you to the basics of R.
A quick tip before going: everything after the symbol # on a line is treated as a comment and will not be evaluated in the output.
Let’s start with some basic arithmetics. You can do some simple calculations with the arithmetic operators:
Addition +, subtraction -, multiplication *, division / should be intuitive.
# Addition
1 + 1
#[1] 2
# Subtraction
2 - 2
#[1] 0
# Multiplication
3 * 2
#[1] 6
# Division
4 / 2
#[1] 2
The exponentiation operator ^ raises the number to its left to the power of the number to its right: for example 3 ^ 2 is 9.
# Exponentiation
2 ^ 4
#[1] 16
The modulo operator %% returns the remainder of the division of the number to the left by the number on its right, for example 5 modulo 3 or 5 %% 3 is 2.
# Modulo
5 %% 2
#[1] 1
Lastly, the integer division operator %/% returns the maximum times the number on the left can be divided by the number on its right, the fractional part is discarded, for example, 9 %/% 4 is 2.
# Integer division
5 %/% 2
#[1] 2
You can also add brackets () to change the order of operation. Order of operations is the same as in mathematics (from highest to lowest precedence):
• Brackets
• Exponentiation
• Division
• Multiplication
• Addition
• Subtraction
# Brackets
(3 + 5) * 2
#[1] 16
Variable assignment
A basic concept in (statistical) programming is called a variable.
A variable allows you to store a value (e.g. 4) or an object (e.g. a function description) in R. You can then later use this variable’s name to easily access the value or the object that is stored
within this variable.
Create new variables
Create a new object with the assignment operator <-. All R statements where you create objects and assignment statements have the same form: object_name <- value.
num_var <- 10
chr_var <- "Ten"
To access the value of the variable, simply type the name of the variable in the console.
#[1] 10
#[1] "Ten"
You can access the value of the variable anywhere you call it in the R script, and perform further operations on them.
first_var <- 1
second_var <- 2
first_var + second_var
#[1] 3
sum_var <- first_var + second_var
#[1] 3
Naming variables
Not all kinds of names are accepted in R programming. Variable names must start with a letter, and can only contain letters, numbers, . and _. Also, bear in mind that R is case-sensitive,
i.e. Cat would not be identical to cat.
Your object names should be descriptive, so you’ll need a convention for multiple words. It is recommended to use snake case, where you separate lowercase words with _.
Assignment operators
If you’ve been programming in other languages before, you’ll notice that the assignment operator in R programming is quite strange. It uses <- instead of the commonly used equal sign = to assign
Indeed, using = will still work in R, but it will cause confusion later. So you should always follow the convention and use <- for assignment.
<- is a pain to type as you’ll have to make lots of assignments. To make life easier, you should remember RStudio’s awesome keyboard shortcut Alt + - (the minus sign) and incorporate it into your
regular workflow.
Look at the environment panel in the upper right corner, you’ll find all of the objects that you’ve created.
Basic data types
You’ll work with numerous data types in R. Here are some of the most basic ones:
Knowing the data type of an object is important, as different data types work with different functions, and you perform different operations on them. For example, adding a numeric and a character
together will throw an error.
To check an object’s data type, you can use the class() function.
# usage: class(x)
# description: Prints the vector of names of classes an object inherits from.
# arguments: x : An R object.
Here is an example:
int_var <- 10
#[1] "numeric"
dbl_var <- 10.11
#[1] "numeric"
lgl_var <- TRUE
#[1] "logical"
chr_var <- "Hello"
#[1] "character"
Functions are the fundamental building blocks of R. In programming, a named section of a program that performs a specific task is a function. In this sense, a function is a type of procedure or routine.
R comes with a prewritten set of functions that are kept in a library. (class() as demonstrated in the previous section is a built-in function.) You can use additional functions in other libraries by
installing packages. You can also write your own functions to perform specialized tasks.
Here is the typical form of an R function:
function_name(arg1 = val1, arg2 = val2, ...)
function_name is the name of the function. arg1 and arg2 are arguments. They’re variables to be passed into the function. The type and number of arguments depend on the definition of
the function. val1 and val2 are values of the arguments correspondingly.
Passing arguments
R can match arguments both by position and by name. So you don’t necessarily have to supply the names of the arguments if you have the positions of the arguments placed correctly.
class(x = 1)
#[1] "numeric"
#[1] "numeric"
Functions are always accompanied with loads of arguments for configurations. However, you don’t have to supply all of the arguments for a function to work.
Here is documentation of the sum() function.
# usage
sum(..., na.rm = FALSE)
# description: Returns the sum of all the values present in its arguments.
# arguments: ... : Numeric or complex or logical vectors. na.rm : Logical. Should missing values (including NaN) be removed?
From the documentation, we learned that there are two arguments for the sum() function: ... and na.rm Notice that na.rm contains a default value FALSE. This makes it an optional argument. If you
don’t supply any values to the optional arguments, the function will automatically fill in the default value to proceed.
sum(2, 10)
#[1] 12
sum(2, 10, NaN)
#[1] NaN
sum(2, 10, NaN, na.rm = TRUE)
#[1] 12
Getting help
There is a large collection of functions in R and you’ll never remember all of them. Hence, knowing how to get help is important.
RStudio has a handy tool ? to help you in recalling the use of the functions:
Look how magical it is to show the R documentation directly at the output panel for quick reference.
Last but not least, if you get stuck, Google it! As beginners, our points of confusion have almost certainly tripped up many R learners before us, and there will always be something helpful and insightful on the web.
Contributors: Cecilia Lee
Cecilia Lee is a junior data scientist based in Hong Kong.
Create a Hog project starting from a local project
This tutorial shows how to create a Hog project starting from an example Vivado project (IBERT). For this tutorial we will use Hog version Hog2021.2-9.
Step 1: create an example Vivado project
Let’s start our tutorial by switching to a new empty directory:
mkdir Example_project
cd Example_project
Now we open the Vivado console and create an empty RTL project. We leave everything as default and select the Kintex-7 KC705 Evaluation Platform in the Default Part panel. Then we go to the IP catalogue, type IBERT in the search bar, and click on IBERT 7 Series GTX.
We leave the IP configuration as default and click on IP location, changing the IP directory to IP (we create the new directory in the GUI). We click on Done and skip "Generate Output Product". Then, in the IP Sources tab, we right-click on the IP and choose Open IP Example Design.
We can now close the first project (project_1).
We can now close the first project (project_1).
If we browse the new project, we notice that there are:
• 1 Verilog file: ibert_7series_gtx_0_ex/imports/example_ibert_7series_gtx_0.v
• 1 IP (xci) file: IP/ibert_7series_gtx_0/ibert_7series_gtx_0.xci
• 1 constraint file (xdc): ibert_7series_gtx_0_ex/imports/example_ibert_7series_gtx_0.xdc
• 2 txt files: ibert_7series_gtx_0_ex/imports/example_top_verilog.txt and ibert_7series_gtx_0_ex/imports/xdc_7ser_gtx.txt
Step 2: create and initialise a git repository
It's time to create a new git repository and add our project files to it. We start by creating a new blank project on CERN GitLab.
We select Example_project as project name and we click on Create Project.
Now we copy the project URL, e.g. https://gitlab.cern.ch/$USER/example_project.git, that will be added as our repo URL. Now, on a shell, we type:
git init
git remote add origin https://gitlab.cern.ch/$USER/example_project.git
We can now add the source files (not the project files) on the git repository.
git add ibert_7series_gtx_0_ex/imports/example_ibert_7series_gtx_0.v
git add IP/ibert_7series_gtx_0/ibert_7series_gtx_0.xci
git add ibert_7series_gtx_0_ex/imports/example_ibert_7series_gtx_0.xdc
git add ibert_7series_gtx_0_ex/imports/example_top_verilog.txt ibert_7series_gtx_0_ex/imports/xdc_7ser_gtx.txt
git commit -m "First commit, imported project files"
Step 3: import Hog submodule and create Hog project
Now that we have a project in a git repository, we can convert it to a Hog project, to benefit from all the Hog functionalities.
The first step is to add the Hog submodule in the root path of your repository, preferably using its relative path. The relative path can be worked out from our repository URL; in our case we have to add the Hog submodule as:
git submodule add ../../hog/Hog.git
cd Hog
git checkout Hog2021.2-9
cd ..
If you are working on gitlab.com, we advise adding the Hog submodule using its gitlab.com mirror.
To add the repository using the relative path, you should then do
git submodule add ../../hog-cern/Hog.git
cd Hog
git checkout Hog2021.2-9
cd ..
The next step is to create the text files containing all the project properties and the lists of project files, as required by Hog. We have to create the following files:
• Top/<project name>/hog.conf containing all the project properties, such as PART, Synthesis Strategy…
• Top/<project name>/list/<library>.src containing all the source files (VHDL, Verilog, IPs, etc.) that will be put in the VHDL library <library>
• Top/<project name>/list/<simulation_set>.sim containing all the simulation files that will be put in the simulation set <simulation_set>
• Top/<project name>/list/<constraints_name>.con containing all the constraints files
We can let Hog handle the creation of the list files automatically, by launching source Hog/Init.sh (only in Vivado). However, in our case, we will create all the list files manually.
So we start by creating hog.conf:
mkdir -p Top/Example_project/list
vim Top/Example_project/hog.conf
and we fill it with all the properties required by our project. In our case, we left almost everything as default, so we only have to set the PART property, which is a main property according to the Hog manual.
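For reference, a minimal hog.conf for this project might look like the snippet below. This is a hedged sketch: the [main] section with a PART key follows the ini-style layout described in the Hog manual, and xc7k325tffg900-2 is the Kintex-7 device mounted on the KC705; double-check both against your Hog version and your board.

```ini
[main]
PART = xc7k325tffg900-2
```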
Now we add all the project files to the Hog list files. We start by adding the source files to their HDL library. Since we only have IPs and Verilog files, which cannot be associated with a VHDL library, the library name is not important, so we can add all the files to a single src file called work.src:
vim Top/Example_project/list/work.src
Here we add all our files apart from the constraints and simulation ones, and we have to specify which one is the top file:
ibert_7series_gtx_0_ex/imports/example_ibert_7series_gtx_0.v top=example_ibert_7series_gtx_0
Finally, we add the constraints in a file called constraints.con:
vim Top/Example_project/list/constraints.con
We now add all the files we just created to git and create a new tag called v0.0.1, as required by Hog:
git add Top/Example_project/hog.conf
git add Top/Example_project/list/work.src
git add Top/Example_project/list/constraints.con
git commit -m "Converted project to hog project"
git tag v0.0.1 -m "First hog tag"
We can now create the Hog project:
./Hog/CreateProject.sh Example_project
The new project will be created in Projects/Example_project/Example_project.xpr. We can open it and compare it with the old one; we notice that they are identical.
We can finally build the project by running
./Hog/LaunchWorkflow.sh Example_project
Step 4: git push
Now we push everything on git:
git checkout -b Test_branch
git push origin Test_branch
git push --tags
Learn about the exciting research our graduate students are preparing to defend in their final steps toward earning their degree in mathematics.
Upcoming Events
• 2024-10-24, 12:15 pm, GAB 473: Martin Guild (Master's)
Kempe's Universality Theorem and Infinitely-Sized Linkages
In this defense, we will discuss the theory of mechanical linkages consisting of bars and joints. We will first go through a proof of Kempe's Universality Theorem, which illustrates the
relationship between mechanical linkages and algebraic curves. A.B. Kempe's original proof had a few minor flaws, which we will be correcting. Then, we will discuss the notion of infinitely-sized
linkages and use this to demonstrate a surprising connection between 5-bar linkages and Lissajous curves.
Recent Events
• 2024-06-21: Ifeanyichukwu (Valentine) Ukenazor (PhD), "Scale Invariant Equations and Its Modified EM Algorithm for Estimating A Two-Component Mixture Model"
• 2024-06-20: Jill Kaiser (PhD), "Club Isomorphisms Between Subtrees of Aronszajn Trees"
• 2024-06-18: Johanson Berlie (MS), "Graph self-similarity and the Hausdorff dimension of the Heighway Dragon boundary"
• 2024-06-07: James Atchley (PhD), "Questions Involving Countable Intersection Games"
• 2024-03-20: Jacob Williams (MS), "Ramanujan Congruences for the Partition Function"
• 2024-03-20: Aaron Jackson (MS), "A Tree-based Discrete Time Model for Jump Diffusion Option Valuation"
• 2024-02-28: Brandon Mather (MS), "Quantum Group Actions and Hopf Algebras"
• 2023-12-12: Helena Tiller (MS), "The Subgroup Structure of GL(2, p) for p a prime"
• 2023-10-30: Xuan Nguyen (MS), "Hidden Markov Models with Applications"
• 2023-10-30: Ugochukwu (Oliver) Adiele (PhD), "Option Pricing Under New Classes of Jump-Diffusion Processes"
Examples of Modules: Left Ideals $J$ are Left $\mathfrak{A}$-Modules, Right Ideals $J$ are Right $\mathfrak{A}$-Modules
Let $\mathfrak{A}$ be an algebra and let $J$ be a left-ideal of $\mathfrak{A}$. Then $\mathfrak{A}J \subseteq J$. Let $f : \mathfrak{A} \times J \to J$ be the module multiplication defined by $f(a,
j) = aj$, that is, the product is simply the multiplication given by the multiplication in $\mathfrak{A}$.
For each fixed $a \in \mathfrak{A}$ the mapping $f_a : J \to J$ defined by $f_a(j) = aj$ is clearly linear, since if $j_1, j_2 \in J$ then $a(j_1 + j_2) = aj_1 + aj_2$ by distributivity in $\mathfrak
{A}$. So axiom $LM1$ is satisfied.
For each fixed $j \in J$ the mapping $f_j : \mathfrak{A} \to J$ defined by $f_j(a) = aj$ is also linear since if $a_1, a_2 \in \mathfrak{A}$ then $(a_1 + a_2)j = a_1j + a_2j$, again by distributivity
in $\mathfrak{A}$. So axiom $LM2$ is satisfied.
Lastly, for all $a_1, a_2 \in \mathfrak{A}$ and all $j \in J$ we certainly have by the associativity of multiplication in $\mathfrak{A}$ that:
$a_1(a_2j) = (a_1a_2)j$
So axiom $LM3$ is satisfied. Thus any left-ideal $J$ of $\mathfrak{A}$ is a left $\mathfrak{A}$-module.
In a similar fashion it can be shown that any right ideal of $\mathfrak{A}$ is a right $\mathfrak{A}$-module, and that any two-sided ideal of $\mathfrak{A}$ is an $\mathfrak{A}$-bimodule.
Learn math fundamentals with the Elevate app
Understanding the fundamentals of math
Math is an essential part of daily life, whether you realize it or not.
From calculating the tip on a restaurant bill to understanding complex scientific theories, math is, well, everywhere. And that’s exactly why having a strong foundation in math is crucial for both
real-world and academic scenarios.
So what are the fundamentals of mathematics, exactly? In this article, we'll discuss them step by step and highlight how a strong foundation in math can benefit various aspects of life.
The fundamentals of math: what they are and why they matter
Math fundamentals are the basic building blocks of mathematics. They include addition, subtraction, multiplication, and division, and things like number theory, algebra, and geometry. These concepts
are crucial because they serve as the foundation for more complex math topics.
So, without a solid understanding of these math fundamentals, it can be difficult to grasp more advanced concepts.
It’s also important to note that math fundamentals aren’t just important for academic success; they also have practical applications in everyday life. For instance, calculating a tip at a restaurant
or creating a budget requires a basic understanding of arithmetic operations. And measuring ingredients for cooking or designing a room layout involves knowledge of geometric relationships.
Basic math skills you might use daily
Basic math skills are essential for everyone, regardless of age, profession, or background. These skills provide the foundation for more advanced mathematical concepts and are crucial for success in
both academic and real-world scenarios.
Addition and subtraction are fundamental skills that everyone should understand. These operations are used in everyday life, from calculating the price of groceries to balancing a checkbook.
Similarly, multiplication and division are essential for tasks such as calculating discounts or figuring out how much to tip at a restaurant.
Understanding fractions and decimals is also important for many real-world applications. This includes tasks such as measuring ingredients for cooking or calculating interest rates on loans. Being
able to convert between fractions and decimals is also a valuable skill that can be applied to many scenarios.
And finally, knowing how to calculate percentages is an essential skill that has practical applications in many fields. This includes finance, where percentages are used to calculate interest rates
and investment returns, as well as in scientific research, where percentages are used to measure changes in data over time.
Are you surprised to see how applicable math is on an everyday basis?
6 tips to become a better problem solver
If you're feeling a little rusty in the math department, it's okay. Here are some additional tips to keep in mind as you get started building up your foundation in, and changing your perspective on, math:
1. Use online resources: There are many online resources available that can help you learn math fundamentals. Websites like Khan Academy offer free tutorials and practice problems.
2. Practice regularly: Consistent practice is the key to building your math skills. Set aside time each day or week to work on math problems or practice exercises. (We like playing the Elevate app’s
math games for a few minutes each day.)
3. Apply math in real-world situations: Math is used in countless real-world scenarios, from calculating the cost of groceries to designing buildings. Look for opportunities to apply mental math in
your daily life, and you'll find that it becomes more intuitive and applicable.
4. Visualize the problem: If you're struggling to understand a math problem, try visualizing it in a different way. For example, you could draw a diagram or use manipulatives to represent the problem.
5. Stay positive: Math can be frustrating at times, but it's important to stay positive and persistent. Celebrate your successes and learn from your mistakes.
6. Use mnemonic devices: Mnemonic devices are memory aids that can help you remember math formulas or concepts. For example, PEMDAS (Parentheses, Exponents, Multiplication and Division, Addition and
Subtraction) is a common mnemonic used to remember the order of operations.
Common fundamental core math concepts to know
Mathematics is a vast field with many different areas of study. So what are some fundamental math concepts you should prioritize understanding? Here are our suggestions:
• Trigonometry: Trigonometry is the study of triangles and their properties, including angles and sides, and is used extensively in fields such as engineering, physics, and navigation.
• Statistics: Statistics is the study of data analysis and probability and is used to collect, analyze, and interpret data in various fields such as finance, healthcare, and social sciences.
• Logic: Logic is the study of reasoning and argumentation and is used to evaluate arguments and draw conclusions based on evidence.
• Number systems: Understanding number systems, including integers, fractions, decimals, and irrational numbers, is important for working with numbers in various contexts.
Using the Elevate app's learning games for adults to hone your math skills
Ready to start building up your math skills? Our favorite way to do this in a fun and low-stakes setting is by playing the Elevate app. It has more than 40 games spread across math, reading, writing,
speaking, and memory skill groups and features personalized skill training programs based on your goals—like improving your math fundamentals.
What sets Elevate apart from, well, your high school math teacher, is that Elevate’s brain games don’t teach you math for the sake of math. Instead, you’ll practice techniques that you’ll actually
use in everyday life, like splitting the bill at dinner or cooking dinner yourself!
The best part? No homework.
Taking the first step in understanding math fundamentals
Whether you realize it or not, having a strong foundation in math fundamentals is important in so many areas of life, from real-world scenarios to academic settings.
However, it's just as important to understand that improving your math skills isn’t always easy, but the benefits are well worth it. With dedication and practice, you can build a strong foundation in
math that will serve you well throughout your life.
One easy (and fun) way to do this is by downloading the Elevate app, available on iOS or Android. You can play brain games that are specifically dedicated to improving your math skills in a fun and
low-stakes setting.
Soon enough, you’ll be calculating discounts, tips, measurements, and more—without a calculator. So start bringing your math skills to the next level as soon as today by downloading Elevate!
Use cases | AlignedLayer
Soft finality for Rollups and Appchains: Aligned provides fast verification of ZK proofs, which can be used to provide soft finality for rollups or other applications.
Fast bridging: building ZK bridges requires checking a succinct proof showing that the current state of a chain is correct, and then users need to show that their account state is correct. Many ZK protocols use hash functions such as Poseidon or Rescue Prime, which do not have precompiles on Ethereum, making both the verification of the chain's state and of the account state expensive. With Aligned, you can show your account state using another ZK proof, and all proofs can be verified cheaply and with low latency on Aligned.
New settlement layers (use Aligned + EigenDA) for Rollups and Intent based systems.
P2P protocols based on SNARKs such as payment systems and social networks.
Alternative L1s interoperable with Ethereum: similar to fast bridging.
Verifiable Machine Learning (ML): with general-purpose zkVMs we can prove code written in Rust, solving part of the problem of using ML. However, most zkVMs use STARK-based proof systems, which
leads to high on-chain costs or expensive wrapping. With Aligned, you can directly verify your proof from the zkVM for much less than Ethereum.
Cheap verification and interoperability for Identity Protocols.
ZK Oracles: With ZK oracles we can show that we have a piece of information off-chain and produce a ZK proof doing some computation with that data. Aligned reduces the cost of using those
oracles. For more background, see the following post.
New credential protocols such as zkTLS based systems: you can create proofs of data shown on your web browser and have the result verified in Ethereum. See the following thread for an ELI5 on TLS
ZK Coprocessor: ZK allows complex computations to be delegated from the blockchain to a coprocessor. This can retrieve information from the blockchain and perform the computations securely in a
more efficient way.
Encrypted Mempools using SNARKs to show the correctness of the encryption.
Protocols against misinformation and fake news: you can generate proofs that an image or audio comes from a given device, and show that a published image is the result of certain transformations
performed on the original image.
Projects built using Aligned
The Mina <> Ethereum bridge (in development) uses Aligned's fast mode for ZK proof verification. See the github repo for more information.
Sudo Null - Latest IT News
How to draw a black hole. Geodesic ray tracing in a curved space-time
“It's easy. We take the Schwarzschild metric, work out the Christoffel symbols, compute their derivatives, write the geodesic equation, switch to Cartesian coordinates (so as not to suffer), obtain a large multi-line ODE, and solve it. Like that.”
By now it is clear that black holes have sucked me in. They are infinitely exciting. Last time I dealt with visualizing Schwarzschild geometry: I was absorbed by the problem of accurately representing how the curvature of such a space-time affects the appearance of the sky (since photons from distant sources travel along geodesics bent by the black hole), in order to build an interactive applet. Here is the result (it works in the browser). The trick is to precompute the deflection of the light rays as much as possible. Everything works more or less fine, but of course such a simulation is far from ideal, because no actual ray tracing happens there (for non-specialists: reconstructing, backward in time, the paths of the light rays that fall into the camera).
My new project fixes this flaw by discarding efficiency and interactivity in the simplest way: it is a ray tracer running purely on the CPU. Tracing is performed as accurately, and for as long, as needed. Rendering the image at the top took 5 minutes (thanks, RK4) on my laptop.
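To make the quoted recipe concrete, here is a minimal sketch of such an integration (my own toy, not the article's actual code, and restricted to the equatorial plane). Instead of the full multi-line ODE, it uses the equivalent Binet-type equation for a photon, u'' = -u + (3/2) r_s u² with u = 1/r and derivatives taken with respect to the azimuth φ, stepped with classic RK4; the function name and step size are arbitrary choices.

```python
import math

def photon_deflection(b, r_s=1.0, dphi=1e-4):
    """Trace one light ray with impact parameter b (in units of r_s by
    default).  Returns its total bending angle in radians, or None if
    the ray spirals into the horizon.  Uses the equatorial photon
    equation u'' = -u + (3/2) r_s u^2, with u = 1/r and ' = d/dphi."""
    def rhs(u, du):
        return du, -u + 1.5 * r_s * u * u

    u, du, phi = 0.0, 1.0 / b, 0.0   # start at infinity, aimed with offset b
    while phi < 50.0:                # safety cap on the swept angle
        # one classic RK4 step in phi
        k1u, k1d = rhs(u, du)
        k2u, k2d = rhs(u + 0.5 * dphi * k1u, du + 0.5 * dphi * k1d)
        k3u, k3d = rhs(u + 0.5 * dphi * k2u, du + 0.5 * dphi * k2d)
        k4u, k4d = rhs(u + dphi * k3u, du + dphi * k3d)
        u  += dphi / 6.0 * (k1u + 2.0 * k2u + 2.0 * k3u + k4u)
        du += dphi / 6.0 * (k1d + 2.0 * k2d + 2.0 * k3d + k4d)
        phi += dphi
        if u > 1.0 / r_s:            # crossed the horizon: absorbed
            return None
        if u < 0.0:                  # back out at infinity: escaped
            return phi - math.pi     # a straight line sweeps exactly pi
    return None
```

For a grazing ray this reproduces the weak-field deflection 2 r_s / b (roughly 0.02 rad for b = 100 r_s), and rays aimed too close to the hole come back as None, i.e. they are captured.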
There is no improvement here over similar works; I just really enjoy doing this. I am writing this article to share not only the results, like the image above (especially since others have done better), but also the process of creating these images, with a discussion and explanation of the physics and the implementation. Ideally, it can inspire or serve as a guide for people with similar interests.
Look for fresh renders under the starless tag on tumblr.
Some pseudo-Riemannian optics
If you have already tried my applet, then you are familiar with this picture. The main features are clearly visible: a black disk and a strange ring of distortions. In discussions, people often point out that it is wrong to say the black disk is the event horizon. Actually, it is wrong to say that any area of the image is any object; it is an image of an object. Indeed, there are trajectories that, when traced from your eye back to the source, end up in the event horizon (EH). These are black pixels, because no photon can travel along such a path from the black hole (BH) to your eye. Thus, this black disk is a very clear image of the event horizon, in the sense that if you draw something right above the horizon (in the distant past), external observers will be able to see it directly on this black disk (we will actually perform this experiment later). In some publications, this black region is also called the “shadow” of the black hole.
However, it is interesting to note that this is also an image of the photon sphere (PS). The gnuplot graph at the top depicts the geodesics of photons coming in from infinity (looking at the BH from afar, zoomed in), along with the EH (black) and the PS (green). The radius of the photon sphere is 1.5 times that of the event horizon (in the Schwarzschild geometry), and circular photon orbits around the BH are allowed there (although they are unstable). On the graph, some rays fall into oblivion, while others are scattered (and thus end up at another point of the celestial sphere). One can see that the absorbed rays are those whose impact parameter is less than ~2.5 radii. This is the apparent radius of the black disk, and it is much larger than both the EH and the PS.
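The numbers quoted here are easy to check. For Schwarzschild, the photon sphere sits at 1.5 r_s, and the critical impact parameter separating captured from scattered rays follows from the photon relation b = r / sqrt(1 − r_s/r) evaluated at the photon-sphere orbit. A quick sketch (my own illustration; the function name is arbitrary):

```python
import math

def shadow_radius(r_s=1.0):
    """Critical impact parameter of a Schwarzschild black hole:
    the apparent radius of the black disk as seen from far away."""
    r_ph = 1.5 * r_s                   # photon sphere radius
    # For a photon, b = r / sqrt(1 - r_s/r); the capture threshold is
    # this expression evaluated at the (unstable) photon-sphere orbit.
    return r_ph / math.sqrt(1.0 - r_s / r_ph)

print(shadow_radius())  # (3*sqrt(3)/2) ~ 2.598: bigger than both 1.0 (EH) and 1.5 (PS)
```

So the shadow's edge lies a bit beyond 2.5 Schwarzschild radii, consistent with the value read off the plot.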
In any case, the following fact is important:
A light beam freely falling into the photon sphere will also reach the event horizon.
This means that the image of the photon sphere is included in the image of the event horizon. But since the EH clearly lies inside the PS, the image of the former should also be a subset of the image of the latter. So the two images must coincide.
Why bother checking that the black disk is also an image of the photon sphere? Because it means that the edge of the black disk is filled with photons that skim along the photon sphere. A pixel immediately outside the black disk corresponds to a photon which (when traced back) spirals into the photon sphere, getting ever closer to an unstable circular orbit, winding around many times (the closer to the edge you look, the more it winds), and then spontaneously jumping out - because the orbit is unstable - and running away to infinity.
Such behavior causes an interesting optical effect, similar to a separatrix in a dynamical system. In theory, a ray launched exactly along the edge would spiral forever, coming ever closer to the circular orbit of the photon sphere.
Influence on the sky
We will not dwell on this topic, because the last applet is dedicated to it, and it gives a much better idea of the distortions of the sky (including the UV grid option for making the distortions clearer).
Just a few words about the Einstein ring. It is an optically distinguished feature because it is the image of a single point, the one directly opposite the observer. The ring forms at the viewing angle at which rays leaving the observer are bent so as to come out parallel. Rays outside it are not bent strongly enough and remain divergent; rays inside it are bent too much, converge, and can in fact even go backwards or around in circles, as we have seen.
But think about this: if you look close enough to the black disk, the light rays can make one full loop and then leave parallel. There we should see a secondary Einstein ring. In fact, there are rings of every order (every number of windings). Between them there should also be “odd” rings, where the light rays are bent into parallel but directed back towards the viewer. This infinite series of rings exists, but it is completely invisible in our image (in fact, in most such images), because it sits too close to the edge of the disk.
Distortion of the event horizon
In this new picture, something has changed. First of all, it is rendered at higher resolution and with background filtering to make it easier to read. Second, I zoomed into the image of the BH (without moving closer: we are still at a distance of ~10 radii, it is just a zoom). But most importantly, I drew a grid on the horizon.
The horizon is "just a sphere." Technically, it is not a standard Riemannian sphere with a spatial metric: the horizon is light-like! This is a colorful way of saying that it propagates at the speed of light. However, in Schwarzschild coordinates it can still be drawn as a surface.
The grid allows you to see a special effect that can be inferred by analyzing the photon scattering / absorption graph above:
The entire surface of the horizon is visible simultaneously from any point.
It is very interesting. When you look at an ordinary sphere in flat space-time, you never see more than 50% of its surface at once (and less than 50% if you come closer, because of perspective). But the horizon is visible in its entirety, all at once, as the black disk: note, in particular, the North and South Poles. Nevertheless, although the entire surface fits on the black disk, it does not cover it entirely: if you zoom in on the edge, you will see that the image of the EH ends before the edge of the shadow. You will find a ring located very close to the outer edge, but not exactly on it. This is the image of the point opposite the observer, and it bounds this “first” image of the EH from the outside. So what lies between this ring and the actual edge? I have not yet generated a zoomed image, but there is another whole image of the event horizon. And then another, and another, ad infinitum. There are infinitely many concentric images of the entire horizon compressed into the shadow.
(Thank you very much /u/xXxDeAThANgEL99xXx for pointing out this phenomenon, which I had missed.)
Add accretion disk
What would a modern BH rendering be without an accretion disk? Whatever one thinks of Nolan's Interstellar as an observation, not to mention its accuracy, we definitely have to thank the blockbuster for popularizing this particular distortion of the accretion disk. Here we have an infinitely thin, flat, horizontal accretion disk extending outward from the photon sphere (although this is very unrealistic, because orbits that low are unstable).
For this image, I moved the observer a little higher, to look slightly down on the disk. You see images of both faces of the disk: the top and the bottom. The upper image bends over the shadow of the BH, because a ray aimed just above the black hole is bent downward to meet the upper surface of the disk behind the hole, opposite the observer.
This also explains the very existence of the lower image: rays passing under the BH are bent up to the lower surface of the disk behind the BH. If you look closely, the image wraps all the way around the shadow, but at the top it is much thinner. That part corresponds to light rays that pass above the BH, make an almost complete circle around the hole, and hit the lower surface in front of the hole.
Of course, it is easy to conclude that there is an infinite series of images of accretion disks, which very quickly become thinner as they approach the edge. The image of the next order is already
very thin, barely visible at the bottom of the edge.
GIFs are still relevant
In this convulsive animation, I turn on / off the deflection of the light (formally Schwarzschild / Minkowski) to clarify some points that we talked about.
These two strange GIFs were made at readers' requests. In the first, the observer circles a black hole at a distance of 10 radii. This should not be taken as an actual orbit, since no aberration from orbital motion is included. It is rather a series of snapshots of the static BH from several points, with the observer moving from place to place between frames; an “adiabatic” orbit, if you like.
And the stereo is also still relevant
Interestingly, the shadow looks pretty flat.
Enough science
Enough of informative pictures in 90s style, low resolution and poisonous colors. Here are a few "pop" renders (click for full size).
Image captions: generated by /u/dfzxh with fourfold oversampling; a close-up; a larger image of the ring; the cult “light ring” effect when viewed from the equatorial plane (if you download the program, this is the current default scene); a much wider disk.
Well, nothing special. There is no artwork here, just renders from the program. Let's temporarily return to science: the third image, which doesn't seem to make any sense, is actually very valuable. It is a zoom into the area between the upper edge of the black disk and the main image of the accretion disk. The observer sits at the outer edge of the accretion disk itself and zooms in. The goal was to capture as many rings of different orders as possible. Three orders are visible: the brighter zone at the top is just the lower edge of the first image of the upper far surface of the disk. The strip below it, under the calm sea of stretched stars, is the upper part of the image of the lower near surface of the disk. At the very bottom, glued to the black disk, is a line of light no more than a pixel wide. This is essentially the third image: the upper far surface again, after the light has completed one extra revolution around the black hole. Merged with it, ever thinner, are the higher-order images. Well, this is also worthy of the <blockquote> tag:
Here are endless images of both the upper and lower surfaces of the accretion disk, and they all show the entire surface of the disk simultaneously. Moreover, except for the very first one, these
images do not pass either in front of the black disk or in front of each other, and therefore are “concentric”.
Accept rendering requests
Interested in a specific visualization, but not ready to go through the trouble of installing the program and rendering it yourself? Just message me on reddit or by email. Rendering at 1080p takes no more than 10-20 minutes on my laptop.
Realistic accretion disk
The accretion disk in the renders above is quite cartoonish: it is just a disc with a wacky texture. What happens when real physics is included in the visual appearance of the disk? What happens when you account for, say, the redshift from the orbital motion?
A popular model of an accretion disk is an infinitely thin disk of matter in almost circular orbits. It starts at the ISCO (the innermost stable circular orbit, a distinctly general-relativistic feature for realistic fluids), and that is what is used here (in any case, you will not notice the difference).
Now the free parameter is an overall temperature scale, for example the temperature at the ISCO. For most black holes this temperature is huge: we are talking about hundreds of millions of kelvin. It is hard to imagine any human artifact that could survive exposure to the disk's radiation (which peaks in the X-ray band) at such temperatures, let alone photograph it. So we obviously need to reduce the temperature. Supermassive black holes are colder, but not cold enough. We need to go down to about 10,000 K at the ISCO to see at least something. This is very unrealistic, but it is all I can do.
Two questions should be asked. First: what is the color of a black body at this temperature? Second: how bright is it? Formally, the answer to both questions is the scalar product of the functions describing the R, G, B channels with the black-body spectrum. In practice, some approximations are used.
For the color, Tanner Helland's formula is quite accurate and efficient, but it contains numerous conditionals, which do not fit my ray tracing approach (see below for more details). The fastest way is to use a simple texture:
This texture is one of many useful things in the Mitchell Charity compilation of
“What is the color of a black body?”
. For reference, it corresponds to the white point E (whitepoint E).
The scale shows the color of the black body at temperatures from 1000 K to 30,000 K, with higher temperatures corresponding to approximately the same shade of blue. Since there is a huge difference in brightness between temperatures, this texture cannot and does not convey brightness; rather, it normalizes the colors. Our task is to compute the relative brightness and apply it separately. An analytical formula is suitable for this. If we assume that the visible spectrum is very narrow, then the total apparent intensity is proportional to the black-body spectrum itself evaluated at a representative wavelength:
where I dropped the overall constants (we are going to rescale the brightness anyway in order to see anything at all). As a sanity check: the relative intensity drops to zero extremely quickly as T approaches zero, and varies only slowly at large T.
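As a hedged sketch of that relative-brightness calculation (the wavelength and step of sampling Planck's law at a single visible wavelength are my own illustrative choices, not necessarily the author's exact formula):

```python
import numpy as np

# Planck's law B(lambda, T), up to an overall constant, sampled at one
# representative visible wavelength (the "narrow visible spectrum" assumption).
H_C_OVER_K = 0.0143877688   # h*c/k_B in m*K
LAMBDA0 = 550e-9            # representative visible wavelength in m (assumed)

def relative_intensity(T):
    """Black-body brightness up to a constant scale factor."""
    T = np.asarray(T, dtype=float)
    x = H_C_OVER_K / (LAMBDA0 * T)
    return 1.0 / np.expm1(x)    # proportional to B(LAMBDA0, T)

# Drops to zero extremely fast as T -> 0, and varies slowly at large T,
# so a single overall brightness rescale is enough to reveal the disk.
ratio = relative_intensity(1000.0) / relative_intensity(10000.0)
```

The huge value of `ratio`'s reciprocal is exactly why the texture above only normalizes colors and the brightness has to be handled separately.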
We discussed orbital velocities in Schwarzschild geometry in the applet description. To compute the redshift from the orbital motion, we use the Doppler formula from special relativity, which must then be multiplied by the gravitational redshift coefficient (for emission at radius r this is sqrt(1 − r_s/r)): this coefficient does not depend on the trajectory of the light ray, but only on the radius of emission, since the Schwarzschild geometry is stationary.
It also means that the observer's position contributes a constant factor to the gravitational redshift over the entire field of view. Our entire image carries a constant overall blueshift, because we sit deep in the black hole's gravity well. Therefore, this effect gives only a faint tint that can be ignored.
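A minimal sketch of the two shift factors (the function names, units, and the sign convention for the Doppler angle are my assumptions, not the program's code):

```python
import math

R_S = 1.0  # Schwarzschild radius in geometric units (assumed)

def doppler_factor(beta, cos_theta):
    """Special-relativistic Doppler factor nu_obs/nu_emit for an emitter
    moving with speed beta (in units of c); cos_theta is between the
    velocity and the emitted ray (sign convention assumed)."""
    gamma = 1.0 / math.sqrt(1.0 - beta * beta)
    return 1.0 / (gamma * (1.0 - beta * cos_theta))

def gravitational_factor(r_emit, r_obs):
    """Gravitational shift between two static radii in Schwarzschild
    geometry; depends only on the radii, not on the ray's path."""
    return math.sqrt(1.0 - R_S / r_emit) / math.sqrt(1.0 - R_S / r_obs)

# The total per-pixel shift is the product of the two factors:
total = doppler_factor(0.4, -0.5) * gravitational_factor(3.0, 20.0)
```

A receding emitter (negative cos_theta here) and a small emission radius both push `total` below 1, i.e. toward the red.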
We also neglect the redshift from the observer's own motion, because our observer is stationary in the Schwarzschild geometry. Here is the final result:
As you can see, most of the disk is completely white because the color channels saturate at maximum brightness. If you clamp these channels to the range 0.0-1.0, then the outer parts of the disk become pale or black. The dynamic range in brightness is too great to see and appreciate directly. I tried to show the effect through post-processing, so that the brightest parts show a transition of colors, but this is hardly enough.
Pretty messy picture. Here is an image without brightness, where you can evaluate colors:
These pictures are at a lower resolution, because they took a very long time to render on my laptop (square roots are expensive, kids).
In any case, this render is a thousand times less spectacular than the others (mainly because the inner edge of the disk is already far enough from the event horizon that lensing is weak), but at least the render is honest. If you find a black hole with a temperature of 10,000 K and good sunglasses, you will see exactly this.
Another shot from close range. I unnaturally raised the saturation for beauty:
Writing a black hole ray tracer
Source code on github
There is a very big and obvious difference between understanding the optics of black holes and a numerical integrator that produces beautiful 1080p desktop wallpapers. Last time I did not publish my reasoning, but just linked to a big and dirty git repository. Now I want to explain things in a bit more detail, and also try to keep the code cleaner and commented.
My tracer was not built to be good, powerful, or fast. First of all, I wanted it to be easy to set up and simple, so that people could get into the code and see the potential for improvement: even its imperfection may push someone to write their own version. Here is a brief overview of the algorithms and their implementation.
"Magical" potential
So, general relativity. Everything is clear; it is easy. We take the Schwarzschild metric, find the Christoffel symbols, compute their derivatives, write down the geodesic equation, switch to Cartesian coordinates to avoid endless suffering, obtain a huge multi-line ODE, and solve it. Something like that.
Just kidding. Of course, there is one trick.
If you remember, last time I derived the following equation for the orbit of a massless particle in its orbital plane in Schwarzschild geometry (with u = 1/r):

d²u/dφ² + u = (3/2) r_s u²

The trick is to see the Binet formula here. For a massive Newtonian particle in a central force field F(r), the particle obviously moves in its orbital plane, and its trajectory u(φ) satisfies the Binet formula:

d²u/dφ² + u = −F(1/u) / (m h² u²)

where h is the specific angular momentum.
Let's stop for a moment to think about what we actually have. The equation says that if we imagine a hypothetical mechanical system of a particle under the action of a certain central force, then its trajectory will be a solution of the Binet formula. The mechanical system then becomes a tool for evaluating that formula.
That is what I do here. We choose the central force F(r) = −(3/2) m h² r_s / r⁴, whose Binet equation is exactly the geodesic equation above. Therefore, we solve the Newtonian equation of motion in Cartesian coordinates, which is by far the simplest option (I decided to use the Runge-Kutta method in order to increase the step size and reduce the rendering time, but in the future the user will be able to choose a different solver). The trajectories we obtain are exactly the lightlike geodesics.
This is much better than the previous method, which worked in polar coordinates in the orbital plane. Here the calculations are performed very efficiently.
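As a sketch of what this looks like in practice (function names, step sizes, and the simple symplectic-Euler stepping are mine; the text uses Runge-Kutta to allow larger steps), a single ray advances under the acceleration a = −(3/2) h² r̂ / r⁴ with r_s = 1:

```python
import numpy as np

def trace_ray(pos, vel, dt=1e-2, steps=20000):
    """Advance one light ray in the 'magical' Newtonian potential.
    Units: Schwarzschild radius r_s = 1. Symplectic Euler for brevity."""
    pos = np.array(pos, dtype=float)
    vel = np.array(vel, dtype=float)
    # h^2 = |r x v|^2 is conserved along the ray, so compute it once.
    h = np.cross(pos, vel)
    h2 = np.dot(h, h)
    for _ in range(steps):
        r2 = np.dot(pos, pos)
        if r2 < 1.0:                           # fell through the horizon
            return pos, vel, True
        accel = -1.5 * h2 * pos / r2**2.5      # -(3/2) h^2 r_hat / r^4
        vel += accel * dt
        pos += vel * dt
    return pos, vel, False

# A ray passing far from the hole (impact parameter 10 r_s) escapes:
p, v, captured = trace_ray([-20.0, 10.0, 0.0], [1.0, 0.0, 0.0])
```

A radial ray (zero angular momentum) feels no force in this formulation and falls straight in, which is the correct lightlike behavior in these coordinates.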
Ray tracing in numpy
If you look at the sources, you will see a Python script. Horror! Why write a ray tracer in Python? Everyone knows how slowly loops run in Python, which (almost) always kills performance. The point is that we perform the calculations in numpy, and across all rays at once. That is also why this program cannot progressively display already-rendered parts on the screen: it renders everything at the same time.
First, create an array of initial conditions: for example, an array of shape (numPixel, 3) holding one vector per image pixel (numPixel = image width × image height). The calculation for every ray is then expressed through arrays of shape (numPixel, ...). Since array operations in numpy are very fast, and everything there is statically typed (I hope I'm not saying anything stupid right now), it should run fairly quickly. Maybe it's not C, but it is still fast. At the same time, we keep the flexibility and clarity of Python.
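A toy illustration of this layout (array names and sizes are mine): one vectorized statement advances every ray at once, with no Python-level loop over pixels:

```python
import numpy as np

num_pixel = 640 * 480          # image width x image height

# One row per ray: (numPixel, 3) arrays of positions and velocities.
pos = np.random.uniform(-10.0, 10.0, (num_pixel, 3))
vel = np.random.uniform(-1.0, 1.0, (num_pixel, 3))

dt = 1e-2
# A single numpy statement operates on *all* rays simultaneously.
r2 = np.einsum('ij,ij->i', pos, pos)   # squared radius of every ray at once
pos = pos + vel * dt                    # step every ray in one shot
```

The per-ray physics from the previous section slots into this pattern unchanged: every scalar in the single-ray code becomes a (numPixel,) array.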
This method would be terrible for standard ray tracing, where objects have diffuse, reflective, and refractive components, and it is important to take the lighting conditions into account. For example, selectively reflecting a subset of an array of rays is a real nightmare: tracking boolean values or loop indices requires multiple masks, and loops cannot terminate early. But ours is a different case: all the objects in our scene only emit light: the sky, a hot accretion disk, a pitch-black event horizon, and bright dust. They are not affected by incident light, and light itself passes quietly through them, except that its intensity may be attenuated. This leads us to the color determination algorithm:
Mixing colors
It's easy: you just need to composite all the objects between us and the source of the ray with their corresponding alpha values, stacking them one above the other with the farthest at the bottom. We initialize the color buffer with alpha-transparent black; then, at each intersection with an object, we update the buffer by blending the object's color under our color buffer. We perform the same steps for the dust (using a density profile).
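A minimal sketch of that blending rule (the function name and premultiplied-RGBA buffer layout are my own, not necessarily the program's):

```python
import numpy as np

def blend_under(buffer_rgba, color_rgb, alpha):
    """Composite a newly-hit object *under* the colors already accumulated
    along the ray. Rays march away from the camera, so earlier hits are
    nearer and must stay on top. buffer_rgba: (..., 4) premultiplied RGBA."""
    rgb, a = buffer_rgba[..., :3], buffer_rgba[..., 3:4]
    out = np.empty_like(buffer_rgba)
    out[..., :3] = rgb + (1.0 - a) * alpha * np.asarray(color_rgb)
    out[..., 3:4] = a + (1.0 - a) * alpha
    return out

buf = np.zeros((1, 4))                        # alpha-transparent black
buf = blend_under(buf, [1.0, 0.0, 0.0], 0.5)  # semi-transparent red disk
buf = blend_under(buf, [0.0, 0.0, 1.0], 1.0)  # opaque sky behind it
```

Because the buffer is a numpy array, the same call blends an entire (numPixel, 4) batch at once, which is exactly what the vectorized tracer needs.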
The obvious disadvantage of this method is that an individual ray cannot be stopped once it has terminated, because it is part of an array in which the other rays continue. For example, after crossing the horizon, rays continue to wander randomly after they hit the singularity; you can see what happens if you explicitly turn off the horizon object. The alpha blending algorithm ensures that they do not affect the final image, but these rays still consume CPU time. | {"url":"https://sudonull.com/post/10873-How-to-draw-a-black-hole-Geodesic-ray-tracing-in-a-curved-space-time","timestamp":"2024-11-07T01:17:13Z","content_type":"text/html","content_length":"41787","record_id":"<urn:uuid:14a675dc-2047-42b0-9fec-4348ed846513>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00788.warc.gz"}
Control Systems/Noise Driven Systems - Wikibooks, open books for an open world
The topics in this chapter will rely heavily on topics from a calculus-based background in probability theory. There currently are no wikibooks available that contain this information. The reader
should be familiar with the following concepts: Gaussian Random Variables, Mean, Expectation Operation.
Systems frequently have to deal with not only the control input u, but also a random noise input v. In some disciplines, such as in a study of electrical communication systems, the noise and the data
signal can be added together into a composite input r = u + v. However, in studying control systems, we cannot combine these inputs together, for a variety of different reasons:
1. The control input works to stabilize the system, and the noise input works to destabilize the system.
2. The two inputs are independent random variables.
3. The two inputs may act on the system in completely different ways.
As we will show in the next example, it is frequently a good idea to consider the noise and the control inputs separately:
Example: Consider a moving automobile. The control signals for the automobile consist of acceleration (gas pedal) and deceleration (brake pedal) inputs acting on the wheels of the vehicle, and
working to create forward motion. The noise inputs to the system can consist of wind pushing against the vertical faces of the automobile, rough pavement (or even dirt) under the tires, bugs and
debris hitting the front windshield, etc. As we can see, the control inputs act on the wheels of the vehicle, while the noise inputs can act on multiple sides of the vehicle, in different ways.
We are going to have a brief refresher here on calculus-based probability, specifically focusing on the topics that we will use in the rest of this chapter.
The expectation operator, E, is used to find the expected, or mean, value of a given random variable. The expectation operator is defined as:
${\displaystyle E[x]=\int _{-\infty }^{\infty }xf_{x}(x)dx}$
If we have two zero-mean variables that are independent of one another, the expectation of their product is the product of their expectations, and is therefore zero.
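This fact is easy to check numerically with a Monte Carlo estimate of the expectation operator (sample sizes here are arbitrary, chosen only for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Two independent zero-mean Gaussian random samples.
x = rng.normal(0.0, 1.0, n)
y = rng.normal(0.0, 2.0, n)

# Sample-mean estimates of the expectation operator E[.]:
e_xy = np.mean(x * y)   # ~ E[x]E[y] = 0 for independent zero-mean RVs
e_x2 = np.mean(x * x)   # ~ E[x^2] = Var(x) = 1 (x is not independent of itself)
```

Note the contrast between `e_xy` (near zero) and `e_x2` (near the variance): independence of the two factors is what makes the cross-expectation vanish.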
The covariance matrix, Q, is the expectation of a random vector times its transpose:
${\displaystyle E[x(t)x'(t)]=Q(t)}$
If we take the value of the x transpose at a different point in time, we can calculate out the covariance as:
${\displaystyle E[x(t)x'(s)]=Q(t)\delta (t-s)}$
Where δ is the impulse function.
We can define the state equation to a system incorporating a noise vector v:
${\displaystyle x'(t)=A(t)x(t)+H(t)u(t)+B(t)v(t)}$
For generality, we will discuss the case of a time-variant system. Time-invariant system results will then be a simplification of the time-variant case. Also, we will assume that v is a gaussian
random variable. We do this because physical systems frequently approximate gaussian processes, and because there is a large body of mathematical tools that we can use to work with these processes.
We will assume our gaussian process has zero-mean.
We would like to find out how our system will respond to the new noisy input. Every system iteration will have a different response that varies with the noise input, but the average of all these
iterations should converge to a single value.
For the system with zero control input, we have:
${\displaystyle x'(t)=A(t)x(t)+B(t)v(t)}$
For which we know our general solution is given as:
${\displaystyle x(t)=\phi (t,t_{0})x_{0}+\int _{t_{0}}^{t}\phi (t,\tau )B(\tau )v(\tau )d\tau }$
If we take the expected value of this function, it should give us the expected value of the output of the system. In other words, we would like to determine what the expected output of our system is
going to be by adding a new, noise input.
${\displaystyle E[x(t)]=E[\phi (t,t_{0})x_{0}]+E[\int _{t_{0}}^{t}\phi (t,\tau )B(\tau )v(\tau )d\tau ]}$
In the second term of this equation, neither φ nor B is a random variable, and therefore they can be moved outside of the expectation operation. Since v is zero-mean, its expectation is zero, and therefore the second term is zero. In the first term, φ is not a random variable, but x₀ does create a dependency in the output x(t), and we need to take the expectation of it. This means
${\displaystyle E[x(t)]=\phi (t,t_{0})E[x_{0}]}$
In other words, the expected output of the system is, on average, the value that the output would be if there were no noise. Notice that if our noise vector v was not zero-mean, and if it was not
gaussian, this result would not hold.
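This result can be verified numerically. The following sketch (an Euler-Maruyama simulation of a scalar instance of the system, with illustrative constants of my choosing) averages many noisy runs and compares against the noiseless solution φ(t, t₀)x₀:

```python
import numpy as np

rng = np.random.default_rng(1)
a, b = -1.0, 0.5          # scalar A(t), B(t), time-invariant for simplicity
x0, dt, steps, runs = 2.0, 1e-3, 1000, 2000

x = np.full(runs, x0)
for _ in range(steps):
    dw = rng.normal(0.0, np.sqrt(dt), runs)   # integrated white noise v dt
    x = x + a * x * dt + b * dw               # Euler-Maruyama step

t = steps * dt
noiseless = np.exp(a * t) * x0    # phi(t, t0) x0 for the scalar case
mean_x = x.mean()                 # converges to the noiseless value
```

Each individual run wanders away from the deterministic trajectory, but the average over runs tracks exp(at)x₀, as the derivation predicts for a zero-mean v.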
We are now going to analyze the covariance of the system with a noisy input. We multiply our system solution by its transpose, and take the expectation: (this equation is long and might break onto
multiple lines)
${\displaystyle E[x(t)x'(t)]=E[(\phi (t,t_{0})x_{0}+\int _{t_{0}}^{t}\phi (t,\tau )B(\tau )v(\tau )d\tau )(\phi (t,t_{0})x_{0}+\int _{t_{0}}^{t}\phi (t,\tau )B(\tau )v(\tau )d\tau )']}$
If we multiply this out term by term, and cancel out the expectations that have a zero-value, we get the following result:
${\displaystyle E[x(t)x'(t)]=\phi (t,t_{0})E[x_{0}x_{0}']\phi '(t,t_{0})=P}$
We call this result P, and we can find the first derivative of P by using the chain-rule:
${\displaystyle P'(t)=A(t)\phi (t,t_{0})P_{0}\phi '(t,t_{0})+\phi (t,t_{0})P_{0}\phi '(t,t_{0})A'(t)}$
${\displaystyle P_{0}=E[x_{0}x_{0}']}$
We can reduce this to:
${\displaystyle P'(t)=A(t)P(t)+P(t)A'(t)+B(t)Q(t)B'(t)}$
In other words, we can analyze the system without needing to calculate the state-transition matrix. This is a good thing, because it can often be very difficult to calculate the state-transition matrix.
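To illustrate, the matrix differential equation above can be integrated directly with explicit Euler steps (the matrices below are arbitrary choices for the sketch); for a stable A, P(t) settles to the solution of the algebraic Lyapunov equation AP + PA' + BQB' = 0:

```python
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # stable system matrix
B = np.array([[0.0], [1.0]])
Q = np.array([[1.0]])                      # white-noise intensity
P = np.zeros((2, 2))                       # P0 = 0 (deterministic x0)

dt = 1e-3
for _ in range(20_000):                    # integrate P' = AP + PA' + BQB'
    P = P + dt * (A @ P + P @ A.T + B @ Q @ B.T)

# At steady state the right-hand side vanishes:
residual = A @ P + P @ A.T + B @ Q @ B.T
```

No state-transition matrix is ever computed: the covariance is obtained purely from A, B, and Q, which is exactly the point made in the text.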
Let us look again at our general solution:
${\displaystyle x(t)=\phi (t,t_{0})x(t_{0})+\int _{t_{0}}^{t}\phi (t,\tau )B(\tau )v(\tau )d\tau }$
We can run into a problem because in a gaussian distribution, especially one with high variance (and certainly one with infinite variance), the value of v can momentarily become undefined (approach infinity), which will cause the value of x to likewise become undefined at certain points. This is unacceptable, and makes further analysis of this problem difficult. Let us look again at our original equation, with zero control input:
${\displaystyle x'(t)=A(t)x(t)+B(t)v(t)}$
We can multiply both sides by dt, and get the following result:
${\displaystyle dx=A(t)x(t)dt+B(t)v(t)dt}$
We can define a new differential, dw(t), which is an infinitesimal function of time, as:
${\displaystyle dw(t)=v(t)dt}$
This new term, dw, is a random process known as a Wiener process, which is the result of transforming a gaussian process in this manner. Substituting it into our equation gives:
${\displaystyle dx=A(t)x(t)dt+B(t)dw(t)}$
Now, we can integrate both sides of this equation:
${\displaystyle x(t)=x(t_{0})+\int _{t_{0}}^{t}A(\tau )x(\tau )d\tau +\int _{t_{0}}^{t}B(\tau )dw(\tau )}$
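Numerically, the dw increments appearing in this solution are drawn as independent zero-mean Gaussians with variance dt. A brief sketch (sample sizes are arbitrary) confirms that the accumulated w(t) has zero mean and variance t, the defining properties of the standard Wiener process:

```python
import numpy as np

rng = np.random.default_rng(2)
dt, steps, paths = 1e-3, 1000, 5000

# dw(t) = v(t) dt is sampled as independent N(0, dt) increments.
dw = rng.normal(0.0, np.sqrt(dt), (paths, steps))
w = np.cumsum(dw, axis=1)          # Wiener process sample paths

# For the standard Wiener process: E[w(t)] = 0 and Var[w(t)] = t.
t_final = steps * dt
mean_final = w[:, -1].mean()
var_final = w[:, -1].var()
```

The sqrt(dt) scaling of the increments is the crucial detail: it is what makes the variance grow linearly in t rather than quadratically.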
However, this leads us to an unusual place, and one for which we are (probably) not prepared to continue further: in the final term on the right-hand side, we are attempting to integrate with respect to a function, not a variable. In this instance, the standard Riemann integrals that we are all familiar with cannot solve this equation. There are advanced techniques, known as Itō calculus, that can solve this equation, but these methods are currently outside the scope of this book. | {"url":"https://en.wikibooks.org/wiki/Control_Systems/Noise_Driven_Systems","timestamp":"2024-11-10T06:37:13Z","content_type":"text/html","content_length":"105916","record_id":"<urn:uuid:bef36b98-93e4-49fe-a9e9-eab58cb5a2be>","cc-path":"CC-MAIN-2024-46/segments/1730477028166.65/warc/CC-MAIN-20241110040813-20241110070813-00818.warc.gz"}
Relativistic rocket dynamics as a scientific trend
Consideration of the basic premises of relativistic rocket dynamics as one of the promising trends of modern theoretical physics and future engineering physics. The relation between relativistic
mechanics and classical mechanics of constant and variable rest mass is discussed, taking into account the role of the generalized Meshcherskii equation. It is recommended that relativistic rocket
dynamics be treated as the theory of flight of relativistic jet vehicles using the ambient cosmic medium under the action of forces applied to the vehicles. The basis of relativistic rocket dynamics
will then be relativistic mechanics with a variable rest mass, which is based on the relativistic generalized Meshcherskii equation. The basic problem of relativistic rocket dynamics is then
formulated, after making a number of physically justified simplifying assumptions, on the basis of the relativistic generalized Meshcherskii equation. For the solution of this problem it is necessary
to make certain assumptions concerning the nature of the addition of interstellar hydrogen to the relativistic jet vehicle.
Theory of relativity and gravitation
Pub Date:
Keywords: Astrodynamics; Interstellar Spacecraft; Relativistic Velocity; Rocket Flight; Variable Mass Systems; Classical Mechanics; Hydrogen; Interstellar Gas; Jet Propulsion; Technology Assessment | {"url":"https://ui.adsabs.harvard.edu/abs/1974trg..conf...47F/abstract","timestamp":"2024-11-13T20:11:57Z","content_type":"text/html","content_length":"36400","record_id":"<urn:uuid:4f90d8bd-824c-422f-9422-7bf848308705>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00360.warc.gz"}
Complex Analysis: The Geometric Viewpoint: Second Edition
Complex Analysis: The Geometric Viewpoint: Second Edition
MAA Press: An Imprint of the American Mathematical Society
Hardcover ISBN: 978-0-88385-035-0
Product Code: CAR/23
List Price: $65.00
MAA Member Price: $48.75
AMS Member Price: $48.75
eBook ISBN: 978-0-88385-968-1
Product Code: CAR/23.E
List Price: $55.00
MAA Member Price: $41.25
AMS Member Price: $41.25
Hardcover ISBN: 978-0-88385-035-0
eBook: ISBN: 978-0-88385-968-1
Product Code: CAR/23.B
List Price: $120.00 $92.50
MAA Member Price: $90.00 $69.38
AMS Member Price: $90.00 $69.38
• The Carus Mathematical Monographs
Volume: 23; 2004; 219 pp
MSC: Primary 30; Secondary 32
Recipient of the Mathematical Association of America's Beckenbach Book Prize in 1994!
In this second edition of a Carus Monograph Classic, Steven Krantz develops material on classical non-Euclidean geometry. He shows how it can be developed in a natural way from the invariant
geometry of the complex disc. He also introduces the Bergman kernel and metric and provides profound applications, some of them never having appeared before in print.
In general, the new edition represents a considerable polishing and re-thinking of the original successful volume. This is the first and only book to describe the context, the background, the
details, and the applications of Ahlfors's celebrated ideas about curvature, the Schwarz lemma, and applications in complex analysis.
Beginning from scratch, and requiring only a minimal background in complex variable theory, this book takes the reader up to ideas that are currently active areas of study. Such areas include a)
the Caratheodory and Kobayashi metrics, b) the Bergman kernel and metric, and c) boundary continuation of conformal maps. There is also an introduction to the theory of several complex variables.
Poincaré's celebrated theorem about the biholomorphic inequivalence of the ball and polydisc is discussed and proved.
□ Chapters
□ Chapter 0. Principal Ideas of Classical Function Theory
□ Chapter 1. Basic Notions of Differential Geometry
□ Chapter 2. Curvature and Applications
□ Chapter 3. Some New Invariant Metrics
□ Chapter 4. Introduction to the Bergman Theory
□ Chapter 5. A Glimpse of Several Complex Variables
□ A first-rate book, which can be used either as a text or reference.
□ In five very nicely written chapters this book gives an introduction to the approach to function theory via Riemannian geometry. Very little function-theoretic background is needed and no
knowledge whatsoever of differential geometry is assumed.
Mathematical Reviews
{"url":"https://bookstore.ams.org/CAR/23","timestamp":"2024-11-12T23:27:57Z","content_type":"text/html","content_length":"94806","record_id":"<urn:uuid:528ee2ee-8277-4c9f-9b59-0c5150d8ef37>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00556.warc.gz"}
Force Exerted By Jet On Moving Cart
1. You need to determine the velocity of the water that comes out of the nozzle of this system. Please formulate the equation.
2. This water will strike a small cart at rest. Can you calculate the velocity of the cart after the water hits it? The velocity of the cart is unknown; please provide the equations.
The pressure inside the pipe is 80 psi, the pipe diameter is 0.01 m, and the pipe length is 1 meter. The cart weight is 2 kg.
3. Include all the forces acting on the cart, such as friction, drag, and the inertial (mass) force.
Attachment:- Force Exerted By Jet On Moving cart.rar
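One hedged way to set the problem up numerically (assuming incompressible water, no pipe losses, and a jet that does not rebound from the cart; these assumptions are mine, not stated in the problem):

```python
import math

RHO = 1000.0            # water density, kg/m^3
PSI_TO_PA = 6894.76

p_gauge = 80 * PSI_TO_PA           # pipe gauge pressure, Pa
d = 0.01                           # pipe/nozzle diameter, m
area = math.pi * (d / 2) ** 2

# 1. Jet exit velocity from Bernoulli, neglecting losses:
#    p = (1/2) rho v^2  =>  v = sqrt(2 p / rho)
v_jet = math.sqrt(2.0 * p_gauge / RHO)

# 2. Force of the jet on the initially stationary cart,
#    assuming the water does not rebound:  F = rho * A * v^2
force = RHO * area * v_jet ** 2

# Initial acceleration of the 2 kg cart, ignoring friction and drag:
mass_cart = 2.0
accel = force / mass_cart
```

Once the cart moves with speed u, the momentum flux drops and the force becomes F = ρA(v − u)², which is the differential equation you would integrate (together with friction and drag terms) for part 3.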
{"url":"http://www.mywordsolution.com/question/force-exerted-by-jet-on-moving-cart1-you-need-to-determine/93111613","timestamp":"2024-11-09T03:40:31Z","content_type":"application/xhtml+xml","content_length":"36351","record_id":"<urn:uuid:2f85bc8e-649e-4654-92d4-e48b49dd00af>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00553.warc.gz"}
16.3 Emerging Topics
Rendering research continues to be a vibrant field, as should be evident by the length of the “Further Reading” sections at the conclusions of the previous chapters. In addition to the topics
discussed earlier, there are two important emerging areas of rendering research that we have not covered in this book—inverse and differentiable rendering and the use of machine learning techniques
in image synthesis. Work in these areas is progressing rapidly, and so we believe that it would be premature to include implementations of associated techniques in pbrt and to discuss them in the
book text; whichever algorithms we chose would likely be obsolete in a year or two. However, given the amount of activity in these areas, we will briefly summarize the landscape of each.
16.3.1 Inverse and Differentiable Rendering
This book has so far focused on forward rendering, in which rendering algorithms convert an input scene description ("x") into a synthetic image ("y") taken in the corresponding virtual world. Assuming that the underlying computation is consistent across runs, we can think of the entire process as the evaluation of an intricate function f satisfying f(x) = y. The main appeal of physically based forward-rendering methods is that they account for global light transport effects, which improves the visual realism of the output y.
However, many applications instead require the inverse f⁻¹ to infer a scene description x that is consistent with a given image y, which may be a real-world photograph. Examples of disciplines where such inverses are needed include autonomous driving, robotics, biomedical imaging, microscopy, architectural design, and many others.
Evaluating f⁻¹ is a surprisingly difficult and ambiguous problem: for example, a bright spot on a surface could alternatively be explained by texture or shape variation, illumination from a light source,
focused reflection from another object, or simply shadowing at all other locations. Resolving this ambiguity requires multiple observations of the scene and reconstruction techniques that account for
the interconnected nature of light transport and scattering. In other words, physically based methods are not just desirable—they are a prerequisite.
Directly inverting f is possible in some cases, though doing so tends to involve drastic simplifying assumptions: consider measurements taken by an X-ray CT scanner, which require further processing to
reveal a specimen’s interior structure. (X-rays are electromagnetic radiation just like visible light that are simply characterized by much shorter wavelengths in the 0.1–10nm range.) Standard
methods for this reconstruction assume a purely absorbing medium, in which case a 3D density can be found using a single pass over all data. However, this approximate inversion leads to artifacts
when dense bone or metal fragments reflect some of the X-rays.
The function f that is computed by a physically based renderer like pbrt is beyond the reach of such an explicit inversion. Furthermore, a scene that perfectly reproduces images seen from a given set
of viewpoints may not exist at all. Inverse rendering methods therefore pursue a relaxed minimization problem of the form

x* = argmin_x g(f(x)),    (16.1)

where g refers to a loss function that quantifies the quality of a rendered image of the scene x. For example, the definition g(y) = ||y − y_ref|| could be used to measure the distance to a reference image y_ref. This type of optimization is often called analysis-by-synthesis due to the reliance on repeated simulation (synthesis) to gain understanding about an inverse problem. The approach easily generalizes to simultaneous optimization of multiple viewpoints. An extra regularization term depending only on the scene parameters x is often added on the right hand side to encode prior knowledge about reasonable parameter ranges. Composition with further computation is also possible: for example, we could alternatively optimize g(f(h(z))), where h is a neural network that produces the scene x = h(z) from learned parameters z.
Irrespective of such extensions, the nonlinear optimization problem in Equation (16.1) remains too challenging to solve in one step and must be handled using iterative methods. The usual caveats
about their use apply here: iterative methods require a starting guess and may not converge to the optimal solution. This means that selecting an initial configuration and incorporating prior
information (valid parameter ranges, expected smoothness of the solution, etc.) are both important steps in any inverse rendering task. The choice of loss and parameterization of the scene can also
have a striking impact on the convexity of the optimization task (for example, direct optimization of triangle meshes tends to be particularly fragile, while implicit surface representations are
better behaved).
Realistic scene descriptions are composed of millions of floating-point values that together specify the shapes, BSDFs, textures, volumes, light sources, and cameras. Each value contributes a degree
of freedom to an extremely high-dimensional optimization domain (for example, a quadrilateral with an RGB image map texture adds roughly 1.7 million dimensions to x). Systematic exploration of a space
with that many dimensions is not possible, making gradient-based optimization the method of choice for this problem. The gradient is invaluable here because it provides a direction of steepest
descent that can guide the optimization toward higher-quality regions of the scene parameter space.
Let us consider the most basic gradient descent update equation for this problem:

x_{i+1} = x_i − α ∇_x g(f(x_i)),    (16.3)

where α denotes the step size. A single iteration of this optimization can be split into four individually simpler steps via the chain rule:

∇_x g(f(x)) = J_f^T ∇_y g(y),

where J_f (of size m × n) and J_g = (∇_y g)^T (of size 1 × m) are the Jacobian matrices of the rendering algorithm and loss function, and n and m respectively denote the number of scene parameters and rendered pixels. These four steps correspond to:
1. Rendering an image y = f(x) of the scene x.
2. Differentiating the loss function to obtain an image-space gradient vector δ_y. (A positive component in this vector indicates that increasing the value of the associated pixel in the rendered image would reduce the loss; the equivalent applies for a negative component.)
3. Converting the image-space gradient δ_y into a parameter-space gradient δ_x = J_f^T δ_y.
4. Taking a gradient step.
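The four steps above can be illustrated with a toy example. Everything below is invented for illustration — the two-pixel "renderer," the parameter names, and the hand-written vector–Jacobian product stand in for a real differentiable renderer and are not pbrt's API:

```python
# Minimal analysis-by-synthesis loop: render, differentiate the loss,
# pull the gradient back to parameter space, and take a gradient step.

def render(x):                       # step 1: y = f(x); toy 2-pixel "renderer"
    brightness, tint = x
    return [brightness * 1.0 + tint, brightness * 0.5 - tint]

y_ref = [0.8, 0.1]                   # reference image to match

def loss_grad(y):                    # step 2: d(loss)/dy for g(y) = sum (y - y_ref)^2
    return [2.0 * (yi - ri) for yi, ri in zip(y, y_ref)]

def vjp(x, dy):                      # step 3: J_f^T dy, written directly
    # rather than by ever forming the Jacobian matrix
    return [dy[0] * 1.0 + dy[1] * 0.5,   # d y / d brightness
            dy[0] * 1.0 - dy[1] * 1.0]   # d y / d tint

x, alpha = [0.0, 0.0], 0.1
for _ in range(200):                 # step 4: repeated gradient steps
    y = render(x)
    dx = vjp(x, loss_grad(y))
    x = [xi - alpha * di for xi, di in zip(x, dx)]

print([round(v, 3) for v in render(x)])  # [0.8, 0.1]
```

The key design point mirrored here is that `vjp` computes the vector–Jacobian product directly, which is exactly what reverse-mode differentiable renderers do at scale.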
In practice, more sophisticated descent variants than the one in Equation (16.3) are often used for step 4—for example, to introduce per-variable momentum and track the variance of gradients, as is
done in the commonly used Adam (Kingma and Ba 2014) optimizer. Imposing a metric on the optimization domain to pre-condition gradient steps can substantially accelerate convergence, as demonstrated by
Nicolet et al. (2021) in the case of differentiable mesh optimization.
The third step evaluates the vector-matrix product δ_y^T J_f, which is the main challenge in this sequence. At size m × n, the Jacobian J_f of the rendering algorithm is far too large to store or even compute, as both m and n could be in the range of multiple millions of elements. Methods in the emerging field of differentiable rendering therefore directly evaluate this product without ever constructing the matrix J_f.
The remainder of this subsection reviews the history and principles of these methods.
For completeness, we note that a great variety of techniques have used derivatives to improve or accelerate the process of physically based rendering; these are discussed in “Further Reading”
sections throughout the book. In the following, we exclusively focus on parametric derivatives for inverse problems.
Inverse problems are of central importance in computer vision, and so it should be of no surprise that the origins of differentiable rendering as well as many recent advances can be found there:
following pioneering work on OpenDR by Loper and Black (2014), a number of approximate differentiable rendering techniques have been proposed and applied to challenging inversion tasks. For example,
Rhodin et al. (2015) reconstructed the pose of humans by optimizing a translucent medium composed of Gaussian functions. Kato et al. (2018) and Liu et al. (2019a) proposed different ways of
introducing smoothness into the traditional rasterization pipeline. Laine et al. (2020) recently proposed a highly efficient modular GPU-accelerated rasterizer based on deferred shading followed by a
differentiable antialiasing step. While rasterization-based methods can differentiate the rendering of directly lit objects, they cannot easily account for effects that couple multiple scene objects
like shadows or interreflection.
Early work that used physically based differentiable rendering focused on the optimization of a small number of parameters, where there is considerable flexibility in how the differentiation is
carried out. For example, Gkioulekas et al. (2013b) used stochastic gradient descent to reconstruct homogeneous media represented by a low-dimensional parameterization. Khungurn et al. (2015)
differentiated a transport simulation to fit fabric parameters to the appearance in a reference photograph. Hašan and Ramamoorthi (2013) used volumetric derivatives to enable near-instant edits of
path-traced heterogeneous media. Gkioulekas et al. (2016) studied the challenges of differentiating local properties of heterogeneous media, and Zhao et al. (2016) performed local gradient-based
optimization to drastically reduce the size of heterogeneous volumes while preserving their appearance.
Besides the restriction to volumetric representations, a shared limitation of these methods is that they cannot efficiently differentiate a simulation with respect to the full set of scene
parameters, particularly when and are large (in other words, they are not practical choices for the third step of the previous procedure). Subsequent work has adopted reverse-mode differentiation,
which can simultaneously propagate derivatives to an essentially arbitrarily large number of parameters. (The same approach also powers training of neural networks, where it is known as backpropagation.)
Of particular note is the groundbreaking work by Li et al. (2018) along with their redner reference implementation, which performs reverse-mode derivative propagation using a hand-crafted
implementation of the necessary derivatives. In the paper, the authors make the important observation that 3D scenes are generally riddled with visibility-induced discontinuities at object
silhouettes, where the radiance function undergoes sudden changes. These are normally no problem in a Monte Carlo renderer, but they cause a severe problem following differentiation. To see why,
consider a hypothetical integral ∫ L(p, ω) dω that computes the average incident illumination at some position p. When computing the derivative of such a calculation with respect to a scene parameter x, it is normally fine to exchange the order of differentiation and integration:

∂/∂x ∫ L(p, ω) dω = ∫ ∂/∂x L(p, ω) dω

The left hand side is the desired answer, while the right hand side represents the result of differentiating the simulation code. Unfortunately, the equality generally no longer holds when the integrand L is
discontinuous in the argument being integrated. Li et al. recognized that an extra correction term must be added to account for how perturbations of the scene parameters cause the discontinuities to
shift. They resolved primary visibility by integrating out discontinuities via the pixel reconstruction filter and used a hierarchical data structure to place additional edge samples on silhouettes
to correct for secondary visibility.
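The failure of naive differentiation can be demonstrated numerically with a toy one-dimensional integral (this is purely an illustration, not renderer code). For I(θ) = ∫₀¹ step(x < θ) dx = θ, the true derivative dI/dθ is 1, but the integrand's pointwise derivative is 0 almost everywhere, so differentiating the Monte Carlo estimator term by term misses the boundary contribution entirely:

```python
# Differentiating under the integral sign vs. the true derivative,
# for a discontinuous integrand.
import random

def integrand(x, theta):
    return 1.0 if x < theta else 0.0

def d_integrand(x, theta):
    return 0.0  # pointwise derivative w.r.t. theta, wherever it is defined

random.seed(0)
theta, n, eps = 0.5, 100_000, 1e-3
xs = [random.random() for _ in range(n)]

naive_grad = sum(d_integrand(x, theta) for x in xs) / n
fd_grad = sum(integrand(x, theta + eps) - integrand(x, theta - eps)
              for x in xs) / (n * 2 * eps)

print(naive_grad)  # 0.0 -- the shifting discontinuity is missed entirely
print(fd_grad)     # close to 1, recovered here by finite differences
```

The correction terms of Li et al. (2018) and the reparameterizations discussed later are principled, low-variance replacements for the crude finite-difference probe used above.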
Building on the Reynolds transport theorem, Zhang et al. (2019) generalized this approach into a more general theory of differential transport that also accounts for participating media. (In that
framework, the correction by Li et al. (2018) can also be understood as an application of the Reynolds transport theorem to a simpler 2D integral.) Zhang et al. also studied further sources of
problematic discontinuities such as open boundaries and shading discontinuities and showed how they can also be differentiated without bias.
Gkioulekas et al. (2016) and Azinović et al. (2019) observed that the gradients produced by a differentiable renderer are generally biased unless extra care is taken to decorrelate the forward
and differential computation (i.e., steps 1 and 3)—for example, by using different random seeds.
Manual differentiation of simulation code can be a significant development and maintenance burden. This problem can be addressed using tools for automatic differentiation (AD), in which case
derivatives are obtained by mechanically transforming each step of the forward simulation code. See the excellent book by Griewank and Walther (2008) for a review of AD techniques. A curious aspect
of differentiation is that the computation becomes unusually dynamic and problem-dependent: for example, derivative propagation may only involve a small subset of the program variables, which may not
be known until the user launches the actual optimization.
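The "mechanical transformation" that AD performs can be sketched with dual numbers. For brevity this shows forward mode (reverse mode, the variant emphasized above, propagates derivatives in the opposite order); the tiny class below is an illustration, not a real AD framework:

```python
# Forward-mode AD sketch: every value carries its derivative, and each
# arithmetic operation propagates both via the usual calculus rules.
class Dual:
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val,
                    self.dot * o.val + self.val * o.dot)  # product rule
    __rmul__ = __mul__

def f(x):
    return 3 * x * x + 2 * x + 1      # f'(x) = 6x + 2

x = Dual(4.0, 1.0)                     # seed dx/dx = 1
y = f(x)
print(y.val, y.dot)                    # 57.0 26.0
```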
Mirroring similar developments in the machine learning world, recent work on differentiable rendering has therefore involved combinations of AD with just-in-time (JIT) compilation to embrace the
dynamic nature of this problem and take advantage of optimization opportunities. There are several noteworthy differences between typical machine learning and rendering workloads: the former tend to
be composed of a relatively small number of arithmetically intense operations like matrix multiplications and convolutions, while the latter use vast numbers of simple arithmetic operations. Besides
this difference, ray-tracing operations and polymorphism are ubiquitous in rendering code; polymorphism refers to the property that function calls (e.g., texture evaluation or BSDF sampling) can
indirectly branch to many different parts of a large codebase. These differences have led to tailored AD/JIT frameworks for differentiable rendering.
The Mitsuba 2 system described by Nimier-David et al. (2019) traces the flow of computation in rendering algorithms while applying forward- or reverse-mode AD; the resulting code is then JIT-compiled
into wavefront-style GPU kernels. Later work on the underlying Enoki just-in-time compiler added more flexibility: in addition to wavefront-style execution, the system can also generate megakernels
with reduced memory usage. Polymorphism-aware optimization passes simplify the resulting kernels, which are finally compiled into vectorized machine code that runs on the CPU or GPU.
A fundamental issue of any method based on reverse-mode differentiation (whether using AD or hand-written derivatives) is that the backpropagation step requires access to certain intermediate values
computed by the forward simulation. The sequence of accesses to these values occurs in reverse order compared to the original program execution, which is inconvenient because they must either be
stored or recomputed many times. The intermediate state needed to differentiate a realistic simulation can easily exhaust the available system memory, limiting performance and scalability.
Nimier-David et al. (2020) and Stam (2020) observed that differentiating a light transport simulation can be interpreted as a simulation in its own right, where a differential form of radiance
propagates through the scene. This derivative radiation is “emitted” from the camera, reflected by scene objects, and eventually “received” by scene objects with differentiable parameters. This idea,
termed radiative backpropagation, can drastically improve the scalability limitation mentioned above (the authors report substantial speedups compared to naive AD). Following this idea, costly recording
of program state followed by reverse-mode differentiation can be replaced by a Monte Carlo simulation of the “derivative radiation.” The runtime complexity of the original radiative backpropagation
method is quadratic in the length of the simulated light paths, which can be prohibitive in highly scattering media. Vicini et al. (2021) addressed this flaw and enabled backpropagation in linear
time by exploiting two different flavors of reversibility: the physical reciprocity of light and the mathematical invertibility of deterministic computations in the rendering code.
We previously mentioned how visibility-related discontinuities can bias computed gradients unless precautions are taken. A drawback of the original silhouette edge sampling approach by Li et al. (
2018) was relatively poor scaling with geometric complexity. Zhang et al. (2020) extended differentiable rendering to Veach’s path space formulation, which brings unique benefits in such challenging
situations: analogous to how path space forward-rendering methods open the door to powerful sampling techniques, differential path space methods similarly enable access to previously infeasible ways
of generating silhouette edges. For example, instead of laboriously searching for silhouette edges that are visible from a specific scene location, we can start with any triangle edge in the scene
and simply trace a ray to find suitable scene locations. Zhang et al. (2021b) later extended this approach to a larger path space including volumetric scattering interactions.
Loubet et al. (2019) made the observation that discontinuous integrals themselves are benign: it is the fact that they move with respect to scene parameter perturbations that causes problems under
differentiation. They therefore proposed a reparameterization of all spherical integrals that has the curious property that it moves along with each discontinuity. The integrals are then static in
the new coordinates, which makes differentiation under the integral sign legal.
Bangaru et al. (2020) differentiated the rendering equation and applied the divergence theorem to convert a troublesome boundary integral into a more convenient interior integral, which they
subsequently showed to be equivalent to a reparameterization. They furthermore identified a flaw in Loubet et al.’s method that causes bias in computed gradients and proposed a construction that
finally enables unbiased differentiation of discontinuous integrals.
Differentiating under the integral sign changes the integrand, which means that sampling strategies that were carefully designed for a particular forward computation may no longer be appropriate for
its derivative. Zeltner et al. (2021) investigated the surprisingly large space of differential rendering algorithms that results from differentiating standard constructions like importance sampling
and MIS in different ways (for example, differentiation followed by importance sampling is not the same as importance sampling followed by differentiation). They also proposed a new sampling strategy
specifically designed for the differential transport simulation. In contrast to ordinary rendering integrals, their differentiated counterparts also contain both positive and negative-valued regions,
which means that standard sampling approaches like the inversion method are no longer optimal from the viewpoint of minimizing variance. Zhang et al. (2021a) applied antithetic sampling to reduce
gradient variance involving challenging cases that arise when optimizing the geometry of objects in scenes with glossy interreflection.
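The variance-reduction principle behind antithetic sampling can be shown on a toy scalar integral (an illustration only, far removed from an actual rendering or gradient integral): pairing each uniform sample u with its mirror 1 − u makes the pair's errors partially cancel for monotone integrands.

```python
# Antithetic sampling vs. plain Monte Carlo for a monotone integrand.
import math
import random

def f(u):
    return math.exp(u)  # exact integral over [0, 1] is e - 1 ~ 1.718

random.seed(1)
n = 2000

plain = [f(random.random()) for _ in range(2 * n)]   # 2n evaluations
anti = [0.5 * (f(u) + f(1.0 - u))                    # also 2n evaluations
        for u in (random.random() for _ in range(n))]

def mean(xs):
    return sum(xs) / len(xs)

def var(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

print(round(mean(anti), 2))    # close to 1.72
print(var(anti) < var(plain))  # True: antithetic pairs have far lower variance
```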
While differentiable rendering still remains challenging, fragile, and computationally expensive, steady advances continue to improve its practicality over time, leading to new applications made
possible by this capability.
16.3.2 Machine Learning and Rendering
As noted by Hertzmann (2003) in a prescient early paper, machine learning offers effective approaches to many important problems in computer graphics, including regression and clustering. Yet until
recently, application of ideas from that field was limited. However, just as in other areas of computer science, machine learning and deep neural networks have recently become an important component
of many techniques at the frontiers of rendering research.
This work can be (roughly) organized into three broad categories that are progressively farther afield from the topics discussed in this book:
1. Application of learned data structures, typically based on neural networks, to replace traditional data structures in traditional rendering algorithms.
2. Using machine learning–based algorithms (often deep convolutional neural networks) to improve images generated by traditional rendering algorithms.
3. Directly synthesizing photorealistic images using deep neural networks.
Early work in the first category includes Nowrouzezahrai et al. (2009), who used neural networks to encode spherical harmonic coefficients that represented the reflectance of dynamic objects;
Dachsbacher (2011), who used neural networks to represent inter-object visibility; and Ren et al. (2013), who encoded scenes’ radiance distributions using neural networks.
Previous chapters’ “Further Reading” sections have discussed many techniques based on learned data structures, including approaches that use neural networks to represent complex materials (Rainer et
al. 2019, 2020; Kuznetsov et al. 2021), complex light sources (Zhu et al. 2021), and the scene’s radiance distribution to improve sampling (Müller et al. 2019, 2020, 2021). Many other techniques
based on caching and interpolating radiance in the scene can be viewed through the lens of learned data structures, spanning from Vorba et al.'s (2014) use of Gaussian mixture models all the way back to techniques like irradiance caching (Ward et al. 1988).
One challenge in using learned data structures with traditional rendering algorithms is that the ability to just evaluate a learned function is often not sufficient, since effective Monte Carlo
integration generally requires the ability to draw samples from a matching distribution and to quantify their density. Another challenge is that online learning is often necessary, where the learned
data structure is constructed while rendering proceeds rather than being initialized ahead of time. For interactive rendering of dynamic scenes, incrementally updating learned representations can be
especially beneficial.
More broadly, it may be desirable to represent an entire scene with a neural representation; there is no requirement that the abstractions of meshes, BRDFs, textures, lights, and media be separately
and explicitly encoded. Furthermore, learning the parameters to such representations in inverse rendering applications can be challenging due to the ambiguities noted earlier. At writing, neural
radiance fields (NeRF) (Mildenhall et al. 2020) are seeing widespread adoption as a learned scene representation due to the effectiveness and efficiency of the approach. NeRF is a volumetric
representation that gives radiance and opacity at a given point and viewing direction. Because it is based on volume rendering, it has the additional advantage that it avoids the challenges of
discontinuities in the light transport integral discussed in the previous section.
In rendering, work in the second category—using machine learning to improve conventionally rendered images—began with neural denoising algorithms, which are discussed in the “Further Reading” section
at the end of Chapter 5. These algorithms can be remarkably effective; as with many areas of computer vision, deep convolutional neural networks have rapidly become much more effective at this
problem than previous non-learned techniques.
Figure 16.2: Effectiveness of Modern Neural Denoising Algorithms for Rendering. (a) Noisy image rendered with 32 samples per pixel. (b) Feature buffer with the average surface albedo at each pixel.
(c) Feature buffer with the surface normal at each pixel. (d) Denoised image. (Image denoised with the NVIDIA OptiX 7.3 denoiser.)
Figure 16.2 shows an example of the result of using such a denoiser. Given a noisy image rendered with 32 samples per pixel as well as two auxiliary images that encode the surface albedo and surface
normal, the denoiser is able to produce a noise-free image in a few tens of milliseconds. Given such results, the alternative of paying the computational cost of rendering a clean image by taking
thousands of pixel samples is unappealing; doing so would take much longer, especially given that Monte Carlo error only decreases at a rate of O(1/√n) in the number of samples n. Furthermore, neural denoisers
are usually effective at eliminating the noise from spiky high-variance pixels, which otherwise would require enormous numbers of samples to achieve acceptable error.
Most physically based renderers today are therefore used with denoisers. This leads to an important question: what is the role of the renderer, if its output is to be consumed by a neural network?
Given a denoiser, the renderer’s task is no longer to try to make the most accurate or visually pleasing image for a human observer, but is to generate output that is most easily converted by the
neural network to the desired final representation. This question has deep implications for the design of both renderers and denoisers and is likely to see much attention in coming years. (For an
example of recent work in this area, see the paper by Cho et al. (2021), who improved denoising by incorporating information directly from the paths traced by the renderer and not just from image pixels.)
The question of the renderer’s role is further provoked by neural post-rendering approaches that do much more than denoise images; a recent example is GANcraft, which converts low-fidelity blocky
images of Minecraft scenes to be near-photorealistic (Hao et al. 2021). A space of techniques lies in between this extreme and less intrusive post-processing approaches like denoising: deep shading (
Nalbach et al. 2017) synthesizes expensive effects starting from a cheaply computed set of G-buffers (normals, albedo, etc.). Granskog et al. (2020) improved shading inference using additional
view-independent context extracted from a set of high-quality reference images. More generally, neural style transfer algorithms (Gatys et al. 2016) can be an effective way to achieve a desired
visual style without fully simulating it in a renderer. Providing nuanced artistic control to such approaches remains an open problem, however.
In the third category, a number of researchers have investigated training deep neural networks to encode a full rendering algorithm that goes from a scene description to an image. See Hermosilla et
al. (2019) and Chen et al. (2021) for recent work in this area. Images may also be synthesized without using conventional rendering algorithms at all, but solely from characteristics learned from
real-world images. A recent example of such a generative model is StyleGAN, which was developed by Karras et al. (2018, 2020); it is capable of generating high-resolution and photorealistic images of
a variety of objects, including human faces, cats, cars, and interior scenes. Techniques based on segmentation maps (Chen and Koltun 2017; Park et al. 2019) allow a user to denote that regions of an
image should be of general categories like “sky,” “water,” “mountain,” or “car” and then synthesize a realistic image that follows those categories. See the report by Tewari et al. (2020) for a
comprehensive summary of recent work in such areas.
How To Copy And Paste Exact Formula In Excel | SpreadCheaters
Copying and pasting an exact formula in Excel means copying a cell or range containing a formula and pasting it to another location, such that the formula is replicated exactly as it was originally
written, including any relative or absolute cell references. This allows you to quickly and accurately apply the same calculation to different sets of data or reuse complex formulas without having to
rewrite them every time.
Our dataset comprises a list of items purchased from a grocery store, including the name of the product, the quantity purchased, and the price per unit. We use a basic multiplication formula to
calculate the total cost of each item and a summation formula to calculate the net total of the entire purchase. To maintain consistency and accuracy in our calculations, we have three methods for
copying the exact formulas:
Method 1: Copy the Exact formula of a Single Cell
Step 1 – Double Click on the Cell
• Double-click on the cell containing the formula to be copied, and the formula will be shown in the cell
Step 2 – Add the Dollar symbol
• In the formula, add the dollar symbol ($) before each column letter and row number in every cell reference. This can be done manually, or by pressing the F4 key while the cursor is on each reference
Step 3 – Copy the Formula
• After placing the Dollar symbol select the formula
• And press the CTRL+C keys to copy the formula
Step 4 – Select the cell
• Click on the cell where you want to paste the formula
Step 5 – Paste the Formula
• After selecting the Cell paste the formula by pressing the CTRL+V keys
Step 6 – Press the Enter key
• After pasting the formula, press the Enter key to get the required result
Method 2: Keep the copied Formulas similar using special symbol
Step 1 – Select the Range of Cells
• Select the range of cells whose formulas are to be copied
Step 2 – Click on the Find and Select option
• Click on the Find and Select option in the Editing group of the Home tab and a dropdown menu will appear
Step 3 – Click on the Replace option
• From the drop-down menu, click on the Replace option and a dialog box will appear
Step 4 – Select the Symbols in the Dialog box
• Type the equals sign (=) in the box next to the Find what option
• And type the hash symbol (#) in the Replace with box
Step 5 – Click on the Replace All option
• After entering the symbols, click on the Replace All option at the left bottom of the dialog box and the symbols will be changed.
• This will convert all the formulas to simple expressions which can be copied and pasted easily to any other range.
Step 6 – Copy the Range
• After changing the symbols, select the range and press the CTRL+C keys to copy the range
Step 7 – Select the Cell
• Click on the Cell where you want to paste the formula to select it
Step 8 – Paste the Range
• After selecting the cell, press the CTRL+V keys to paste the copied range
Step 9 – Click on the Find and Select option
• Click on the Find and Select option in the Editing group of the Home tab and a dropdown menu will appear
Step 10 – Click on the Replace option
• From the drop-down menu, click on the Replace option and a dialog box will appear
Step 11 – Replace the Symbols in the Dialog box
• Write the (#) symbol next to the Find What option.
• Write (=) in the Replace with option.
Step 12 – Click on the Replace All option
• After selecting the symbols, click on the Replace All option at the left bottom of the dialog box to get the required result
Step 13 – Select the Range
• Select the previous range that you copied.
Step 14 – Change the Symbol
• Locate the Find and Replace dialog box again.
• Write the (#) in the Find What option
• Write (=) in the Replace with option
• Then click on the Replace All option
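The trick behind Method 2 can be sketched in a few lines of Python (purely illustrative — Excel does this with Find and Replace, not code; the formulas are hypothetical): once the leading = is replaced by #, the cells hold inert text, and text is always copied verbatim, so no cell references get adjusted on paste.

```python
# Why Method 2 works: "=" -> "#" turns formulas into plain text,
# and plain text pastes exactly as written.
formulas = ["=B2*C2", "=B3*C3", "=SUM(D2:D3)"]

# Step 5: Replace All turns "=" into "#", disabling the formulas.
as_text = [f.replace("=", "#", 1) for f in formulas]

# Steps 6-8: copying and pasting text reproduces it exactly; a formula
# paste would instead shift relative references like B2 -> B7.
pasted = list(as_text)

# Steps 9-12: Replace All turns "#" back into "=", reactivating the
# formulas with their original references intact.
restored = [p.replace("#", "=", 1) for p in pasted]

print(restored)  # ['=B2*C2', '=B3*C3', '=SUM(D2:D3)']
```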
Method 3: Copy the range of Formulas using the Show Formula option
Step 1 – Select the Range of Cells
• Select the range of cells whose formulas are to be copied
Step 2 – Click on the Show Formulas option
• After selecting the range of cells, click on the Show Formulas option in the Formula Auditing group of the Formulas tab
• And the formulas will appear in the selected range
Step 3 – Copy the range of Formulas
• To copy the formulas select the range of formula
• And press the CTRL+C
Step 4 – Paste the Range on the Notepad
• After copying the range open the Notepad
• Then paste the range on the notepad by pressing the CTRL+V.
Step 5 – Select and copy the Range
• After typing the range on the Notepad, press CTRL+A to select all the formulas.
• Copy the whole text by pressing the CTRL+C keys. This will copy the formulas as text and the same cell references will be copied.
Step 6 – Select the Cell
• After copying the formulas from Notepad, click on the cell where you want to paste them
Step 7 – Paste the Range
• After selecting the cell, press the CTRL+V keys to paste the copied formulas.
• When we paste the formulas from Notepad, Excel doesn't change the cell references and pastes the formulas exactly as they were written in the Notepad, as shown below.
Step 8 – Click on the Show Formulas option
• Select the range of pasted formula
• And click on the Show Formulas option in the Formula Auditing group of the Formulas tab to get the required results.
The Direct Product of Two Groups
Suppose that $(G, \cdot)$ and $(H, *)$ are two groups. We would like to obtain a new group on the cartesian product $G \times H = \{ (g, h) : g \in G, h \in H \}$. There is a rather natural way to do
this as we prove in the following theorem.
Theorem 1: Let $(G, \cdot)$ and $(H, *)$ be two groups. Define an operation on $G \times H$ by $(g_1, h_1) (g_2, h_2) = (g_1 \cdot g_2, h_1 * h_2)$. Then $G \times H$ with this operation is a group.
• Proof: Let $e_G$ denote the identity in $G$ and let $e_H$ denote the identity in $H$.
• It is trivially clear that $G \times H$ is closed under $\cdot$.
• Let $(g_1, h_1), (g_2, h_2), (g_3, h_3) \in G \times H$. Then by the associativity in the components of the following product we have that:
\begin{align} \quad [(g_1, h_1)(g_2, h_2)] (g_3, h_3) &= (g_1 \cdot g_2, h_1 * h_2)(g_3, h_3) \\ &= ([g_1 \cdot g_2] \cdot g_3, [h_1 * h_2] * h_3) \\ &= (g_1 \cdot [g_2 \cdot g_3], h_1 * [h_2 * h_3]) \\ &= (g_1, h_1)(g_2 \cdot g_3, h_2 * h_3) \\ &= (g_1, h_1)[(g_2, h_2) (g_3, h_3)] \end{align}
• So this operation is associative on $G \times H$.
• Let $e = (e_G, e_H) \in G \times H$. Then for all $(g, h) \in G \times H$ we have that:
\quad e (g, h) = (e_G, e_H)(g, h) = (e_G \cdot g, e_H * h) = (g, h)
\quad (g, h)e = (g, h) (e_G, e_H) = (g \cdot e_G, h * e_H) = (g, h)
• So an identity for $\cdot$ exists in $G \times H$.
• Lastly, for all $(g, h) \in G \times H$ let $(g, h)^{-1} = (g^{-1}, h^{-1})$. Then:
\quad (g, h)(g, h)^{-1} = (g, h)(g^{-1}, h^{-1}) = (g \cdot g^{-1}, h * h^{-1}) = (e_G, e_H) = e
\quad (g, h)^{-1} (g, h) = (g^{-1}, h^{-1})(g, h) = (g^{-1} \cdot g, h^{-1} * h) = (e_G, e_H) = e
• Therefore $(G \times H, \cdot)$ is indeed a group. $\blacksquare$
The group $G \times H$ with the operation above is well-defined for any two groups $(G, \cdot)$ and $(H, *)$ so we give it a special name.
Definition: Let $(G, \cdot)$ and $(H, *)$ be two groups. Then the Direct Product of these two groups is the new group $G \times H$ with the binary operation on $G \times H$ defined for all $(g_1, h_1), (g_2, h_2) \in G \times H$ by $(g_1, h_1)(g_2, h_2) = (g_1 \cdot g_2, h_1 * h_2)$.
Sometimes the term "External Direct Product" is used in the above definition to avoid confusion with the "internal direct product" which we define later.
For example, if we consider the group $(\mathbb{Z}_2, +)$ then the external direct product of this group with itself is the group $\mathbb{Z}_2 \times \mathbb{Z}_2$ whose operation table is given by:
$\cdot$ $(0, 0)$ $(1, 0)$ $(0, 1)$ $(1, 1)$
$(0, 0)$ $(0, 0)$ $(1, 0)$ $(0, 1)$ $(1, 1)$
$(1, 0)$ $(1, 0)$ $(0, 0)$ $(1, 1)$ $(0, 1)$
$(0, 1)$ $(0, 1)$ $(1, 1)$ $(0, 0)$ $(1, 0)$
$(1, 1)$ $(1, 1)$ $(0, 1)$ $(1, 0)$ $(0, 0)$
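The group axioms verified in the proof, and the $\mathbb{Z}_2 \times \mathbb{Z}_2$ table above, can be checked exhaustively in a few lines. A minimal sketch (the helper name `direct_product_op` is illustrative, not from the text):

```python
from itertools import product

def direct_product_op(op_g, op_h):
    """Return the componentwise operation on G x H."""
    return lambda a, b: (op_g(a[0], b[0]), op_h(a[1], b[1]))

# Z_2 under addition mod 2, paired with itself
z2 = [0, 1]
add_mod2 = lambda a, b: (a + b) % 2
op = direct_product_op(add_mod2, add_mod2)

elements = list(product(z2, z2))  # [(0, 0), (0, 1), (1, 0), (1, 1)]

# Identity is (e_G, e_H) = (0, 0)
assert all(op((0, 0), x) == x and op(x, (0, 0)) == x for x in elements)

# In Z_2 x Z_2 every element is its own inverse
assert all(op(x, x) == (0, 0) for x in elements)

# Associativity, checked exhaustively over all triples
assert all(op(op(a, b), c) == op(a, op(b, c))
           for a in elements for b in elements for c in elements)
```

The same `direct_product_op` works for any pair of finite groups given their operations, which mirrors how the theorem's proof only ever uses the axioms of the component groups.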
Learning Scalable Decentralized Controllers for Heterogeneous Robot Swarms With Graph Neural Networks
Issue Section:
Special Papers
Distributed multi-agent systems are becoming increasingly crucial for diverse applications in robotics because of their capacity for scalability, efficiency, robustness, resilience, and the ability
to accomplish complex tasks. Controlling these large-scale swarms by relying on local information is very challenging. Although centralized methods are generally efficient or optimal, they face the
issue of scalability and are often impractical. Given the challenge of finding an efficient decentralized controller that uses only local information to accomplish a global task, we propose a
learning-based approach to decentralized control using supervised learning. Our approach entails training controllers that imitate a centralized controller's behavior but use only local information to
make decisions. The controller is parameterized by aggregation graph neural networks (GNNs) that integrate information from remote neighbors. The problems of segregation and aggregation of a swarm of
heterogeneous agents are explored in 2D and 3D point mass systems as two use cases to illustrate the effectiveness of the proposed framework. The decentralized controller is trained using data from a
centralized (expert) controller derived from the concept of artificial differential potential. Our learned models successfully transfer to actual robot dynamics in physics-based Turtlebot3 robot
swarms in Gazebo/ROS2 simulations and hardware implementation and Crazyflie quadrotor swarms in Pybullet simulations. Our experiments show that our controller performs comparably to the centralized
controller and demonstrates superior performance compared to a local controller. Additionally, we showed that the controller is scalable by analyzing larger teams and diverse groups with up to 100 robots and 10 groups.
1 Introduction
Robots have become prevalent for various applications such as mapping, surveillance, delivery of goods, transportation, and emergency response. As real-world problems become complex, the need for
multi-agent systems (MAS) instead of single-agent systems becomes paramount. In many robotics applications, distributed multi-agent systems are becoming crucial due to their potential for enhancing
efficiency, fault tolerance, scalability, robustness, and resilience. These applications span diverse domains such as space exploration [1], cooperative localization [2], collaborative robots in
production factories [3], search and rescue [4], and traffic control systems [5]. Hence, swarm robotics has been a focus of research for several years. Robot swarms are large numbers of simple robots that collaborate to accomplish a collective objective, drawing inspiration from natural behaviors. Coordination and control of multiple robots has garnered significant interest from
researchers and has been explored for various applications, including collective exploration, collective fault detection, collective transport, task allocation, search and rescue, etc. In nature,
various systems form groups, where agents unite or separate based on intended purpose or unique characteristics [6]. Control of several agents to achieve a desired pattern or shape is critical for
tasks that rely on coordinated action by multi-agent systems such as pattern formation, collective exploration, object clustering, and assembling. Most swarm research works concentrate on robots that
are identical or have similar hardware and software frameworks, known as homogeneous systems [7–9]. Nonetheless, many applications of multi-agent systems necessitate the collaboration of diverse
teams of agents with distinct characteristics to accomplish a specific task.
A key benefit of employing heterogeneous swarms lies in their capacity to tackle a diverse range of tasks, enabling the assignment of certain functions to a subset of the swarm [10]. In applications
like encircling areas with hazardous waste or chemicals, boundary protection, and surveillance, which require the collaboration of multiple robots with diverse sensors or actuation capabilities [11],
the robots need to synchronize their movements in a specific pattern. A viable approach would involve the robots organizing themselves into distinct groups for the subtasks. This behavior is known as
Segregation. A similar behavior called Aggregation occurs when the different robot types are intermixed homogeneously. In such circumstances, incorporating all the necessary actuation and sensory
functionalities within a single robot can be impractical; thus, a heterogeneous team with the appropriate blend of components becomes necessary.
Controlling large-scale swarms can be very challenging. Centralized methods are generally optimal and a good option for a small scale of robots. All the information is received by a central agent
responsible for calculating the actions of each individual agent. In other formulations of centralized controllers, actions are computed for different agents locally but require global information.
This is a bottleneck, not robust or scalable, and often impractical. With the increasing number of agents, decentralized control becomes essential. Each agent is responsible for determining its own
action based on the local information from its neighbors. However, finding an optimal decentralized controller to achieve a global task using local information is a complex problem. Particularly, the
effect of individual action on global behavior is hard to predict and is often governed by emergent behavior. This makes it very hard to solve the inverse problem of finding local agent actions that
can lead to desired global behavior.
This paper focuses on developing solutions for addressing the challenges of the decentralized systems—local information, control, and scalability—by exploiting the power of deep learning and Graph
Neural Networks. We developed a decentralized controller for segregating and aggregating a heterogeneous robotic swarm based on the robot type. Here, the proposed method focuses on the imitation
learning framework and graph neural network parameterization for learning a scalable control policy for holonomic mobile agents moving in a 2D and 3D Euclidean space. The imitation learning framework
aims to replicate a global controller's actions and behaviors. The policy is learned using the DAGGER (Dataset Aggregation) algorithm. Our experiments show that the learned controller, which uses
only local information, delivers performance close to the expert controller that uses global information for decision-making and outperforms a classical local controller. The number of robots was
also scaled up to 100 and 10 groups. Also, the controller was scaled to larger nonholonomic mobile robots and flying robot swarms.
2 Related Works
2.1 Segregation and Aggregation Behaviors.
Reynolds [12] proposed one of the pioneering research works to simulate swarming behaviors, including herding and the flocking of birds. The flocking algorithm incorporates three fundamental driving
rules: alignment, guiding agents toward their average heading; separation, to avoid collisions; and cohesion, steering agents toward their average position. Expanding upon this foundation, numerous
researchers have delved into the development of both centralized and decentralized control methodologies, including artificial potential [13], hydrodynamic control [14], leader–follower [7], and more.
Segregation and Aggregation are behaviors seen in several biological systems and are widely studied phenomena. Segregation is a sorting mechanism in various natural processes observed in numerous
organisms—cells and animals [15]. Segregation seen in nature includes cell division in embryogenesis [15], strain odor recognition causing aggregation and segregation of cockroaches [16], and
tentacle morphogenesis in hydra [17].
Aggregation is an extensively studied behavior found in living organisms such as fish, bacteria, cockroaches, mammals, and arthropods [18,19]. Jeanson et al. [20] developed a model for aggregation in
cockroach larvae and reported that cockroaches leave their clusters and join back with the probabilities correlating to the cluster sizes. Garnier et al. [21] demonstrated aggregation behavior of
twenty homogeneous Alice robots by modeling this behavior found in German cockroaches. Similarly, Ref. [22] analyzed a comparable model showing the combination of locomotion speed and sensing radius
required for aggregation using probabilistic aggregation rules. The aggregation problem is quite challenging because the swarms can form clusters instead of being aggregated [23]. Examining methods
that display this behavior can assist in designing techniques for distributed systems [24].
It has been previously shown that the variations in intercellular adhesiveness result in sorting or aggregation in specific cellular interactions [25,26]. Steinberg [26] developed a Differential
Adhesion Hypothesis that states that the differences in the cohesion work in similar and dissimilar cells can achieve cell segregation or cell aggregation. As such, when a cell population encounters
more potent cohesive forces from cells of the same type than from dissimilar types, an imbalance emerges, which causes segregation. The reverse of this action causes aggregation.
2.2 Decentralized Control of Robot Swarms Using Graph Neural Networks.
Previous research in segregation and aggregation involves classical methods, including convex optimization approach [11], evolutionary methods [27], particle swarm optimization [28], probabilistic
aggregation algorithm [29,30], differential potential [24,25,31], model predictive control [32], etc. Probabilistic aggregation algorithms encounter the difficulties associated with unstable
aggregation, as robots are consistently entering and adapting [33]. In Ref. [34], a genetic algorithm was used to train a neural network for static and dynamic aggregation, but it faced issues of scalability and unstable aggregation and required large onboard computational resources. In Refs. [25] and [35], a differential potential concept was proposed, though it uses global information.
However, large-scale systems frequently encounter challenges related to scalability, communication bottlenecks, and robustness. While centralized solutions, where a central agent determines the
actions of the entire team, may be viable in small-scale scenarios, the demand for decentralized solutions becomes paramount as the system size grows. In decentralized systems, communication among
agents is limited. Each agent must determine its actions using only local information to accomplish a global task. Some works have explored the segregation behavior with varying degrees of
decentralization [36–38]. For example, Edson Filho and Pimenta [36] proposed the use of abstractions and artificial potential functions to segregate the groups, but the proposed method
was not completely decentralized. Also, the work in Ref. [38] proposed a distributed mechanism that combines flocking behaviors, hierarchical abstractions, and collision avoidance based on the
concept called virtual group velocity obstacle (VGVO). However, the robots start in an already segregated state, so the paper focuses on navigation while maintaining segregation. The main limitations
of these previous approaches are that some require global information, analytical solutions, high computational resources, and careful tuning of control parameters. Additionally, most of the work
focuses on the segregation problem in 2D spaces. The aggregation problem and segregation in 3D spaces have not been fully explored in literature. Moreover, while much of the work in literature has
utilized analytical and control-theoretic methods, data-driven and learning-based controls have not been explored for the problems of segregation and aggregation.
As highlighted in the literature, deriving distributed multi-agent controllers has proven to be a complex task [39]. As a result, the challenge of finding these controllers motivates the adoption of
a deep-learning approach. We propose a decentralized controller that trains robust behavioral policies in an imitation learning framework. These policies learn to imitate a centralized controller's
behavior but use only local information to make decisions. Moreover, dimensional growth issues arise as the number of robots increases. Both challenges are effectively addressed by harnessing the
capabilities of aggregation graph neural networks (Aggregation GNNs). Aggregation GNNs are particularly well-suited for distributed systems control due to their inherently local structure [40].
Graph neural networks (GNNs) are neural networks designed to work with data structured in the form of graphs. They act as function approximators that can explicitly model the interaction among the
entities within the graph. Graph Neural Networks function in a fully localized manner, with communication occurring solely between nearby neighbors; thus, they are well suited for developing
decentralized controllers for robot swarms. They are invariant to changes in the order or labeling of the agents within the team. This is particularly important for decentralized systems. GNNs can
also adapt to systems beyond the ones they were initially trained on, making them scalable to larger or smaller sets of robots. Graph Neural Networks are promising architectures for parameterization
in imitation learning [41,42] and RL algorithms [43,44]. In Ref. [45], aggregation GNNs with Multi-Hop communication coupled with imitation learning for flocking, 1-leader flocking, and flocking of
quadrotors in Airsim experiments was proposed. In Ref. [46], Graph Filters, Graph CNN, and Graph RNN were used in imitation learning for flocking and 2D grid path planning. In Ref. [42], the
effectiveness of graph CNN coupled with Policy gradient learning compared with Vanilla Graph Policy Gradient and PPO with a fully connected network for formation flying experiments was shown. In Ref.
[47], linear and nonlinear graph CNN coupled with imitation learning and PPO for coverage and exploration experiments were proposed. Blumenkamp et al. [48] presented the results showing the
application of GNN policies to five robots in ROS2 for navigation through a passage in real environments. We build on these methods for a different class of swarming behaviors—segregation and
aggregation, which is challenging because of the instability that can occur with inaccurate grouping.
3 Main Contributions
As seen from the literature review in Sec. 2.2, most approaches in literature utilize mathematical equations as decentralized control laws that need to be analytically derived to obtain a global
behavior, such as aggregation and segregation behaviors, which this paper focuses on. Obtaining such individual control laws for robots to obtain a desired global behavior is an inverse and hard
problem and requires trials of many potential control laws. The Neural Network-based techniques proposed in this paper provide a data-driven approach that overcomes the challenge of deriving
mathematical control laws. As noted in Sec. 2.2, for the segregation and aggregation behaviors studied in this paper, there is no data-driven and learning-based controller available in literature.
Hence, in contrast to previous approaches to segregative and aggregative behavior in robot swarms, we present an approach to demonstrate these behaviors using decentralized learning-based control.
This approach aims to design local controllers guiding a diverse group of robots to exhibit both Segregation behavior (forming distinct groups) and Aggregation behavior (forming homogeneous
mixtures). The approach utilizes Graph Neural Networks to parameterize the controller and trains it using an imitation learning algorithm. The proposed method was first presented in our earlier work [
49], where the technique was applied to the segregation and aggregation problem for only 2D point mass swarms for up to 50 robots. This paper extends our prior work by: (i) improving the learned
controller by including more training features that include robot velocity and a distance parameter, (ii) scaling the 2D point mass simulation experiments to 100 robots and 10 groups, (iii) applying
the problem in 3D for point mass systems, (iv) extending application from point mass dynamics to nonholonomic systems, (v) extending the prior work to 3D Crazyflie quadrotor systems in Pybullet
simulation environment, and (vi) implementing the controller on Turtlebot3 Burger both in simulations (Gazebo/ROS2) and in real-world experiments. To the best of our knowledge, this is the first
research that employs GNNs with multihop communication, trained through imitation learning, to address segregation and aggregation tasks for both 2D and 3D holonomic point mass robots and actual
robots systems—nonholonomic autonomous ground robots and autonomous aerial robots.
The primary contributions of this paper are:
• We combine an Aggregation Graph Neural Network for time-varying systems with imitation learning for segregating and aggregating heterogeneous robotic swarms in 2D and 3D Euclidean space. This work achieves performance comparable to that of the expert (centralized) controller by aggregating information from remote neighbors and outperforms a local controller.
• We illustrated the scalability and generalization of the model by training it on small teams and groups for segregation and testing its performance by progressively increasing the team size and
groups, reaching up to 100 robots and 10 groups. The proposed model can also generalize to the aggregation problem without further training.
• A transfer framework that transitioned from point mass systems to real robot systems within physics-based simulations was developed. The decentralized controller was implemented on mobile robots
in Gazebo/ROS2 and flying robot swarms in Pybullet.
• Zero-shot transfer of the learned policies to real-world systems (Turtlebot3 robot swarms), showing the efficacy of the policies.
The rest of the paper is structured as follows: Sec. 4 presents the segregation and aggregation problem, along with classical centralized control for point mass systems. Section 5 describes the
optimal decentralized control paradigm and the proposed controller using GNN and Imitation Learning. Section 6 gives details of the actual mobile and flying robots kinematics used for the
experiments. Experimental results and discussion are detailed in Sec. 7 for the holonomic point mass systems and Sec. 8 for swarms of mobile (Turtlebot3) and flying (Crazyflie 2) robots, including the
hardware experiments with the Turtlebots. Section 9 presents the conclusions and future directions.
4 Problem Formulation
This section describes the equations that govern robot swarms' segregative and aggregative behavior shown in Fig. 1. In addition, we defined the expert controller used in generating the data for
imitation learning in both 2D Euclidean space and 3D Euclidean space.
4.1 Point Mass Kinematics Model.
We consider a team of $N$ fully actuated holonomic agents $i \in \{1, \dots, N\}$ navigating within a 2D or 3D Euclidean environment. Each agent is defined by its position $r_i(t)$, its velocity $v_i(t)$, and its acceleration $u_i(t)$ for time steps $t = 0, 1, 2, 3, \dots$, where the discrete time-index $t$ denotes the sequential time instances occurring at the sampling time $T_s$. It is assumed that the acceleration remains constant during the time interval $[tT_s, (t+1)T_s)$. The system's dynamics is expressed as follows:
$r_i(t+1) = r_i(t) + T_s v_i(t) + \frac{T_s^2}{2} u_i(t)$ (1)
$v_i(t+1) = v_i(t) + T_s u_i(t)$ (2)
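Under the stated assumption of constant acceleration over each sampling interval, the discrete-time update is the standard zero-order-hold double integrator. A minimal sketch (variable names are illustrative):

```python
import numpy as np

def step(r, v, u, Ts):
    """One zero-order-hold step of the point mass dynamics:
    position r, velocity v, constant acceleration u over [t*Ts, (t+1)*Ts)."""
    r_next = r + v * Ts + 0.5 * u * Ts**2
    v_next = v + u * Ts
    return r_next, v_next

# Example: one agent in 2D starting at rest with unit acceleration along x
r, v = np.zeros(2), np.zeros(2)
for _ in range(10):
    r, v = step(r, v, np.array([1.0, 0.0]), Ts=0.1)
# After 1 s: v = (1.0, 0), r = (0.5, 0), matching the continuous solution exactly
```

Because the update is exact under zero-order hold, simulation accuracy does not degrade with the sampling time $T_s$ as long as the controller truly holds the acceleration constant over each interval.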
4.2 Decentralized Segregation and Aggregation.
In the segregation and aggregation problem, we assign each robot to a group $N_k$, $k \in \{1, \dots, W\}$, where $W$ is the number of groups. Therefore, the heterogeneous robot swarm is composed of robots within the set of these partitions $\{N_1, N_2, \dots, N_W\}$, $N_k \subseteq \{1, \dots, N\}$. Robots belonging to the identical group are classified as the same type. The neighbors of each robot can be either robots of the same type or of a different type.
For segregation, our objective is to develop a controller capable of sorting diverse types of robots into $W$ distinct groups. This aims to create groups that exclusively consist of agents of the same
type. The team is considered segregated when the average distance between agents of the same type is less than that between agents of different types, as defined by Kumar et al. [24]. The controller
that solves this problem exhibits the segregative behavior. On the other hand, when the average distance between agents of the same type is greater than between agents of the different types, the
team is said to aggregate. This is referred to as the aggregation problem. The aim is to learn a controller that ensures that the swarm forms a homogeneous mixture of robots of different types while
flocking together.
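The segregation criterion just stated (average same-type distance smaller than average cross-type distance, per Kumar et al. [24]) can be computed directly from positions and type labels. A minimal sketch (the function name is illustrative):

```python
import numpy as np

def is_segregated(positions, types):
    """True if the mean pairwise distance within types is smaller than
    the mean pairwise distance across types (Kumar et al. criterion)."""
    n = len(positions)
    same, cross = [], []
    for i in range(n):
        for j in range(i + 1, n):
            d = np.linalg.norm(positions[i] - positions[j])
            (same if types[i] == types[j] else cross).append(d)
    return np.mean(same) < np.mean(cross)

# Two well-separated clusters of two types
pos = np.array([[0, 0], [0.5, 0], [10, 0], [10.5, 0]], dtype=float)
assert is_segregated(pos, [0, 0, 1, 1])      # clusters align with types
assert not is_segregated(pos, [0, 1, 0, 1])  # types intermixed: aggregated
```

Reversing the inequality gives the corresponding aggregation check, since the team is said to aggregate when the same-type average distance exceeds the cross-type one.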
4.3 Classical Centralized Control.
The segregation or aggregation behavior can be achieved using a centralized controller defined in Ref. [31]:
$u_i^*(t) = \left[-\sum_{j \neq i}^{N} \nabla_{r_i} U_{ij}(\|r_i(t) - r_j(t)\|) - \sum_{j \neq i}^{N} v_{ij}(t)\right] - v_i(t)$ (3)
where $u_i^*(t)$ is agent $i$'s control input. $\nabla_{r_i} U_{ij}(\|r_i - r_j\|)$ represents the gradient of the artificial potential function governing the interaction between agents $i$ and $j$. This gradient is taken with respect to the position vector $r_i$ and is evaluated at the positions $r_i(t)$ and $r_j(t)$ at time $t$. The second term in Eq. (3) accounts for damping, encouraging robots to synchronize their velocities with one another, as described by Santos et al. [31].
The artificial potential $U_{ij}$ is a positive function of the relative distance between a pair of agents [31] and is given by
$U_{ij}(\|r_{ij}\|) = \alpha \left(\ln(\|r_{ij}\|) + \frac{d_{ij}}{\|r_{ij}\|}\right)$
and its gradient is given by
$\nabla_{r_i} U_{ij} = \alpha \left(\frac{1}{\|r_{ij}\|} - \frac{d_{ij}}{\|r_{ij}\|^2}\right) \frac{r_{ij}}{\|r_{ij}\|}$
where $\alpha$ represents the scalar controller gain, $d_{ij}$ is the segregation or aggregation parameter, and $r_{ij} = r_i(t) - r_j(t)$. Segregation or aggregation can be achieved based on the local groups:
$d_{ij} = \begin{cases} d_{AA}, & \text{if } i \in N_k \text{ and } j \in N_k \\ d_{AB}, & \text{if } i \in N_k \text{ and } j \notin N_k \end{cases}$
This definition shows that $d_{AA}$ and $d_{AB}$ control the interactions between robots of the same and different types, respectively. Hence, the swarm demonstrates segregative behavior when $d_{AA} < d_{AB}$. The system exhibits aggregative behavior when $d_{AA} > d_{AB}$.
The evaluation metrics are described in the Appendix.
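A sketch of the centralized controller, assuming the logarithmic differential potential of Santos et al. [31] with gradient magnitude $\alpha(1/\|r_{ij}\| - d_{ij}/\|r_{ij}\|^2)$; the gain and distance parameter values are illustrative assumptions, not taken from the text:

```python
import numpy as np

def centralized_control(r, v, types, alpha=1.0, d_AA=2.0, d_AB=6.0):
    """u_i = -sum_j grad U_ij - sum_j (v_i - v_j) - v_i, summed over ALL j != i.
    d_AA < d_AB yields segregation; d_AA > d_AB yields aggregation (assumed form)."""
    n = len(r)
    u = np.zeros_like(r)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            rij = r[i] - r[j]
            dist = np.linalg.norm(rij)
            d = d_AA if types[i] == types[j] else d_AB
            # gradient of U = alpha*(ln d + d_ij/d), along the unit vector rij/dist
            u[i] -= alpha * (1.0 / dist - d / dist**2) * (rij / dist)
            u[i] -= v[i] - v[j]   # velocity-consensus damping term
        u[i] -= v[i]
    return u

# At the pairwise equilibrium distance d_AA the potential gradient vanishes
r = np.array([[0.0, 0.0], [2.0, 0.0]])
u = centralized_control(r, np.zeros_like(r), types=[0, 0])  # u is zero here
```

Note the double loop runs over every pair, which is exactly the global-information requirement that motivates the decentralized controller developed next.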
5 Decentralized Control
Equation (3) represents a centralized/global controller that requires access to the positions and velocities of all agents. However, acquiring such global information is often challenging in
practical situations. Agents usually have access only to local information. This limitation is primarily attributed to the agents' sensing range. Each agent can only communicate with other agents
within its sensing range or communication radius $R$; that is, $\|r_{ij}\| \leq R$. The aim of this paper is to design a decentralized controller that relies solely on local information. Figure 2 describes the
flow of the decentralized controller.
5.1 Communication Graph.
Here, we describe the agents' communication network. At time $t$, agents $i$ and $j$ can establish communication if $\|r_{ij}\| \leq R$, where $R$ denotes the communication radius of the agents. As a result, we construct a communication network graph $G = \{V, E(t)\}$, where $V$ represents the set of agents, and $E(t)$ is the set of edges, defined such that $(i, j) \in E(t)$ if and only if $\|r_{ij}\| \leq R$. Consequently, $j$ can transmit data to $i$ at time $t$, making $j$ a neighbor of $i$. We denote $N_i(t) = \{j \in V : (j, i) \in E(t)\}$ as the set of all agents that agent $i$ can communicate with at time $t$.
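The time-varying communication graph reduces to a binary adjacency matrix recomputed from positions at each step. A minimal sketch (function name illustrative):

```python
import numpy as np

def adjacency(positions, R):
    """Binary adjacency: A[i, j] = 1 iff distinct agents i, j are within radius R."""
    diff = positions[:, None, :] - positions[None, :, :]   # pairwise differences
    dist = np.linalg.norm(diff, axis=-1)                   # pairwise distances
    A = (dist <= R).astype(float)
    np.fill_diagonal(A, 0.0)                               # no self-edges
    return A

pos = np.array([[0.0, 0.0], [1.0, 0.0], [5.0, 0.0]])
A = adjacency(pos, R=2.0)
# agents 0 and 1 are neighbors; agent 2 is isolated at this time step
```

The same matrix later serves as the graph shift operator $S(t)$, so keeping it symmetric and zero-diagonal here matches the sparsity requirement stated in Sec. 5.3.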
5.2 Local Controller.
A viable approach to achieving decentralized control entails formulating a controller that adheres to local communication and sensing constraints. Following the design of the centralized controller defined in Sec. 4.3, the local controller is described thus:
$u_i(t) = \left[-\sum_{j \in N_i(t)} \nabla_{r_i} U_{ij}(\|r_i(t) - r_j(t)\|) - \sum_{j \in N_i(t)} v_{ij}(t)\right] - v_i(t)$ (9)
The local controller in Eq. (9) involves a summation over only the neighbors of agent $i$, i.e., all agents $j \in N_i$. This is different from the centralized controller that sums over all the agents in the team. While the centralized and local controllers have identical stationary points, Eq. (9) typically requires more time for segregation given that the graph remains connected [42]. The following sections introduce a novel learning-based approach to the segregation and aggregation problem. This method relies on an imitation learning algorithm known as Dataset Aggregation (DAgger) and utilizes an Aggregation Graph Neural Network to parameterize the agents' policy. This approach imitates the centralized controller in Eq. (3). We will demonstrate that the GNN-based controller performs similarly to the centralized controller and surpasses the performance of the local controller in Eq. (9).
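The local controller differs from the centralized one only in the index set of the sums. A sketch reusing the assumed logarithmic potential gradient (parameter values illustrative):

```python
import numpy as np

def local_control(r, v, types, R, alpha=1.0, d_AA=2.0, d_AB=6.0):
    """Same form as the centralized law, but the sums run only over
    neighbors within the communication radius R."""
    n = len(r)
    u = np.zeros_like(r)
    for i in range(n):
        for j in range(n):
            rij = r[i] - r[j]
            dist = np.linalg.norm(rij)
            if i == j or dist > R:
                continue  # j is not a neighbor of i at this time step
            d = d_AA if types[i] == types[j] else d_AB
            u[i] -= alpha * (1.0 / dist - d / dist**2) * (rij / dist)
            u[i] -= v[i] - v[j]
        u[i] -= v[i]
    return u
```

When an agent has no neighbors, only the final damping term remains, which is why a disconnected graph can stall segregation: isolated agents merely brake rather than move toward their group.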
5.3 Delayed Aggregation Graph Neural Network.
In the context of graph theory and signal processing on graphs, a graph signal, denoted by $x: V \to \mathbb{R}$, is a function that assigns a scalar value to each node in a graph. Each node is represented by a feature vector $x_i(t) \in \mathbb{R}^F$, where $i \in V = \{1, \dots, N\}$ and $N$ is the number of nodes in the graph. In our application, each agent is a node. Hence, the set of all agent states is denoted by $X(t) \in \mathbb{R}^{N \times F}$, where each agent is described by an $F$-dimensional feature vector $x_i(t) \in \mathbb{R}^F$, i.e., the rows of $X(t)$.
The communication between agents is defined using a graph shift operator (GSO) matrix, $S(t) \in \mathbb{R}^{N \times N}$. There are several types of shift operators used in literature, including the Laplacian and the adjacency matrix. In this paper, we opt for the binary adjacency matrix as the support $S(t)$, which adheres to the sparsity of the graph, i.e., $[S(t)]_{ij}$ is 1 if and only if $(j, i) \in E(t)$. The linear operation $S(t)X(t)$ serves as an operator that shifts the information within the graph signal $X(t)$, producing another graph signal. The computation of its $(i, f)$-th entry is expressed as
$[S(t)X(t)]_{if} = \sum_{j=1}^{N} [S(t)]_{ij} [X(t)]_{jf} = \sum_{j \in N_i \cup \{i\}} s_{ij}(t) x_j^f(t)$ (10)
Equation (10) indicates that $S(t)X(t)$ functions as a distributed and local operator. This is evident from the fact that each node undergoes updates based solely on local interactions with its
neighboring nodes. The reliance on local interactions is a fundamental feature in the development of controllers for decentralized systems, and it is frequently harnessed in graph-based approaches
for information processing and control.
Information exchanges between nodes occur at time $t$, which is the exchange clock. These exchanges introduce a unit time delay, thus creating a delayed information structure:
$X_i(t) = \bigcup_{k=0}^{K-1} \{x_j(t-k) : j \in N_i^k(t)\}$ (11)
where $N_i^k(t)$ is the set of nodes $k$-hops away from node $i$, defined recursively as $N_i^k(t) = \{j' \in N_{j}^{k-1}(t-1), j \in N_i(t)\}$ with $N_i^1(t) = N_i(t)$ and $N_i^0 = \{i\}$. We denote $X(t) = \{X_i(t)\}_{i=1,\dots,N}$ as the set of delayed information histories $X_i(t)$ of all nodes. This structure shows that the information available to each node at a given time $t$ is past and delayed information from neighbors that are $k$-hops away.
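The delayed structure can be maintained with one buffer per agent: at each exchange tick an agent keeps its fresh local state in slot 0 and fills slot $k$ by shifting its neighbors' slot $k-1$ from the previous tick, so slot $k$ carries $k$-hop data that is $k$ ticks old. A sketch (array layout is an illustrative choice):

```python
import numpy as np

def aggregate_step(S, Y_prev, x_now):
    """One exchange-clock tick of delayed aggregation.
    S: (N, N) shift operator at the current time.
    Y_prev: (N, K, F) aggregation sequence from the previous tick.
    x_now: (N, F) current local features.
    Entry k of the result holds neighbors' (k-1)-hop data from the
    previous tick, i.e., k-hop information delayed by k ticks."""
    N, K, F = Y_prev.shape
    Y = np.zeros_like(Y_prev)
    Y[:, 0, :] = x_now                        # 0-hop: own fresh state
    for k in range(1, K):
        Y[:, k, :] = S @ Y_prev[:, k - 1, :]  # shift neighbors' stale data
    return Y
```

Each tick uses only one sparse multiplication by $S(t)$, so the update is fully local: row $i$ of each product touches only agent $i$'s current neighbors.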
The challenge in decentralized control learning is to devise a control policy that accommodates the delayed local information structure as outlined in Eq. (11). Consequently, a decentralized
controller must effectively handle historical information. It is well-established in the literature that achieving optimal decentralized control is very challenging, even in scenarios involving
linear quadratic regulators, which have relatively straightforward centralized solutions [39,50].
In contrast to centralized controllers, the intricacies associated with finding effective decentralized controllers underscore the importance of leveraging learning techniques. This paper hinges on
the utilization of graph convolutional neural networks (GCNNs) in conjunction with imitation learning. The choice of GCNNs is justified by their alignment with the local information structure
inherent in decentralized control. Imitation Learning is chosen for its relative simplicity in developing decentralized controllers by replicating the behavior observed using a centralized
Formally, we want to find a parameterized policy $\pi_\Theta$ that maps the decentralized and delayed information history $X_i(t)$ to a local action $u_i(t) = \pi_\Theta(X_i(t))$, and define a loss function $L(\pi_\Theta(X_i(t)), u_i^*(t))$ to determine the difference between the local action and the centralized policy $u_i^*(t)$. This reduces to an optimization problem to find the tensor of network parameters $\Theta$:
$\Theta^* = \arg\min_\Theta \sum_{i=1}^{N} L(\pi_\Theta(X_i(t)), u_i^*(t))$
5.4 Graph Convolutional Neural Network.
Graph convolutional neural networks (GCNNs) are composed of consecutive layers. Each layer in a GCNN applies graph filters and nonlinear activation functions, enabling the network to learn hierarchical representations of graph-structured data. They are distributed in nature, which makes them well-suited for parameterizing the decentralized policy $\pi_\Theta$. We define a time-varying aggregation sequence $Y_i(t)$ using multihop communication and the delayed information structure defined in Eq. (11). This sequence includes the agents' neighborhoods through $K$ repeated data exchanges with their immediate neighbors. The $k$-th element in the sequence is the delayed state information aggregated from $k$-hop neighbors. We represent the $i$-th row of the aggregated matrix as the state at node $i$, obtained locally through $k$ exchanges with neighbors. An essential characteristic of the aggregation sequence is its regular temporal structure, which consists of nested aggregation neighborhoods. Subsequently, we can apply a standard convolutional neural network (CNN) with a depth of $L$, effectively mapping the local information to an action. Thus each layer $l$ is shown below:
$y_i^{(l)} = \sigma^{(l)}\left(\Theta^{(l)} y_i^{(l-1)}\right)$
where $\sigma^{(l)}$ is an activation function and $\Theta^{(l)}$ comprises a set of support filters with learnable parameters. The output of the final layer corresponds to the decentralized control action at node $i$, at time $t$.
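Because the aggregation sequence is already local to each agent, the readout is an ordinary shared-weight network applied per node. A toy two-layer sketch (layer sizes and the `tanh` choice are illustrative assumptions):

```python
import numpy as np

def gcnn_readout(Y_i, weights):
    """Apply the same small network at every node: flatten the (K, F)
    aggregation sequence and pass it through layers with a nonlinearity
    between them; the final layer's output is the node's action."""
    h = Y_i.reshape(-1)                  # local input: K*F aggregated features
    for l, (W, b) in enumerate(weights):
        h = W @ h + b
        if l < len(weights) - 1:
            h = np.tanh(h)               # the sigma^{(l)} nonlinearity
    return h                             # decentralized action u_i

rng = np.random.default_rng(0)
K, F, hidden, act_dim = 3, 4, 16, 2
weights = [(rng.standard_normal((hidden, K * F)), np.zeros(hidden)),
           (rng.standard_normal((act_dim, hidden)), np.zeros(act_dim))]
u = gcnn_readout(rng.standard_normal((K, F)), weights)  # one agent's action
```

The same `weights` list is evaluated at every node, which is exactly the weight-sharing property that makes the learned policy independent of the team size.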
5.5 Imitation Learning Framework.
Training the neural network for decentralized control requires determining the appropriate values for $\Theta$. In imitation learning, the training process involves acquiring a training set $\mathcal{T} = \{(X(t), U^*(t))\}$ representing sample trajectories using the centralized controller. Here, $X(t)$ denotes the time-series observations of the agents, and $U^*(t)$ represents the set of actions generated for the agents using the centralized controller. Using supervised learning, the objective is to minimize the loss function over the training set:
$\Theta^* = \arg\min_\Theta \sum_{(X, U^*) \in \mathcal{T}} L(\Pi_\Theta(X), U^*)$
where $\Pi_\Theta(X)$ gathers the output of the final layer at each node $i$.
It is important to emphasize that $\Theta$ is uniform across all nodes; $\Theta$ is not node or time dependent. As a result, the learned policy is independent of the size and structure of the network, which facilitates modularity, scalability to any number of agents, and transfer learning.
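The DAgger procedure used here interleaves rollouts under the learner with expert relabeling. A minimal sketch of that loop; the `env`, `expert`, and `learner` interfaces are hypothetical placeholders, not APIs from the text:

```python
def dagger_train(env, expert, learner, n_iters=10, horizon=100):
    """Dataset Aggregation: roll out the current learner, label every
    visited state with the expert's (centralized) action, aggregate the
    labeled pairs, and refit the learner on the growing dataset."""
    dataset = []
    for _ in range(n_iters):
        state = env.reset()
        for _ in range(horizon):
            dataset.append((state, expert(state)))  # expert relabels the state
            state = env.step(learner(state))        # but the learner drives
        learner.fit(dataset)                        # supervised update on all data
    return learner
```

Letting the learner drive while the expert labels is the key difference from plain behavior cloning: the dataset covers the states the learned policy actually visits, which mitigates compounding errors.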
5.6 State Representation.
To train the network, we need to define the input state vector. Both the centralized and local controllers exhibit nonlinear characteristics in the agents' states. Moreover, aggregation graph neural networks (AGNNs) do not inherently allow nonlinear operations before aggregation [ ], so the positions and velocities of the agents alone are not sufficient to describe the system. Therefore, to represent nonlinearity in the AGNN controller, we extracted important features from the states of the agents that can be used in aggregation. These features are computed locally and depend on the relative distance and velocity between agents. The local controller computes the same features to ensure a fair comparison. The input to the GNN is the same for both tasks and is defined as follows:
We added a relative distance to a goal position $r_g$, given that the robot can see the goal, i.e., the goal is within the robot's communication/sensing radius $R$. This is needed to stabilize training and ensure the robot segregates or aggregates around a particular location.
$d_{g_i} = \begin{cases} r_i(t) - r_g, & \text{if } \lVert r_i(t) - r_g \rVert < R \\ 0, & \text{if } \lVert r_i(t) - r_g \rVert > R \end{cases}$
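The goal-distance feature just defined can be computed locally as follows; this is a direct transcription of the case distinction above, with an illustrative function name:

```python
import math

# Goal-distance feature: the relative offset r_i(t) - r_g when the goal lies
# within the sensing radius R, and zero otherwise. Name is illustrative.

def goal_feature(r_i, r_g, R):
    diff = [a - b for a, b in zip(r_i, r_g)]
    if math.sqrt(sum(d * d for d in diff)) < R:
        return diff
    return [0.0] * len(diff)
```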
6 Actual Robot Kinematics
The holonomic ideal point mass model aids in testing different scenarios and parameters for the task and provides a benchmark we can build on. However, we are interested in achieving the tasks in real-world robotic systems under the constraints of delayed observations and slower control rates. In this section, we designed a framework to transfer the GNN-based policies for the point mass model to physics-based nonholonomic mobile robots (the Turtlebot3 burger platform in ROS2, both in simulation and in real experiments) and to a swarm of quadrotors in the PyBullet physics-based simulator using the Crazyflie 2 model, without further training.
6.1 Decentralized Control for Nonholonomic Robot Swarms.
The point mass system is holonomic. Nonetheless, the GNN controller can be used for a nonholonomic system such as the 2D differential drive model. We follow the feedback linearization approach
designed for expressing double integrator dynamics in differential drive robots described in Ref. [52].
6.1.1 Kinematic Modeling and Feedback Linearization Approach.
We consider a team of
differential drive mobile agents navigating within a 2D Euclidean space. For brevity, we omit the robot and group indices. Each agent's dynamics are defined as follows:
$\dot{x} = v\cos\theta, \quad \dot{y} = v\sin\theta, \quad \dot{\theta} = \omega$
where $(x, y)$, $θ$, and $ω$ are the position, heading, and heading rate of the robot, respectively, and $v$ is the linear (forward) speed of the center of the robot. The point mass actuation consists of accelerations; therefore, we differentiate Eq. , resulting in
$\ddot{x} = a\cos\theta - \omega v\sin\theta, \quad \ddot{y} = a\sin\theta + \omega v\cos\theta$
where $a = \dot{v}$ is the linear acceleration of the center of the robot.
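For reference, an explicit Euler step of the differential drive kinematics above might look like this; the function name and the integration scheme are assumptions for the sketch:

```python
import math

# Illustrative explicit Euler step of the unicycle/differential-drive
# kinematics: x' = v cos(theta), y' = v sin(theta), theta' = omega.

def unicycle_step(x, y, theta, v, omega, dt):
    return (x + v * math.cos(theta) * dt,
            y + v * math.sin(theta) * dt,
            theta + omega * dt)
```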
Given a point described by the following equation at a distance $d_a$ from the center of the robot, as seen in Fig.
$x_a = d_a\cos\theta + x, \quad y_a = d_a\sin\theta + y$
and differentiating Eq.
, we have
$\dot{x}_a = \dot{x} - d_a\omega\sin\theta, \quad \dot{y}_a = \dot{y} + d_a\omega\cos\theta$
$\ddot{x}_a = \ddot{x} - d_a\Upsilon\sin\theta - d_a\omega^2\cos\theta, \quad \ddot{y}_a = \ddot{y} + d_a\Upsilon\cos\theta - d_a\omega^2\sin\theta$
where $\Upsilon = \dot{\omega}$ is the angular acceleration. Substituting Eqs. , we have, in matrix form,
$\begin{bmatrix} \ddot{x}_a \\ \ddot{y}_a \end{bmatrix} = \begin{bmatrix} \cos\theta & -d_a\sin\theta \\ \sin\theta & d_a\cos\theta \end{bmatrix}\begin{bmatrix} a \\ \Upsilon \end{bmatrix} + C$
$C = \begin{bmatrix} -\omega v\sin\theta - d_a\omega^2\cos\theta \\ \omega v\cos\theta - d_a\omega^2\sin\theta \end{bmatrix}$
$\begin{bmatrix} a \\ \Upsilon \end{bmatrix} = A^{-1}\left\{-C + u_v\right\}$
where $u_v$ is the control input for the differential drive model; this comes from the point mass acceleration with the defined parameter
We then integrate Eq. (27) to get the linear velocity $v$ and angular velocity $ω$ to pass into the differential drive model in Eq. (20).
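Putting the pieces together, a minimal sketch of this feedback-linearization step might look like the following, assuming a point-mass acceleration command $u = (u_x, u_y)$ for the offset point and using the closed-form inverse with $\det A = d_a$; variable names are illustrative:

```python
import math

# Sketch of the feedback-linearization step: given heading theta, current
# speeds (v, omega), the point-mass acceleration command u for the offset
# point, and the offset d_a, solve [a, Upsilon] = A^{-1}(-C + u) in closed
# form, then Euler-integrate to recover (v, omega) for the drive model.

def linearized_controls(theta, v, omega, u, d_a):
    c, s = math.cos(theta), math.sin(theta)
    C = (-omega * v * s - d_a * omega ** 2 * c,
          omega * v * c - d_a * omega ** 2 * s)
    rhs = (u[0] - C[0], u[1] - C[1])
    # A = [[c, -d_a s], [s, d_a c]] => A^{-1} = (1/d_a)[[d_a c, d_a s], [-s, c]]
    a = c * rhs[0] + s * rhs[1]
    upsilon = (-s * rhs[0] + c * rhs[1]) / d_a
    return a, upsilon

def integrate(v, omega, a, upsilon, dt):
    # explicit Euler step from accelerations to (v, omega)
    return v + a * dt, omega + upsilon * dt
```

Note the singularity at $d_a = 0$: the offset point must lie strictly ahead of the wheel axis for $A$ to be invertible.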
6.2 Decentralized Control for Quadrotors.
In this section, we present the transfer of the point mass-trained GNN to a swarm of quadrotors in Pybullet simulation. Figure 4 shows the framework for transferring the trained GNN to control a
swarm of quadrotors. The position and velocity of the swarm are passed from the Pybullet environment into the point mass gym environment, where the local features are calculated and sent to the GNN
controller to calculate the actions and predict the next state. The current and next states of the quadrotor swarm are then passed into the gym's PID controller, which drives the swarm to the desired state. Then, the current state is passed back into the point mass environment, and this loop continues until the task is achieved. The dynamics and PID control equations are described in Refs. [53–55].
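The transfer loop just described can be sketched as follows. Every function passed in is a placeholder for the corresponding real component (simulator state query, decentralized GNN policy, PID tracker), and the next-state prediction is simplified to a double-integrator step on 1D (position, velocity) pairs per robot; the actual system uses full 3D states:

```python
# Illustrative control loop for the transfer pipeline: simulator state ->
# point-mass features -> GNN action -> predicted next state -> PID tracking.

def run_transfer_loop(get_swarm_state, gnn_policy, pid_track, steps, dt=0.02):
    state = get_swarm_state()                 # positions/velocities from sim
    for _ in range(steps):
        action = gnn_policy(state)            # decentralized accelerations
        # point-mass (double-integrator) prediction of the next state
        next_state = [(p + v * dt + 0.5 * a * dt ** 2, v + a * dt)
                      for (p, v), a in zip(state, action)]
        pid_track(state, next_state)          # PID drives robots to the target
        state = get_swarm_state()             # read back the resulting state
    return state
```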
7 Point Mass Results and Discussion
We ran a series of experiments to study the performance and scalability of our approach. The experimental results are presented and evaluated using the intersection area of convex hulls metric $M(r,N)$ in Eq. (A1), the number of clusters formed for segregation, and the average distances for same and different groups for aggregation (see Appendix for details). We illustrate the scalability of the controller by increasing the swarm size and discuss the performance comparison between the traditional controllers (centralized and local) and the GNN controller.
7.1 Experiments.
For the 2D and 3D segregation tasks, the GNN controller was trained on 21 robots and 3 groups. We tested the learned controller on {(10, 2); (20, 5); (21, 7); (30, 5); (50, 5); (100, 5)} in 2D and {(20, 5); (21, 7); (30, 5)} in 3D (written in the format (Robots, Groups)) for 40 experiments with random initial locations. Without further training, we transferred the segregative GNN controller to a different swarming behavior, aggregation. All velocities were set to zero at the initial state, and positions were uniformly distributed independently of the robot's group. For training, we set $d_{AA}$ and $d_{AB}$ to 3 and 5, respectively. The communication radius $R$ and number of exchanges $K$ were set to 6 and 3 for the state vector. However, we ran the test scenarios using $d_{AA}=5$, $d_{AB}=10$, $R=12$, and $K=3$. For all the experiments, the goal was randomized between $[-1, 1]$ and the maximum acceleration was set to 1. We compared the performance of all the controllers from the same initial configuration: centralized (described in Sec. 4.3), local (described in Sec. 5.2), and learned (described in Sec. 5.3). We collected 400 trajectories, each of 500 steps, using the centralized controller with $α=3$ in Sec. 4.3 for training. The GNN network is structured as a fully connected feed-forward neural network with a single hidden layer comprising 64 neurons and a Tanh activation function. Implemented within the PyTorch framework, we trained the network using an Adam optimizer, a mean squared error (MSE) cost function, and a learning rate of $5×10^{-5}$. To address challenges in behavior cloning for imitation, we implemented the Dataset Aggregation (DAgger) algorithm [56]. The algorithm selected the GNN policy with probability $1−β$ and followed the expert policy with probability $β$, where $β$ decays by a factor of 0.993 after each trajectory to a minimum of 0.5.
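The DAgger mixing and decay schedule can be sketched as follows; function names are illustrative:

```python
import random

# DAgger-style mixing: follow the expert with probability beta and the
# learned GNN policy otherwise; beta decays by 0.993 after each trajectory
# down to a floor of 0.5, matching the schedule described above.

def choose_action(expert, learner, state, beta, rng=random):
    return expert(state) if rng.random() < beta else learner(state)

def decay_beta(beta, factor=0.993, floor=0.5):
    return max(floor, beta * factor)
```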
7.2 Discussion.
Successful Imitation: We studied the effect of the nature of information sharing in swarms of robots. In Fig. 5, a specific instance of the temporal progression of the segregation task is
depicted, using three controllers for 100 robots distributed across 5 groups in 2D. Additionally, Fig. 6 shows the same for 30 robots and 5 groups in 3D. In both scenarios, it is evident that the
local controller struggles to segregate the robots. This challenge is particularly notable when the robots begin in a configuration where not all are interconnected, but each has at least one
connection. In such cases, distinct clusters form, each containing robots of the same type. This outcome is expected, considering that the robots are not attracted to their respective groups due to a
lack of awareness of their existence. In contrast, the learned controller demonstrates the capability to effectively segregate the robots into groups of varying sizes. These results support the
rationale behind utilizing a graph neural network framework, as it enables the dissemination of information throughout the network, ultimately facilitating the successful completion of the task.
Comparison With Classical Approaches: We compared the GNN controller with the traditional controllers. Figures 7–10 show all the controllers' mean and confidence interval of the segregation task
metrics for 40 trials for the 2D and 3D experiments. As time progresses in all scenarios with the learned controller, the mean and standard deviation of the area approach zero, while the number of
clusters converges to the actual number of groups. The learned controller is able to fully segregate the system. Likewise, experiments with a smaller number of robot groups exhibit faster convergence
compared to those with a larger number of groups. However, the local controller becomes trapped in a local minimum and is not able to achieve segregation in the system.
We show the comparative results between the centralized controller in Fig. 8 and our controller in Fig. 10, and in all the test cases, both exhibit comparable performance, indicating that our
controller is efficient. Also, it can be seen from the experiments that it is easier and faster for the 3D case to segregate than its 2D counterparts. This is because the robots have more
degree-of-freedom to maneuver in new directions. The GNN utilizes local information and still achieves performance comparable to the centralized controller, which relies on global information. This
highlights its capability for learning and generalization.
Scalability Study: We examined a scenario where the number of robots was fixed at 50, and we varied the number of groups. Figure 8 shows that as the number of groups increases, the agents take a
longer time to segregate. More specifically, when the number of groups was 10, it took more than 3500 steps to segregate, while for groups 2 and 5, it took less than that. All the experiments show
that the policies, though trained on 21 robots, can scale up to 100 robots in 2D and handle groups it has not seen before. Similarly, we see the same behavior in the 3D case.
Generalization to Aggregation Task: Another interesting aspect of our GNN controller is that it can generate other behaviors besides segregation. We extended the trained segregation GNN to the aggregation task by changing $d_{AA}$ to 10 and $d_{AB}$ to 5. Without any further training, our segregation controller was able to aggregate the swarm with just this change of parameters, which is consistent with the theory of intercellular adhesivity. A particular instance of the time series progression of the aggregative behavior of the system under the centralized and GNN controllers is shown for 100 robots and 5 groups in 2D and for 30 robots and 5 groups in 3D in Figs. 11 and 12, respectively.
Figure 13 shows the plot of the average distances between agents of the same type and agents of different types. Although the trajectory evolution does not look the same for the centralized and GNN controllers, the average-distance plots show that the swarm is aggregated. For example, in the case of 100 robots and 5 groups in 2D, the average distances $r_{avg}^{AA}$ and $r_{avg}^{AB}$ (see Appendix A2 for the definition of these quantities) were found to be 4.29 and 4.13, respectively. For the 3D case with 30 robots and 5 groups, $r_{avg}^{AA}$ and $r_{avg}^{AB}$ were found to be 5.81 and 5.11, respectively. Both cases clearly show that the swarm was aggregated based on the condition in Sec. A2. The difference in the final trajectory shape depends on the communication radius.
Effect of Communication Radius: We analyzed the effect of the communication radius $R$ on the segregation and aggregation behavior by varying the sensing radius of each robot. We chose the values $R \in \{4, 6, 8, 10, 12\}$ m. We fixed the swarm at 20 robots and 5 groups and kept other parameters constant with $d_{AA}=5$ and $d_{AB}=10$. From Fig. 14, even in the case of limited communication, the GNN learns efficient control strategies that enable the swarm to segregate down to a communication radius of $R=8$ m. This shows the GNN controller's ability to propagate information through the network even under limited communication. However, as the communication radius reduces to $R=6$ m or below, the network struggles to fully segregate (shown by a larger number of clusters than the number of groups and a larger mean intersection area). This is because the system is expected to segregate at a separation distance $d_{AA}=5$ m between agents of the same type. Hence, if the communication radius is 6 m or less, there is a good chance of having few, or even no, agents within the communication radius. As a result, the figure shows that the system started from about 10 clusters and converged to only about 7 clusters for $R=6$ m. Indeed, the ability to segregate completely depends on parameters such as the number of hops, $R$, $d_{AA}$, and $d_{AB}$.
As seen in Table 1 for the aggregation behavior, we started at a segregated state with $r_{avg}^{AA}=2.17$, which is lower than $r_{avg}^{AB}=9.44$. The goal for aggregation is to ensure that $r_{avg}^{AA}$ is greater than $r_{avg}^{AB}$. Even with limited communication, our GNN controller can complete the task, demonstrating its performance under limited communication. It may be noted that the system achieves aggregation successfully even with low communication radii (in contrast to the segregation behavior, which fails when the radius becomes lower than 6 m). This is because the robots first move closer to each other, thereby reducing their separation distance and increasing their level of communication. It is also noted that at a communication radius of $R=8$ m or higher, the system converges to the same average distances.
Table 1: Effect of the communication radius on the 2D GNN aggregation controller

Initial values: $r_{avg}^{AA} = 2.17$, $r_{avg}^{AB} = 9.44$

Communication radius, R (m)    $r_{avg}^{AA}$ (final)    $r_{avg}^{AB}$ (final)
4                              4.69                      4.05
6                              4.67                      3.93
8                              4.33                      3.63
10                             4.33                      3.63
12                             4.33                      3.63
8 Actual Robots Kinematics Results
This section presents the simulation and hardware experiments for the Turtlebot3 and Crazyflie2 robots. Table 2 lists the parameters used for the experiments.
8.1 Gazebo Simulations With ROS2.
To evaluate the feasibility of transferring the point mass GNN to a physics-based differential drive robot, we tested the trained policy with Turtlebot3 burger robots: 10, 20, and 50 robots with 2, 5, and 5 groups, respectively. As seen in Fig. 15, we designed an OpenAI Gym environment, gym_gazeboswarm, which carries out all the communication with Gazebo using ROS2 and interfaces with the GNN policy to control the swarm. We created a multi-Turtlebot3 Gazebo environment using ROS2, in which each robot has its own node. The gym environment receives the position of each robot, and the local features are calculated and sent to the GNN to obtain acceleration commands, which are then converted into the actions ($v, ω$) of each robot using the method described in Sec. 6.1.1. These actions are published on each robot's cmd_vel topic to drive the swarm.
8.2 Pybullet Physics-Based Simulation Experiments.
We used an OpenAI Gym environment based on PyBullet introduced by Panerati et al. [57] to evaluate the performance of our GNN controller in a realistic 3D environment. The environment is parallelizable and can be run with a GUI or in headless mode, with or without a GPU. We chose this simulator because it supports realistic aerodynamic effects such as drag, ground effect, and downwash, as well as a suite of control algorithms. As a result, it gives us a testbed close to the real-world system for testing our algorithm. The dynamics of the quadrotors are modeled on the Crazyflie 2.
We aim to use the GNN controller to control the quadrotor swarms in the simulator. The trained GNN is robust to any value of $d_{AA}$ and $d_{AB}$; hence, we adapt these parameters to suit the PyBullet simulator. We initialized the swarm with varying yaw values, while roll and pitch were set to 0. We consider two types of experiments: fixed height simulation using the trained 2D point mass GNN and varying height simulation using the trained 3D point mass GNN.
8.3 Fixed Height Results.
The fixed height simulations come from the fact that the 2D point mass model does not have a z-component. Hence, we set z to 1 m and ran the 2D GNN to predict the next states in the x- and y-directions.
8.4 Varying Height Results.
Here, we implemented the 3D-trained GNN to the Crazyflie simulation. Using the predicted next state from the point mass 3D gym environment for segregation and aggregation task, we were able to
achieve the same results as in the point mass model case for a swarm of Crazyflie quadrotors.
8.5 Real Robots.
We demonstrated a zero-shot transfer of the policies learned in the Gazebo simulation to real Turtlebot3 burger robots: 8 robots and 2 groups, and 9 robots and 3 groups, with $d_{AA}=2$ and $d_{AB}=4$. We used a Qualisys motion capture system with 18 cameras that provides position updates at 100 Hz. With a multi-agent extended Kalman filter implemented in ROS to reduce noise, we obtained the position and velocity of each agent at 100 Hz. These updates are then used in the GNN controller environment to calculate the control for each robot, as seen in Fig. 16.
8.6 Discussion.
We reported the mean intersection area and number of clusters metrics for the Turtlebot3 and Crazyflie 2 (fixed and varying heights) experiments.
• Successful Imitation and Transfer: Figures 17, 18, and 19 show particular instances of the Turtlebot3 and Crazyflie 2 time series evolution of the swarm trajectory for the segregation and aggregation tasks, respectively. For the Turtlebot3 burger robots, we also report the mean intersection area, number of clusters, and average distances between agents in the same and different groups in Fig. 20. For the Crazyflie robots, we report the mean intersection area and number-of-clusters metrics for the fixed and varying height segregation experiments in Figs. 21 and 22, respectively.
The results show our GNN controller can successfully transfer to a physics-based quadrotor swarm with different numbers of robots and groups, communication radii, $d_{AA}$, and $d_{AB}$.
• Zero-Shot Transfer to Hardware: Figures 23 and 24 show the initial and final configurations of the robots' trajectories and the metrics for the segregation and aggregation tasks. Even in the presence of noise and uneven terrain, the robots were still able to perform the tasks with the GNN controller in a decentralized fashion. This shows the efficacy of our controller in transferring successfully to real-world applications.
9 Conclusions
Controlling large-scale dynamical systems in distributed settings poses a significant challenge in the search for optimal and efficient decentralized controllers. This paper uses learned heuristics
to address this challenge for agents in 2D and 3D Euclidean space. Our approach involves the design of these controllers parameterized by Aggregation Graph Neural Networks, incorporating information
from remote neighbors within an imitation learning framework. These controllers learn to imitate the behavior of an efficient artificial differential potential-based centralized controller, utilizing
only local information to make decisions.
Our results demonstrate that large-scale point mass systems, mobile robots, and quadrotors can perform segregation tasks across initial configurations where the swarm is not fully connected, with varied limited communication radii and separation distances. Our policies, trained with 21 robots using a point mass model, generalize to larger swarms of up to 100 robots and to the aggregation task without further training. Through varied experiments, we illustrated the controller's capability to be deployed in larger swarms.
Furthermore, we compared our controller with the centralized controller and a local controller that only utilized information from its immediate neighbors. The results showed that, under the local controller, the system did not converge to a segregated state; instead, multiple clusters of robots of the same type persisted. Our controller resolved this issue, demonstrating its superior efficacy over the local controller and comparable performance to the centralized controller. This affirms the significance of multihop information in enhancing overall performance. The GNN-based controller is therefore more suitable for distributed systems than the centralized controller, given its scalability, which is vital in practical scenarios where only local information is accessible.
In addition, we showed that the GNN-based policies trained for the holonomic point mass model can be transferred to physics-based robotics swarms in 2D with nonholonomic constraints and in
3D—quadrotors. We present the results demonstrating successful swarm coordination and control in simulation (Gazebo/ROS2 simulation for Turtlebot3 robots and Pybullet simulation for Crazyflie
quadrotors) and demonstrate the zero-shot transfer of the GNN policies to real Turtlebot3 robots. Potential future work includes implementation on the Crazyflie hardware platform, environments with static and dynamic obstacles [58], exploring other methods such as deep reinforcement learning, and extending the approach to other swarm behaviors.
Data Availability Statement
The datasets generated and supporting the findings of this article are available from the corresponding author upon reasonable request.
Appendix: Evaluation Metrics
A1 Segregation Tasks.
In order to evaluate segregation in the swarm, we employed the metrics proposed in Ref. [31]: the pairwise intersection area of the swarm positions' convex hulls, $M(r,N)$, and the number-of-clusters metric. Segregation happens when $M(r,N)$ approaches zero, signifying the absence of overlap among clusters.
The pairwise intersection area metric is defined below:
where CH(Q) and A(Q) represent the convex hull and the area of the set Q, respectively.
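A pure-Python sketch of this metric, assuming 2D positions: compute each group's convex hull (Andrew's monotone chain), clip one hull against another (Sutherland-Hodgman, valid for convex polygons in general position), and sum the pairwise overlap areas via the shoelace formula. This is an illustrative reimplementation, not the authors' evaluation code:

```python
# Pairwise convex-hull intersection-area metric, 2D, pure Python sketch.

def convex_hull(pts):
    """Andrew's monotone chain; returns hull vertices in CCW order."""
    pts = sorted(set(pts))
    if len(pts) <= 2:
        return list(pts)
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def clip(subject, clipper):
    """Sutherland-Hodgman: clip polygon 'subject' by convex CCW 'clipper'."""
    def inside(p, a, b):
        return (b[0]-a[0])*(p[1]-a[1]) - (b[1]-a[1])*(p[0]-a[0]) >= 0
    def intersect(p1, p2, a, b):
        dx1, dy1 = p2[0]-p1[0], p2[1]-p1[1]
        dx2, dy2 = b[0]-a[0], b[1]-a[1]
        t = ((a[0]-p1[0])*dy2 - (a[1]-p1[1])*dx2) / (dx1*dy2 - dy1*dx2)
        return (p1[0] + t*dx1, p1[1] + t*dy1)
    out = list(subject)
    for i in range(len(clipper)):
        a, b = clipper[i], clipper[(i+1) % len(clipper)]
        inp, out = out, []
        if not inp:
            break
        for j in range(len(inp)):
            p, q = inp[j], inp[(j+1) % len(inp)]
            if inside(q, a, b):
                if not inside(p, a, b):
                    out.append(intersect(p, q, a, b))
                out.append(q)
            elif inside(p, a, b):
                out.append(intersect(p, q, a, b))
    return out

def area(poly):
    """Shoelace formula for polygon area."""
    n = len(poly)
    return abs(sum(poly[i][0]*poly[(i+1) % n][1]
                 - poly[(i+1) % n][0]*poly[i][1] for i in range(n))) / 2.0 if n else 0.0

def intersection_metric(groups):
    """Sum of pairwise convex-hull overlap areas; 0 means fully segregated."""
    hulls = [convex_hull(g) for g in groups]
    return sum(area(clip(hulls[i], hulls[j]))
               for i in range(len(hulls)) for j in range(i + 1, len(hulls)))
```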
A2 Aggregation Tasks.
We employed the average distance between agents of the same type ($r_{avg}^{AA}$) and the average distance between agents of different types ($r_{avg}^{AB}$) to evaluate aggregation. The system is said to aggregate when $r_{avg}^{AA}$ is greater than $r_{avg}^{AB}$. These are the intragroup (same group) and intergroup (different group) distances, respectively. Let $N_A$ represent the number of unique pairs of robots in the same group and $N_B$ the number of unique pairs of robots in different groups. We define $r_{avg}^{AA}$ and $r_{avg}^{AB}$ as the mean distances over these $N_A$ and $N_B$ pairs, respectively.
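These averages can be computed directly from positions and group labels; a minimal illustrative implementation:

```python
import math
from itertools import combinations

# Average pairwise distances used for the aggregation metric: r_avg^AA over
# unique same-group pairs and r_avg^AB over unique different-group pairs.

def avg_distances(positions, groups):
    def dist(p, q):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))
    same, diff = [], []
    for i, j in combinations(range(len(positions)), 2):
        d = dist(positions[i], positions[j])
        (same if groups[i] == groups[j] else diff).append(d)
    return sum(same) / len(same), sum(diff) / len(diff)
```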
References
"Multirobot Coordination for Space Exploration," AI Mag.
"Distributed Quadrotor UAV Tracking Using a Team of Unmanned Ground Vehicles," Paper No. 2021-0266.
"A Support-Design Framework for Cooperative Robots Systems in Labor-Intensive Manufacturing Processes," J. Manuf. Syst.
"Collaborative Multi-Robot Search and Rescue: Planning, Coordination, Perception, and Active Vision," IEEE Access.
"A Review of the Applications of Agent Technology in Traffic and Transportation Systems," IEEE Trans. Intell. Transp. Syst.
"Segregation of Heterogeneous Swarms of Robots in Curves," IEEE International Conference on Robotics and Automation (ICRA), Paris, France, May 31-Aug. 31.
"Virtual Leaders, Artificial Potentials and Coordinated Control of Groups," Proceedings of the 40th IEEE Conference on Decision and Control, Orlando, FL, Dec. 4-7.
"Behavior-Based Formation Control for Multirobot Teams," IEEE Trans. Robot. Automation.
"Abstraction and Control for Groups of Robots," IEEE Trans. Robot.
"Diversity and Specialization in Collaborative Swarm Systems," Proceedings of the Second International Workshop on Mathematics and Algorithms of Social Insects, Atlanta, GA.
"Segregation of Heterogeneous Robotics Swarms Via Convex Optimization," Paper No. DSCC2016-9653.
"Flocks, Herds and Schools: A Distributed Behavioral Model," ACM SIGGRAPH Computer Graphics.
"Flocking for Multi-Agent Dynamic Systems: Algorithms and Theory," IEEE Trans. Automatic Control.
"Swarm Coordination Based on Smoothed Particle Hydrodynamics Technique," IEEE Trans. Rob.
"Molecular Mechanisms of Cell Segregation and Boundary Formation in Development and Tumorigenesis," Cold Spring Harb. Perspect. Biol.
"Cockroach Aggregation Based on Strain Odour Recognition," Anim. Behav.
"Tentacle Morphogenesis in Hydra: A Morphological and Biochemical Analysis of the Effect of Actinomycin D," Am. Zoologist.
Self-Organization in Biological Systems, Princeton University Press, Princeton, NJ.
"Complexity, Pattern, and Evolutionary Trade-Offs in Animal Aggregation."
"Self-Organized Aggregation in Cockroaches," Anim. Behav.
"The Embodiment of Cockroach Aggregation Behavior in a Group of Micro-Robots," Artif. Life.
"Modeling and Designing Self-Organized Aggregation in a Swarm of Miniature Robots," Int. J. Rob. Res.
"Evolving Aggregation Behaviors in Multi-Robot Systems With Binary Sensors," Distributed Autonomous Robotic Systems: The 11th International Symposium, Baltimore, MD.
"Segregation of Heterogeneous Units in a Swarm of Robotic Agents," IEEE Trans. Autom. Control.
"Aggregation of Heterogeneous Units in a Swarm of Robotic Agents," Fourth International Symposium on Resilient Control Systems, Boise, ID, Aug. 9-11.
"Reconstruction of Tissues by Dissociated Cells: Some Morphogenetic Tissue Movements and the Sorting Out of Embryonic Cells May Have a Common Explanation."
"Evolution of Swarm Robotics Systems With Novelty Search," Swarm Intell.
"PSO-Based Strategy for the Segregation of Heterogeneous Robotic Swarms," J. Comput. Sci.
"Re-Embodiment of Honeybee Aggregation Behavior in an Artificial Micro-Robotic System," Adaptive Behav.
"A Probabilistic Geometric Model of Self-Organized Aggregation in Swarm Robotic Systems," Ph.D. thesis, Middle East Technical University, Ankara, Turkey.
"Segregation of Multiple Heterogeneous Units in a Robotic Swarm," IEEE International Conference on Robotics and Automation (ICRA), Hong Kong, China, May 31-June 7.
"Segregation of Multiple Robots Using Model Predictive Control With Asynchronous Path Smoothing," IEEE Conference on Control Technology and Applications (CCTA), Trieste, Italy, Aug. 23-25.
"Survey of Methods and Algorithms of Robot Swarm Aggregation," J. Phys.: Conf. Ser.
"Evolving Aggregation Behaviors in a Swarm of Robots," Advances in Artificial Life: Seventh European Conference, ECAL, Dortmund, Germany, Sept. 14-17.
"Spatial Segregative Behaviors in Robotic Swarms Using Differential Potentials," Swarm Intell.
"Segregating Multiple Groups of Heterogeneous Units in Robot Swarms Using Abstractions," IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Hamburg, Germany, Sept. 28-Oct. 2.
"United We Move: Decentralized Segregated Robotic Swarm Navigation," Distributed Autonomous Robotic Systems: The 13th International Symposium, London, UK, Nov. 7-9.
"Cohesion and Segregation in Swarm Navigation."
"A Counterexample in Stochastic Optimum Control," SIAM J. Control.
"Convolutional Neural Network Architectures for Signals Supported on Graphs," IEEE Trans. Signal Process.
"Graph Neural Networks for Decentralized Multi-Robot Path Planning," IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, Oct. 24-Jan. 24.
"Learning Decentralized Controllers for Robot Swarms With Graph Neural Networks," Conference on Robot Learning, Osaka, Japan, Nov. 16-18.
"Graph Policy Gradients for Large Scale Robot Control," Conference on Robot Learning, Osaka, Japan, Nov. 16-18.
"Large Scale Distributed Collaborative Unlabeled Motion Planning With Graph Policy Gradients," IEEE Rob. Autom. Lett.
"Learning Decentralized Controllers for Robot Swarms With Graph Neural Networks," Proceedings of the Conference on Robot Learning, PMLR.
"Decentralized Control With Graph Neural Networks," arXiv preprint arXiv:2012.14906.
"Multi-Robot Coverage and Exploration Using Spatial Graph Neural Networks," IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Prague, Czech Republic, Sept. 27-Oct. 1.
"A Framework for Real-World Multi-Robot Systems Running Decentralized GNN-Based Policies," International Conference on Robotics and Automation (ICRA), Philadelphia, PA, May 23-27.
"Learning Decentralized Controllers for Segregation of Heterogeneous Robot Swarms With Graph Neural Networks," International Conference on Manipulation, Automation and Robotics at Small Scales (MARSS), Toronto, ON, Canada, July 25-29.
"Synthesizing Decentralized Controllers With Graph Neural Networks and Imitation Learning," IEEE Trans. Signal Process.
"Aggregation Graph Neural Networks," ICASSP 2019 IEEE International Conference on Acoustics, Speech and Signal Processing, Brighton, UK, May 12-17.
"Abstraction Based Approach for Segregation in Heterogeneous Robotic Swarms," Rob. Autom. Syst.
"UAV Visual-Inertial Dynamics (VI-D) Odometry Using Unscented Kalman Filter."
"The GRASP Multiple Micro-UAV Testbed," IEEE Rob. Autom. Mag.
"A Reduction of Imitation Learning and Structured Prediction to No-Regret Online Learning," Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, Fort Lauderdale, FL, Apr. 11-13.
"Learning to Fly: A Gym Environment With PyBullet Physics for Reinforcement Learning of Multi-Agent Quadcopter Control," IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Prague, Czech Republic, Sept. 27-Oct. 1.
"Robot Navigation Model in a Multi-Target Domain Amidst Static and Dynamic Obstacles," Proceedings of the IASTED International Conference Intelligent Systems and Control (ISC 2018), Calgary, AB, Canada, July 16-17.
Matlab Program For Uniform Quantization Encoding Vs Decoding
- Given the signal range R, a uniform quantizer has only one parameter: the number of quantization levels. The encoder and decoder block diagrams of a predictive coding system are … The MATLAB program demo_quant.m, given in an appendix, performs uniform quantization.
- The Uniform Encoder block quantizes and encodes floating-point input into integer output; the Uniform Decoder reconstructs quantized floating-point values. In one example, the input to the decoder block is the uint8 output of a Uniform Encoder.
- Example, DPCM encoding and decoding: the example quantizes an original signal x using DPCM, encodedx = dpcmenco(x, codebook, …), and also computes the mean square error between the original and decoded signals.
- The operations of the uniform encoder adhere to the definition for uniform encoding. To quantize and encode a floating-point input into an integer output: ue = dsp.UniformEncoder returns a uniform encoder, ue (see System Design in MATLAB Using System Objects).
- Adaptive quantization for the uniform quantizer; output points of a quantizer that lie on or within a contour of constant …; a uniform quantizer followed by an entropy encoder; a MATLAB function for calculating the signal-to-quantization-noise ratio (SQNR).
- Image processing and data compression in MATLAB: predictive coding, specifically DPCM (differential PCM). Applying a 1-bit-per-pixel uniform quantizer to an image, and the structure of the predictor in the encoder and decoder.
- Encoders, decoders, scalar and vector quantizers; a comparison of three delta-modulation (DM) waveform quantization (coding) techniques.
- PCM: the signal passes through the quantizer, which encodes the quantization values and converts them to a digital signal; the encoder produces the information in parallel form. PCM modulation mainly includes sampling, quantization and encoding.
- A data compression system, with an encoder and decoder; for example, each image pixel may be quantized to the nearest …; such quantizers are sometimes called uniform scalar quantizers to distinguish them from more sophisticated quantizers.
- Quantization noise increases when …; input and output for decoded uniform PCM.
- A DPCM-based non-uniform quantizer in which both the encoder and decoder sides synchronously update the seed (initial state).
- y = uencode(u, n) quantizes the entries in a multidimensional array u; this encoding adheres to the definition for uniform encoding specified in ITU-T Recommendation G.701.
- y = udecode(u, n) inverts the operation of uencode and reconstructs quantized floating-point values from an encoded multidimensional array of integers u; the algorithm adheres to the definition for uniform decoding specified in ITU-T Recommendation G.701.
- The Scalar Quantizer Encoder block maps each input value to a quantization region defined by the Boundary points parameter [p0 p1 p2 p3 …] and codebook values [c1 … cN].
- The Uniform Encoder block performs two operations on each floating-point sample in the input vector or matrix; the quantization process rounds both positive and negative inputs downward to the nearest level.
- A MATLAB function for uniform quantization encoding; for example, the output of a saturator is 1 or −1 if the input is greater than 1 or smaller than −1. The ear's ability to distinguish two sounds with different volumes decreases with …
- Exercise: implement a uniform scalar encoder with the function header idx = sq_enc(in, n_bits, …); the deliverable consists of MATLAB routines and a report, without having to change any source code.
- Simulating communication systems with PCM by programming: a modulation encoder and decoder (e.g. for super-large-scale integrated circuits); the first step is to calculate the uniform quantization noise.
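The uencode/udecode pair described above maps floating-point samples in a known range to n-bit integer codes and back. A minimal Python sketch of the same idea (an illustrative re-implementation, not MATLAB's exact algorithm; the function names and the SQNR helper are ours):

```python
import numpy as np

def uniform_encode(x, n_bits, peak=1.0):
    # Map samples in [-peak, peak] onto integer codes 0 .. 2**n_bits - 1,
    # in the spirit of uencode (out-of-range inputs saturate).
    levels = 2 ** n_bits
    step = 2 * peak / levels
    codes = np.floor((np.clip(x, -peak, peak) + peak) / step).astype(int)
    return np.minimum(codes, levels - 1)   # x == +peak maps to the top code

def uniform_decode(codes, n_bits, peak=1.0):
    # Reconstruct the midpoint of each quantization interval (cf. udecode).
    step = 2 * peak / 2 ** n_bits
    return -peak + (codes + 0.5) * step

def sqnr_db(x, x_hat):
    # signal-to-quantization-noise ratio in dB
    return 10 * np.log10(np.sum(x**2) / np.sum((x - x_hat) ** 2))

t = np.linspace(0, 1, 1000, endpoint=False)
x = 0.9 * np.sin(2 * np.pi * 5 * t)
for n in (4, 8):
    x_hat = uniform_decode(uniform_encode(x, n), n)
    print(n, round(sqnr_db(x, x_hat), 1))
```

Going from 4 to 8 bits should raise the SQNR by roughly 6 dB per extra bit, the classical rule of thumb for uniform quantization of a near-full-scale signal.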
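The DPCM snippets above mention MATLAB's dpcmenco; the core idea is to quantize the prediction error rather than the signal itself, with the encoder tracking the decoder's reconstruction so quantization errors do not accumulate. A first-order sketch in Python (our own helper names and parameters, not MATLAB's API):

```python
import numpy as np

def dpcm_encode(x, step, a=1.0):
    # Quantize the prediction error e = x[t] - a*xhat[t-1] with a uniform
    # mid-tread quantizer of the given step; return integer codes.
    codes, pred = [], 0.0
    for s in x:
        q = int(round((s - a * pred) / step))
        codes.append(q)
        pred = a * pred + q * step      # mirror the decoder's reconstruction
    return np.array(codes)

def dpcm_decode(codes, step, a=1.0):
    out, pred = [], 0.0
    for q in codes:
        pred = a * pred + q * step
        out.append(pred)
    return np.array(out)

x = np.cumsum(np.full(50, 0.03))        # a slowly rising ramp
codes = dpcm_encode(x, step=0.05)
x_hat = dpcm_decode(codes, step=0.05)
print(round(float(np.abs(x - x_hat).max()), 3))  # bounded by step/2 = 0.025
```

Because the encoder's predictor runs on reconstructed values (closed-loop prediction), the per-sample reconstruction error stays within half a quantization step.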
Robust inference in conditionally heteroskedastic autoregressions
Pedersen, Rasmus Søndergaard (2017): Robust inference in conditionally heteroskedastic autoregressions.
We consider robust inference for an autoregressive parameter in a stationary autoregressive model with GARCH innovations when estimation is based on least squares estimation. As the innovations
exhibit GARCH, they are by construction heavy-tailed with some tail index $\kappa$. The rate of consistency as well as the limiting distribution of the least squares estimator depend on $\kappa$. In
the spirit of Ibragimov and Müller (“t-statistic based correlation and heterogeneity robust inference”, Journal of Business & Economic Statistics, 2010, vol. 28, pp. 453-468), we consider testing a
hypothesis about a parameter based on a Student’s t-statistic for a fixed number of subsamples of the original sample. The merit of this approach is that no knowledge about the value of $\kappa$ nor
about the rate of consistency and the limiting distribution of the least squares estimator is required. We verify that the one-sided t-test is asymptotically a level $\alpha$ test whenever $\alpha \le 5\%$ uniformly over $\kappa \ge 2$, which includes cases where the innovations have infinite variance. A simulation experiment suggests that the finite-sample properties of the test are quite good.
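The subsampling idea in the abstract is simple to simulate: split the sample into q blocks, estimate the autoregressive parameter by least squares on each block, and compare the resulting one-sample t-statistic with Student-t(q−1) critical values. An illustrative Python sketch (our own parameter choices and a simplified AR(1)–GARCH(1,1) data-generating process, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ar_garch(n, rho=0.5, omega=0.1, alpha=0.3, beta=0.6):
    # AR(1) with GARCH(1,1) innovations (heavy-tailed by construction)
    x = np.empty(n)
    sigma2 = omega / (1 - alpha - beta)   # start at the unconditional variance
    prev_e, prev_x = 0.0, 0.0
    for t in range(n):
        sigma2 = omega + alpha * prev_e**2 + beta * sigma2
        prev_e = np.sqrt(sigma2) * rng.standard_normal()
        prev_x = rho * prev_x + prev_e
        x[t] = prev_x
    return x

def ols_ar1(x):
    # least squares estimate of rho in x_t = rho * x_{t-1} + e_t
    y, z = x[1:], x[:-1]
    return float(z @ y / (z @ z))

def subsample_tstat(x, rho0, q=8):
    # Ibragimov-Mueller style: estimate rho on q equal blocks, then form
    # a one-sample t-statistic from the q block estimates
    est = np.array([ols_ar1(b) for b in np.array_split(x, q)])
    return np.sqrt(q) * (est.mean() - rho0) / est.std(ddof=1)

x = simulate_ar_garch(8000, rho=0.5)
t = subsample_tstat(x, rho0=0.5, q=8)
print(round(t, 3))  # compare with a Student-t(q-1) critical value
```

The appeal, as the abstract notes, is that no estimate of the tail index, the rate of consistency, or the limiting distribution is needed: the block estimates are treated as approximately independent draws and fed into an ordinary t-test.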
Item Type: MPRA Paper
Original Title: Robust inference in conditionally heteroskedastic autoregressions
Language: English
Keywords: t-test, AR-GARCH, regular variation, least squares estimation
Subjects: C - Mathematical and Quantitative Methods > C1 - Econometric and Statistical Methods and Methodology: General > C12 - Hypothesis Testing: General
C - Mathematical and Quantitative Methods > C2 - Single Equation Models ; Single Variables > C22 - Time-Series Models ; Dynamic Quantile Regressions ; Dynamic Treatment Effect Models ; Diffusion Processes
C - Mathematical and Quantitative Methods > C4 - Econometric and Statistical Methods: Special Topics > C46 - Specific Distributions ; Specific Statistics
C - Mathematical and Quantitative Methods > C5 - Econometric Modeling > C51 - Model Construction and Estimation
Item ID: 81979
Depositing User: Dr Rasmus Søndergaard Pedersen
Date Deposited: 17 Oct 2017 16:56
Last Modified: 27 Sep 2019 06:23
Buraczewski, D., E. Damek, and T. Mikosch (2016): Stochastic Models with Power-Law Tails: The Equation X = AX + B, Springer Series in Operations Research and Financial Engineering,
Springer International Publishing.
Cavaliere, G., I. Georgiev, and A. M. R. Taylor (2016a): “Sieve-based inference for infinite-variance linear processes,” The Annals of Statistics, 44, 1467–1494.
Cavaliere, G., I. Georgiev, and A. M. R. Taylor (2016b): “Unit root inference for non-stationary linear processes driven by infinite variance innovations,” Econometric Theory,
Davis, R. and S. Resnick (1986): “Limit theory for the sample covariance and correlation functions of moving averages,” The Annals of Statistics, 14, 533–558.
Davis, R. A. and T. Hsing (1995): “Point process and partial sum convergence for weakly dependent random variables with infinite variance,” The Annals of Probability, 23, 879–917.
Davis, R. A. and T. Mikosch (1998): “The sample autocorrelations of heavy-tailed processes with applications to ARCH,” The Annals of Statistics, 26, 2049–2080.
Davis, R. A. and W. Wu (1997): “Bootstrapping M-estimates in regression and autoregression with infinite variance,” Statistica Sinica, 7, 1135–1154.
Embrechts, P., C. Klüppelberg, and T. Mikosch (2012): Modelling Extremal Events: For Insurance and Finance, Applications of Mathematics, Heidelberg: Springer, 4th ed.
Ibragimov, M., R. Ibragimov, and J. Walden (2015): Heavy-Tailed Distributions and Robustness in Economics and Finance, Lecture Notes in Statistics, Heidelberg: Springer.
Ibragimov, R. and U. K. Müller (2010): “t-statistic based correlation and heterogeneity robust inference,” Journal of Business & Economic Statistics, 28, 453–468.
Ibragimov, R. and U. K. Müller (2016): “Inference with few heterogeneous clusters,” The Review of Economics and Statistics, 98, 83–96.
Kesten, H. (1973): “Random difference equations and renewal theory for products of random matrices,” Acta Mathematica, 131, 207–248.
Lange, T. (2011): “Tail behavior and OLS estimation in AR-GARCH models,” Statistica Sinica, 21, 1191–1200.
Liebscher, E. (2005): “Towards a unified approach for proving geometric ergodicity and mixing properties of nonlinear autoregressive processes,” Journal of Time Series Analysis, 26,
Loretan, M. and P. C. B. Phillips (1994): “Testing the covariance stationarity of heavy-tailed time series: An overview of the theory with applications to several financial datasets,”
Journal of Empirical Finance, 1, 211–248.
Meitz, M. and P. Saikkonen (2008): “Stability of nonlinear AR-GARCH models,” Journal of Time Series Analysis, 29, 453–475.
Mikosch, T. and C. Starica (2000): “Limit theory for the sample autocorrelations and extremes of a GARCH (1, 1) process,” The Annals of Statistics, 28, 1427–1451.
Rio, E. (2017): Asymptotic Theory of Weakly Dependent Random Processes, Berlin: Springer-Verlag.
Samorodnitsky, G. and M. Taqqu (1994): Stable Non-Gaussian Random Processes: Stochastic Models with Infinite Variance, Stochastic modeling, New York, NY: Chapman & Hall.
Zhang, R. and S. Ling (2015): “Asymptotic inference for AR models with heavy-tailed G-GARCH noises,” Econometric Theory, 31, 880–890.
URI: https://mpra.ub.uni-muenchen.de/id/eprint/81979
Constraint sum
The constraint sum is one of the most important constraints. When the optional element coeffs is missing, all coefficients are assumed to be equal to 1. The constraint is subject to a numerical condition $(\odot,k)$ composed of an operator $\odot$ in {lt,le,ge,gt,eq,ne,in,notin} and a right operand $k$, which is an integer value, a variable, an integer interval, or a set of integers.
sum($X$,$C$,($\odot$,$k$)) with $X=\langle x_1,x_2,\ldots \rangle$ and $C=\langle c_1,c_2,\ldots\rangle$, iff $(\sum_{i = 1}^{|X|} c_i \times x_i) \odot k$.
<sum>
  <list> (intVar wspace)2+ </list>
  [ <coeffs> (intVal wspace)2+ | (intVar wspace)2+ </coeffs> ]
  <condition> "(" operator "," operand ")" </condition>
</sum>
The following constraint states that the values taken by variables x1, x2, x3 and y must respect the linear condition x1*1 + x2*2 + x3*3 $\gt$ y.
<sum>
  <list> x1 x2 x3 </list>
  <coeffs> 1 2 3 </coeffs>
  <condition> (gt,y) </condition>
</sum>
A form of sum, sometimes called subset-sum or knapsack (see [T03] and [PQ08]), involves the operator “in” and ensures that the obtained sum belongs (or does not belong) to a specified interval. The following constraint states that the values taken by variables y1, y2, y3 and y4 must respect 2 $\leq$ y1*4 + y2*2 + y3*3 + y4*1 $\leq$ 5.
<sum>
  <list> y1 y2 y3 y4 </list>
  <coeffs> 4 2 3 1 </coeffs>
  <condition> (in,2..5) </condition>
</sum>
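The semantics above can be checked mechanically: compute the weighted sum and apply the condition's operator. A small Python sketch (the checker and its names are ours, not part of the XCSP specification; an interval such as 2..5 is represented here with range(2, 6)):

```python
import operator

# Map the XCSP relational operator names onto Python comparisons.
OPS = {"lt": operator.lt, "le": operator.le, "ge": operator.ge,
       "gt": operator.gt, "eq": operator.eq, "ne": operator.ne}

def check_sum(values, coeffs, condition):
    # values: assignment to the <list> variables; coeffs: <coeffs> (or all 1s);
    # condition: (operator, operand) as in <condition> "(" operator "," operand ")"
    op, k = condition
    s = sum(c * v for c, v in zip(coeffs, values))
    if op == "in":                      # membership in an interval or set
        return s in k
    if op == "notin":
        return s not in k
    return OPS[op](s, k)

# First example, with y = 10: x1*1 + x2*2 + x3*3 > y
print(check_sum([2, 3, 1], [1, 2, 3], ("gt", 10)))                 # 2+6+3 = 11 > 10 -> True
# Knapsack-style example: 2 <= y1*4 + y2*2 + y3*3 + y4*1 <= 5
print(check_sum([0, 1, 1, 0], [4, 2, 3, 1], ("in", range(2, 6))))  # 5 in 2..5 -> True
```

When coeffs is omitted in the XML, passing a list of 1s (or `[1] * len(values)`) reproduces the default behaviour described above.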