arxiv_id
stringlengths 0
16
| text
stringlengths 10
1.65M
|
|---|---|
# Recognizability of machines which halt in $n$ steps on some input
We define the following language:
$$L = \{M \mid \text{M is a TM and there exists an input x on which M halts in at most |x| steps}\}.$$
Following this question I understand that $L$ is not decidable, but I know that it is recognizable, and therefore I'm looking for a reduction from the HALTING problem to $L$.
I thought of the following reduction:
On input $M$ for $L$, create a Turing machine $M'$ that for input $y$ for the HALTING problem runs $M$.
If $M$ halts, then $M'$ halts too, but I'm having problems proving the reduction.
Another way of proving this problem in my opinion is by building an $M'$ that recognizes the language by running $M$ all inputs for their length.
If $M'$ halts on some input it accepts, otherwise it keeps going.
Does this solution work?
• Doesn't the post, to which you give the link in your post, reduce the Halting problem to $L$? – fade2black Dec 16 '17 at 9:50
• It does, and that proves that $L$ is not decidable, I want to prove that it's recognizable. – user3636583 Dec 16 '17 at 10:09
• This could be proven without reduction. The proof is straightforward: given $M$ run through all strings, say over $0$ and $1$, in canonical order and for each string $x$ simulate $M(x)$ for $|x|$ steps. If there is such $x$, then you will eventually find this $x$ and so accept $x$. – fade2black Dec 16 '17 at 10:27
• Sounds good, do you want to post it as the answer? – user3636583 Dec 16 '17 at 11:22
## 1 Answer
This could be proven without reduction. The proof is straightforward: given $M$ run through all strings, say over $0$ and $1$, in canonical order and for each string $x$ simulate $M(x)$ for $|x|$ steps and check if it halts. If there is such $x$, then you will eventually find this $x$ and so accept $x$.
|
|
## Class/Course - Engineering Entrance
### Subject - Physics
#### Total Number of Question/s - 3094
Just Exam provide question bank for Engineering Entrance standard. Currently number of question's are 3094. We provide this data in all format (word, excel, pdf, sql, latex form with images) to institutes for conducting online test/ examinations. Here we are providing some demo contents. Interested person may contact us at info@justexam.in
• 1. Measurements - Quiz
1. In an experiment the angles are required to be measured using an instrument.29 divisions of the man scale exactly coincide with the 30 divisions of the vernier scale. If the smallest division of the main scale is half-a-degree (= 0.50), then the least count of the instrument is
a) One minute
b) Half minute
c) One degree
d) Half degree
2. Out of the following pairs, which one does not have identical dimensions?
a) Angular momentum and Planck constant
b) Impulse and momentum
c) Moment of inertia and moment of a force
d) Work and torque
• 2. Kinematics - Quiz
1. A particle has an initial velocity $3\hat{i} + 4\hat{j}$ and an acceleration of $0.4\hat{i} + 0.3\hat{j}$. Its speed after 10 s is
a) 10 unit
b) 7 $\sqrt{2}$ unit
c) 7 unit
d) 8.5 unit
2. If $\vec{A} \times \vec{B}$ = $\vec{B} \times \vec{A}$, then the angle between A and B is
a) π
b) π/3
c) π/2
d) π/4
• 3. Laws of Motion - Quiz
1. A mass of M kg is suspended by a weightless string. The horizontal force that is required to displace it until the string makes an angle of 450 with the initial vertical direction is
a) Mg (√2 + 1)
b) Mg √2
c) $\frac{Mg}{\sqrt{2}}$
d) Mg(√2-1)
2. Consider a car moving on a straight road with a speed of 100 ms-1. The distance at which car can be stopped, is [μk = 0.5]
a) 800 m
b) 1000 m
c) 100 m
d) 400 m
• 4. Work, Energy and Power - Quiz
1. A mass m moves with a velocity v and a collides inelastically with another identical mass. After collision the 1st mass moves with velocity $\frac{v}{\sqrt{3}}$ in a direction perpendicular to the initial direction of motion. Find the speed of the second mass after collision.
a) v
b) √3 v
c) $\frac{2}{\sqrt{3}}v$
d) $\frac{v}{\sqrt{3}}$
2. Statement I when ultraviolet light is incident on a photocell, its stopping potential is V0 and the maximum kinetic energy of the photoelectrons is Kmax. When the ultraviolet light is replaced by X-rays, both V0 and Kmax increase.
Statement II Photoelectrons are emitted with speeds ranging from Zero to a maximum value because of the range of frequencies present in the incident light.
a) Statement I is true, Statement II is true; Statement II is the correct explanation of Statement I.
b) Statement I is true, Statement II is true; Statement II is not the correct explanation of Statement I.
c) Statement I is false, Statement II is true.
d) Statement I is the true, Statement II is false.
• 5. Rotational Motion - Quiz
1. A body A of mass M while falling vertically downwards under gravity breaks into two parts; a body B of mass $\frac{1}{3}M$ and , body C of mass $\frac{1}{3}m$. The centre of mass of bodies B and C taken together shifts compared to that of body A towards
a) Depends on height of breaking
b) Does not shift
c) Body C
d) Body B
2. A solid sphere, hollow sphere and a ring are released from top of an inclined plane (frictionless) so that they down the plane. The maximum acceleration down the plane is for ( no rolling)
a) Solid sphere
b) Hollow sphere
c) Ring
d) All same
• 6. Gravitation - Quiz
1. A satellite of mass m revolves around the earth of radius R at a height x from its surface. If g is the acceleration due to gravity on the surface of the earth, the orbital speed of the satellite is
a) gx
b) $\frac{gR}{R - x}$
c) $\frac{gR^{2}}{R + x}$
d) $\left (\frac{gR^{2}}{R + x} \right )^{1/2}$
2. If g is the acceleration due to gravity on the earth surface, the gain in the potential energy of an object of mass m raised from the surface of the earth to a height equal to the radius R of the earth, is
a) 2 mgR
b) $\frac{1}{2}mgR$
c) $\frac{1}{4}mgR$
d) mgR
• 7. Properties of Solids and Liquids - Quiz
1. If a ball of steel (density ρ = 7.8 g cm-3) attains a terminal velocity of 10 cm-1 when falling in a tank of water (coefficient of viscosity ηwater = 8.5 x 10-4 Pa -s) then its terminal velocity in glycerine (ρ = 12 g cm-3 , η = 13.2 Pa -s) would be nearly
a) 1.6 x 10-5 cms-1
b) 6.25 x 10-4 cms-1
c) 6.45 x 10-4 cms-1
d) 1.5 x 10-5 cms-1
2. The temperature of the two outer surfaces of composite slab, consisting of two materials having coefficients of thermal conductivity K and 2 K and thickness x and 4x, respectively are T2 and T1 (T2 > T1). The rate of heat tranfer through the slab, in a steady state is $\left (\frac{A\left (T_{2} - T_{1} \right )K}{x} \right )f$ ƒ, with ƒ equals to
a) 1
b) 1/2
c) 2/3
d) 1/3
• 8. Thermodynamics - Quiz
1. During an adiabatic process, the pressure of a gas is found to be proportional to the cube of its absolute temperature. The ratio Cp/Cv for the gas is
a) 4/3
b) 2
c) 5/3
d) 3/2
2. A log metallic bar is carrying heat from one of its end to the order end under steady-state. The variation of temperature θ along the length x of the bar from its hot end is best described by which of the following figure.
a)
b)
c)
d)
• 9. Kinetic Theory of Gases - Quiz
1. Two rigid boxes containing different ideal gases are placed on a table. Box A contains one mole of nitrogen at temperature T0, while box B contains one mole of helium at temperature (7/3) T0. The boxes are then put into thermal contact with each other, and heat flows between then until the gases reach a common final temperature (Ignore the heat capacity of boxes). Then, the final temperature of the gases, Tf, in terms of T0 is
a) Tf = $\frac{3}{7}T_{0}$
b) $\frac{7}{3}T_{0}$
c) $\frac{3}{2}T_{0}$
d) $\frac{5}{2}T_{0}$
2. A gaseous mixture consists of 16g of helium and 16g of oxygen . The ratio $\frac{C_{p}}{C_{V}}$ of the mixture
a) 1.59
b) 1.62
c) 1.4
d) 1.54
• 10. Oscillations and Waves - Quiz
1. A wave y=$\alpha sin\left ( \omega t-kx \right )$ on a string meets with another wave another wave producing a node at x=0. Then the equation of the unknown wave is
a) y=$\alpha sin\left ( \omega t+kx \right )$
b) y=$-\alpha sin\left ( \omega t+kx \right )$
c) y=$\alpha sin\left ( \omega t-kx \right )$
d) y=$-\alpha sin\left ( \omega t-kx \right )$
2. When temperature increases, the frequency of a tuning fork
a) Increases
b) Decreases
c) Remain same
d) Increases or decreases depending on the material
• 11. Electrostatics - Quiz
1. Let p(r) = $\frac{Q}{\pi R^{4}}r$ be the charge density distribution for a solid sphere of radius R and total charge Q. For a point P inside the sphere at distance r1 from the centre of the sphere the magnitude of electric field is
a) zero
b) $\frac{Q}{4\pi \epsilon_{0}r_{1}^{2}}$
c) $\frac{Qr_{1}^{2}}{4\pi \epsilon_{0}R^{4}}$
d) $\frac{Qr_{1}^{2}}{3\pi \epsilon_{0}R^{4}}$
2. A thin spherical conducting shell of radius R has a charge q. Another charge Q is placed at the centre of the shell. The electrostatic potential at a point P at a distance R/2 from the centre of the shell is
a) $\frac{2Q}{4\pi \epsilon_{0}R}$
b) $\frac{2Q}{4\pi \epsilon_{0}R}$ - $\frac{2q}{4\pi \epsilon_{0}R}$
c) $\frac{2Q}{4\pi \epsilon_{0}R}$ + $\frac{q}{4\pi \epsilon_{0}R}$
d) $\frac{\left (q + Q \right )}{4\pi \epsilon_{0}} \frac{2}{R}$
• 12. Current Electricity - Quiz
1. A rectangular loop has a sliding connector PQ of length l and resistance R Ω and it is moving with a speed v as shown. The set-up is placed in a uniform magnetic field going into the plane of the paper. The three currents I1, I2 and I are
a) $I_{1}$ = $-I_{2}$ = $\frac{Blv}{R}$, I = $\frac{Blv}{R}$
b) $I_{1}$ = $I_{2}$ = $\frac{Blv}{3R}$, I = $\frac{2Blv}{3R}$
c) $I_{1}$ = $I_{2}$ = I = $\frac{Blv}{R}$
d) $I_{1}$ = $I_{2}$ = $\frac{Blv}{6R}$, I = $\frac{Blv}{3R}$
2. If a wire is stretched to make it 0.1% longer, its resistance will
a) increase by 0.2%
b) decrease by 0.2%
c) decrease by 0.05%
d) increase by 0.05%
• 13. Magnetic Effects of Current and Magnetism - Quiz
1. A charged particle with charge q enters a region of constant, uniform and mutually orthogonal fields $\vec{E}$ and $\vec{B}$ with a velocity $\vec{v}$ perpendicular to both $\vec{E}$ and $\vec{B}$, and comes out without any change in magnitude or direction of $\vec{v}$, Then
a) $\vec{v}$ = $\vec{E} \times \frac{\vec{B}}{B^{2}}$
b) $\vec{v}$ = $\vec{B} \times \frac{\vec{E}}{B^{2}}$
c) $\vec{v}$ = $\vec{E} \times \frac{\vec{B}}{E^{2}}$
d) $\vec{v}$ = $\vec{B} \times \frac{\vec{E}}{E^{2}}$
2. The magnetic lines of force inside a bar magnetic
a) are from north-pole to south-pole of the magnet
b) do not exist
c) depend upon the area of cross-section of the bar magnet
d) are from south-pole to north-pole of the magnet
• 14. Electromagnetic Induction and Alternating Current - Quiz
1. One conducting U-tube can slide inside another as shown in figure, maintaining electrical contacts between the tubes. The magnetic field B is perpendicular to the plane of the figure. If each tube moves towards the other at a constant speed, v then the emf induced in the circuit in terms of B, l and v, where l is the width of each tube, will be
a) Blv
b) -Blv
c) zero
d) 2Blv
2. The self-inductance of the motor of an electric fan is 10H, In order to impart maximum power at 50Hz, it should be connected to a capacitance of
a) 4μF
b) 8 μF
c) 1μF
d) 2μF
• 15. Electromagnetic Waves - Quiz
1. The rms value of the electric field of the light coming from the sum is 720 NC-1. The average total energy density of the electromagnetic wave is
a) 4.58 x 10-6 Jm-3
b) 6.37 x 10-19 Jm-3
c) 81.35 x 10-12 Jm-3
d) 3.3 x 10-3 Jm-3
2. Which of the following are not electromagnetic waves?
a) Cosmic-rays
b) γ-rays
c) β-rays
d) X-rays
• 16. Optics - Quiz
1. Two point white dots are 1 mm apart on a black paper. They are viewed by eye of pupil diameter 3 mm . Approximately, what is the maximum distance at which these dots can be resolved by the eye ? [Take wavelength of light = 500 nm]
a) 5 m
b) 1 m
c) 6 m
d) 3 m
2. If I0 is the intensity of the principal maximum in the single slite diffraction pattern, then what will be its intensity when the slite width is double ?
a) 2I0
b) 4I0
c) I0
d) $\frac{I_{0}}{2}$
• 17. Dual Nature of Matter and Radiation - Quiz
1. Wave property of electrons implies that they will slow diffraction effects. Davisson and Germer demonstrated this by diffracting effects .Davisson and Germer demonstrated this by diffracting electrons from crystal. The law governing the diffraction from a crystal is obtained by requiring that electron waves reflected from the planes of atoms in a crystal interfere construtively (see figure)
Electrons accelerated by potential V are diffracted from a crystal . If d = 1Å and I = 300, V should be about (h = 6.6 x 10-34 J-s, me = 9.1 x 10-31 kg, e = 1.6 x 10-19C)
a) 2000 V
b) 50V
c) 500V
d) 1000V
2. According to Einsteins photoelectric equation, the plot of the kinetic energy of the emitted photoelectrons from a metal Vs the frequency of the incident radiation gives a straight line whose slope
a) depends on the nature of the metal used
b) depends on the intensity of the radiation
c) depends both on the intensity of the radiation and the metal used
d) is the same for all metals and independent of the intensity of the radiation
• 18. Atoms and Nuclei - Quiz
1. Starting with a sample of pure $^{66}Cu$, 7/8 of it decays into Zn in 15 min. The corresponding half-life is
a) 10 min
b) 15 min
c) 5 min
d) $7\frac{1}{2}min$
2. The half-life of a radioactive substance is 20min . The approximate the interval (t2 - t1)between the time t2 when
a) 14 min
b) 20 min
c) 28 min
d) 7 min
• 19. Electronic Devices - Quiz
1. When npn transistor is used as an amplifier
a) electrons move from base to collector
b) holes move from either to base
c) electrons move from collector to base
d) holes move from base to emiiter
2. The combination of gates shown below yields
a) OR gate
b) NOT gate
c) XOR gate
d) NANAD gate
• 20. Communication Systems - Quiz
1. This question has statement 1 and statement 2. Of the four choices given after the statements, choose the one that best describes the two statements.
Statement 1. Sky wave signals are used for long distance radio communication. These signals are in general, less stable than ground wave signals.
Statement 2 The state of ionosphere varies from hour to hour, day to day and season to season.
a) Statement 1 is true, Statement 2 is true, Statement 2 is the correct explanation of Statement 1
b) Statement 1 is true , Statement 2 is true, Statement 2 is not the correct explanations of Statement 1
c) Statement 1 is false, Statement 2 is true
d) Statement 1 is true, Statement 2 is false
2. The logic circuit shown below has the input waveforms A and B as shown. Pick out the correct output wavefrom.
a)
b)
c)
d)
|
|
<img src="https://d5nxst8fruw4z.cloudfront.net/atrk.gif?account=iA1Pi1a8Dy00ym" style="display:none" height="1" width="1" alt="" />
# Multi-Step Problems with Changes of State
## Explains the calculation process when multiple changes of state are involved
Estimated7 minsto complete
%
Progress
Practice Multi-Step Problems with Changes of State
MEMORY METER
This indicates how strong in your memory this concept is
Progress
Estimated7 minsto complete
%
Multi-Step Problems with Changes of State
Credit: Boiling water: Image copyright Steven Coling, 2014; Ice cubes: Image copyright Zoom Team, 2014
Source: http://www.shutterstock.com
#### Which takes more heat – melting or boiling?
You have a cube of ice. Which process will take more energy – the melting of that ice cube or the conversion of the water to steam? The short answer is that more energy is needed to convert the water to steam. The long answer is really a question: how do you get from one point to the other? What is the temperature of the ice? What is the mass of that ice cube? A lot goes into taking the material from the starting point to the end-point.
### Multi-Step Problems with Changes of State
Heating curves show the phase changes that a substance undergoes as heat is continuously absorbed.
Credit: CK-12 Foundation - Christopher Auyeung
Source: CK-12 Foundation
Heating curve of water.[Figure2]
The specific heat of a substance allows us to calculate the heat absorbed or released as the temperature of the substance changes. It is possible to combine that type of problem with a change of state to solve a problem involving multiple steps. Figure above shows ice at -30°C being converted in a five-step process to gaseous water (steam) at 140°C. It is now possible to calculate the heat absorbed during that entire process. The process and the required calculation is summarized below.
1. Ice is heated from -30°C to 0°C. The heat absorbed is calculated by using the specific heat of ice and the equation \begin{align*}\Delta H = c_p \times m \times \Delta T\end{align*}.
2. Ice is melted at 0°C. The heat absorbed is calculated by multiplying the moles of ice by the molar heat of fusion.
3. Water at 0°C is heated to 100°C. The heat absorbed is calculated by using the specific heat of water and the equation \begin{align*}\Delta H = c_p \times m \times \Delta T\end{align*}.
4. Water is vaporized to steam at 100°C. The heat absorbed is calculated by multiplying the moles of water by the molar heat of vaporization.
5. Steam is heated from 100°C to 140°C. The heat absorbed is calculated by using the specific heat of steam and the equation \begin{align*}\Delta H = c_p \times m \times \Delta T\end{align*}.
#### Sample Problem: Multi-Step Problems using a Heating Curve
Calculate the total amount of heat absorbed (in kJ) when 2.00 mol of ice at -30.0°C is converted to steam at 140.0°C. The required specific heats can be found in the table in "Heat Capacity and Specific Heat".
Step 1: List the known quantities and plan the problem.
Known
• 2.00 mol ice = 36.04 g ice
• \begin{align*}c_p (\text{ice}) = 2.06 \text{ J/g}^\circ \text{C}\end{align*}
• \begin{align*}c_p(\text{water}) = 4.18 \text{ J/g}^\circ \text{C}\end{align*}
• \begin{align*}c_ p(\text{steam}) = 1.87 \text{ J/g}^\circ \text{C}\end{align*}
• \begin{align*}\Delta H_{\text{fus}} = 6.01 \text{ kJ/mol}\end{align*}
• \begin{align*}\Delta H_{\text{vap}}= 40.7 \text{ kJ/mol}\end{align*}
Unknown
• \begin{align*}\Delta H_{\text{total}}= ? \text{ kJ}\end{align*}
Follow the steps previously described. Note that the mass of the water is needed for the calculations that involve the specific heat, while the moles of water is needed for the calculations that involve changes of state. All heat quantities must be in kilojoules so that they can be added together to get a total for the five-step process.
Step 2: Solve.
1. \begin{align*}\Delta H_1=2.06 \text{ J/g}^\circ \text{C} \times 36.04 \text{ g} \times 30^\circ \text{C} \times \frac{1 \text{ kJ}}{1000 \text{ J}}=2.23 \text{ kJ}\end{align*}
2. \begin{align*}\Delta H_2=2.00 \text{ mol} \times \frac{6.01 \text{ kJ}}{1 \text{ mol}}=12.0 \text{ kJ}\end{align*}
3. \begin{align*}\Delta H_3=4.18 \text{ J/g}^\circ \text{C} \times 36.04 \text{ g} \times 100^\circ \text{C} \times \frac{1 \text{ kJ}}{1000 \text{ J}}=15.1 \text{ kJ}\end{align*}
4. \begin{align*}\Delta H_4=2.00 \text{ mol} \times \frac{40.7 \text{ kJ}}{1 \text{ mol}}=81.4 \text{ kJ}\end{align*}
5. \begin{align*}\Delta H_5=1.87 \text{ J/g}^\circ \text{C} \times 36.04 \text{ g} \times 40^\circ \text{C} \times \frac{1 \text{ kJ}}{1000 \text{ J}}=2.70 \text{ kJ} \end{align*}
\begin{align*}\Delta H_{\text{total}}= \Delta H_1+ \Delta H_2+\Delta H_3+ \Delta H_4+ \Delta H_5=113.4 \text{ kJ}\end{align*}
The total heat absorbed as the ice at -30°C is heated to steam at 140°C is 113.4 kJ. The largest absorption of heat comes during the vaporization of the liquid water.
### Review
1. Why are two different sets of units used?
2. What other units problem do you need to be aware of?
3. What would you need to know to do calculations like this for acetone?
### Notes/Highlights Having trouble? Report an issue.
Color Highlighted Text Notes
|
|
This meetup included an extensive Text Mining in R session with an Introduction to tm by Ingo Feinerer and a talk about Text Mining with Hadoop by Stefan Theussl.
After a creative break for the last month Ingo and Stefan gave great talks covering tm in greater detail after the brief introduction in February.
## Ingo Feinerer: Introduction to tm
Ingo started right away with a nice bottom-up introduction covering tm’s building blocks like Sources, Readers and Corpora. The creation of Document-TermMatrices was also motivated with a small clustering example for 3 documents.
The word cloud shown above was created from the tm package vignette as follows:
library(tm)
library(wordcloud)
uri <- sprintf("file://%s", system.file(file.path("doc", "tm.pdf"), package = "tm"))
stopifnot(all(file.exists(Sys.which(c("pdfinfo", "pdftotext")))))
tmvignette <- paste(content(corp[[1]]), collapse = "\n")
vigclean <- stripWhitespace(removePunctuation(removeNumbers(tmvignette)))
vigclean <- removeWords(vigclean, stopwords())
wordcloud(vigclean)
## Stefan Theussl: Text Mining with Hadoop
Stefan gave a solution to the problem when things (i.e. text corpora) get big using a set of Hadoop R-packages he created in collaboration with Ingo.
CRAN packages:
Documentation:
Best,
-ViennaR
|
|
# Definition:Induced Mapping on Spectra of Rings
## Definition
Let $A$ and $B$ be commutative rings with unity.
Let $f : A \to B$ be a ring homomorphism.
The induced mapping on spectra by $f$ is the mapping $f^* : \operatorname{Spec} B \to \operatorname{Spec} A$ between their spectra with:
$f^* (\mathfrak p) = f^{-1}(\mathfrak p)$,
the preimage of a prime ideal $\mathfrak p \in \operatorname{Spec} B$.
## Also denoted as
The induced map on spectra by $f$ is also denoted $\operatorname{Spec} f$; see the spectrum functor.
|
|
# When is the Demand for a Good Said to Be Inelastic? - Economics
When is the demand for a good said to be inelastic?
#### Solution
When the percentage change in demand for a good is less than the percentage change in its price, the demand for a good is inelastic.
Concept: Change in Demand
Is there an error in this question or solution?
Share
|
|
## February 23, 2018
### Higher Algebra and Mathematical Physics
#### Posted by John Baez
You all know about Homotopy Type Theory Electronic Seminar Talks. Here’s another way to cut carbon emissions: a double conference. The idea here is to have a conference in two faraway locations connected by live video stream, to reduce the amount of long-distance travel!
Even better, it’s about a great subject:
• Higher algebra and mathematical physics, August 13–17, 2018, Perimeter Institute, Waterloo, Canada, and Max Planck Institute for Mathematics, Bonn, Germany. Organized by David Ayala, Lukas Brantner, Kevin Costello, Owen Gwilliam, Andre Henriques, Theo Johnson-Freyd, Aaron Mazel-Gee, and Peter Teichner.
Higher algebra, lower carbon emissions… what more could you want?
Here’s the idea:
“Higher algebra” has become important throughout mathematics, physics, and mathematical physics, and this conference will bring together leading experts in higher algebra and its mathematical physics applications. In physics, the term “algebra” is used quite broadly: any time you can take two operators or fields, multiply them, and write the answer in some standard form, a physicist will be happy to call this an “algebra”. “Higher algebra” is characterized by the appearance of a hierarchy of multilinear operations (e.g. A-infinity and L-infinity algebras). These structures can be higher categorical in nature (e.g. derived categories, cohomology theories), and can involve mixtures of operations and co-operations (Hopf algebras, Frobenius algebras, etc.). Some of these notions are purely algebraic (e.g. algebra objects in a category), while others are quite geometric (e.g. shifted symplectic structures).
An early manifestation of higher algebra in high-energy physics was supersymmetry. Supersymmetry makes quantum field theory richer and thus more complicated, but at the same time many aspects become more tractable and many problems become exactly solvable. Since then, higher algebra has made numerous appearances in mathematical physics, both high- and low-energy.
Here are the speakers: Mina Aganagic${}^\ast$, Damien Calaque, Tobias Dyckerhoff, Davide Gaiotto, Dennis Gaitsgory, Lotte Hollands, Lisa Jeffrey, Mathilde Marcolli${}^\ast$, Greg Moore${}^\ast$, David Nadler, Adny Neitzke, Sylvie Paycha, Joerg Teschner, Bertrand Toen and Katrin Wendland. (${}^\ast$ means “to be confirmed”.)
Participation is limited. Some financial support is available for early-career mathematicians. For more information and to apply, please visit the conference website of the institute closer to you:
If you have any questions, please write to double.conference.2018@gmail.com.
Aaron Mazel-Gee told me:
We are also interested in spreading the idea of double conferences more generally: we’re hoping that our own event’s success inspires other academic communities to organize their own double conferences. We’re hoping to eventually compile a sort of handbook to streamline the process for others, so that they can learn from our own experiences regarding the various unique challenges that organizing such an event poses. Anyways, all of this is just to say that I would be happy for you to publicize this event anywhere that it might reach these broader audiences.
So, if you’re interested in having a double conference, please contact the organizers of this one for tips on how to do it! I’m sure they’ll have better advice after they’ve actually done it. I’ve found that the technical details really matter for these things: it can be very frustrating when they don’t work correctly. Avoiding such problems requires testing everything ahead of time — under conditions that exactly match what you’re planning to do!
Posted at February 23, 2018 6:43 AM UTC
TrackBack URL for this Entry: https://golem.ph.utexas.edu/cgi-bin/MT-3.0/dxy-tb.fcgi/3021
## 1 Comment & 0 Trackbacks
### Re: Higher Algebra and Mathematical Physics
That seems to be the week for higher categories and physics. There’s a symposium (#109 here) – Higher Structures in M-Theory – organised by B. Jurčo, C. Saemann, U. Schreiber, and M. Wolf, taking place from August 12 - 18, 2018.
Posted by: David Corfield on February 23, 2018 7:46 AM | Permalink | Reply to this
Post a New Comment
|
|
# VS2010 always thinks project is out of date but nothing has changed
I have a very similar problem as described here.
I also upgraded a mixed solution of C++/CLI and C# projects from VS2008 to VS2010. And now in VS2010 one C++/CLI project always runs out of date. Even if it has been compiled and linked just before and F5 is hit the messagebox "The project is out of date. Would you like to build it?" appears. This is very annoying because the dll is very low-tiered and forces almost all projects of the solution to rebuild.
My pdb settings are set to default value (suggested solution of this problem).
Is it possible the get the reason why VS2010 forces a rebuild or thinks a project is up to date?
Any other ideas why VS2010 behaves like that?
-
For Visual Studio/Express 2010 only. See other (easier) answers for VS2012, VS2013, etc
To find the missing file(s), use info from the article Enable C++ project system logging to enable debug logging in Visual Studio and let it just tell you what's causing the rebuild:
1. Open the devenv.exe.config file (found in %ProgramFiles%\Microsoft Visual Studio 10.0\Common7\IDE\ or in %ProgramFiles(x86)%\Microsoft Visual Studio 10.0\Common7\IDE\). For Express versions the config file is named V*Express.exe.config.
2. Add the following after the </configSections> line:
<system.diagnostics>
<switches>
</switches>
</system.diagnostics>
3. Restart Visual Studio
4. Open up DbgView and make sure it's capturing debug output
5. Try to debug (hit F5 in Visual Studio)
6. Search the debug log for any lines of the form:
devenv.exe Information: 0 : Project 'Bla\Bla\Dummy.vcxproj' not up to date because build input 'Bla\Bla\SomeFile.h' is missing.
(I just hit Ctrl+F and searched for not up to date) These will be the references causing the project to be perpetually "out of date".
To correct this, either remove any references to the missing files from your project, or update the references to indicate their actual locations.
Note: If using 2012 or later then the snippet should be:
<system.diagnostics>
<switches>
</switches>
</system.diagnostics>
-
> Open up DbgView and make sure it's capturing debug output. How to make sure that capturing is started? I have the same problem with rebuild projects. But there is no any info in DebugView. I enabled first 5 option in menu 'Capture' of DebugView. (And thanks for good links in answer!) – sergtk Aug 3 '11 at 7:24
This helped us figure it out; however, we also had to delete our intermediate build directory before the last of the .H references went away -- probably to refresh the StdAfx.obj? Anyway, after deleting all the intermediate build folders and cleaning up the project files, we're also good to go. – AHelps Nov 10 '11 at 20:18
Thank you - now why isn't that in the regular output window? – Martin Beckett Oct 27 '12 at 0:40
If you are using VS2012, there is a slightly different snippet to paste in to the config file. This is linked from the original article, but just in case:Enable C++ and Javascript project system tracing VS2012 – rmaVT Dec 28 '12 at 13:24
I can confirm this approach also works with VC++ Express 2010. It might be useful to know that if you are using VC++ Express, you should be looking for the VCExpress.exe.config file, not the devenv.exe.config file. That took me a while to figure out. – StanB123 Jul 23 '13 at 7:14
In VS2012 I was able to achieve easier the same result as in the accepted solution.
I changed the option in "Tools->Options->Projects and Solutions->Build and Run->MSBuild project build output verbosity" from Minimal to Diagnostic.
Then in the build output I found the same lines by searching for "not up to date":
Project 'blabla' is not up to date. Project item 'c:\foo\bar.xml' has 'Copy to Output Directory' attribute set to 'Copy always'.
-
+1. By the way, the same setting exists in VS2010. – smirnoff Feb 14 '14 at 21:45
This also works in VS2013, where the config file tweak doesn't seem to work anymore. – Nathan Reed Apr 25 '14 at 23:32
This worked very well for me. Turned out I had a circular reference (project1 -> project2, project2 -> project1.dll), which caused most of the solution to build each time. It wasn't even in use. – Kobi Jun 24 '14 at 6:47
With C#, I could not find anything with "not up to date", the magic word seems to be "is newer than" – Pete Jul 17 '14 at 13:50
1>Project not up to date because build input 'C:\...\ReadMe.txt' is missing. :O !!?! – jozxyqk Nov 21 '14 at 10:19
This happened to me today. I was able to track down the cause: The project included a header file which no longer existed on disk.
Removing the file from the project solved the problem.
-
No, I do not have any header files that are not existant on the disk. But how were you able to track down the cause? How did you find out that there was a missing file? Perhaps I can find out something more about my problem by checking in the same way like you. – Chris U May 18 '10 at 12:35
This happened to me. A co-worker was thrashing for hours trying to figure it out; I searched SO, found your answer, and 2 minutes later we were done! – Orion Edwards May 25 '10 at 1:18
There was a different solution when this happened to me. Probably pretty obscure, but I was compiling the project from one computer, then another, and discovered I'd accidentally set the time to AM on one computer and PM on the other. The drastic time difference caused one of the computers to either always compile everything, or never to compile anything even when I modified source files. – Kyle Jul 29 '11 at 4:37
This worked for me in spite of the header file existing. Using the answer below to enable logging, it thought a header file was missing. I removed its dependency, added it back in, and minimal rebuild worked again! – Ed Bayiates Feb 15 '12 at 23:56
clock skew will cause most build systems to implode – paulm May 15 '14 at 9:23
We also ran into this issue and found how to resolve it.
The issue was as stated above "The file no longer exists on the disk."
This is not quite correct the file does exist on the disk but the .VCPROJ file is referencing the file somewhere else.
You can 'discover' this by going to the "include file view" and clicking on each include file in turn untill you find the one that VS can not find. You then ADD that file (as an existing item) and delete the reference that can not be found and everything is ok.
A valid question is: How can VS even build if it does not know where the include files are?
We think the .vcproj file has some relative path to the offending file somewhere that it does not show in the VS GUI and this accounts for why the project will actually build even though the tree-view of the includes is incorrect.
-
The reason VC can build is because they're header files -- and header files don't actually get compiled. If any of the header files are actually used by a .C/.CPP file, then and only then will the build fail. So the dependency checker (which looks for the header file) marks the project as needing a rebuild, but the actual compiler (which just ignores the list of header files) can succeed. – AHelps Nov 10 '11 at 20:20
Unbelievably... this also happens if you have a stale reference to a text file (that is NOT even part of the build anyway even if it did exist!!) in your .vcxproj file. I had generated a project with the wizard, and it included a ReadMe.txt file, which I deleted off the disk, but forgot to remove from the vcxproj. – DLRdave Apr 12 '13 at 18:45
I can't find any file that I can't open (except one, but that is on the hard drive. It says something like that kind of file can't be opened on the Visual Studio 2010 Express SKU or something like that. – Annonomus Penguin Jul 29 '13 at 21:45
What is the "include file view" and how do you get to it? – Ben May 21 '14 at 16:44
Include File View is perhaps the Include Files section in the Solution Explorer. – Jaywalker Nov 24 '14 at 14:49
The accepted answer helped me on the right path to figuring out how to solve this problem for the screwed up project I had to start working with. However, I had to deal with a very large number of bad include headers. With the verbose debug output, removing one caused the IDE to freeze for 30 seconds while outputting debug spew, which made the process go very slowly.
I got impatient and wrote a quick and dirty python script to check the (VS 2010) project files for me and output all the missing files at once, along with the filters they're located in. You can find it as a Gist here: https://gist.github.com/antiuniverse/3825678 (or this fork that supports relative paths)
Example:
D:\...> check_inc.py sdk/src/game/client/swarm_sdk_client.vcxproj
fx_cs_blood.h (cstrike\fx_cs_blood.h)
fx_cs_shared.h (..\shared\cstrike\fx_cs_shared.h)
weapon_flashbang.h (..\shared\cstrike\weapon_flashbang.h)
basepaenl.h (swarm\gameui\basepaenl.h)
...
Source code:
#!/c/Python32/python.exe
import sys
import os
import os.path
import xml.etree.ElementTree as ET
ns = '{http://schemas.microsoft.com/developer/msbuild/2003}'
#Works with relative path also
projectFileName = sys.argv[1]
if not os.path.isabs(projectFileName):
projectFileName = os.path.join(os.getcwd(), projectFileName)
filterTree = ET.parse(projectFileName+".filters")
filterRoot = filterTree.getroot()
filterDict = dict()
missingDict = dict()
for inc in filterRoot.iter(ns+'ClInclude'):
incFileRel = inc.get('Include')
incFilter = inc.find(ns+'Filter')
if incFileRel != None and incFilter != None:
filterDict[incFileRel] = incFilter.text
if incFilter.text not in missingDict:
missingDict[incFilter.text] = []
projTree = ET.parse(projectFileName)
projRoot = projTree.getroot()
for inc in projRoot.iter(ns+'ClInclude'):
incFileRel = inc.get('Include')
if incFileRel != None:
incFile = os.path.abspath(os.path.join(os.path.dirname(projectFileName), incFileRel))
if not os.path.exists(incFile):
missingDict[filterDict[incFileRel]].append(incFileRel)
for (missingGroup, missingList) in missingDict.items():
if len(missingList) > 0:
print("["+missingGroup+"]:")
for missing in missingList:
print(" " + os.path.basename(missing) + " (" + missing + ")")
-
Modified your code to support relative paths. Feel free to update your gist and remove the link to my fork! – ixe013 Oct 24 '14 at 23:19
I had a similar problem, but in my case there were no files missing, there was an error in how the pdb output file was defined: I forgot the suffix .pdb (I found out with the debug logging trick).
To solve the problem I changed, in the vxproj file, the following line:
<ProgramDataBaseFileName>MyName</ProgramDataBaseFileName>
to
<ProgramDataBaseFileName>MyName.pdb</ProgramDataBaseFileName>
-
I've deleted a cpp and some header files from the solution (and from the disk) but still had the problem.
Thing is, every file the compiler uses goes in a *.tlog file in your temp directory. When you remove a file, this *.tlog file is not updated. That's the file used by incremental builds to check if your project is up to date.
Either edit this .tlog file manually or clean your project and rebuild.
-
This was it for me! I spent hours after fixing the missing include files, STILL was out of date, logging showed inconclusive grabage for what was missing. Needed to get rid of those TLOG files! Thanks! – Ed Bayiates Oct 10 '13 at 19:15
For me it was the presence of a non-existing header file on "Header Files" inside the project. After removing this entry (right-click > Exclude from Project) first time recompiled, then directly
========== Build: 0 succeeded, 0 failed, 5 up-to-date, 0 skipped ==========
and no attempt of rebuilding without modification was done. I think is a check-before-build implemented by VS2010 (not sure if documented, could be) which triggers the "AlwaysCreate" flag.
-
I had similar problem and followed the above instructions (the accepted answer) to locate the missing files but not without scratching my head. Here is my summary of what I did. To be accurate these are not missing files since they are not required by the project to build (at least in my case) but they are references to files that don't exist on disk which are not really required.
Here is my story:
1) Under windows 7 the file is located at %ProgramFiles(x86)%\Microsoft Visual Studio 10.0\Common7\IDE\%. There are two similar files devenv.exe.config.config and devenv.exe.config. You want to change later one.
2) Under windows 7, you don't have permission to edit this file being in program files. Just copy it somewhere else (desktop) change it and than copy it back to the program files location.
3) I was trying to figure how how do I connect DbgView to IDE to see the missing files. Well you don't have to do anything. Just run it and it will capture all the messages. Make sure Capture Events menu option is selected in Capture menu which by default should be selected.
4) The DbgView will NOT display all the missing files at once (at least it didn't for me)! You would have DbgView running and than run the project in VS2010. It will prompt project out of date message, select yes to build and DbgView will show the first file that is missing or causing the rebuild. Open the project file (not solution file) in notepad and search for that file and delete it. You are better of closing your project and reopening it again while doing this delete. Repeat this process until DbgView no longer shows any files missing.
5) It's kind of helpful to set the message filter to not up to date from DbgView toolbar button or Edit > Filter/Highlight option. That way the only messages it displays are the one that has not up to date' string in it.
I had lots of files that were unnecessary references and removing them all fixed the issue following the above steps.
Second way to find all the missing files at once
There is a second way to find these files all at once but it involves (a) source control and (b) integration of it with VS2010. Using VS2010 add your project to a desired location or dummy location in source control. It will try to all all the files including those that don't exist on disk as well but referenced in the project file. Go to your source control software like perforce and it should these files which don't exist on disk in a different color scheme. Perforce shows them with a black lock on them. These are your missing references. Now you have a list of them all and you can delete all of them from your project file using notepad and your project would not complain about being out of date
-
Another simple resolving referenced by Visual Studio Forum.
Changing configuration: "TOOLS->Options->Projects and Solutions->VC++ Project Settings->Solution Explorer Mode" to "Show all files".
Then You can see all files in Solution Explorer.
Find the files marked by Yellow icon and remove them from the project.
It's OK.
-
I met this problem today, however it was a bit different. I had a cuda dll project in my solution. Compiling in a clean solution was ok, otherwise it failed and the compiler always treated the cuda dll project as not up to date.
I tried the solution from this post.
but there is no missing header file in my solution. Then I found out the reason in my case.
I have changed the project's Intermediate Directory before, although it didn't cause trouble. And now when I changed the CUDA dll Project's Intermediate Directory back to \$(Configuration)\, everything works right again.
I guess there is some minor problem between CUDA Build Customization and non-default Intermediate Directory.
-
Using VS2013 (C#), I've experimented with setting IntermediateOutputPath. If this points to a folder on a different drive then the solution, incremental building stops working - MSBuild complains that some source file is always out of date with some intermediate file (usually a PDB). See my blog post. – Robert Schmidt Mar 20 '14 at 11:03
I had this problem and found this:
http://curlybrace.blogspot.com/2005/11/visual-c-project-continually-out-of.html
Visual C++ Project continually out-of-date (winwlm.h macwin32.h rpcerr.h macname1.h missing)
### Problem:
In Visual C++ .Net 2003, one of my projects always claimed to be out of date, even though nothing had changed and no errors had been reported in the last build.
Opening the BuildLog.htm file for the corresponding project showed a list of PRJ0041 errors for these files, none of which appear on my system anywhere: winwlm.h macwin32.h rpcerr.h macname1.h
Each error looks something like this:
MyApplication : warning PRJ0041 : Cannot find missing dependency 'macwin32.h' for file 'MyApplication.rc'.
Your project may still build, but may continue to appear out of date until this file is found.
### Solution:
Include afxres.h instead of resource.h inside the project's .rc file.
The project's .rc file contained "#include resource.h". Since the resource compiler does not honor preprocessor #ifdef blocks, it will tear through and try to find include files it should be ignoring. Windows.h contains many such blocks. Including afxres.h instead fixed the PRJ0041 warnings and eliminated the "Project is out-of-date" error dialog.
-
I don't know if anyone else has this same problem, but my project's properties had "Configuration Properties" -> C/C++ -> "Debug Information Format" set to "None", and when I switched it back to the default "Program Database (/Zi)", that stopped the project from recompiling every time.
-
+1 this works for me as well, on Visual Studio 2013. Specifically, when I switch it back to None, it works fine again as well. – Mehrdad Feb 16 at 20:55
Most build systems use data time stamps to determine when rebuilds should happen - The date/time stamp of any output files is checked against the last modified time of the dependencies - if any of the dependencies are fresher, then the target is rebuilt.
This can cause problems if any of the dependencies somehow get an invalid data time stamp as its difficult for the time stamp of any build output to ever exceed the timestamp of a file supposedly created in the future :P
-
Is it possible the get the reason why VS2010 forces a rebuild or thinks a project is uptodate? – Chris U May 4 '10 at 10:20
In VS6 or perhaps VS2005 there was a weird little property dialog one would get when right clicking on a project that had tabs showing the dependencies, and outputs, of each file in a project. I don't know how to get the equivalent report in VS2008 (or VS2010) – Chris Becke May 4 '10 at 11:53
I had a similar issue with VS2005, my solution consisted of five projects in the following dependancy (first built at top):
Video_Codec depends on nothing
Generic_Graphics depends on Video_Codec
SpecificAPI_Graphics depends on Generic_Graphics
Engine depends on Specific_Graphics
Application depends on Engine.
I was finding that the Video_Codec project wanted a full build even after a full clean then rebuild of the solution.
I fixed this by ensuring the pdb` output file of both the C/C++ and linker matched the location used by the other working projects. I also switched RTTI on.
-
In my case one of the projects contains multiple IDL files. The MIDL compiler generates a DLL data file called 'dlldata.c' for each of them, regardless of the IDL file name. This caused Visual Studio to compile the IDL files on every build, even without changes to any of the IDL files.
The workaround is to configure a unique output file for each IDL file (the MIDL compiler always generates such a file, even if the /dlldata switch is omitted):
• Right-click the IDL file
• Select Properties - MIDL - Output
• Enter a unique file name for the DllData File property
-
I spent many hours spent tearing out my hair over this. The build output wasn't consistent; different projects would be "not up to date" for different reasons from one build to the next consecutive build. I eventually found that the culprit was DropBox (3.0.4). I junction my source folder from ...\DropBox into my projects folder (not sure if this is the reason), but DropBox somehow "touches" files during a build. Paused syncing and everything is consistently up-to-date.
-
The .net projects are always recompiled regardless. Part of this is to keep the IDE upto date (such as intellisense). I remember asking this question on an MS forum years ago and this was the answer I was given.
-
In VS2008 the project did not rebuild everytime. This is very annoying because the dll is very low-tiered and forces almost all of my dlls to rebuild. Something went wrong on migration and I cannot figure out what. – Chris U May 4 '10 at 6:23
2008, 2010, 2012 and 2013 do no rebuilt .NET projects everytime – paulm May 15 '14 at 9:10
## protected by Community♦Oct 25 '13 at 15:44
Thank you for your interest in this question. Because it has attracted low-quality answers, posting an answer now requires 10 reputation on this site.
|
|
# Octonions to the Rescue
## Xavier Dahan and Jean-Pierre Tillich’s Octonion-based Ramanujan Graphs with High Girth.
Update (February 2012): Non associative computations can be trickier than we expect. Unfortunately, the paper by Dahan and Tillich turned out to be incorrect.
Update: There is more to be told about the background of the new exciting paper. In particular, I would like to tell you more about regular graphs with high girth. (I started below.) The Ramanujan graphs story is, of course, also fascinating so at the very least I should give good links.
Michael Atiyah‘s lecture at IAS physics last Friday was entertaining, educational and quite provocative.
The talk started with the following thesis: There are four fundamental forces of nature and there are four division rings over the reals. The real numbers, complex numbers, Quaternions and the Octonions. Atiyah expects that the Octonions will play a major role in physics and will allow a theory which accounts for gravitation. He described some specific steps in this direction and related ideas and connections. At the end of the talk, Atiyah’s thesis looked more plausible than in the beginning. His concluding line was: “you can regard what I say as nonsense, or you can claim that you know it already, but you cannot make these two claims together.” In any case, it looks that the people in the audience were rather impressed by and sympathetic to the Octonionic ideas of this wise energetic scientific tycoon.
The same day I received an email from Nati Linial. The subject was: “a good topic for your blog” and the email contained just a single link.
http://arxiv.org/PS_cache/arxiv/pdf/1011/1011.2642v1.pdf
Nati is my older academic brother and often I regard our relations as similar to typical relations between older and younger (biological) brothers. When he tells me what to do I often rebel, but usually at the end I do as he says and most of the times he is right.
So I waited a couple of hours before looking at the link. Indeed, 1011.2642v1.pdf is a great paper. It uses Octonions in place of Quaternions for the construction of Ramanujan graphs and describes a wonderful breakthrough in creating small graphs with large girth. Peter Sarnak’s initial reaction to the new paper was: “wow”.
Here is a link to a paper entitled “Octonions” by John Baez, that appeared in Bull. AMS.
### Some background:
Let $G$ be a $k$-regular graph with girth $g$ where $g$ is an odd integer. Continue reading
|
|
# [Dev-luatex] Four suggestions
Hans Hagen pragma at wxs.nl
Mon Sep 3 20:57:55 CEST 2007
Javier Múgica de Rivera wrote:
> Yes, I do have red that at this stage suggestions will not really be
> usefull, but the fact that what would be my top suggestion wasn't
> present in the manual from April but already appeared in the June one
> made me think that my ideas weren't bad at all and could actually be
> usefull (I was thinking about the dinamic allocation of memory).
it's not a matter of 'useful' but more one of priorities .. the next
couple of years luatex dev will focus on core features and especially
opening up the engine; once that part is stable it's time to start
thinking of adding fucntionality but even then, chances are good that it
will be helper functions so that one can use lua coee to implement
his/her solutions
so, feel free to post ideas, if only because then they're archived somehow
> 1. Currently, when TeX computes the space between two lines it looks
> at the depth of the line above and the height of the line below. It may
> happen, specially if math is being inserted inline, that the character
> that causes the maximum depth in the above line is far away from the one
> that accounts for the maximum height in the line below, causing TeX to
> insert unnecessary white space. I think it could be a good idea to
> compute the skip between lines taking into account the actual position
> of characters and boxes.
> Given that a high ascendent looks too close to a deep descendent not
> only in the case when the last is on top of the former, but also when it
> is near it, a primitive parameter (length) like \charsurround could be
> Before thinking on how this could be implemented, there is a very
> important side effect, that deserves attention on its own:
>
> Look at §109 of TeX: The Program:
>
> "When TeX 'packages' a list into a box, it needs to calculate the
> proportionality ratio by which the glue inside the box should stretch or
> shrink. **This calculation does not affect TeX's decision making**, so
> the precise details of rounding, etc., in the glue calculation are not
> of critical importance for the consistency of results on different
> computers."
>
> If my idea for the computation of the line skip is to be
> implemented, this assertion is not valid any more. Whether it will be
> implemented or not, now that LuaTeX provides access to the internal
> structures of TeX it is very likely that the assertion is not valid
> anyway, or will not be in the near future. Therefore, I think that §109
> should be rewritten to deal with glue ratios using fixed-point arithmetic.
>
> As for the implementation of my idea for the line skip, this is what
> I thought: TeX has to compute the minimum separation between two lines:
> the above-line and the below-line. It would first generate a profile for
> each line: a tooth-like bottom profile for the above-line, and a top
> profile for the below-line. Then it could operate fast on those two
> profiles, and thus compute efficiently the minimum separation that is
> present between the lines, taking into account \charsurround.
>
> * * *
you can do that by using callbacks that process the node list
> 2. Other idea I would like to propose is making \phantom a
> primitive. The idea behind \phantom is that it is as if you actually
> typed the material, but it finally does not show up, as if it were
> printed with invisible ink. The current definition of \phantom does not
> fulfil this idea. In particular, it generates Ord items in math. This is
> specially anoying for delimiters. It has some other drawbacks; it kills
> the stretching and shrinking of glue, so the "phantommed" material may
> not actually occupy the same space as real material, possibly in a line
> just above or below, resulting in misalignments.
> Now that nodes can have attributes, maybe could phantomness be one
> of them.
>
> * * *
more control over math is definitely on the agenda but first we need to
open up the math machinery (later this year/ beginning next year)
btw, luatex will not define attributes itself; it's just a mechanism to
let information travel along with nodes; indeed you can reserve an
attribute (one of many) for phantomness but you need to write node
manipulation code to do something useful with it
> 3. Yet another thing (I sent this some time ago to the pdfdtex list,
> the one discussing the extensions to TeX. This is the place where this
> message belongs to. Sorry to those who will read it twice): An
> intermediate scope of assignment, between \local and \global. The idea
> is that the assignment be still in force in the innermost group that
> encloses that which we are in. For example, if the primitive is to be
> named \regional,
>
>
> \def\foo{a}{\def\foo{b}{\regional\def\foo{c}\show\foo}\show\foo}\show\foo
>
> the two first \show\foo would display 'c', while the third would display
> 'a'.
>
> By now, instead of thinking "how?" you are probably thinking "why?".
> I found this very useful in tree scenarios. The first one is to write
> function-like macros. Suppose that we want a macro to perform some
> operations and return the value, then we could do
>
> \def\myfunction{{% notice the aditional grouping
> %
> %Perform some operations, possibly long. Everyting is local, so we
> may use scratch registers
> %and the control sequences we wish, provided we are sure that the
> output routine will not be invoked.
> %
> %Now, return the vaule
> \regional\myregister\myregister %Or \regional\myregister\dimen0,
> or \regional\foo{whatever}, or...
> }%
> }
for a long time we've been thinking of a feature like that, where
results can be carried over one (or more groups); it has a low priority
but it's definitely on the agenda (has even been prototyped by taco a
couple of years ago)
> Another situation is inside alignments. It often happens, or at least
> it often happens to me, that we want to change the behavour of \\ for
> the current row we are in; or for the entire alignment but still taking
> advantage of other programmed features (for examlple, suppose we have
> \begin{matrix} ... \end{matrix}, or \begin{mymatrix} ... \end{mymatrix},
> or \mytable{ ... }, and we want to change just \\,).
> Since TeX puts each alignment entry inside a group by itself, it is
> impossible to perform such changes locally, since right after
> \begin{matrix} or \mytable{, we are already at the first entry. It will
> not usually be solvable with aftergroup, and it is likely that we cannot
> do the cange before entering the table, because the macro may itself
> redefine \\.
> The third situation, and it may be the one that arises more often,
> is when passign some information to the group enclosign the one we are
> in. It is currently solved by using global (e.g., \global\st at rredtrue),
> but that is not save.
>
> * * *
alligments ... as with math we need more control there too
> 4. Just one more. The todo list includes "A way to (re?)calculate
> the width of a \vbox, taking only the natural width of the included
> items into account". What about the depth of a \hbox where items have
> been raised? Try
>
> \setbox0\hbox{\raise 10pt\hbox{a}}. \showthe\dp0
>
> You will get \dp0=0pt. It is specially anoying when raising a box that
> would otherwise be the one causing the maximum depth of the enclosing
> box (for example, denominators).
the width calculation mentioned in the todo concerns the natural width
of the longest line in a (often ragged) paragraph
concerning your raise example ... i would not expect a (positive or
negative) depth there anyway since that's how raise is supposed to work
taking the skyline of a line into account is one of the things that node
list processing should make possible; once we have more control over
line breaks (for that the hyphenation engine will be opened up) these
things can be investigated indeed
Hans
-----------------------------------------------------------------
|
|
On some extensions of the FKN theorem
Revised: September 19, 2015
Published: December 29, 2015
$\newcommand{\Var}{{\mathrm {Var}}}$
Let $S=a_{1}r_{1}+a_{2}r_{2}+\ldots+a_{n}r_{n}$ be a weighted Rademacher sum. Friedgut, Kalai, and Naor have shown that if $\Var(|S|)$ is much smaller than $\Var(S)$, then the sum is largely determined by one of the summands. We provide a simple and elementary proof of this result, strengthen it, and extend it in various ways to a more general setting.
|
|
# What is the intuitive relationship between SVD and PCA?
Singular value decomposition (SVD) and principal component analysis (PCA) are two eigenvalue methods used to reduce a high-dimensional data set into fewer dimensions while retaining important information. Online articles say that these methods are 'related' but never specify the exact relation.
What is the intuitive relationship between PCA and SVD? As PCA uses the SVD in its calculation, clearly there is some 'extra' analysis done. What does PCA 'pay attention' to differently than the SVD? What kinds of relationships do each method utilize more in their calculations? Is one method 'blind' to a certain type of data that the other is not?
• SVD and PCA and "total least-squares" (and several other names) are the same thing. It computes the orthogonal transform that decorrelates the variables and keeps the ones with the largest variance. There are two numerical approaches: one by SVD of the (centered) data matrix, and one by Eigen decomposition of this matrix "squared" (covariance).
– user65203
Jun 10 '14 at 8:21
• Here is a link to a very similar thread on CrossValidated.SE: Relationship between SVD and PCA. How to use SVD to perform PCA? It covers similar grounds to J.M.'s answer (+1 by the way), but in somewhat more detail. Jan 24 '15 at 23:28
• how-to-find-straight-line-minimizing-the-sum-of-squares-of-euclidean-distances-f on stats.stackexchange has some links on the relationship between orthogonal regression and PCA. Aug 30 '15 at 12:44
• Wow 400 upvotes Mar 1 at 3:09
(I assume for the purposes of this answer that the data has been preprocessed to have zero mean.)
Simply put, the PCA viewpoint requires that one compute the eigenvalues and eigenvectors of the covariance matrix, which is the product $$\frac{1}{n-1}\mathbf X\mathbf X^\top$$, where $$\mathbf X$$ is the data matrix. Since the covariance matrix is symmetric, the matrix is diagonalizable, and the eigenvectors can be normalized such that they are orthonormal:
$$\frac{1}{n-1}\mathbf X\mathbf X^\top=\frac{1}{n-1}\mathbf W\mathbf D\mathbf W^\top$$
On the other hand, applying SVD to the data matrix $$\mathbf X$$ as follows:
$$\mathbf X=\mathbf U\mathbf \Sigma\mathbf V^\top$$
and attempting to construct the covariance matrix from this decomposition gives $$\frac{1}{n-1}\mathbf X\mathbf X^\top =\frac{1}{n-1}(\mathbf U\mathbf \Sigma\mathbf V^\top)(\mathbf U\mathbf \Sigma\mathbf V^\top)^\top = \frac{1}{n-1}(\mathbf U\mathbf \Sigma\mathbf V^\top)(\mathbf V\mathbf \Sigma\mathbf U^\top)$$
and since $$\mathbf V$$ is an orthogonal matrix ($$\mathbf V^\top \mathbf V=\mathbf I$$),
$$\frac{1}{n-1}\mathbf X\mathbf X^\top=\frac{1}{n-1}\mathbf U\mathbf \Sigma^2 \mathbf U^\top$$
and the correspondence is easily seen (the square roots of the eigenvalues of $$\mathbf X\mathbf X^\top$$ are the singular values of $$\mathbf X$$, etc.)
In fact, using the SVD to perform PCA makes much better sense numerically than forming the covariance matrix to begin with, since the formation of $$\mathbf X\mathbf X^\top$$ can cause loss of precision. This is detailed in books on numerical linear algebra, but I'll leave you with an example of a matrix that can be stable SVD'd, but forming $$\mathbf X\mathbf X^\top$$ can be disastrous, the Läuchli matrix:
$$\begin{pmatrix}1&1&1\\ \epsilon&0&0\\0&\epsilon&0\\0&0&\epsilon\end{pmatrix}^\top,$$
where $$\epsilon$$ is a tiny number.
• To give a Mathematica example: A=SparseArray[{{i_, 1} -> 1, {i_, j_} /; i + 1 == j :> $MachineEpsilon}, {3, 4}]; and then compare Sqrt[Eigenvalues[a.Transpose[a]]] and SingularValueList[a,Tolerance->0]. Sep 2 '10 at 14:13 • Note that in practice, the columns of$W$and$U$(the principal components via the eigendecomposition versus singular value decomposition) may differ from each other by a factor of -1. Jan 16 '14 at 14:11 • @J.M. - It is a bit unclear if the data matrix consists of row vectors or column vectors maybe could be good to mention so there is no misunderstanding. Jan 21 '14 at 20:17 • @J. M., for completeness what is the mathematical relation between the W matrix defined in your PCA explanation and the U matrix defined in your SVD explanation? Sep 4 '14 at 10:39 • This was a little confusing in that normally the data matrix has n rows of samples of data with d dimensions along columns, like a least squares design matrix. If that is true then the covariance is$X^TX$, and the SVD result is$V\Sigma V^T\$. I was also confused by the lack of normalization initially. But altogether a pretty clear explanation. Mar 3 '15 at 22:43
A tutorial on Principal Component Analysis by Jonathon Shlens is a good tutorial on PCA and its relation to SVD. Specifically, section VI: A More General Solution Using SVD.
The question boils down to whether you what to subtract the means and divide by standard deviation first. The same question arises in the context of linear and logistic regression. So I'll reason by analogy.
In many problems our features are positive values such as counts of words or pixel intensities. Typically a higher count or a higher pixel intensity means that a feature is more useful for classification/regression. If you subtract the means then you are forcing features with original value of zero to have a negative value which is high in magnitude. This entails that you make the features values that are non-important to the problem of classification (previously zero valued) as influential as the most important features values (the ones that have high counts or pixel intensities).
The same reasoning holds for PCA. If your features are least sensitive (informative) towards the mean of the distribution, then it makes sense to subtract the mean. If the features are most sensitive towards the high values, then subtracting the mean does not make sense.
SVD does not subtract the means but often as a first step projects the data on the mean of all data points. In this way the SVD first takes care of global structure.
• I think this answer may be a bit misleading. The fact that zero value numbers will be mapped to negative numbers of large magnitude after subtracting means doesn't mean that their influence on a statistical model is increased. Deviation from the mean is the information used by many (perhaps most?) statistical models to fit curves, sort items, etc. If you are concerned about a feature with a long distribution tail (e.g. counts), then there are ways of transforming that data (e.g. add 1 and take the log) so it plays nice with models based on symmetric distributions. Jan 27 '16 at 4:50
There is a way to do an SVD on a sparse matrix that treats missing features as missing (using gradient search). I don't know any way to do PCA on a sparse matrix except by treating missing features as zero.
|
|
This site is supported by donations to The OEIS Foundation.
# Benford's law
Please do not rely on any information it contains.
Given a finite set of numerical data, it would be expected that the digits 1 through 9 would occur as first digits with roughly equal probability. However, in many cases, the digit 1 occurs as the first digit most often, with roughly 30% probability, and the other digits occur with decreasing frequency.
This is called Benford's law, after Frank Benford, who noticed a curious pattern of usage in logarithm tables. Benford went on to also notice this unexpected disparity in many different sets of data, including atomic weights, baseball statistics and street addresses.
A selection of house numerals at Brooks Lumber in Detroit. Notice the newer packaging for the 1s, 2s and 7s.
The effects of Benford's law can be observed even in those cases that might not necessarily reflect a difference in usage between first digits and other digits of a number. Presumably, building contractors know about Benford's law and know whether it applies to the addresses of the properties they're building. Hardware stores, on the other hand, might not know for what addresses replacement house numerals are being bought, and a given numeral can just as easily be used for the last digit of a house address as it can be for the first. It is still nevertheless a good idea to stock more 1s and 2s than other numerals.
## Prime numbers and Benford's law
Note: The prime numbers do not satisfy Benford's law (Daniel I. A. Cohen and Talbot M. Katz, "Prime numbers and the first digit phenomenon," J. Number Theory 18 (1984), 261-268; A. Berger and T. P. Hill, What is Benford's Law?, Notices, Amer. Math. Soc., 64:2 (2017), 132-134.)
Street addresses are different from prime numbers in that, even if we include all the world's addresses, we are still dealing with a finite set, while there are infinitely many primes.[1] So, to simplify things for ourselves, in this article, we will only consider primes up to some threshold. And, so as to give the digits 2 to 9 a fair shake, we will set that threshold at a power of 10.
We see that going only up to 10, 1 as a first digit gets off to a lousy start, since we don't consider 1 prime anymore (for reasons that are beyond the scope of this article). One might be tempted to temporarily regard 1 as prime for the sake of a handicap, but just going up to 100 will show that to be unnecessary.
1 2 3 4 5 6 7 8 9 Up to 10 0 1 1 0 1 0 1 0 0 Up to 100 4 3 3 3 3 2 4 2 1 Up to 1000 25 19 19 20 17 18 18 17 15 Up to 10000 160 146 139 139 131 135 125 127 127 Up to 10 5 1193 1129 1097 1069 1055 1013 1027 1003 1006 Up to 10 6 9585 9142 8960 8747 8615 8458 8435 8326 8230 Up to 10 7 80020 77025 75290 74114 72951 72257 71564 71038 70320 Up to 10 8 686048 664277 651085 641594 633932 628206 622882 618610 614821 Up to 10 9 6003530 5837665 5735086 5661135 5602768 5556434 5516130 5481646 5453140 Cf.: A073517 A073516 A073515 A073514 A073513 A073512 A073511 A073510 A073509
However, as we go further up, the advantage of 1 as a first digit gradually erodes.
Another way to look at this is to see what is the 50th prime number to start with a given digit ${\displaystyle d}$. Now, 50 is a bit arbitrary: the choice was arrived at by a rough estimate of term visibility for the relevant OEIS sequence entries.
${\displaystyle d}$ 50th ${\displaystyle p}$ 1 1171 A045707 2 2243 A045708 3 3259 A045709 4 4231 A045710 5 5297 A045711 6 6269 A045712 7 7309 A045713 8 8291 A045714 9 9311 A045715
If for these 50th primes we chop off the leading digit, we see that the resulting number for 9 is significantly higher than the one for 1.
But some reflection upon the prime number theorem suggests that as we look at higher and higher powers of 10, the distribution of first digits will become more or less uniform, leading to the conclusion that Benford's law does not actually apply to prime numbers.
## References
• Daniel I. A. Cohen and Talbot M. Katz, "Prime numbers and the first digit phenomenon," J. Number Theory 18 (1984), 261-268.
|
|
## Shortcuts and document type.
### Shortcuts and document type.
how can i make a shortcut like cmd (or super) + i to grab the selected text and replace it with \textit{selected text} ? and is it possible to do this shortcut work only when the document is defined as a latex document?
thank you.
bgustavo
Posts: 2
Joined: Sun Apr 03, 2011 4:22 pm
### Re: Shortcuts and document type.
Sorry for the short answer, but take a look at:
http://sublimetext.info/docs/extensibil ... ppets.html
Paraphrasing Obama, yes you can!
guillermooo
Posts: 739
Joined: Thu Jul 23, 2009 9:06 am
### Re: Shortcuts and document type.
Please be patient... the LaTeXTools plugin *will* have this functionality. FYI you can grab the most up-to-date version at GitHub:
https://github.com/SublimeText/LaTeXTools
CTRL+I (CMD-I on the Mac) is coming soon for sure. In the meantime, you can use the autocompletion system. IF you type
em<TAB>
you will get \emph{<CURSOR>}; a <TAB> will exit the braces. Similarly, "it" gives you "\textit{}" in text mode and "\mathit{}" in math mode; "bf" gives you "\textbf{}". It's pretty cool!
Marciano
Posts: 146
Joined: Fri Sep 18, 2009 2:55 am
### Re: Shortcuts and document type.
thanks for the quick replies. will look into that.
bgustavo
Posts: 2
Joined: Sun Apr 03, 2011 4:22 pm
|
|
problem
Astronomers often examine star maps where stars are represented by points on a plane and each star has Cartesian coordinates. Let the level of a star be an amount of the stars that are not higher and not to the right of the given star. Astronomers want to know the distribution of the levels of the stars. For example, look at the map shown on the figure above. Level of the star number 5 is equal to 3 (it's formed by three stars with a numbers 1, 2 and 4). And the levels of the stars numbered by 2 and 4 are 1. At this map there are only one star of the level 0, two stars of the level 1, one star of the level 2, and one star of the level 3. You are to write a program that will count the amounts of the stars of each level on a given map.
Input
The first line of the input file contains a number of stars N (1<=N<=15000). The following N lines describe coordinates of stars (two integers X and Y per line separated by a space, 0<=X,Y<=32000). There can be only one star at one point of the plane. Stars are listed in ascending order of Y coordinate. Stars with equal Y coordinates are listed in ascending order of X coordinate.
Output
The output should contain N lines, one number per line. The first line contains amount of stars of the level 0, the second does amount of stars of the level 1 and so on, the last line contains amount of stars of the level N-1.
Sample Input
5
1 1
5 1
7 1
3 3
5 5
Sample Output
1
2
1
1
0
Hint
This problem has huge input data,use scanf() instead of cin to read data to avoid time limit exceed.
代码示例
#include<iostream>
#include<cstdio>
#include<algorithm>
using namespace std;
const int maxn=35000;
typedef long long ll;
typedef pair<int,int >pir;
int node[maxn];
int a[maxn];
pir c[maxn];
int n;
inline int lowbit(int x)
{
return x&(-x);
}
{
for(int i=a;i<=maxn-1;i+=lowbit(i)){//没有离散化
node[i]+=b;
}
}
int sum(int m)//前缀和
{
int res=0;
for(int i=m;i;i-=lowbit(i)){
res+=node[i];
}
return res;
}
int level[maxn];
int main()
{
//freopen("in.txt","r",stdin);
ios::sync_with_stdio(false);
cin>>n;
for(int i=1;i<=n;++i){
cin>>c[i].first>>c[i].second;
c[i].second++;
}
sort(c+1,c+n+1);
// for(int i=1;i<=n;++i){
// cout<<c[i].first<<' '<<c[i].second<<endl;
// }
for(int i=1;i<=n;++i){
a[i]=c[i].second;
}
for(int i=1;i<=n;++i){
level[sum(a[i])]++;
//cout<<sum(a[i])<<endl;
}
for(int i=0;i<=n-1;++i){
cout<<level[i]<<endl;
}
return 0;
}
|
|
In this video, you will learn how to do a reflection over an axis, such as the x-axis or y-axis.
To reflect a shape over an axis, you can either match the distance of a point to the axis on the other side of using the reflection notation.
To match the distance, you can count the number of units to the axis and plot a point on the corresponding point over the axis.
You can also negate the value depending on the line of reflection where the x-value is negated if the reflection is over the y-axis and the y-value is negated if the reflection is over the x-axis.
Either way, the answer is the same thing.
For example:
Triangle ABC with coordinate points A(1,2), B(3,5), and C(7,1). Determine the coordinate points of the image after a reflection over the x-axis.
Since the reflection applied is going to be over the x-axis, that means negating the y-value. As a result, points of the image are going to be:
A'(1,-2), B'(3,-5), and C'(7,-1)
By counting the units, we know that point A is located two units above the x-axis. Count two units below the x-axis and there is point A’. Do the same for the other points and the points are also
A'(1,-2), B'(3,-5), and C'(7,-1)
Reflection Notation:
rx-axis = (x,-y)
ry-axis = (-x,y)
## Video-Lesson Transcript
In this lesson, we’ll go over reflections on a coordinate system. This will involve changing the coordinates.
For example, try to reflect over the $x$-axis.
We have triangle $ABC$ with coordinates
$A (1, 2)$
$B (3, 5)$
$C (7, 1)$
We’re going to reflect it over the $x$-axis. We’re going to flip it over.
So we’ll do what we normally do. Just one point at a time.
Now, $A$ is above $2$ units from the $x$-axis so we’ll move it below the $x$-axis by $2$ units.
This will be the $A^\prime (1, -2)$.
Let’s do the same for $B$. It’s $5$ units above the $x$-axis so we’re going to go $5$ units below the $x$-axis. Notice that it’s still in line with $x = 3$.
This is now $B^\prime (3, -5)$.
Look at point $C$ at $7, 1$. It’s $1$ point above the $x$-axis so we’ll go $1$ point below the $x$-axis.
So, $C^\prime (7, -1)$.
And just connect the points. Then we can see our reflection over the $x$-axis.
When we reflect over the $x$-axis, something happens to the coordinates.
The initial coordinates $(x, y)$ change. The $x$ coordinate stays the same but the $y$ coordinate is the same number but now it’s negative.
$(x, y) \rightarrow (x, -y)$
In reflecting over the $x$-axis, we’ll write
$r_{x-axis}$
Now, the same thing goes for reflecting over the $y$-axis.
We’re going to reflect triangle $ABC$ over the $y$-axis.
$r_{y-axis} \triangle{ABC}$
Similar to reflecting over the $x$-axis, we’ll just do one point at a time.
$A (1, 2)$
$B (3, 5)$
$C (7, 1)$
$A$ is $1$ unit from the $y$-axis so we’ll move $1$ beyond the $y$-axis.
So, $A^\prime (-1, 2)$.
Let’s look at $B$ at $(3, 5)$. That means it’s $3$ units from the $y$-axis so we’ll move $3$ coordinates on the other side of the $y$-axis.
Now, $B^\prime (-3, 5)$.
Finally, $C$ is at $(7, 1)$ so we’ll go $7$ points beyond the $y$-axis.
We’ll have $C^\prime (-7, 1)$.
Now, we can draw a triangle that is a reflection of triangle $ABC$ over the $y$-axis.
Let’s look at how these coordinates changed.
Originally we have coordinates $(x, y)$ but $x$ became negative while $y$ stayed the same.
$(x, y) \rightarrow (-x, y)$
Let’s recap.
The rule of reflecting over the $x$-axis is
$r_{x-axis} (x, y) \rightarrow (x, -y)$
And for reflecting over the $y$-axis is
$r_{y-axis} (x, y) \rightarrow (-x, y)$
If you reflect it over the $x$-axis, $x$ coordinate stays the same the other coordinate becomes negative.
And reflecting over the $y$-axis, $y$ coordinate stays the same while the other coordinate becomes negative.
|
|
# Well defined probability
Given the following probabilistic model
where $u$ are users of a search engine, $c$ are categories where the queries that $u$ search ($q$) and webs sites that they visit ($w$) are classified.
My question is if the following has necessarily has to hold for $p(c\mid q)$ to be a probability:
$$\displaystyle\sum_{i=1}^n\displaystyle\sum_{j=1}^m p(c_i\mid q_j) = 1.$$
-
If $p(c\mid q)$ is a probability, then the following condition must be true: $$\displaystyle\sum_{i=1}^n\displaystyle\sum_{j=1}^m p(c_i\mid q_j) = 1.$$
As stated, that's not correct. For a fixed value of $q$, $p(c\mid q)$ is a probability on $c$. An informal definition of this is "The probability of $c$ occurring given that $q$ has already occurred." So, the following is true: $$\displaystyle\sum_{i=1}^n p(c_i\mid q_j) = 1.$$
|
|
6. Process or Product Monitoring and Control
6.4. Introduction to Time Series Analysis
6.4.3. What is Exponential Smoothing?
## Example of Triple Exponential Smoothing
Example comparing single, double, triple exponential smoothing This example shows comparison of single, double and triple exponential smoothing for a data set.
The following data set represents 24 observations. These are six years of quarterly data (each year has four quarters).
Table showing the data for the example
Quarter Period Sales Quarter Period Sales
90 1 1 362 93 1 13 544
2 2 385 2 14 582
3 3 432 3 15 681
4 4 341 4 16 557
91 1 5 382 94 1 17 628
2 6 409 2 18 707
3 7 498 3 19 773
4 8 387 4 20 592
92 1 9 473 95 1 21 627
2 10 513 2 22 725
3 11 582 3 23 854
4 12 474 4 24 661
Plot of raw data with single, double, and triple exponential forecasts
Plot of raw data with triple exponential forecasts
Actual Time Series with forecasts
Comparison of MSEs
Comparison of MSEs
MSE $$\alpha$$
demand
$$\gamma$$
trend
$$\beta$$
seasonality
6906 0.4694
5054 0.1086 1.0000
936 1.0000 1.0000
520 0.7556 0.0000 0.9837
The updating coefficients were chosen by a computer program such that the MSE for each of the methods was minimized.
Example of the computation of the Initial Trend
Computation of initial trend The data set consists of quarterly sales data. The season is 1 year and since there are 4 quarters per year, $$L=4$$. Using the formula we obtain: $$\begin{eqnarray} b_1 & = & \frac{1}{4} \left[ \left( \frac{y5-y1}{4} \right) + \left( \frac{y6-y2}{4} \right) + \left( \frac{y7-y3}{4} \right) + \left( \frac{y8-y4}{4} \right) \right] \\ & & \\ & & \\ & = & \frac{1}{4} \left[ \left( \frac{382-362}{4} \right) + \left( \frac{409-385}{4} \right) + \left( \frac{498-432}{4} \right) + \left( \frac{387-341}{4} \right) \right] \\ & & \\ & & \\ & = & \frac{5 + 6 + 16.5 + 11.5}{4} = 9.75 \, . \end{eqnarray}$$
Example of the computation of the Initial Seasonal Indices
Table of initial seasonal indices
1 2 3 4 5 6
1 362 382 473 544 628 627
2 385 409 513 582 707 725
3 432 498 582 681 773 854
4 341 387 474 557 592 661
$$\bar{X}$$ 380 419 510.5 591 675 716.75
In this example we used the full 6 years of data. Other schemes may use only 3, or some other number of years. There are also a number of ways to compute initial estimates.
|
|
In the previous post, we did two examples of proving average-case fine-grained hardness via worst-case to average-case reductions. In this post, I want to continue the discussion of counting $t$-cliques (in particular, on Erdős–Rényi graphs) to showcase a new technique, which builds on the general recipe and the example of counting $t$-cliques described in the previous post. In the next post, I will discuss how this new technique can be applied to some other combinatorial problems.
Counting $t$-cliques in Erdős–Rényi graph. A strong follow-up [BBB19] of the result [GR18] we discussed in the previous post shows that there is an $\widetilde{O}(n^2)$-time reduction from counting $t$-cliques in any $n$-vertex graph to counting $t$-cliques with error probability $<\frac{1}{\log^{O(1)} n}$ in Erdős–Rényi graph (whereas the sampable distribution of the random graph in [GR18] is somewhat unnatural). The key idea is a decomposition lemma which says for sufficiently large prime $p$ and $k=O(\log(p)\log(p/\varepsilon))$, for any constants $0 (for our application, these constants will be equal), given independent Bernoulli random variables $y_{\ell}\sim\textrm{Bern}(p^{(\ell)})$, the distribution of $\sum_{\ell=0}^k 2^{\ell}y_{\ell}\mod p$ is close to the uniform distribution over $\mathbf{F}_{p}$, i.e., the statistical distance is less than $\varepsilon$ (later when we apply this lemma, we want $\varepsilon$ to be $1/\textrm{poly}(n)$, and therefore $k=O(\log p\cdot(\log p+\log n))$ ). We skip the proof of this lemma which is a nice application of basic Fourier analysis.
As in the previous post, $f_{t\textrm{-clique}}$ denotes a constructed polynomial that computes the number of $t$-cliques when the input is an adjacency matrix of a graph. The step 3 of the general recipe from the previous post reduces counting $t$-cliques for the worst-case graph to evaluating $f_{t\textrm{-clique}}(Y)$ on $d+1$ many uniformly random $Y\in\mathbf{F}_{p}^{n^2}$ (recall in the previous post, $d$ is the degree of the polynomial $f_{t\textrm{-clique}}$, and $p$ is $O(t\log n)$ after the Chinese remaindering trick). Based on the decomposition lemma, using standard sampling scheme (this is essentially rejection sampling, which I will not go into the details, but I just want to mention that we would like the $\varepsilon$ in the decomposition lemma to be $1/\textrm{poly}(n)$ such that the sampling scheme succeeds w.h.p. by a few attempts), we can further reduce evaluating $f_{t\textrm{-clique}}(Y)$ on uniformly random $Y\in\mathbf{F}_{p}^{n^2}$ to evaluating $f_{t\textrm{-clique}}(\sum_{\ell=0}^k 2^{\ell} Y_{\ell})$, where each $Y_{\ell}$ is a random 0-1 valued matrix that is statistically close to the adjacency matrix of an Erdős–Rényi graph. Now, if we can “pull out” the weighted sum in $f_{t\textrm{-clique}}(\sum_{\ell=0}^k 2^{\ell} Y_{\ell})$, then we are done, because evaluating $f_{t\textrm{-clique}}$ on the adjacency matrix of an Erdős–Rényi graph is precisely counting $t$-cliques for Erdős–Rényi graph.
When can we “pull out” the weighted sum for a polynomial $f(\sum_{\ell=0}^k 2^{\ell} Y_{\ell})$? One answer is when the polynomial is $d$-partite.
An $m$-variate polynomial $f(x_1,\dots,x_m)$ is $d$-partite if there is a partition of the set of variables $\dot{\bigcup}_{j\in[d]} S_j=[m]$ such that $f$ is the sum of monomials in which each monomial contains exactly one variable from each part $S_j$ (more formally, $f(x_1,\dots,x_m)=\sum_{(i_1,i_2,\dots,i_d)\in S}\prod_{j\in[d]}x_{i_j}$ for some $S\subseteq S_1\times S_2\times\dots\times S_d$).
For $d$-partite polynomial $f$, it is not hard to show that
$f(\sum_{\ell=0}^k 2^{\ell}y_{1,\ell},\dots,\sum_{\ell=0}^k 2^{\ell}y_{m,\ell})=\sum_{\ell_1=0}^k\sum_{\ell_2=0}^k\dots\sum_{\ell_d=0}^k 2^{\ell_1+\dots+\ell_d}\cdot f(y_{1,\ell_1},\dots,y_{m,\ell_m}).$
(Essentially, because two variables from the same $S_i$ never appear in the same monomial, we can enumerate variables from the same $S_i$ in the same order.)
Let us think of each $y_{i,\ell}$ as a coordinate of Erdős–Rényi adjacency matrix $Y_{\ell}$, then $f(y_{1,\ell_1},\dots,y_{m,\ell_m})$ is evaluating $f$ on an ensemble of distinct coordinates of $Y_{\ell}$‘s, and such ensemble is obviously Erdős–Rényi as well. Therefore, we have managed to decompose $f(\sum_{\ell=0}^k 2^{\ell} Y_{\ell})$ into sum of $k^d$ many $f(Y^{(\ell_1,\dots,\ell_d)})$‘s where each $Y^{(\ell_1,\dots,\ell_d)}$ denotes an Erdős–Rényi adjacency matrix. In the next paragraph, we will construct a $d=\binom{t}{2}$-partite polynomial $f_{t\textrm{-clique}}$ for counting $t$-cliques, and therefore, we have reduced computing $f_{t\textrm{-clique}}(\sum_{\ell=0}^k 2^{\ell} Y_{\ell})$ to computing $k^{\binom{t}{2}}=\log^{O(1)} n$ many $f_{t\textrm{-clique}} (Y^{(\ell_1,\dots,\ell_d)})$‘s, which is a mild blow-up of the number of the random instances which the reduction needs to solve. (In general, we consider the reduction to be efficient when the number of random instances it needs to solve is $n^{o(1)}$, and hence, as long as $d=o(\log n/\log \log n)$, we are good to go.)
Unfortunately, the $f_{t\textrm{-clique}}$ given in the previous post does not work. Instead, we first reduce counting $t$-cliques in any graph to counting $t$-cliques in a $t$-partite graph, and then we construct a $\binom{t}{2}$-partite polynomial $f_{t\textrm{-clique}}$ for counting $t$-cliques in a $t$-partite graph. Reduction from counting $t$-cliques in any graph to counting $t$-cliques in a $t$-partite graph is standard. Simply consider the tensor product between the original graph and another $t$-clique. The number of $t$-cliques in the tensor product graph is exactly $t!$ times that in the original graph. Now given the $t$-paritite graph, let $V_i$ denote the $i$-th part of vertices, and let $X^{(i,j)}$ (for $i) denote the adjacency matrix between $V_i$ and $V_j$. Consider the new polynomial $f_{t\textrm{-clique}}(X^{(1,2)},X^{(1,3)},\dots,X^{(t-1,t)}):=\sum_{v_1\in V_1}\sum_{v_2\in V_2}\dots\sum_{v_t\in V_t}\prod_{(i,j)\in\binom{[t]}{2}} X_{v_i,v_j}^{(i,j)}$, where $X_{v_i,v_j}^{(i,j)}$ denotes the coordinate that indicates if there is an edge between $v_i,v_j$. Observe that $f_{t\textrm{-clique}}$ counts $t$-cliques by picking one vertex for each part and checking if these $t$ vertices form a clique. It is indeed $\binom{t}{2}$-partite as each $X^{(i,j)}$ corresponds to a part of variables.
New recipe for worst-case to average-case reductions. Let us take a minute to think about what structural properties of counting $t$-cliques we have used in the entire reduction except for constructing $f_{t\textrm{-clique}}$.
The answer is none! The only part specific to counting $t$-cliques was cooking up a $d$-partite polynomial for $d=o(\log n/\log \log n)$ (let us call such polynomial “good”) that encodes this problem. The reduction we showed above works as long as we can construct such “good” polynomial for a problem. Therefore, a new recipe, which was explicitly formulated in [DLW20] for proving average-case (here “average-case” means Erdős–Rényi random input model) fine-grained hardness for a problem $L:\{0,1\}^n\to\mathbf{Z}_{\ge 0}\cap \textrm{poly}(n)$ in P, is
1. Construct a “good” polynomial $f_{L}$ on $\mathbf{F}_p^n$, where $p=\textrm{poly}(n)$, such that $f_{L}(x)=L(x)$ for all $x\in\{0,1\}^n$.
Short and sweet.
In the final post, I will present another instantiation of this new recipe.
Acknowledgements. I would like to thank my quals committee — Aviad Rubinstein, Tselil Schramm, Li-Yang Tan for valuable feedback to my quals talk.
|
|
## Talks
Wednesday, July 14, 16:00 ~ 16:30 UTC-3
## Cyclic quadrilaterals and smooth Jordan curves
### Joshua Greene
#### Boston College, USA - This email address is being protected from spambots. You need JavaScript enabled to view it. document.getElementById('cloak4693e66d0fa4afa82d586a5e6da88e1c').innerHTML = ''; var prefix = 'ma' + 'il' + 'to'; var path = 'hr' + 'ef' + '='; var addy4693e66d0fa4afa82d586a5e6da88e1c = 'joshua.greene' + '@'; addy4693e66d0fa4afa82d586a5e6da88e1c = addy4693e66d0fa4afa82d586a5e6da88e1c + 'bc' + '.' + 'edu'; var addy_text4693e66d0fa4afa82d586a5e6da88e1c = 'joshua.greene' + '@' + 'bc' + '.' + 'edu';document.getElementById('cloak4693e66d0fa4afa82d586a5e6da88e1c').innerHTML += '<a ' + path + '\'' + prefix + ':' + addy4693e66d0fa4afa82d586a5e6da88e1c + '\'>'+addy_text4693e66d0fa4afa82d586a5e6da88e1c+'<\/a>';
I will discuss the context and proof of the following result: for every smooth Jordan curve and for every four points on a circle, there exists an orientation-preserving similarity taking the four points onto the curve. The proof involves symplectic geometry in a surprising way.
Joint work with Andrew Lobb (Durham University).
View abstract PDF
Wednesday, July 14, 16:40 ~ 17:10 UTC-3
## Taut foliations from double-diamond replacements
### Rachel Roberts
#### Washington University in St Louis, United States - This email address is being protected from spambots. You need JavaScript enabled to view it. document.getElementById('cloak1f1a380998cfbc6c228cec9467154d90').innerHTML = ''; var prefix = 'ma' + 'il' + 'to'; var path = 'hr' + 'ef' + '='; var addy1f1a380998cfbc6c228cec9467154d90 = 'roberts' + '@'; addy1f1a380998cfbc6c228cec9467154d90 = addy1f1a380998cfbc6c228cec9467154d90 + 'wustl' + '.' + 'edu'; var addy_text1f1a380998cfbc6c228cec9467154d90 = 'roberts' + '@' + 'wustl' + '.' + 'edu';document.getElementById('cloak1f1a380998cfbc6c228cec9467154d90').innerHTML += '<a ' + path + '\'' + prefix + ':' + addy1f1a380998cfbc6c228cec9467154d90 + '\'>'+addy_text1f1a380998cfbc6c228cec9467154d90+'<\/a>';
Suppose $M$ is an oriented 3-manifold with connected boundary a torus, and suppose $M$ contains a properly embedded, compact, oriented, surface $R$ with a single boundary component that is Thurston norm minimizing in $H_2(M, \partial M)$. We define a readily recognizable type of sutured manifold decomposition, which for notational reasons we call double-diamond taut, and show that if $R$ admits a double-diamond taut sutured manifold decomposition, then for every boundary slope except one, there is a co-oriented taut foliation of $M$ that intersects $\partial M$ transversely in a foliation by curves of that slope. In the case that $M$ is the complement of a knot $\kappa$ in $S^3$, the exceptional filling is the meridional one; in particular, restricting attention to rational slopes, it follows that every manifold obtained by non-trivial Dehn surgery along $\kappa$ admits a co-oriented taut foliation. As an application, we show that if $R$ is a Murasugi sum of surfaces $R_1$ and $R_2$, where $R_2$ is an unknotted band with an even number $2m\ge 4$ of half-twists, then every manifold obtained by non-trivial surgery on $\kappa= \partial R$ admits a co-oriented taut foliation.
Joint work with Charles Delman (Eastern Illinois University).
View abstract PDF
Wednesday, July 14, 17:20 ~ 17:50 UTC-3
## The Gordon-Litherland pairing for links in thickened surfaces
### Hans Boden
#### McMaster University, Canada - This email address is being protected from spambots. You need JavaScript enabled to view it. document.getElementById('cloak85fc2537c9c8414b704fb7b371f56097').innerHTML = ''; var prefix = 'ma' + 'il' + 'to'; var path = 'hr' + 'ef' + '='; var addy85fc2537c9c8414b704fb7b371f56097 = 'boden' + '@'; addy85fc2537c9c8414b704fb7b371f56097 = addy85fc2537c9c8414b704fb7b371f56097 + 'mcmaster' + '.' + 'ca'; var addy_text85fc2537c9c8414b704fb7b371f56097 = 'boden' + '@' + 'mcmaster' + '.' + 'ca';document.getElementById('cloak85fc2537c9c8414b704fb7b371f56097').innerHTML += '<a ' + path + '\'' + prefix + ':' + addy85fc2537c9c8414b704fb7b371f56097 + '\'>'+addy_text85fc2537c9c8414b704fb7b371f56097+'<\/a>';
We introduce the Gordon-Litherland pairing for knots and links in thickened surfaces that bound unoriented spanning surfaces. Using the GL pairing, we define signature and determinant invariants and relate them to invariants derived from the Tait graph and Goeritz matrices. These invariants depend only on the $S^*$ equivalence class of the spanning surface, and the determinants give a simple criterion to check if the knot or link has minimal genus. The GL pairing is isometric to the relative intersection pairing on a 4-manifold obtained as the 2-fold cover along the surface. Time permitting, we will explain how to use the GL pairing to give a topological characterization of alternating links in thickened surfaces, extending the results of Josh Greene and Josh Howie.
Joint work with Micah Chrisman (Ohio State University, Marion) and Homayun Karimi (McMaster University).
View abstract PDF
Wednesday, July 14, 18:00 ~ 18:30 UTC-3
## On classification of genus $g$ knots which admit a $(1,1)$-decomposition
### Fabiola Manjarrez-Gutiérrez
#### UNAM, México - This email address is being protected from spambots. You need JavaScript enabled to view it. document.getElementById('cloakf3e918b497f46016e14c150ffac818ec').innerHTML = ''; var prefix = 'ma' + 'il' + 'to'; var path = 'hr' + 'ef' + '='; var addyf3e918b497f46016e14c150ffac818ec = 'fabiola.manjarrez' + '@'; addyf3e918b497f46016e14c150ffac818ec = addyf3e918b497f46016e14c150ffac818ec + 'im' + '.' + 'unam' + '.' + 'mx'; var addy_textf3e918b497f46016e14c150ffac818ec = 'fabiola.manjarrez' + '@' + 'im' + '.' + 'unam' + '.' + 'mx';document.getElementById('cloakf3e918b497f46016e14c150ffac818ec').innerHTML += '<a ' + path + '\'' + prefix + ':' + addyf3e918b497f46016e14c150ffac818ec + '\'>'+addy_textf3e918b497f46016e14c150ffac818ec+'<\/a>';
Given an oriented minimal genus Seifert surface $F'$ for a $(1,1)$-knot $K$ it is possible to surger $F'$ along annuli to obtain a simple minimal Seifert surface $F$. Such a surface can be put in a very nice position with respect to the $(1,1)$-position of the knot $K$. Using this kind of surfaces we give a description of a $(1,1)$-knot of genus $g$ as a vertical banding of $(1,1)$-knots of genus smaller than $g$. In addition, we show that any rational knot of genus $g$ is obtained as a vertical banding of $g$ genus one rational knots.
Joint work with Mario Eudave-Muñoz (UNAM) and Enrique Ramírez-Losada (CIMAT).
View abstract PDF
Wednesday, July 14, 18:40 ~ 19:10 UTC-3
## Meridionally essential one-sided spanning surfaces
### Joshua Howie
#### University of California, Davis, USA - This email address is being protected from spambots. You need JavaScript enabled to view it. document.getElementById('cloak8fa5a95b306362a11874e52f8dbdede2').innerHTML = ''; var prefix = 'ma' + 'il' + 'to'; var path = 'hr' + 'ef' + '='; var addy8fa5a95b306362a11874e52f8dbdede2 = 'jahowie' + '@'; addy8fa5a95b306362a11874e52f8dbdede2 = addy8fa5a95b306362a11874e52f8dbdede2 + 'ucdavis' + '.' + 'edu'; var addy_text8fa5a95b306362a11874e52f8dbdede2 = 'jahowie' + '@' + 'ucdavis' + '.' + 'edu';document.getElementById('cloak8fa5a95b306362a11874e52f8dbdede2').innerHTML += '<a ' + path + '\'' + prefix + ':' + addy8fa5a95b306362a11874e52f8dbdede2 + '\'>'+addy_text8fa5a95b306362a11874e52f8dbdede2+'<\/a>';
The geography problem for spanning surfaces asks for a classification of all pairs of Euler characteristic and slope which can be realised by a spanning surface for a given knot in the 3-sphere. It is enough to understand the meridionally essential one-sided spanning surfaces, a somewhat larger class of surfaces than the geometrically essential spanning surfaces. We will discuss the existence of such one-sided surfaces, and give an algorithmic solution to the geography problem.
View abstract PDF
Wednesday, July 14, 19:20 ~ 19:50 UTC-3
## Satellites and Lorenz knots
### Jessica Purcell
#### Monash University, Australia - This email address is being protected from spambots. You need JavaScript enabled to view it. document.getElementById('cloak66327509f737365aabcbd7ab476e25eb').innerHTML = ''; var prefix = 'ma' + 'il' + 'to'; var path = 'hr' + 'ef' + '='; var addy66327509f737365aabcbd7ab476e25eb = 'jessica.purcell' + '@'; addy66327509f737365aabcbd7ab476e25eb = addy66327509f737365aabcbd7ab476e25eb + 'monash' + '.' + 'edu'; var addy_text66327509f737365aabcbd7ab476e25eb = 'jessica.purcell' + '@' + 'monash' + '.' + 'edu';document.getElementById('cloak66327509f737365aabcbd7ab476e25eb').innerHTML += '<a ' + path + '\'' + prefix + ':' + addy66327509f737365aabcbd7ab476e25eb + '\'>'+addy_text66327509f737365aabcbd7ab476e25eb+'<\/a>';
We construct infinitely many families of Lorenz knots that are satellites but not cables, giving counterexamples to a conjecture attributed to Morton. We amend the conjecture to state that Lorenz knots that are satellite have companion a Lorenz knot, and pattern equivalent to a Lorenz knot. We show this amended conjecture holds very broadly: it is true for all Lorenz knots obtained by high Dehn filling on a parent link, and other examples.
Joint work with Thiago de Paiva (Monash University, Australia).
View abstract PDF
Thursday, July 15, 16:00 ~ 16:30 UTC-3
## Prime quasi-alternating links are atoroidal
### Cameron Gordon
#### University of Texas at Austin, USA - This email address is being protected from spambots. You need JavaScript enabled to view it. document.getElementById('cloak7850e67b5e9917296cef3eaba44dfb09').innerHTML = ''; var prefix = 'ma' + 'il' + 'to'; var path = 'hr' + 'ef' + '='; var addy7850e67b5e9917296cef3eaba44dfb09 = 'gordon' + '@'; addy7850e67b5e9917296cef3eaba44dfb09 = addy7850e67b5e9917296cef3eaba44dfb09 + 'math' + '.' + 'utexas' + '.' + 'edu'; var addy_text7850e67b5e9917296cef3eaba44dfb09 = 'gordon' + '@' + 'math' + '.' + 'utexas' + '.' + 'edu';document.getElementById('cloak7850e67b5e9917296cef3eaba44dfb09').innerHTML += '<a ' + path + '\'' + prefix + ':' + addy7850e67b5e9917296cef3eaba44dfb09 + '\'>'+addy_text7850e67b5e9917296cef3eaba44dfb09+'<\/a>';
A classical result of Menasco is that a prime non-split alternating link is either hyperbolic or a (2,q)-torus link. In 2005 Ozsvath and Szabo introduced the class of quasi-alternating links, which (properly) contains the non-split alternating links. We prove that Menasco's result holds for this more general class: a prime quasi-alternating link is either hyperbolic or a (2,q)-torus link.
Joint work with Steve Boyer (University of Quebec at Montreal) and Ying Hu (University of Nebraska Omaha).
View abstract PDF
Thursday, July 15, 16:40 ~ 17:10 UTC-3
## The L-space conjecture and toroidal 3-manifolds
### Steven Boyer
#### UQAM, Canada - This email address is being protected from spambots. You need JavaScript enabled to view it. document.getElementById('cloak1cfcf76d7399c0bfdd560a60ccb6292f').innerHTML = ''; var prefix = 'ma' + 'il' + 'to'; var path = 'hr' + 'ef' + '='; var addy1cfcf76d7399c0bfdd560a60ccb6292f = 'boyer.steven' + '@'; addy1cfcf76d7399c0bfdd560a60ccb6292f = addy1cfcf76d7399c0bfdd560a60ccb6292f + 'uqam' + '.' + 'ca'; var addy_text1cfcf76d7399c0bfdd560a60ccb6292f = 'boyer.steven' + '@' + 'uqam' + '.' + 'ca';document.getElementById('cloak1cfcf76d7399c0bfdd560a60ccb6292f').innerHTML += '<a ' + path + '\'' + prefix + ':' + addy1cfcf76d7399c0bfdd560a60ccb6292f + '\'>'+addy_text1cfcf76d7399c0bfdd560a60ccb6292f+'<\/a>';
Hanselman, Rasmussen and Watson have characterised closed, toroidal, non-L-space 3-manifolds expressed as the union of manifolds with incompressible torus boundaries in terms of the gluing map. We discuss analogous results on the left-orderabilty of their fundamental groups suggested by the L-space conjecture together with some applications.
Joint work with Cameron McA. Gordon (University of Texas at Austin) and Ying Hu (University of Nebraska at Omaha).
View abstract PDF
Thursday, July 15, 17:20 ~ 17:50 UTC-3
## Dynamical systems on hyperbolic groups
### Yo'av Rieck
#### University of Arkansas , USA - This email address is being protected from spambots. You need JavaScript enabled to view it. document.getElementById('cloakb031f9850a9f0c024c3d8994053cbd1d').innerHTML = ''; var prefix = 'ma' + 'il' + 'to'; var path = 'hr' + 'ef' + '='; var addyb031f9850a9f0c024c3d8994053cbd1d = 'yoav' + '@'; addyb031f9850a9f0c024c3d8994053cbd1d = addyb031f9850a9f0c024c3d8994053cbd1d + 'uark' + '.' + 'edu'; var addy_textb031f9850a9f0c024c3d8994053cbd1d = 'yoav' + '@' + 'uark' + '.' + 'edu';document.getElementById('cloakb031f9850a9f0c024c3d8994053cbd1d').innerHTML += '<a ' + path + '\'' + prefix + ':' + addyb031f9850a9f0c024c3d8994053cbd1d + '\'>'+addy_textb031f9850a9f0c024c3d8994053cbd1d+'<\/a>';
Let $G$ be an infinite hyperbolic group.
By a \em dynamical system \em on $G$ we mean an action of $G$ on a compact space $X$. The most commonly studied (and best understood) type of dynamical system, called SFT (subshift of finite type), is given by a closed, $G$-invariant subspace $X \subset A^G$, where $A$ is any finite set. We will explain these terms in the talk and show why an SFT on $G$ is essentially a tiling'' of $G$.
Gromov studied SFT's on $G$ in his original paper about hyperbolic groups, and much work on the subject was done by Coornaert and Papadopoulos. In particular, $G$ admits an SFT that can be used to study its action on its boundary.
A non-empty SFT is called \em strongly aperiodic \em if the stabilizer of every point is trivial. The question of which finitely generated, infinite groups admits a strongly aperiodic SFT has a long history, dating back to the foundational work of Wang and Berger in the 60's. Few groups are known not to admit one, and many are known to admit one; however, until the current work, the only hyperbolic groups that were known to admit a strongly aperiodic SFT were surface groups (Cohen and Goodman-Strauss).
In this talk we will describe the construction of a strongly aperiodic SFT when $G$ is one-ended, which is the key for the following result:
{\bf Theorem.} An infinite hyperbolic group admits a strongly aperiodic SFT if and only if it is one-ended.
Time permitting we will discuss further development related to this construction.
Joint work with David Bruce Cohen and Chaim Goodman--Strauss.
View abstract PDF
Thursday, July 15, 18:00 ~ 18:30 UTC-3
## On non almost-fibered knots
### Araceli Guzmán Tristán
#### CIMAT Guanajuato, México - This email address is being protected from spambots. You need JavaScript enabled to view it. document.getElementById('cloak2d1607e3becae1919fd1bd2e1fc092d9').innerHTML = ''; var prefix = 'ma' + 'il' + 'to'; var path = 'hr' + 'ef' + '='; var addy2d1607e3becae1919fd1bd2e1fc092d9 = 'araceli.guzman' + '@'; addy2d1607e3becae1919fd1bd2e1fc092d9 = addy2d1607e3becae1919fd1bd2e1fc092d9 + 'cimat' + '.' + 'mx'; var addy_text2d1607e3becae1919fd1bd2e1fc092d9 = 'araceli.guzman' + '@' + 'cimat' + '.' + 'mx';document.getElementById('cloak2d1607e3becae1919fd1bd2e1fc092d9').innerHTML += '<a ' + path + '\'' + prefix + ':' + addy2d1607e3becae1919fd1bd2e1fc092d9 + '\'>'+addy_text2d1607e3becae1919fd1bd2e1fc092d9+'<\/a>';
An almost-fibered knot is a knot whose complement possesses a circular thin position in which there is one and only one weakly incompressible Seifert surface and one incompressible Seifert surface. Infinite examples of almost-fibered knots are known. In this talk, we will show the existence of infinitely many hyperbolic genus one knots that are not almost-fibered.
Joint work with Mario Eudave Muñoz (Instituto de Matemáticas, UNAM) and Enrique Ramírez Losada (CIMAT, Guanajuato).
View abstract PDF
Thursday, July 15, 18:40 ~ 19:10 UTC-3
## Berge Conjecture for tunnel number one knots
### Tao Li
#### Boston College, U.S.A. - This email address is being protected from spambots. You need JavaScript enabled to view it. document.getElementById('cloak7cc618d094726ab9256d6b218de3c0c2').innerHTML = ''; var prefix = 'ma' + 'il' + 'to'; var path = 'hr' + 'ef' + '='; var addy7cc618d094726ab9256d6b218de3c0c2 = 'taoli' + '@'; addy7cc618d094726ab9256d6b218de3c0c2 = addy7cc618d094726ab9256d6b218de3c0c2 + 'bc' + '.' + 'edu'; var addy_text7cc618d094726ab9256d6b218de3c0c2 = 'taoli' + '@' + 'bc' + '.' + 'edu';document.getElementById('cloak7cc618d094726ab9256d6b218de3c0c2').innerHTML += '<a ' + path + '\'' + prefix + ':' + addy7cc618d094726ab9256d6b218de3c0c2 + '\'>'+addy_text7cc618d094726ab9256d6b218de3c0c2+'<\/a>';
Let $K$ be a tunnel number one knot in $M$, where $M$ is either $S^3$, $S^2\times S^1$, or a connected sum of $S^2\times S^1$ with a lens space. We prove that if a Dehn surgery on $K$ yields a lens space, then $K$ is a doubly primitive knot in $M$. For $M = S^3$ this resolves the tunnel number one Berge Conjecture. For $M = S^2\times S^1$ this resolves a conjecture of Greene and Baker-Buck-Lecuona for tunnel number one knots.
Joint work with Yoav Moriah (Technion, Israel) and Tali Pinsky (Technion, Israel).
View abstract PDF
Thursday, July 15, 19:20 ~ 19:50 UTC-3
## Embeddability in $\mathbb R^3$ is NP-hard
### Eric Sedgwick
#### DePaul University, United States - This email address is being protected from spambots. You need JavaScript enabled to view it. document.getElementById('cloaka13c23385045ba1089d3f06ab821b2f0').innerHTML = ''; var prefix = 'ma' + 'il' + 'to'; var path = 'hr' + 'ef' + '='; var addya13c23385045ba1089d3f06ab821b2f0 = 'esedgwick' + '@'; addya13c23385045ba1089d3f06ab821b2f0 = addya13c23385045ba1089d3f06ab821b2f0 + 'cdm' + '.' + 'depaul' + '.' + 'edu'; var addy_texta13c23385045ba1089d3f06ab821b2f0 = 'esedgwick' + '@' + 'cdm' + '.' + 'depaul' + '.' + 'edu';document.getElementById('cloaka13c23385045ba1089d3f06ab821b2f0').innerHTML += '<a ' + path + '\'' + prefix + ':' + addya13c23385045ba1089d3f06ab821b2f0 + '\'>'+addy_texta13c23385045ba1089d3f06ab821b2f0+'<\/a>';
We prove that the problem of deciding whether a 2–or 3–dimensional simplicial complex embeds into $\mathbb R^3$ is NP-hard. This stands in contrast with the lower dimensional cases which can be solved in linear time, and a variety of computational problems in $\mathbb R^3$ like unknot or 3–sphere recognition which are in NP ∩ co-NP (assuming the generalized Riemann hypothesis). Our reduction encodes a satisfiability instance into the embeddability problem of a 3–manifold with boundary tori, and relies extensively on techniques from low-dimensional topology, most importantly Dehn fillings on link complements.
Joint work with Arnaud de Mesmay (CNRS, GIPSA-Lab, France), Yo'av Rieck (University of Arkansas, USA) and Martin Tancer (Charles University, Czech Republic)..
View abstract PDF
Friday, July 23, 16:00 ~ 16:30 UTC-3
## Generalizing classical knot invariants
### Maggy Tomova
#### The University of Iowa, United States - This email address is being protected from spambots. You need JavaScript enabled to view it. document.getElementById('cloak2aa5737979de73c91b3324766a08fc30').innerHTML = ''; var prefix = 'ma' + 'il' + 'to'; var path = 'hr' + 'ef' + '='; var addy2aa5737979de73c91b3324766a08fc30 = 'maggy-tomova' + '@'; addy2aa5737979de73c91b3324766a08fc30 = addy2aa5737979de73c91b3324766a08fc30 + 'uiowa' + '.' + 'edu'; var addy_text2aa5737979de73c91b3324766a08fc30 = 'maggy-tomova' + '@' + 'uiowa' + '.' + 'edu';document.getElementById('cloak2aa5737979de73c91b3324766a08fc30').innerHTML += '<a ' + path + '\'' + prefix + ':' + addy2aa5737979de73c91b3324766a08fc30 + '\'>'+addy_text2aa5737979de73c91b3324766a08fc30+'<\/a>';
Abstract: Tunnel number and knot width are well-known and very useful invariants. They are however not additive. In this talk, I will present joint work with Scott Taylor of generalizations of these invariants that are additive.
Joint work with Scott Taylor (Colby College).
View abstract PDF
Friday, July 23, 16:40 ~ 17:10 UTC-3
## Instantons and Knot Concordance
### Juanita Pinzón Caicedo
#### University of Notre Dame, USA - This email address is being protected from spambots. You need JavaScript enabled to view it. document.getElementById('cloakb4f35919d18c922a73714f9b9f73b647').innerHTML = ''; var prefix = 'ma' + 'il' + 'to'; var path = 'hr' + 'ef' + '='; var addyb4f35919d18c922a73714f9b9f73b647 = 'jpinzonc' + '@'; addyb4f35919d18c922a73714f9b9f73b647 = addyb4f35919d18c922a73714f9b9f73b647 + 'nd' + '.' + 'edu'; var addy_textb4f35919d18c922a73714f9b9f73b647 = 'jpinzonc' + '@' + 'nd' + '.' + 'edu';document.getElementById('cloakb4f35919d18c922a73714f9b9f73b647').innerHTML += '<a ' + path + '\'' + prefix + ':' + addyb4f35919d18c922a73714f9b9f73b647 + '\'>'+addy_textb4f35919d18c922a73714f9b9f73b647+'<\/a>';
Knot concordance can be regarded as the study of knots as boundaries of surfaces embedded in spaces of dimension 4. Specifically, two knots $K_0$ and $K_1$ are said to be smoothly concordant if there is a smooth embedding of the annulus $S^1 \times [0, 1]$ into the cylinder'' $S^3 \times [0, 1]$ that restricts to the given knots at each end. Smooth concordance is an equivalence relation, and the set $\mathcal{C}$ of smooth concordance classes of knots is an abelian group with connected sum as the binary operation. The algebraic structure of $\mathcal{C}$, the concordance class of the unknot, and the set of knots that are topologically slice but not smoothly slice are much studied objects in low-dimensional topology. Gauge theoretical results on the nonexistence of certain definite smooth 4-manifolds can be used to better understand these objects. In particular, the study of anti-self dual connections on 4-manifolds can be used to shown that (1) the group of topologically slice knots up to smooth concordance contains a subgroup isomorphic to $\mathbb{Z}^\infty$, and (2) satellite operations that are similar to cables are not homomorphisms on $\mathcal{C}$.
Joint work with Matt Hedden (Michigan State University, USA) and Tye Lidman (North Carolina State University, USA).
View abstract PDF
Friday, July 23, 17:20 ~ 17:50 UTC-3
## Flows, growth rates and veering triangulations
### Yair Minsky
#### Yale University, United States - This email address is being protected from spambots. You need JavaScript enabled to view it. document.getElementById('cloak763768f37aa029662364148ee5602185').innerHTML = ''; var prefix = 'ma' + 'il' + 'to'; var path = 'hr' + 'ef' + '='; var addy763768f37aa029662364148ee5602185 = 'yair.minsky' + '@'; addy763768f37aa029662364148ee5602185 = addy763768f37aa029662364148ee5602185 + 'yale' + '.' + 'edu'; var addy_text763768f37aa029662364148ee5602185 = 'yair.minsky' + '@' + 'yale' + '.' + 'edu';document.getElementById('cloak763768f37aa029662364148ee5602185').innerHTML += '<a ' + path + '\'' + prefix + ':' + addy763768f37aa029662364148ee5602185 + '\'>'+addy_text763768f37aa029662364148ee5602185+'<\/a>';
The work of Thurston, Fried, McMullen, Mosher, Fenley and others weaves together a rich picture of fibrations and flows in 3-manifolds, linking growth rates of orbits, dilatations of pseudo-Anosov maps, and Thurston's norm on homology. Agol and Gueritaud introduced veering triangulations, which are ideal triangulations associated with (certain) pseudo-Anosov flows. We use these triangulations to construct a polynomial invariant that extends McMullen's Teichmuller polynomial from suspension flows to the more general setting. We develop a combinatorial model for the flow which, with the polynomial, permits explicit computations of growth rates of orbits in naturally defined subsets of the flow. As an application we obtain, for a fibered 3-manifold, a description of the limit set of the dilatations of fibrations belonging to a fibered face of Thurston's norm. This is joint work with Michael Landry and Sam Taylor.
View abstract PDF
Friday, July 23, 18:00 ~ 18:30 UTC-3
## The Strong Slope Conjecture for Mazur pattern satellite knots
### Kimihiko Motegi
#### Nihon University, Japan - This email address is being protected from spambots. You need JavaScript enabled to view it. document.getElementById('cloak0dce540d48dc489044a3374318d188aa').innerHTML = ''; var prefix = 'ma' + 'il' + 'to'; var path = 'hr' + 'ef' + '='; var addy0dce540d48dc489044a3374318d188aa = 'motegi.kimihiko' + '@'; addy0dce540d48dc489044a3374318d188aa = addy0dce540d48dc489044a3374318d188aa + 'nihon-u' + '.' + 'ac' + '.' + 'jp'; var addy_text0dce540d48dc489044a3374318d188aa = 'motegi.kimihiko' + '@' + 'nihon-u' + '.' + 'ac' + '.' + 'jp';document.getElementById('cloak0dce540d48dc489044a3374318d188aa').innerHTML += '<a ' + path + '\'' + prefix + ':' + addy0dce540d48dc489044a3374318d188aa + '\'>'+addy_text0dce540d48dc489044a3374318d188aa+'<\/a>';
The Slope Conjecture proposed by Garoufalidis asserts that the degree of the colored Jones polynomial determines a boundary slope, and its refinement, the Strong Slope Conjecture proposed by Kalfagianni and Tran asserts that the linear term in the degree determines the topology of an essential surface that satisfies the Slope Conjecture. Under certain hypotheses, we show that a Mazur pattern satellite knots satisfy the Strong Slope Conjecture if the original knot does. Consequently, combining with previous results, any knot obtained by a finite sequence of cabling, connected sums, Whitehead doubling and taking Mazur pattern satellites of adequate knots (including alternating knots) or torus knots satisfies the Strong Slope Conjecture.
Joint work with Kenneth L. Baker (University of Miami) and Toshie Takata (Kyushu University).
View abstract PDF
Friday, July 23, 18:40 ~ 19:10 UTC-3
## Guaranteed-quality triangular meshes
### Joel Hass
All surfaces can be triangulated, but in applications one seeks triangulations that have nice regularity properties. For computer graphics, finite elements, morphing, image recognition and other uses, one would like to have triangles whose angles are bounded away from zero degrees, and even better as close to 60 degrees as possible. I will talk here about a new algorithm developed jointly with Maria Trnkova that improves previously obtained bounds. It produces a triangulation, or mesh, with all angles in the interval $[35.2^o, 101.5^o$].
View abstract PDF
Friday, July 23, 19:20 ~ 19:50 UTC-3
|
|
### Vector Displays The Sketchpad, developed at MIT’s Lincoln Lab,...
Below you can see a picture of a a IBM 709 along with the included ...
The resulting line on the raster screen will be something like this...
The **Newton-Raphson** method is an iterative root-finding algorith...
This iterative method of finding square roots has been known since ...
Finding the square root of a number $V$ is equivalent to using Newt...
I think $G_n+1$ is a misprint for $G_{n+1}$.
Here is a simple implementation of the Bresenham’s algorithm in Jav...
### Floating Point Numbers You can think of floating point numbe...
Given that $x$ is the fraction and $a$ is the characteristic: $$\s... In _Figure 3_ it would be better to say that the circle is circumsc... There is a long history of computing more and more precise approxim... Using a rapidly converging series: \pi = \sqrt{12}\left(1-\frac{1... Professor John von Neumann was one of the people that expressed int... ### Modern computation of π In the early 20th century the Indian m... Of course the second identity immediately implies the first by sett... The following three algorithms don’t have much in common except for two things: They were implemented between 1949 and 1965— during the so-called first generation of comput- ing hardware—and they are good examples of rigorous thinking. First, I explain Bresenham’s algorithm and apply it to a fundamental process of computer graphics—drawing a line on a raster screen. Second, although there are sever- al ways to calculate a square root, the square root algorithm is interesting because it takes advantage of hardware design. And, finally, while the calculation of π goes back more than 2,200 years, Machin’s algorithm from 1706 is of importance to the history of computing. Bresenham’s algorithm Cathode-ray tube (CRT) displays for com- puters were developed principally along two lines, vector and raster, until the 1970s, when memory became inexpensive. Initially, the advantage of a vector display was that it could draw a line by passing its end points to analog circuitry, which moved an electron beam across the CRT. To draw a line on a raster display, on the other hand, required calculating the loca- tions of every point that the line passed through and illuminating each one. Raster technology won out when it became economically feasible to draw a display in a dedicated memory and refresh the CRT from it. Even though raster computation is more com- plicated, vector display time varied with the complexity of the diagram, making realistic animation, for example, impossible. In the 1950s, the IBM 704 and 709 comput- ers included a CRT display with an addressable raster of 1,024 × 1,024 dots. Once illuminated, a dot would persist on the screen for about two seconds. The CPU—excessively slow by today’s standards—controlled the display, which had no refresh buffer. The 704, for example, had a 12-microsecond cycle, meaning that the fastest instruction executed in 24 microseconds. Floating-point calculations were much slower; a floating divide instruction took 18 cycles, or 216 microseconds. Each dot was activated by the program’s building a 36-bit word containing the X- and Y- address of the dot, with the X-address in bits 8–17 and the Y-address in bits 26–35 and “copy- ing” the word to the CRT (see Figure 1). Obviously, the fastest way to refresh the screen was to create a word for each illuminated dot and to set up a “copy” loop. However, the maxi- mum memory size for the 704 was 32,000 words, including the program, so prestoring all the dots wasn’t always feasible. In actual use, the visible CRT was a “slave” to another unit in a black box with a shutterless 35-mm camera. The general purpose of this arrangement was to enable the camera to take a picture of the master screen by illuminating dots only once. Nevertheless, the process of calculat- ing the positions of dots was so slow that only a few dots could be displayed before the first one began to fade. There was a demonstration pro- gram—a tic-tac-toe game—for the display, and even this simple picture flickered badly. It was in this context, around 1960, that computer scientist Jack Bresenham 1 consid- ered how to minimize the calculation time for the dots that make up a line. For a line with a slope between 0 and 1, the Y-difference between consecutive points on a straight line is less than or equal to 1. For the line between X1,Y1 and X2,Y2, such that 0 ≤∆Y ≤∆X and X1 < X2, the X-addresses of the dots are X1, X1 + 1, X1 + 2, , X2. If X = X2 – X1 and Y = Y2 – Y1, then for each step in X, the val- ues of Y are Y1, Y1 +∆Y/X, Y1 + 2Y/X, … ,Y2; but raster point addresses are integers, so we 10 IEEE Annals of the History of Computing 1058-6180/02/17.00 © 2002 IEEE Three Early Algorithms Keith S. Reid-Green Three well-known algorithms, implemented separately within a span of 20 years, demonstrate the use of mathematics developed hundreds of years ago. Figure 1. This 36-bit word activates a dot of a raster display. S 1 2 3 4 5 6 7 8 9 1011121314151617181920212223242526272829303132333435 X-address Y-address use Y1, int(Y1 +∆Y/X), int(Y1 + 2Y/X), and so forth, without rounding. In the following implementation of Bresenham’s algorithm, Ratio and Delta are real numbers; all other variables are integers. Note that only addition and subtraction are used in the For … Next loop. DeltaX = X2 X1 DeltaY = Y2 Y1 Ratio = DeltaY / DeltaX Delta = Ratio 0.5 j = Y1 For i = X1 to X2 Call Dot (i, j) If Delta > 0 Then Delta = Delta 1.0 j = j + 1 End If Delta = Delta + Ratio Next i The For … Next loop is executed once per dot, and so this part of the code should be minimized. This loop would have taken about 60 cycles, or 720 microseconds. Because the raster size of the master CRT was 3.366 × 3.366 inches, there were about 300 dots per inch. Thus it took about 0.25 seconds to plot a line one inch long. Note that the algorithm can be improved by avoiding floating-point opera- tions entirely: DeltaX = X2 X1´ All integer variables DeltaY = Y2 Y1 Dy2 = 2 x DeltaY Ratio = Dy2 2 x DeltaX Delta = Dy2 DeltaX j = Y1 For i = X1 to X2 Call Dot (i, j) If Delta > 0 Then Delta = Delta + Ratio j = j + 1 Else Delta = Delta + Dy2 End If Next i This algorithm improves the loop’s speed by about 15 percent. Generalizing the algorithm to plot any line requires determining which X-value is the smaller, determining if the slope is 1 or > 1, and requires using X or Y, respectively, as the indexing value. A special case is if X1 = X2, but this is easy to deal with. A square root algorithm From the very beginning of computing, mathematical subroutines such as sine, cosine, logarithm, A X , square root, and so on were avail- able to programmers. The importance of opti- mizing their speed is obvious because most mathematical programs used them, and a single program could call them thousands of times. In the early days of minicomputers, hardware costs were kept as low as possible so most mini- computers did not have floating-point hard- ware. Instead, floating-point operations were accomplished in subroutines. At the same time, functions like SQRT were done in software, so a square root function with floating-point soft- ware could be slow. The usual way to find a square root was to use a Newton’s iteration: where V is the value for which the square root is to be found, and G1 is the first approxima- tion (or first guess) of the square root. The cal- culation is repeated until the difference between G n + 1 and G n is small enough, that is, until G 2 n+1 is approximately equal to V. Given that V can range over the set of posi- tive numbers, a good first approximation of G 1 is not easy to find, and of course the better the first guess, the faster the iteration converges. However, for values of V less than 1, conver- gence is fast. Here’s where knowledge of the structure of a floating-point number comes in handy. A floating-point number mimics scientific nota- tion—for example, we write 12.35 as 0.1235 × 10 2 . The power of 10, called the characteristic, is stored as a signed integer along with the frac- tion and its sign. This structure suggests that we can calculate the square root of the number by dividing the characteristic by 2 and finding the square root of the fraction. Floating-point numbers are normalized, that is, the first digit to the right of the point is never zero (unless the value of the number is zero). In a binary floating-point number, this means that the first digit to the right of the binary point is always a 1, thus the value of the fraction is at least 0.5. So the square root of a binary floating-point number 0.11000 … × 2 3 (in decimal, 6.0) can be found by halving the characteristic and finding the square root of 0.11000 …, except that the characteristic is an GG GV G nn n n + =− 1 2 2 October–December 2002 11 odd number. This is easily remedied by adding 1 to the characteristic if it is odd and shifting the fraction one place to the right, thereby dividing it by two, yielding 0.011000 … × 2 4 , or 0.375 × 16—still 6.0. The square root of the characteristic is 2 2 , and we can find the square root of the fraction in four Newton’s iterations to an error of not more than 0.000002, using G 1 = 0.7. (It’s necessary to set the characteristic to 2 0 so that the fraction V is between 0.25 and 1.) Furthermore, with a little effort, we can determine a single calculation that develops four iterations without a loop: and thus and so on. Finally, the resulting square root G 4 must be multiplied by 10 to the power of the original characteristic divided by 2, to yield the square root of the original number V. Machin’s algorithm; computation of π The pursuit of more and more digits of π may start with a reference in the Old Testament, 2 to zero decimal places; that is, π= 3. The Babylonians got by nicely with π=3.125 (well, strictly speaking, 3 1/8, because decimal fractions were unknown to the Babylonians). The ancient Egyptians used π=4(8/9) 2 , which is about 3.1605, not really any better than the Babylonian estimate. It was Archimedes (287–212 BCE) who start- ed the contest of finding more and more digits of π by inscribing and circumscribing a regular polygon about a circle. This was a big step, because it provided both an upper and a lower bound for π, and the more sides to the polygon, the tighter the bounds became. The area of a cir- cle is πr 2 , so the area of a circle of unit radius is π. For a hexagon, we see in Figure 2 that the inscribed polygon consists of six equilateral triangles with sides of length 1. Each has an area of the square root of 3 divided by 4, so the area of the inscribed hexagon is a bit more than 2.598. For a circumscribed hexagon (see Figure 3) the height of each triangle is 1, so the area is slightly greater than 0.57735, giving the area of the circumscribed hexagon as about 3.4641. Archimedes used these numbers to bound π. He wrote the equivalent of 2.598 < π < 3.464 which suggests that π is about 3.03. This isn’t very good, except that Archimedes went on to use the same approach with 96-sided polygons and refined his bounds to 3.14084 < π < 3.142858 although he had to use proper fractions: 3 10/71 < π < 3 1/7 This is more like it—the average of the bounds is 3.14185; accurate to three decimals. GG GV G G GV G V G GV G 31 1 2 1 1 1 2 1 2 1 1 2 1 2 2 2 2 2 =− GG GV G 32 2 2 2 2 =− GG GV G 21 1 2 1 2 =− 12 IEEE Annals of the History of Computing Three Early Algorithms R=1 Figure 2. An inscribed hexagon decomposed into six equilateral triangles. R=1 Figure 3. A circumscribed hexagon. For almost 2,000 years, refinements of π as cal- culated in the Western world were done by increasing the number of sides of the polygons. Finally, with the advent of the calculus in the 17th century came new ways to find π. Notable was John Machin, a London professor of astronomy, who in 1706 used the difference between two arctangents. It was this method that scientists used on ENIAC 3 in 1949 at the University of Pennsylvania, setting a new record of 2,037 digits: π / 4 = 4 tan – 1 (1/5) – tan – 1 (1/239) We know that π / 4 radians is 45 degrees, and that the angle whose tangent is 1 is 45 degrees. Unfortunately, we can’t calculate tan – 1 (1) using a series because the arctangent series only con- verges for values less than 1—hence Machin’s equation. It is easy to test the equation: Debug.Print 4# * (4# * Atn(1# / 5#) Atn(1# / 239#)) gives the result 3.14159265358979, exactly right as far as it goes, but not a proof. To show that tan – 1 (1) = 4 tan – 1 (1/5) – tan – 1 (1/239) we need two trigonometric identities: tan(2a) = (2 tan(a)) / (1 – tan 2 (a)) (1) and tan(a + b) = (tan(a) + tan(b)) / (1 – tan(a) tan(b)) (2) If we let x = atn(1/5), then using Equation 1 we get tan(2x) = (2 tan x) / (1 – tan 2 (x)) = 5/12 and tan(4x) = (2 tan(2x)) / (1 – tan 2 (2x)) = 120/119 but we want the result to be 1. We can find the difference from Equation 2: tan(4x – π / 4) = (tan(4x) – 1) / (1 + tan(4x)) = 1/239 hence tan(4 atn(1/5) – atn(1/239)) resolves to 1. Machin found π / 4 to 100 places using two arc- tangent series: π / 4 = 4 (1/5 – 1 / (3 * 5 3 ) + 1 / (5 * 5 5 ) – …) – (1/239 – 1 / (3 * 239 3 ) + 1 / (5 * 239 5 ) – …) and this is how it was done on ENIAC to 2,037 places. Finding a very high precision result is a nontrivial programming exercise, but that’s another matter. Acknowledgments I would like to acknowledge the assistance of Michael Greenwald, Dick Painter, and Dennis Quardt in the preparation of this article. References and notes 1. W.M. Newman and R.F. Sproull, Principles of Inter- active Computer Graphics, 2nd ed., McGraw-Hill, New York, 1979. Jack Bresenham is currently a professor of computer science at Winthrop Uni- versity. He retired from IBM as a senior technical staff member in 1987 after 27 years. He is best known for fast line and circle algorithms. 2. 1 Kings 23, Authorized (King James) Version. The verse reads: “And he made a molten sea, ten cubits from the one brim to the other: it was round all about, and his height was five cubits: and a line of thirty cubits did compass it round about.” The “molten sea” is a brass vat. Its diameter is ten cubits, thus its radius is five cubits. The given circumference—the line of 30 cubits that “did compass it round about”—implies that π = 3, because the circumference is computed as 2πr, from 2π * 5 = 30. 3. P. Beckmann, A History of Pi, Dorset Press, New York, 1989. Keith S. Reid-Green retired in 1999 after 42 years as a com- puter scientist. He began as an IBM 704 programmer in 1957 and is known for pioneering work in data reduction, com- puter-aided manufacturing, and image processing. He is widely published in the history of computing and was an adjunct professor at the University of Utah and at Rutgers, where he taught computer graphics to graduate students. He holds bachelor’s degrees in mathematics and physics and a master’s degree in computer science. Readers may contact Keith Reid-Green at 23 Point Court, Lawrenceville, NJ 08648, email KReid- Green@msn.com. October–December 2002 13 Below you can see a picture of a a IBM 709 along with the included CRT display. The IBM 709 was an early vacuum tube powered computer introduced by IBM in 1958. The IBM 709 had 32,768 words of 36-bit memory, could execute 42,000 add instructions per second and could multiply two 36-bit integers at a rate of 5000 per second.  Thanks Jadrian, I went ahead and corrected it. This iterative method of finding square roots has been known since Babylonian times. This method is also known as Heron's method, after the first-century Greek mathematician Hero of Alexandria who gave the first explicit description of the method in his AD 60 work: Metrica. Finding the square root of a number V is equivalent to using Newton’s method to find the root of f(x) = x^2 - V. Newton’s method finds a root of f(x) from a guess x_n by approximating f(x) as its tangent line at x_n : f(x_n) + f’(x_n)(x - x_n). To get x_{n+1} you find the root of the tangent:$$ f(x_n) + f’(x_n)(x - x_n) = 0 f’(x_n)x = f’(x_n)x_n - f(x_n) x = x_n - \frac{f(x_n)}{f’(x_n)}$$For f(x) = x^2 - V you get:$$x_{n+1} = x_n - \frac{x_n^2 - V}{2x_n}$$Professor John von Neumann was one of the people that expressed interest in having the ENIAC compute the value of π to many decimal places. Von Neumann was in fact interested in obtaining a statistical measure of the randomness of the distribution of the digits. ### Modern computation of π In the early 20th century the Indian mathematician Srinivasa Ramanujan found several rapidly converging infinite series of π, including:$${\frac {1}{\pi }}={\frac {2{\sqrt {2}}}{9801}}\sum _{k=0}^{\infty }{\frac {(4k)!(1103+26390k)}{(k!)^{4}396^{4k}}}$$This series computes a further eight decimal places of π with each term. Ramanujan series are the basis of a lot of the fastest algorithms used to compute Pi. The current record was set in 2020 by Timothy Mullican, who computed 50 trillion digits of π. He wrote about it [here](https://blog.timothymullican.com/calculating-pi-my-attempt-breaking-pi-record) Given that x is the fraction and a is the characteristic:$$\sqrt{x \times 10^a} \Leftrightarrow (x \times 10^a)^\frac{1}{2} \Leftrightarrow \sqrt{x} \times 10^\frac{a}{2} ### Vector Displays The Sketchpad, developed at MIT’s Lincoln Lab, is an example of an early computer that made use of a vector displays.  ### Raster Displays Here is an example of a raster display - a slow motion recording of a CRT displaying Super Mario Bros. on a NES:  The resulting line on the raster screen will be something like this:  The basic intuition is that on each loop you have to decide whether you are increasing the $y$ coordinate by 1 or not. Here is a simple implementation of the Bresenham’s algorithm in JavaScript. let draw_line = (x0, y0, x1, y1) => { // Calculate "deltas" of the line (difference between two ending points) let dx = x1 - x0; let dy = y1 - y0; // Calculate the line equation based on deltas let D = (2 * dy) - dx; let y = y0; // Draw the line based on arguments provided for (let x = x0; x < x1; x++) { // Place pixel on the raster display pixel(x, y); if (D >= 0) { y = y + 1; D = D - 2 * dx; } D = D + 2 * dy; } }; You can find a full tutorial along with an interactive demo [here](http://www.javascriptteacher.com/bresenham-line-drawing-algorithm.html) There is a long history of computing more and more precise approximations of π. Broadly you can divide it into 2 eras: before and after electronic digital computers. Throughout the centuries many mathematicians worked on developing new methods of computing π. The 16th century German-Dutch mathematician Ludolph van Ceulen computed the first 35 decimal places of π with a 262 sided polygon. He was so proud of his achievement that he had the digits inscribed on his tombstone.  *Ludolph van Ceulen's tombstone* The longest expansion of π before the advent of digital computing was achieved by the English amateur mathematician William Shanks in the late 19th century. He spent over 20 years calculating π to 527 places. To put it into perspective, in order to calculate the circumference of the the observable universe, 93 billion light-years in diameter, to a precision of less than one Planck length you need an approximation of π to 62 decimal places. Using a rapidly converging series: $\pi = \sqrt{12}\left(1-\frac{1}{3.3}+\frac{1}{3.5^{2}}-\frac{1}{3.7^{2}}+\ldots \right)$ Madhava of Sangamagrama (Modern day Kerala, India) computed the the correct value of $\pi$ to 11 decimal digits as early as the period (c. 1340 – c. 1425) . The value of $\pi$ correct to 13 decimals, $\pi=3.1415926535898$, the most accurate since the 5th century, is sometimes attributed to Madhava. https://en.wikipedia.org/wiki/Madhava_of_Sangamagrama I think $G_n+1$ is a misprint for $G_{n+1}$. In _Figure 3_ it would be better to say that the circle is circumscribed by the hexagon. “A circumscribed hexagon” implies, rather, _Figure 2_ where the hexagon is circumscribed by a circle. Of course the second identity immediately implies the first by setting b = a. ### Floating Point Numbers You can think of floating point numbers as being written in *scientific notation*. A floating point number has 3 parts: - **sign**: indicates whether the number is positive or negative - **significant or fraction**: The significant digits of the number - **exponent or characteristic**: indicates how large (or small) the number is In the example below I’m using a biased exponent - which was popularized by the IBM 704. This was made a standard by the IEEE Standard for Floating-Point Arithmetic (IEEE 754). In this paper the author uses signed exponents. Example Let’s say we wanted to store 2019 as a floating point number: - Decimal Representation: 2019 - Binary Representation: 11111100011 - Scientific notation: $2.019 \times 10^{3}$ - Binary Scientific notation: $1.1111100011 \times 2^{10}$ - Double Precision Raw Binary:  As you can see in the raw binary, the exponent is stored as 10000001001, which is 1033: 10 (the actual binary exponent) + 1023 (the bias, which allows you to store the exponent as an unsigned integer). This means that the exponent range is −1022 to +1023 (exponents of −1023 (all 0s) and +1024 (all 1s) are reserved for special numbers.) It's Newton–*Raphson*; there's no *L* in Raphson's name. The **Newton-Raphson** method is an iterative root-finding algorithm which produces successively better approximations to the zeroes of a real-valued function. Here is a representation of a few iterations of the method  ### Convergence The **Newton-Raphson** method doesn’t always converge. Sometimes it can fail: - Derivative may be zero at the root - Function may fail to be continuously differentiable - Choosing a bad starting point, i.e. one that lies outside the range of guaranteed convergence
|
|
# Find Amplitude, Period, and Phase Shift y=5sin((2x)/3-pi)
Find Amplitude, Period, and Phase Shift y=5sin((2x)/3-pi)
Use the form to find the variables used to find the amplitude, period, phase shift, and vertical shift.
Find the amplitude .
Amplitude:
Find the period using the formula .
The period of the function can be calculated using .
Period:
Replace with in the formula for period.
Period:
Solve the equation.
is approximately which is positive so remove the absolute value
Period:
Multiply the numerator by the reciprocal of the denominator.
Period:
Cancel the common factor of .
Factor out of .
Period:
Cancel the common factor.
Period:
Rewrite the expression.
Period:
Period:
Move to the left of .
Period:
Period:
Period:
Find the phase shift using the formula .
The phase shift of the function can be calculated from .
Phase Shift:
Replace the values of and in the equation for phase shift.
Phase Shift:
Multiply the numerator by the reciprocal of the denominator.
Phase Shift:
Combine and .
Phase Shift:
Move to the left of .
Phase Shift:
Phase Shift:
Find the vertical shift .
Vertical Shift:
List the properties of the trigonometric function.
Amplitude:
Period:
Phase Shift: ( to the right)
Vertical Shift:
Do you know how to Find Amplitude, Period, and Phase Shift y=5sin((2x)/3-pi)? If not, you can write to our math experts in our application. The best solution for your task you can find above on this page.
|
|
# Differentiability of $f(x)=\frac{x|y|}{\sqrt{x^2+y^2}}$ in $(0,0)$
Prove that$$\begin{array}\\&f(x)&=\begin{cases}\frac{x|y|}{\sqrt{x^2+y^2}}&\text{ if }(x,y) \ne (0,0)\\0&\text{ otherwise }\end{cases}\end{array}$$ is differentiable in $(0,0)$.
Proof.
We know that f is differentiable in (0,0), if all partial derivatives exist and are continuous in (0,0). We have $$\frac{\partial f}{\partial x}=\lim_{h\to\ 0}\frac{f(h,0)}{h}=0$$ $$\frac{\partial f}{\partial y}=\lim_{h\to\ 0}\frac{f(0,h)}{h}=0$$ So the partial derivatives exist.
I am now stuck at prooving the continuity of the partial derivatives in (0,0). Can anyone help me with that?
Since $\partial_x f(0,0) = \partial_y f(0,0) = 0$ and $f(0,0) = 0$, by definition your function is differentiable at $(0,0)$ if and only if $$\lim_{(h,k) \to (0,0)} \varphi(h,k) = 0, \quad\text{where}\quad \varphi(h,k) := \frac{f(h, k)}{\sqrt{h^2+k^2}} = \frac{h |k|}{h^2+k^2}, \ (h,k)\neq (0,0).$$ But $\varphi(h,h) = \frac{h |h|}{2h^2} = \frac{1}{2} \text{sign} h$, hence your function $f$ is not differentiable at the origin.
|
|
# Calculate the homology group of $S^3/G$, an Harvard qualifying exam problem with “unclear” solution
Problem Suppose that $G$ is a finite group whose abelianization is trivial. Suppose also that $G$ acts freely on $S^3$. Compute the homology groups (with integer coeffcients) of the orbit space $M=S^3/G$.
This is an algebraic-topology problem of Harvard qualifying exam. There is a solution in the website:
Solution: Note that $M$ is a smooth manifold, and that $\pi_1 M = G$. By Poincare's [sic] theorem $H_1 S^3/G = 0$, as is $H^1(S^3/G; A) = \hom(\pi_1 M, A)$ for any abelian group $A$. This implies that $M$ is orientable. It then follows from Poincare [sic] duality that $H_2(M;A) = 0$ for any abelian group $A$ and that $H_3(M;A) = A$.
I think, before using Poincaré duality, we should first check it's orientable.
For example, $M$ is orientable if and only if the action $G$ on $S^3$ preserve the orientation; so, I guess this solution manual is not correct. Also, perhaps I made some stupid mistake.
• The first result of Poincare's mentioned is not Poincare duality; it's the result that $H_1$ is the abelianization of $\pi_1$. (I don't think I've seen that result attributed to Poincare, or at least referred to as 'Poincare's theorem', though. It's part of, for example, Hurewicz's theorem.) – anomaly Jul 6 '15 at 13:35
If $S^3/G$ was not orientable, then the orientation double cover would be non-trivial. From the correspondence between covers of $S^3/G$ and subgroups of $\pi_1(S^3/G)\cong G$, the cover would correspond to an index two subgroup of $G$ (call this subgroup $H$). But now $G/H \cong \Bbb Z/2\Bbb Z$ is a non-trivial abelian quotient of $G$, so $G$ cannot have trivial abelianization.
Alternatively, one can use the universal coefficient theorem to show $H^1(S^3/G,\Bbb Z/2\Bbb Z)=0$, so every vector bundle over $S^3/G$ has vanishing 1st Stiefel-Whitney class. Equivalently, every vector bundle (including the tangent bundle) over $S^3/G$ is orientable.
• Here's another way of saying this: by definition if $S^3/G$ is non-orientable, $\pi_1$ acts nontrivially on the local homology $H_3(M, M-x_0) \cong \mathbb Z$. So we have a nontrivial homomorphism $G = \pi_1(S^3/G) \to \mathbb Z/2 = \text{Aut}(\mathbb Z)$. – user98602 Jul 6 '15 at 18:17
|
|
# Model of MLTT with $\eta$ rule where function extensionality fails
Consider intensional Martin-Löf type theory with judgmental $$\eta$$ rule for dependent product types. Is there a model of it where function extensionality fails?
• Here's a proof in Agda – ice1000 May 17 at 0:31
The simplest one that I know about is the $$\text{Set}$$-based polynomial model ("container" model). Here, every context is interpreted as a family of sets, i.e. a $$Q : \text{Set}$$ together with an $$A : Q \to \text{Set}$$. We can view this as a set of questions together with sets of possible answers for each question, or a request-response protocol with at most two messages sent. Certain game semantics generalize this to arbitrary number of interactions in a protocol.
A morphism between $$(Q, A)$$ and $$(Q', A')$$ is a function $$f : Q \to Q'$$ with $$g : (q : Q) \to A'\,(f q) \to A\,q$$. So this is not a usual morphism of families, because we map answers in the backwards direction. Again, this can be viewed with the game/protocol intuition, as a synchronous translation between two protocols. The translator gets $$q : Q$$ from the "domain" agent, computes $$f\,q$$ and feeds it to the "codomain" agent, then receives an answer in $$A'\,(f\,q)$$ and translates it back.
Roughly speaking, most of the polynomial model of MLTT works by taking a plain $$\text{Set}$$ model and the opposite of a simply-typed "fibered" $$\text{Set}$$ model, and gluing them together. We need to take opposites of type constructions because of the flipping of mapping directions in answers. So when modeling the answer components, products become coproducts, the unit type becomes the empty type, and so on.
Function types and $$\Pi$$ types are an exception to this scheme, where there is non-trivial interaction between the question and answer components of the model. See this for the simply typed case. See Pi.agda from this for a formalization of the dependent $$\Pi$$ case, or this for the same in a more general setting with heavier abstractions.
Regarding the failure of function extensionality: semantic $$\Pi$$ inhabitants can be viewed as bidirectional functions with forwards and backwards mapping components. However, the function application always projects out and applies the forwards component. Hence, if we have $$(a : A) \to f\,a = g\,a$$, this only expresses that $$f$$ and $$g$$ have the same forwards action, and does not say anything about the backwards actions. So $$f = g$$ generally does not follow.
|
|
# Tag Info
14
There does not, even if you don’t require the fiber and base to be manifolds (or even connected, just that $F$ is not a single point). See Borel, Armand; Serre, Jean-Pierre, Impossibilité de fibrer un espace euclidien par des fibres compactes, C. R. Acad. Sci. Paris 230 (1950), 2258–2260.
13
I guess the most conceptual proof is the one using Morse theory: Take a Morse function on the (closed, orientable) surface S. If it has no saddle points, then (using the gradient flow) $S\cong S^2$. Assume by induction that a surface with k-1 saddle points is $S^2$ with finitely many handles added. For the inductive step, consider a Morse function with k ...
11
On the other hand, if you only mean "foliation" as in your title, and not "fibration", then there is Vogt's foliation of R^3 by circles! (But it is not C^1, only differentiable). Vogt, Elmar, "A foliation of R3 and other punctured 3-manifolds by circles", Publications Mathématiques de l'IHÉS, Tome 69 (1989), p. 215-232 http://www.numdam.org/item/...
9
This is more an extended comment than an answer to the question. The first thing to note is that there are different strenghts of the classification theorem for surfaces. Of course, there are the differentiable, triangulated and topological setting. But even if we choose such a setting, there are two statements one has to prove (at least in one approach): ...
9
The proof of Zeeman described in this note is by a substantial margin the easiest and most conceptual proof I know. To simplify the exposition I restrict to orientable surfaces in the note, but it is trivial to also do the non-orientable case (and see the edit below for one description of how to arrange this to avoid using the fact that three cross caps is ...
9
Yes, the Chern–Weil homomorphism lifts to differential cohomology, which guarantees that periods are integral. See the original paper by Cheeger and Simons, or the paper by Hopkins and Singer. The (modernized) construction of such a refinement relies on the computation of the de Rham complex of the stack B_∇(G) of principal G-bundles with connection and ...
5
Your first question can be answered by using the splitting principle. In order to answer the second question, one would need to know what $c_2(\Sigma^-)$ is, but I do not. If $V \to X$ is a complex vector bundle of rank two, then $c_1(S^3V) = 6c_1(V)$ and $c_2(S^3V) = 11c_1(V)^2 + 10c_2(V)$. Proof: By the splitting principle, there is a map $p : Y \to X$...
4
Using a little bit of real algebraic geometry, there is a conceptual proof at least in the critical case $\chi=-1$, i.e. the case you're talking about explicitly. Indeed, let $S$ be a compact connected smooth surface without boundary with $\chi(S)=-1$. Choose a conformal structure on $S$. Since $\chi$ is odd, $S$ is nonorientable. Let $\tilde S$ be its ...
4
This answers the first (simple) half of the question, asking just about a smooth map. In fact, you've already given an answer to it, in some sense. Apply the map $f: re^{i \theta} \to \sigma_1re^{i(\sigma_2/\sigma_1) \theta}$ to a unit disk that doesn't contain $(0,0)$, say radius $1$ disk $D$, centred at $(2,0)$. Then, the image $f(D)$ is contained in the ...
3
If $t$ is a regular value, then it is a property of Morse functions that there is some small open neighborhood $U$ of $t$ in $\mathbb{R}$ such that $u$ is also a regular value for all $u\in U$. In particular, we can take $U=(t-\delta,t+\delta)$ for some $\delta>0$. But then $\pi^{-1}(U)\cap S\cong(\pi^{-1}(t)\cap S)\times U$, ie. the surface is a product ...
3
The cited (early) work by Cerf proves that, given a submanifold Y in a manifold X, the obvious map Diff(X)->Emb(Y,X) is a locally trivial fibration. I guess that Budney and Gabai mean the following. By Palais, all embeddings D^3->S^4 are isotopic. Hence, for i=0, 1, the complement C_i of a small open tubular neighborhood U_i of Delta_i in S^4 is ...
2
Ian's argument of mean curvature is wonderfully simple. Here is another one. Rotate your surface to put it in generic position with respect to the heigth function z; then, the preimage of z is a Morse function f on RP^2, which has no critical point of index 1 (saddle point) since the surface is locally convex. Hence, every critical point of f has index 0 or ...
2
There is perhaps some confusion over the terminology. Wall (chapter 13A) uses the term multisignature to denote a collection of invariants of certain Hermitian forms over group rings, giving rise to a function from $L_{2k}(\pi) \to \mathbb{Z}^n$. In that chapter, he interprets the multisignature in terms of equivariant signatures. Such signatures occur in ...
1
Such a homotopy exists and in fact you can assume that it is an isotopy. This is a "standard fact" in the theory of mapping class groups. See Proposition 2.2 of the "Primer" by Farb and Margalit.
Only top voted, non community-wiki answers of a minimum length are eligible
|
|
# Machine Learning Holographic Mapping by Neural Network Renormalization Group
Hong-Ye Hu, Shuo-Hui Li, Lei Wang, Yi-Zhuang You
Jan 07, 2020
0
0
The exact holographic mapping (EHM) provides an explicit duality map between a conformal field theory (CFT) configuration and a massive field propagating on an emergent classical geometry. However, designing the optimal holographic mapping is challenging. Here we introduce the neural network renormalization group as a universal approach to design generic EHM for interacting field theories. Given a field theory action that flows to CFT, we train a flow-based hierarchical deep generative neural network to reproduce the boundary field ensemble from uncorrelated bulk field fluctuations. In this way, the neural network develops the optimal renormalization group transformations. Using the machine-designed EHM to map the CFT back to a bulk effective action, we determine the bulk geodesic distance from the residual mutual information. We apply this approach to the complex $\phi^4$ theory in two-dimensional Euclidian spacetime in its critical phase, and show that the emergent bulk geometry matches the three-dimensional hyperbolic geometry.
|
|
# Pipeline Design¶
This section presents an overview of the fundamental algorithms used by, and data flow through, the TraP. It is designed such that everyday users have a full understanding of how their data is being processed. Note that the top-level logic is defined in tkp.main, further implementation details for specific sub-sections may be found in the Developer’s Reference Guide.
As images flow through the TraP, they are processed by a series of distinct pipeline components, or “stages”. Each stage consists of Python logic, often interfacing with the pipeline database.
A complete description of the logical design of the TraP is beyond the scope of this document. Instead, the reader is referred to an upcoming publication by Swinbank et al. Here, we sketch only an outline of the various pipeline stages.
## Pipeline topology and code re-use¶
An early design goal of the TraP was that the various stages should be easily re-usable in different pipeline topologies. That is, rather than simply relying on “the” TraP, users should be able to mix-and-match pipeline components to pursue their own individual science goals. This mode of operation is not well supported by the current TraP, but some effort is made to ensure that stages can operate as independent entities
## Image ordering and reproducibility¶
The material below describes each of the stages an image goes through as it is processed through the pipeline. It is important to realise, though, that the order in which images are processed is important due to the way in which lightcurves are generated within the database: see the material on Source association stage for details. Reproducibility of pipeline results is of paramount importance: the TraP guarantees that results will be reproducible provided that images are always processed in order of time. That is, an image from time $$t_n$$ must always be processed before an image from time $$t_{n+1}$$. In order to satisfy this condition, the TraP will internally re-order images provided to it in the images_to_process.py file so that they are in time order. If multiple TraP runs are to be combined in a single dataset, the user must ensure that the runs are in an appropriate sequence.
## Configuration and startup¶
The pipeline configuration and job management system is described under Pipeline Configuration.
|
|
### For REFEREES
Successive iterations for unique positive solution of a nonlinear fractional q-integral boundary value problem Guotao Wang,Zhanbing Bai,Lihong Zhang Keywords:Fractional q-difference equation; q-integral condition; Explicit iterative sequence; Hybrid monotone method. Abstract: In this paper, under certain nonlinear growth conditions, we investigate the existence and successive iterations for the unique positive solution of a nonlinear fractional $q$-integral boundary problem by employing hybrid monotone method, which is a novel approach to nonlinear fractional $q$-difference equation. This paper not only proves the existence of the unique positive solution, but also gives some computable explicit hybrid iterative sequences approximating to the unique positive solution.
|
|
1
AIPMT 2000
When an electron does transition from n = 4 to n = 2, then emitted line spectrum will be
A
first line of Lyman series
B
second line of Balmer series
C
first line of Balmer series
D
second line of Paschen series.
Explanation
Jump to second orbit leads to Balmer series. The jump from 4th orbit shall give rise to second line of Balmer series.
2
AIPMT 2000
Nuclear fission is best explained by
A
liquid droplet theory
B
Yukawa $\pi$-meson theory
C
independent particle model of the nucleus
D
proton-proton cycle.
Explanation
According to liquid drop model of nucleus, an excited nucleus breaks into lighter nuclei just like an excited drop breaks into tiny drops.
3
AIPMT 2000
For the given reaction, the particle X is
$_6{C^{11}} \to {}_5{B^{11}} + {\beta ^ + } + X$
A
neutron
B
anti neutrino
C
neutrino
D
proton
Explanation
The particle X is neutrino.
|
|
gemseo / utils
# logging_tools module¶
## Logging tools¶
class gemseo.utils.logging_tools.LoggingContext(logger=<RootLogger root (INFO)>, level=30, handler=None, close=True)[source]
Bases: object
Context manager for selective logging.
Change the level of the logger in a with block.
Examples
>>> logger = logger.getLogger()
>>> logger.setLevel(logging.INFO)
>>> logger.info("This should appear.")
>>> with LoggingContext():
>>> logger.warning("This should appear.")
>>> logger.info("This should not appear.")
>>>
>>> logger.info("This should appear.")
Source: Logging Cookbook
Parameters:
• logger (logging.Logger) –
The logger.
By default it is set to <RootLogger root (INFO)>.
• level (int) –
The level of the logger to be used on block entry.
By default it is set to 30.
• handler (logging.StreamHandler | None) – An additional handler to be used on block entry.
• close (bool) –
Whether to close the handler on block exit.
By default it is set to True.
class gemseo.utils.logging_tools.MultiLineFileHandler(filename, mode='a', encoding=None, delay=False, errors=None)[source]
FileHandler to split multiline logging messages.
Open the specified file and use it as the stream for logging.
acquire()
Add the specified filter to this handler.
close()
Closes the stream.
createLock()
emit(record)
Emit one logging message per input record line.
filter(record)
Determine if a record is loggable by consulting all the filters.
The default is to allow the record to be logged; any filter can veto this and the record is then dropped. Returns a zero value if a record is to be dropped, else non-zero.
Changed in version 3.2: Allow filters to be just callables.
flush()
Flushes the stream.
format(record)
Format the specified record.
If a formatter is set, use it. Otherwise, use the default formatter for the module.
get_name()
handle(record)
Conditionally emit the specified logging record.
Emission depends on filters which may have been added to the handler. Wrap the actual emission of the record with acquisition/release of the I/O thread lock. Returns whether the filter passed the record for emission.
handleError(record)
Handle errors which occur during an emit() call.
This method should be called from handlers when an exception is encountered during an emit() call. If raiseExceptions is false, exceptions get silently ignored. This is what is mostly wanted for a logging system - most users will not care about errors in the logging system, they are more interested in application errors. You could, however, replace this with a custom handler if you wish. The record which was being processed is passed in to this method.
release()
removeFilter(filter)
Remove the specified filter from this handler.
setFormatter(fmt)
Set the formatter for this handler.
setLevel(level)
Set the logging level of this handler. level must be an int or a str.
setStream(stream)
Sets the StreamHandler’s stream to the specified value, if it is different.
Returns the old stream, if the stream was changed, or None if it wasn’t.
set_name(name)
property name
terminator = '\n'
class gemseo.utils.logging_tools.MultiLineHandlerMixin[source]
Bases: object
Stateless mixin class to override logging handlers behavior.
emit(record)[source]
Emit one logging message per input record line.
class gemseo.utils.logging_tools.MultiLineStreamHandler(stream=None)[source]
StreamHandler to split multiline logging messages.
Initialize the handler.
If stream is not specified, sys.stderr is used.
acquire()
Add the specified filter to this handler.
close()
Tidy up any resources used by the handler.
This version removes the handler from an internal map of handlers, _handlers, which is used for handler lookup by name. Subclasses should ensure that this gets called from overridden close() methods.
createLock()
emit(record)
Emit one logging message per input record line.
filter(record)
Determine if a record is loggable by consulting all the filters.
The default is to allow the record to be logged; any filter can veto this and the record is then dropped. Returns a zero value if a record is to be dropped, else non-zero.
Changed in version 3.2: Allow filters to be just callables.
flush()
Flushes the stream.
format(record)
Format the specified record.
If a formatter is set, use it. Otherwise, use the default formatter for the module.
get_name()
handle(record)
Conditionally emit the specified logging record.
Emission depends on filters which may have been added to the handler. Wrap the actual emission of the record with acquisition/release of the I/O thread lock. Returns whether the filter passed the record for emission.
handleError(record)
Handle errors which occur during an emit() call.
This method should be called from handlers when an exception is encountered during an emit() call. If raiseExceptions is false, exceptions get silently ignored. This is what is mostly wanted for a logging system - most users will not care about errors in the logging system, they are more interested in application errors. You could, however, replace this with a custom handler if you wish. The record which was being processed is passed in to this method.
release()
removeFilter(filter)
Remove the specified filter from this handler.
setFormatter(fmt)
Set the formatter for this handler.
setLevel(level)
Set the logging level of this handler. level must be an int or a str.
setStream(stream)
Sets the StreamHandler’s stream to the specified value, if it is different.
Returns the old stream, if the stream was changed, or None if it wasn’t.
set_name(name)
property name
terminator = '\n'
|
|
# Chapter 11 Characterization
Chapter leads: Anthony Sena & Daniel Prieto-Alhambra
Observational healthcare databases provide a valuable resource to understand variations in populations based on a host of characteristics. Characterizing populations through the use of descriptive statistics is an important first step in generating hypotheses about the determinants of health and disease. In this chapter we cover methods for characterization:
• Database-level characterization: provides a top-level set of summary statistics to understand the data profile of a database in its totality.
• Cohort characterization: describes a population in terms of its aggregate medical history.
• Treatment pathways: describes the sequence of interventions a person received for a duration of time.
• Incidence: measures the occurrence rate of an outcome in a population for a time at risk.
With the exception of database-level characterization, these methods aim to describe a population relative to an event referred to as the index date. This population of interest is defined as a cohort as described in chapter 10. The cohort defines the index date for each person in the population of interest. Using the index date as an anchor, we define the time preceding the index date as baseline time. The index date and all time after is called the post-index time.
Use-cases for characterization include disease natural history, treatment utilization and quality improvement. In this chapter will describe the methods for characterization. We will use a population of hypertensive persons to demonstrate how to use ATLAS and R to perform these characterization tasks.
## 11.1 Database Level Characterization
Before we can answer any characterization question about a population of interest, we must first understand the characteristics of the database we intend to utilize. Database level characterization seeks to describe the totality of a database in terms of the temporal trends and distributions. This quantitative assessment of a database will typically include questions such as:
• What is the total count of persons in this database?
• What is the distribution of age for persons?
• How long are persons in this database observed for?
• What is the proportion of persons having a {treatment, condition, procedure, etc} recorded/prescribed over time?
These database-level descriptive statistics also help a researcher to understand what data may be missing in a database. Chapter 15 goes into further detail on data quality.
## 11.2 Cohort Characterization
Cohort characterization describes the baseline and post-index characteristics of people in a cohort. OHDSI approaches characterization through descriptive statistics of all conditions, drug and device exposures, procedures and other clinical observations that are present in the person’s history. We also summarize the socio-demographics of members of the cohort at the index date. This approach provides a complete summary of the cohort of interest. Importantly, this enables a full exploration of the cohort with an eye towards variation in the data while also allowing for identification of potentially missing values.
Cohort characterization methods can be used for person-level drug utilization studies (DUS) to estimate the prevalence of indications and contraindications amongst users of a given treatment. The dissemination of this cohort characterization is a recommended best practice for observational studies as detailed in the Strengthening the Reporting of Observation Studies in Epidemiology (STROBE) guidelines. (Elm et al. 2008)
## 11.3 Treatment Pathways
Another method to characterize a population is to describe the treatment sequence during the post-index time window. For example, Hripcsak et al. (2016) utilized the OHDSI common data standards to create descriptive statistics to characterize treatment pathways for type 2 diabetes, hypertension and depression. By standardizing this analytic approach, Hripcsak and colleagues were able to run the same analysis across the OHDSI network to describe the characteristics of these populations of interest.
The pathway analysis aims to summarize the treatments (events) received by persons diagnosed with a specific condition from the first drug prescription/dispensation. In this study, treatments were described after the diagnosis of type 2 diabetes, hypertension and depression respectively. The events for each person were then aggregated to a set of summary statistics and visualized for each condition and for each database.
As an example, figure 11.1 represents a population of persons initiating treatment for hypertension. The first ring in the center shows the proportion of persons based on their first-line therapy. In this example, Hydrochlorothiazide is the most common first-line therapy for this population. The boxes that extend from the Hydrochlorothiazide section represent the 2nd and 3rd line therapies recorded for persons in the cohort.
A pathways analysis provides important evidence about treatment utilization amongst a population. From this analysis we can describe the most prevalent first-line therapies utilized, the proportion of persons that discontinue treatment, switch treatments or augment their therapy. Using the pathway analysis, Hripcsak et al. (2016) found that metformin is the most commonly prescribed medication for diabetes thus confirming general adoption of the first-line recommendation of the American Association of Clinical Endocrinologists diabetes treatment algorithm. Additionally, they noted that 10% of diabetes patients, 24% of hypertension patients, and 11% of depression patients followed a treatment pathway that was shared with no one else in any of the data sources.
In classic DUS terminology, treatment pathway analyses include some population-level DUS estimates such as prevalence of use of one or more medications in a specified population, as well as some person-level DUS including measures of persistence and switching between different therapies.
## 11.4 Incidence
Incidence rates and proportions are statistics that are used in public health to assess the occurrence of a new outcome in a population during a time-at-risk (TAR). Figure 11.2 aims to show the components of an incidence calculation for a single person:
In figure 11.2, a person has a period of time where they are observed in the data denoted by their observation start and end time. Next, the person has a point in time where they enter and exit a cohort by meeting some eligibility criteria. The time at risk window then denotes when we seek to understand the occurrence of an outcome. If the outcome falls into the TAR, we count that as an incidence of the outcome.
There are two metrics for calculating incidence:
$Incidence\;Proportion = \frac{\#\;persons\;in\;cohort\;with\;new\;outcome\;during\;TAR}{\#\;persons\;in\;cohort\;with\;TAR}$
An incidence proportion provides a measure of the new outcomes per person in the population during the time-at-risk. Stated another way, this is the proportion of the population of interest that developed the outcome in a defined timeframe.
$Incidence\;Rate = \frac{\#\;persons\;in\;cohort\;with\;new\;outcome\;during\;TAR}{person\;time\;at\;risk\;contributed\;by\;persons\;in\;cohort}$
An incidence rate is a measure of the number of new outcomes during the cumulative TAR for the population. When a person experiences the outcome in the TAR, their contribution to the total person-time stops at the occurrence of the outcome event. The cumulative TAR is referred to as person-time and is expressed in days, months or years.
When calculated for therapies, incidence proportions and incidence rates of use of a given therapy are classic population-level DUS.
## 11.5 Characterizing Hypertensive Persons
Per the World Health Organization’s (WHO) global brief on hypertension (Who 2013), there are significant health and economic gains attached to early detection, adequate treatment and good control of hypertension. The WHO brief provides an overview of hypertension and characterizes the burden of the disease across different countries. The WHO provides descriptive statistics around hypertension for geographic regions, socio-economic class and gender.
Observational data sources provide a way to characterize hypertensive populations as was done by the WHO. In the subsequent sections of this chapter, we’ll explore the ways that we make use of ATLAS and R to explore a database to understand its composition for studying hypertensive populations. Then, we will use these same tools to describe the natural history and treatment patterns of hypertensive populations.
## 11.6 Database Characterization in ATLAS
Here we demonstrate how to use the data sources module in ATLAS to explore database characterization statistics created with ACHILLES to find database level characteristics related to hypertensive persons. Start by clicking on in the left bar of ATLAS to start. In the first drop down list shown in ATLAS, select the database to explore. Next, use the drop down below the database to start exploring reports. To do this, select the Condition Occurrence from the report drop down which will reveal a treemap visualization of all conditions present in the database:
To search for a specific condition of interest, click on the Table tab to reveal the full list of conditions in the database with person count, prevalence and records per person. Using the filter box on the top, we can filter down the entries in the table based on concept name containing the term “hypertension”:
We can explore a detailed drill-down report of a condition by clicking on a row. In this case, we will select “essential hypertension” to get a breakdown of the trends of the selected condition over time and by gender, the prevalence of the condition by month, the type recorded with the condition and the age at first occurrence of the diagnosis:
Now that we have reviewed the database’s characteristics for the presence of hypertension concepts and the trends over time, we can also explore drugs used to treat hypertensive persons. The process to do this follows the same steps except we use the Drug Era report to review characteristics of drugs summarized to their RxNorm Ingredient. Once we have explored the database characteristics to review items of interest, we are ready to move forward with constructing cohorts to identify the hypertensive persons to characterize.
## 11.7 Cohort Characterization in ATLAS
Here we demonstrate how to use ATLAS to perform large-scale cohort characterization for several cohorts. Click on the in the left bar of ATLAS and create a new characterization analysis. Give the analysis a name a save using the button.
### 11.7.1 Design
A characterization analysis requires at least one cohort and at least one feature to characterize. For this example, we will use two cohorts. The first cohort will define persons initiating a treatment for hypertension as their index date with at least one diagnosis of hypertension in the year prior. We will also require that persons in this cohort have at least one year of observation after initiating the hypertensive drug (Appendix B.6). The second cohort is identical to the first cohort described with a requirement having at least three years of observation instead of one (Appendix B.7).
#### Cohort Definitions
We assume the cohorts have already been created in ATLAS as described in Chapter 10. Click on and select the cohorts as shown in figure 11.6. Next, we’ll define the features to use for characterizing these two cohorts.
#### Feature Selection
ATLAS comes with nearly 100 preset feature analyses that are used to perform characterization across the clinical domains modeled in the OMOP CDM. Each of these preset feature analyses perform aggregation and summarization functions on clinical observations for the selected target cohorts. These calculations provide potentially thousands of features to describe the cohorts baseline and post-index characteristics. Under the hood, ATLAS is utilizing the OHDSI FeatureExtraction R package to perform the characterization for each cohort. We will cover the use of FeatureExtraction and R in more detail in the next section.
Click on to select the feature to characterize. Below is a list of features we will use to characterize these cohorts:
The figure above shows the list of features selected along with a description of what each feature will characterize for each cohort. The features that start with the name “Demographics” will calculate the demographic information for each person at the cohort start date. For the features that start with a domain name (i.e. Visit, Procedure, Condition, Drug, etc), these will characterize all recorded observations in that domain. Each domain feature has four options of time window preceding the cohort star, namely:
• Any time prior: uses all available time prior to cohort start that fall into the person’s observation period
• Long term: 365 days prior up to and including the cohort start date.
• Medium term: 180 days prior up to and including the cohort start date.
• Short term: 30 days prior up to and including the cohort start date.
#### Subgroup Analysis
What if we were interested in creating different characteristics based on gender? We can use the “subgroup analyses” section to define new subgroups of interest to use in our characterization.
To create a subgroup, click on and add your criteria for subgroup membership. This step is similar to the criteria used to identify cohort enrollment. In this example, we’ll define a set of criteria to identify females amongst our cohorts:
Subgroup analyses in ATLAS are not the same as strata. Strata are mutually exclusive while subgroups may include the same persons based on the criteria chosen.
### 11.7.2 Executions
Once we have our characterization designed, we can execute this design against one or more databases in our environment. Navigate to the Executions tab and click on the Generate button to start the analysis on a database:
Once the analysis is complete, we can view reports by clicking on the “All Executions” button and from the list of executions, select “View Reports”. Alternatively, you can click “View latest result” to view the last execution performed.
### 11.7.3 Results
The results provide a tabular view of the different features for each cohort selected in the design. In figure 11.10, a table provides a summary of all conditions present in the two cohorts in the preceding 365 days from the cohort start. Each covariate has a count and percentage for each cohort and the female subgroup we defined within each cohort.
We used the search box to filter the results to see what proportion of persons have a cardiac arrhythmia in their history in an effort to understand what cardiovascular-related diagnoses are observed in the populations. We can use the Explore link next to the cardiac arrhythmia concept to open a new window with more details about the concept for a single cohort as shown in figure 11.11:
Since we have characterized all condition concepts for our cohorts, the explore option enables a view of all ancestor and descendant concepts for the selected concept, in this case cardiac arrhythmia. This exploration allows us to navigate the hierarchy of concepts to explore other cardiac diseases that may appear for our hypertensive persons. Like in the summary view, the count and percentage are displayed.
We can also use the same characterization results to find conditions that are contraindicated for some anti-hypertensive treatment such as angioedema. To do this, we’ll follow the same steps above but this time search for ‘edema’ as shown in figure 11.12:
Once again, we’ll use the explore feature to see the characteristics of Edema in the hypertension population to find the prevalence of angioedema:
Here we find that a portion of this population has a record of angioedema in the year prior to starting an anti-hypertensive medication.
While domain covariates are computed using a binary indicator (i.e. was a record of the code present in the prior timeframe), some variables provide a continuous value such as the age of persons at cohort start. In the example above, we show the age for the 2 cohorts characterized expressed with the count of persons, mean age, median age and standard deviation.
### 11.7.4 Defining Custom Features
In addition to the preset features, ATLAS supports the ability to allow for user-defined custom features. To do this, click the Characterization left-hand menu item, then click the Feature Analysis tab and click the New Feature Analysis button. Provide a name for the custom feature and save it using the button.
In this example, we will define a custom feature that will identify the count of persons in each cohort that have a drug era of ACE inhibitors in their history after cohort start:
The criteria defined above assumes that it will be applied to a cohort start date. Once we have defined the criteria and saved it, we can apply it to the characterization design we created in the previous section. To do this, open the characterization design and navigate to the Feature Analysis section. Click the button and from the menu select the new custom features. They will now appear in the feature list for the characterization design. As described earlier, we can execute this design against a database to produce the characterization for this custom feature:
## 11.8 Cohort Characterization in R
We may also choose to characterize cohorts using R. Here we’ll describe how to use the OHDSI R package FeatureExtraction to generate baseline features (covariates) for our hypertension cohorts. FeatureExtraction provides users with the ability to construct covariates in three ways:
• Choose the default set of covariates
• Choose from a set of pre-specified analyses
• Create a set of custom analyses
FeatureExtraction creates covariates in two distinct ways: person-level features and aggregate features. Person-level features are useful for machine learning applications. In this section, we’ll focus on using aggregate features that are useful for generating baseline covariates that describe the cohort of interest. Additionally, we’ll focus on the second two ways of constructing covariates: pre-specified and custom analyses and leave using the default set as an exercise for the reader.
### 11.8.1 Cohort Instantiation
We first need to instantiate the cohort to characterize it. Instantiating cohorts is described in Chapter 10. In this example, we’ll use the persons initiating a first-line therapy for hypertension with 1 year follow up (Appendix B.6). We leave characterizing the other cohorts in Appendix B as an exercise for the reader. We will assume the cohort has been instantiated in a table called scratch.my_cohorts with cohort definition ID equal to 1.
### 11.8.2 Data Extraction
We first need to tell R how to connect to the server. FeatureExtraction uses the DatabaseConnector package, which provides a function called createConnectionDetails. Type ?createConnectionDetails for the specific settings required for the various database management systems (DBMS). For example, one might connect to a PostgreSQL database using this code:
library(FeatureExtraction)
connDetails <- createConnectionDetails(dbms = "postgresql",
server = "localhost/ohdsi",
user = "joe",
cdmDbSchema <- "my_cdm_data"
cohortsDbSchema <- "scratch"
cohortsDbTable <- "my_cohorts"
cdmVersion <- "5"
The last four lines define the cdmDbSchema, cohortsDbSchema, and cohortsDbTable variables, as well as the CDM version. We will use these later to tell R where the data in CDM format live, where the cohorts of interest have been created, and what version CDM is used. Note that for Microsoft SQL Server, database schemas need to specify both the database and the schema, so for example cdmDbSchema <- "my_cdm_data.dbo".
### 11.8.3 Using Prespecified Analyses
The function createCovariateSettings allow the user to choose from a large set of predefined covariates. Type ?createCovariateSettings to get an overview of the available options. For example:
settings <- createCovariateSettings(
useDemographicsGender = TRUE,
useDemographicsAgeGroup = TRUE,
useConditionOccurrenceAnyTimePrior = TRUE)
This will create binary covariates for gender, age (in 5 year age groups), and each concept observed in the condition_occurrence table any time prior to (and including) the cohort start date.
Many of the prespecified analyses refer to a short, medium, or long term time window. By default, these windows are defined as:
• Long term: 365 days prior up to and including the cohort start date.
• Medium term: 180 days prior up to and including the cohort start date.
• Short term: 30 days prior up to and including the cohort start date.
However, the user can change these values. For example:
settings <- createCovariateSettings(useConditionEraLongTerm = TRUE,
useConditionEraShortTerm = TRUE,
useDrugEraLongTerm = TRUE,
useDrugEraShortTerm = TRUE,
longTermStartDays = -180,
shortTermStartDays = -14,
endDays = -1)
This redefines the long-term window as 180 days prior up to (but not including) the cohort start date, and redefines the short term window as 14 days prior up to (but not including) the cohort start date.
Again, we can also specify which concept IDs should or should not be used to construct covariates:
settings <- createCovariateSettings(useConditionEraLongTerm = TRUE,
useConditionEraShortTerm = TRUE,
useDrugEraLongTerm = TRUE,
useDrugEraShortTerm = TRUE,
longTermStartDays = -180,
shortTermStartDays = -14,
endDays = -1,
excludedCovariateConceptIds = 1124300,
aggregated = TRUE)
The use of aggregated = TRUE for all of the examples above indicate to FeatureExtraction to provide summary statistics. Excluding this flag will compute covariates for each person in the cohort.
### 11.8.4 Creating Aggregated Covariates
The following code block will generate aggregated statistics for a cohort:
covariateSettings <- createDefaultCovariateSettings()
covariateData2 <- getDbCovariateData(
connectionDetails = connectionDetails,
cdmDatabaseSchema = cdmDatabaseSchema,
cohortDatabaseSchema = resultsDatabaseSchema,
cohortTable = "cohorts_of_interest",
cohortId = 1,
covariateSettings = covariateSettings,
aggregated = TRUE)
summary(covariateData2)
And the output will look similar to the following:
## CovariateData Object Summary
##
## Number of Covariates: 41330
## Number of Non-Zero Covariate Values: 41330
### 11.8.5 Output Format
The two main components of the aggregated covariateData object are covariates and covariatesContinuous for binary and continuous covariates respectively:
covariateData2$covariates covariateData2$covariatesContinuous
### 11.8.6 Custom Covariates
FeatureExtraction also provides the ability to define and utilize custom covariates. These details are an advanced topic and covered in the user documentation: http://ohdsi.github.io/FeatureExtraction/.
## 11.9 Cohort Pathways in ATLAS
The goal with a pathway analysis is to understand the sequencing of treatments along in one or more cohorts of interest. The methods applied are based on the design reported by Hripcsak et al. (2016). These methods were generalized and codified into a feature called Cohort Pathways in ATLAS.
Cohort pathways aims to provide analytic capabilities to summarize the events following the cohort start date of one or more target cohorts. To do this, we create a set of cohorts to identify the clinical events of interest for the target population called event cohort. Focusing on how this might look for a person in the target cohort:
In figure 11.17, the person is part of the target cohort with a defined start and end date. Then, the numbered line segments represent where that person also is identified in an event cohort for a duration of time. Event cohorts allow us to describe any clinical event of interest that is represented in the CDM such that we are not constrained to creating a pathway for a single domain or concept.
To start, click on in the left bar of ATLAS to create a new cohort pathways study. Provide a descriptive name and press the save button.
### 11.9.1 Design
To start, we will continue to use the cohorts initiating a first-line therapy for hypertension with 1 and 3 years follow up (Appendix B.6, B.7). Use the button to import the 2 cohorts.
Next we’ll define the event cohorts by creating a cohort for each first-line hypertensive drug of interest. For this, we’ll start by creating a cohort of ACE inhibitor users and define the cohort end date as the end of continuous exposure. We’ll do the same for 8 other hypertensive medications and note that these definitions are found in Appendix B.8-B.16. Once complete use the button to import these into the Event Cohort section of the pathway design:
When complete, your design should look like the one above. Next, we’ll need to decide on a few additional analysis settings:
• Combination window: This setting allows you to define a window of time, in days, in which overlap between events is considered a combination of events. For example, if two drugs represented by 2 event cohorts (event cohort 1 and event cohort 2) overlap within the combination window the pathways algorithm will combine them into “event cohort 1 + event cohort 2”.
• Minimum cell count: Event cohorts with less than this number of people will be censored (removed) from the output to protect privacy.
• Max path length: This refers to the maximum number of sequential events to consider for the analysis.
### 11.9.2 Executions
Once we have our pathway analysis designed, we can execute this design against one or more databases in our environment. This works the same way as we described for cohort characterization in ATLAS. Once complete, we can review the results of the analysis.
### 11.9.3 Viewing Results
The results of a pathway analysis are broken into 3 sections: The legend section displays the total number of persons in the target cohort along with the number of persons that had 1 or more events in the pathway analysis. Below that summary are the color designations for each of the cohorts that appear in the sunburst plot in the center section.
The sunburst plot is a visualization that represents the various event pathways taken by persons over time. The center of the plot represents the cohort entry and the first color-coded ring shows the proportion of persons in each event cohort. In our example, the center of the circle represents hypertensive persons initiating a first line therapy. Then, the first ring in the sunburst plot shows the proportion of persons that initiated a type of first-line therapy defined by the event cohorts (i.e. ACE inhibitors, Angiotensin receptor blockers, etc). The second set of rings represents the 2nd event cohort for persons. In certain event sequences, a person may never have a 2nd event cohort observed in the data and that proportion is represented by the grey portion of the ring.
Clicking on a section of the sunburst plot will display the path details on the right. Here we can see that the largest proportion of people in our target cohort initiated a first-line therapy with ACE inhibitors and from that group, a smaller proportion started a Thiazide or thiazide diuretics.
## 11.10 Incidence Analysis in ATLAS
In an incidence calculation, we describe: amongst the persons in the target cohort, who experienced the outcome cohort during the time at risk period. Here we will design an incidence analysis to characterize angioedema and acute myocardial infarction outcomes amongst new users of ACE inhibitors (ACEi) and Thiazides and thiazide-like diuretics (THZ). We will assess these outcomes during the TAR that a person was exposed to the drug. Additionally, we will add an outcome of drug exposure to Angiotensin receptor blockers (ARBs) to measure the incidence of new use of ARBs during exposure to the target cohorts (ACEi and THZ). This outcome definition provides an understanding of how ARBs are utilized amongst the target populations.
To start, click on in the left bar of ATLAS to create a new incidence analysis. Provide a descriptive name and press the save button .
### 11.10.1 Design
We assume the cohorts used in this example have already been created in ATLAS as described in Chapter 10. The Appendix provides the full definitions of the target cohorts (Appendix B.2, B.5), and outcomes (Appendix B.4, B.3, B.9) cohorts.
On the definition tab, click to choose the New users of ACE inhibitors cohort and the New users of Thiazide or Thiazide-like diuretics cohort. Close the dialog to view that these cohorts are added to the design. Next we add our outcome cohorts by clicking on and from the dialog box, select the outcome cohorts of acute myocardial infarction events, angioedema events and Angiotensin receptor blocker (ARB) use. Again, close the window to view that these cohorts are added to the outcome cohorts section of the design.
Next, we will define the time at risk window for the analysis. As shown above, the time at risk window is defined relative to the cohort start and end dates. Here we will define the time at risk start as 1 day after cohort start for our target cohorts. Next, we’ll define the time at risk to end at the cohort end date. In this case, the definition of the ACEi and THZ cohorts have a cohort end date when the drug exposure ends.
ATLAS also provides a way to stratify the target cohorts as part of the analysis specification:
To do this, click the New Stratify Criteria button and follow the same steps described in Chapter 11. Now that we have completed the design, we can move to executing our design against one or more databases.
### 11.10.2 Executions
Click the Generation tab and then the button to reveal a list of databases to use to execute the analysis:
Select one or more databases and click the Generate button to start the analysis to analyze all combinations of targets and outcomes specified in the design.
### 11.10.3 Viewing Results
On the Generation tab, the top portion of the screen allows you to select a target and outcome to use when viewing the results. Just below this a summary of the incidence is shown for each database used in the analysis.
Select the target cohort of ACEi users and the Acute Myocardial Infarction (AMI) from the respective dropdown lists. Click the button to reveal the incidence analysis results:
A summary for the database shows the total persons in the cohort that were observed during the TAR along with the total number of cases. The proportion shows the number of cases per 1000 people. The time at risk, in years, is calculated for the target cohort. The incidence rate is expressed as the number of cases per 1000 person-years.
We can also view the incidence metrics for the strata that we defined in the design. The same metrics mentioned above are calculated for each stratum. Additionally, a treemap visualization provides a representation of the proportion of each stratum represented by the boxed areas. The color represents the incidence rate as shown in the scale along the bottom.
We can gather the same information to see the incidence of new use of ARBs amongst the ACEi population. Using the dropdown at the top, change the outcome to ARBs use and click the button to reveal the details.
As shown, the metrics calculated are the same but the interpretation is different since the input (ARB use) references a drug utilization estimate instead of a health outcome.
## 11.11 Summary
• OHDSI offers tools to characterize an entire database, or a cohort of interest.
• Cohort characterization describes a cohort of interest during the time preceding the index date (baseline) and the time after index (post-index).
• ATLAS’s characterization module and the OHDSI Methods Library provide the capability to calculate baseline characteristics for multiple time windows.
• ATLAS’s pathways and incidence rate modules provide descriptive statistics during the post-index time period.
## 11.12 Exercises
#### Prerequisites
For these exercises, access to an ATLAS instance is required. You can use the instance at http://atlas-demo.ohdsi.org, or any other instance you have acces to.
Exercise 11.1 We would like to understand how celecoxib is used in the real world. To start, we would like to understand what data a database has on this drug. Use the ATLAS Data Sources module to find information on celecoxib.
Exercise 11.2 We would like to better understand the disease natural history of celecoxib users. Create a simple cohort of new users of celecoxib using a 365-day washout period (see Chapter 10 for details on how to do this), and use ATLAS to create a characterization of this cohort, showing co-morbid conditions and drug-exposures.
Exercise 11.3 We are interested in understand how often gastrointestinal (GI) bleeds occur any time after people initiate celecoxib treatment. Create a cohort of GI bleed events, simply defined as any occurrence of concept 192671 (“Gastrointestinal hemorrhage”) or any of its descendants. Compute the incidence rate of these GI events after celecoxib initiation, using the exposure cohort defined in the previous exercise.
Suggested answers can be found in Appendix E.7.
### References
Elm, Erik von, Douglas G. Altman, Matthias Egger, Stuart J. Pocock, Peter C. Gøtzsche, and Jan P. Vandenbroucke. 2008. “The Strengthening the Reporting of Observational Studies in Epidemiology (Strobe) Statement: Guidelines for Reporting Observational Studies.” Journal of Clinical Epidemiology 61 (4): 344–49. https://doi.org/10.1016/j.jclinepi.2007.11.008.
Hripcsak, George, Patrick B. Ryan, Jon D. Duke, Nigam H. Shah, Rae Woong Park, Vojtech Huser, Marc A. Suchard, et al. 2016. “Characterizing treatment pathways at scale using the OHDSI network.” Proceedings of the National Academy of Sciences 113 (27): 7329–36. https://doi.org/10.1073/pnas.1510502113.
Who, A. 2013. “Global Brief on Hypertension.” World Health Organization. https://www.who.int/cardiovascular_diseases/publications/global_brief_hypertension/en/.
|
|
Determine whether the triangle having sides (a − 1) cm,
Question:
Determine whether the triangle having sides $(a-1) \mathrm{cm}, 2 \sqrt{a} \mathrm{~cm}$ and $(a+1) \mathrm{cm}$ is a right angled triangle.
Solution:
Let
$A=(a-1)$
$B=\sqrt{2} a$
$C=(a+1)$
Larger side is $C=(a+1)$
We know that any number plus 1 is always greater than that number minus 1 and product of 2 and its square root.
For example : If a = 36
$a-1=35$
$a+1=37$
$\sqrt{2} a=12$
If $a=5$
$a-1=4$
$a+1=6$
$\sqrt{2} a=4.47$
In order to prove that the given sides forms a right angled triangle we have to prove that $A^{2}+B^{2}=C^{2}$.
Let us solve the left hand side first.
$A^{2}+B^{2}=(a-1)^{2}+(\sqrt{2} a)^{2}$
$=a^{2}-2 a+1+4 a$
$=a^{2}+2 a+1$
Now we will simplify the right hand side as shown below,
$C^{2}=(a+1)^{2}$
$=a^{2}+2 a+1$
We can see that left hand side is equal to right hand side.
Therefore, the given sides determined the right angled triangle.
|
|
## Uses of Interfacecom.opengamma.strata.collect.function.CheckedConsumer
• Packages that use CheckedConsumer
Package Description
com.opengamma.strata.collect
Root package for common data structures used by Strata.
• ### Uses of CheckedConsumer in com.opengamma.strata.collect
Methods in com.opengamma.strata.collect with parameters of type CheckedConsumer
Modifier and Type Method and Description
static <T> Consumer<T> Unchecked.consumer(CheckedConsumer<T> consumer)
Converts checked exceptions to unchecked based on the Consumer interface.
|
|
×
Get Full Access to Fundamentals Of Statistics - 4 Edition - Chapter 3.2 - Problem 23ayu
Get Full Access to Fundamentals Of Statistics - 4 Edition - Chapter 3.2 - Problem 23ayu
×
# Solved: The Empirical Rule The following data represent
ISBN: 9780321838704 30
## Solution for problem 23AYU Chapter 3.2
Fundamentals of Statistics | 4th Edition
• Textbook Solutions
• 2901 Step-by-step solutions solved by professors and subject experts
• Get 24/7 help from StudySoup virtual teaching assistants
Fundamentals of Statistics | 4th Edition
4 5 1 366 Reviews
12
5
Problem 23AYU
The Empirical Rule The following data represent the weights (in grams) of a random sample of 50 M&M plain candies.
(a) Determine the sample standard deviation weight. Express your answer rounded to three decimal places.
(b) On the basis of the histogram drawn in Section 3.1, Problem 27, comment on the appropriateness of using the Empirical Rule to make any general statements about the weights of M&Ms.
(c) Use the Empirical Rule to determine the percentage of M&Ms with weights between 0.803 and 0.947 gram.
(d) Determine the actual percentage of M&Ms that weigh between 0.803 and 0.947 gram, inclusive.
(e) Use the Empirical Rule to determine the percentage of M&Ms with weights more than 0.911 gram.
(f) Determine the actual percentage of M&Ms that weigh more than 0.911 gram.
Step-by-Step Solution:
Step 1 of 2
a)
First we need to find mean.
=
=
= 0.766
Here standard deviation is
=
= 0.0012
Then the sample variance is
s =
s =
s = 0.036
Hence the sample variance is s = 0.036 gram.
Step 2 of 2
## Discover and learn what students are asking
Calculus: Early Transcendental Functions : Hyperbolic Functions
?In Exercises 7-14, verify the identity. $$\tanh ^{2} x+\operatorname{sech}^{2} x=1$$
Chemistry: The Central Science : Atoms, Molecules, and Ions
?Are these two compounds isomers? Explain. [Section 2.9]
Statistics: Informed Decisions Using Data : Comparing Three or More Means (One-Way Analysis of Variance)
?Soybean Yield The data in the table represent the number of pods on a random sample of soybean plants for various plot types. An agricultural research
#### Related chapters
Unlock Textbook Solution
Enter your email below to unlock your verified solution to:
Solved: The Empirical Rule The following data represent
|
|
# Why does $\mathrm{cis}\left(2\pi x\right)$ not equal $1$?
\begin{align} \mathrm{cis}\left(\theta\right) &= \cos\left(\theta\right)+i\sin\left(\theta\right) \\ &= e^{i\theta} \end{align}\\$$\text{Let } \theta = 2\pi x \\$$\begin{align} \mathrm{cis}\left(2\pi x\right) &= e^{i\times2\pi x} \\ &= \left(e^{2\pi i}\right)^x \\ &= 1^x \\ &= 1 \end{align}
This is clearly not true for any $x \notin \mathbb{Z}$. Where is the error?
Is it because, when the value of $x$ is substituted back in, the equation becomes
\begin{align} \mathrm{cis}\left(\theta\right) &= 1^\frac{\theta}{2\pi} \\ &= \sqrt[2\pi]{1^\theta} \\ &= \sqrt[2\pi]{1} \end{align}
which would have infinite solutions(?) since $2\pi$ is irrational?
$(a^b)^c =a^{bc}$ is not always true for complex numbers.
|
|
# The Software
There is a common quote about ships which may help explain why recording weather readings in a computer is not enough:
A ship in port is safe,
But that is not what ships are built for.
In our case, what is the point of collecting information if we don't do something useful with it? Web pages allow us to share this information.
To complete the process, web page files need to be created and then placed where people can get at them.
To keeping it all running sometimes takes some extra effort:
In the end, just like Ross Perot used to say . . .
### Programs & Files
Up to this point you may have read about how we measure and record what the various sensors are sensing. Now, how do we make that information useful?
"current"
This program, written in the ANSI "C" language, was created to build the web pages from scratch. It creates the page you see as "http://cws2vm.integra.net/~nsps/weather.htm".
You may know how to edit a web page in raw HTML. Imagine writing a program which automatically writes HTML files.
"current" is executed every four minutes under control of the "cron" program.
"current" first reads the raw data files to get all of the data for the last forty-eight hours. It then summarizes each hour's data for each sensor. Finally, the web page is composed, sensor by sensor. The current reading is given, and the historical (hourly) trends are graphed.
Before completing the web page there are a few more lines to add, ones which do not change often. The last of these come from a "footer" which is appended to the file. This allows changes to be made (like "Notes") without having to re-compile the program.
### Getting files to the ISP
A file in a computer may be interesting, but it needs to be accessible if it is to be useful. This is the final link in the making of a web page.
How we do it now
The ski area's web site (where this weather station's web pages are located) is connected to the area's internal LAN (Local Area Network). The computer which builds the weather summary web pages is also connected to this network.
The "cron" program runs a batch file several times per hour, which uses FTP on the network to send new files to the web server.
This "miracle" of communication was made possible by the installation of a high-speed "T1" line to the ski area.
How we used to do it
For several years we used a dial-up modem connection to upload to a commercial ISP account.
The "cron" program ran a batch file once or twice an hour, which called the ISP, logged-on, established an FTP session, changed directories and "put" updated files onto the server.
Modems can be slow and unreliable if the phone lines are noisy. The computer would occasionally lock-up when the connection had trouble.
"..\ftp\to_website.bat" is the batch file used to FTP files to the internet site. It looks like:
cd c:\wx\sw\ftp
call ftp -s:to_website.txt
"..\ftp\to_website.txt" is a script file fed to the FTP function. It looks like:
open ftp.[ISP].com
binary
cd htdocs
put d:\share\weather2.htm
put d:\share\weather.htm
quit
### FTP Lock-up Watchdog
The FTP transfer of files to the ISP (Internet Service Provider) has at times been unreliable.
The symptom of this problem is an FTP session that never gets done. The result is that all subsequent activities never happen. The machine is effectively "locked-up".
Since the weather computer runs unattended, there is no one around to see the problem and "kill" the offending process.
The actual cause of this problem might be due to any of several reasons, like the ISP server is "down", or the network connection to the ISP is slow or broken.
The fix is the "FTP Watchdog" program. This program is started just before an FTP session starts. The watchdog first "sleeps" for a time period much longer that it should take for the FTP session to complete. When the watchdog "wakes up" it searches for and kills the FTP session if it is still running. Under normal circumstances the FTP session has long-since completed and terminated itself.
A log file of the watchdog's operation makes for dull reading. On the other hand, every incident handled by this watchdog is another example of the system being able to detect and overcome an otherwise "fatal" problem.
Thanks to Ken Keeling for the program.
### Power Lock-up Watchdog
This is all about a "fade to black" problem the weather computer has.
The symptom of this problem is that the monitor screen suddenly goes all black. The computer does not respond to the keyboard or mouse. There is no recovery short of cycling the power.
The actual cause of this problem is unknown. It might be an intermittent open circuit somewhere in the computer: On the processor, on a hard disc connection, or somewhere else. It sometimes occurs while the user is typing, opening new web pages, or when some action occurs which might result in an access to the hard disc. A user banging on the keyboard or mouse seems to make the problem happen more often.
An easy fix would be to put a relay on the "reset" button, except this computer doesn't seem to have a reset button, so . . .
The fix is to have a "watchdog" timer which must be periodically reset by the computer. If the computer stays operational, the timer never has the opportunity to time-out. If the computer locks-up, the timer will time-out, which results in a relay opening and dropping power to the computer. The relay closes a few seconds later, the computer re-boots, and it resumes normal operation.
The boot process proceeds fast enough that the timer does not time-out before the computer is back on line.
To get a WinXP computer to re-boot after a power cycle requires a change to the CMOS. To set this option:
Re-boot the machine
Press "F1" early in the boot process
Find the "Do what when AC power returns?"
Use the "Page Up" and "Page Down" keys to select the "Power up" option
Hit "F10" and/or "OK" and/or Whatever to gracefully exit & save this change
WARNING: The above procedure is written from memory of watching someone else do this. It needs to be verified and corrected.
The hardware used to implement this fix is connected the computer like a printer, but what it does is control power provided to an outlet, into which the computer is plugged. You could probably reverse engineer it from this:
• Set-up the computer's default printer to be on LPT1.
- Select a simple (old) model, which will accept a few characters sent to it.
• Centronics printer cable from the computer to the watchdog's hardware.
• Two jumpers at the connector simulate "Paper" & "Ready" to keep the printer driver happy.
• 5 VDC "wall wart", to power the 555s & optoisolator
• NE555P acting as a inverter on Centronics pin #2 (a data bit), normally low
• NE555P acting as a 1-shot, ~ 0.4 second duration (RC = 0.1 ufd & 3.9 Meg)
- Since the re-settable timer doesn't react to pulses as short as what comes out of the printer port.
- 1000 Ohm resistor to red LED is a good idea to monitor whether the output is seeing an active "reset" signal.
• International Rectifier PVN012 - optoisolator
- 470 Ohm resistor to input LED
- Output connected as "C" option: both transistors in parallel
• Omron H3CR-A timer - as a re-settable ~2.5 minute timer, (reset by the optoisolator)
• Omron H3CR-A timer - as a 1-shot, ~2 second duration (to get the computer's attention)
• Power relay - driven by the 1-shot, cuts power to outlet when activated
• Outlet - The computer plugs into this
• Fuse (5A slow blow) on the power inlet, just in case . . .
The software is mainly responsible for sending a few characters to the LPT1 port every minute. It executes a one-line "Pwr_Watchdog.bat" script which looks like:
echo " ? " > LPT1
The toggling data bits reset the timer, so it doesn't time-out and activate the 1-shot and power relay.
The software also maintains a log file which keeps track of how long the computer has been operating normally. Relatively dull reading, but every incident handled by the watchdog is another example of the system being able to detect and overcome an otherwise "fatal" problem.
Ken Keeling helped with this watchdog as well.
### "It's just that simple"
Well, that's the story of how one weather station's web pages get made. It's a long chain of hardware and software, working together, to get the information to you.
If you want to make your own weather station and put it on the web, be ready to learn a lot. It can be an interesting hobby, and you can (or will) learn about things like:
• Sensors
• How they work
• Calibration
• Computers
• Managing files
• Programming
• Communication
• Debugging
• Electronics
• Wiring it all together
• Lightning protection
• Troubleshooting
• Mechanics
• Design of sensors which can survive and continue to operate
• Mounting sensors to survive the weather
• Web pages
• Layout & organization
• HTML (HyperText Markup Language)
• Sending files to an ISP (Internet Service Provider)
. . . and even a little bit about . . .
• Weather
Installing your station can be a good introduction to many of these topics. Keeping it running can be a continuing education.
Murphy's Law may be stated as: Anything that can go wrong will go wrong. Beyond the humor is the lesson that well designed things require anticipating what can go wrong, and then finding ways to make sure that they do not.
To the extent that you understand the nuts and bolts, bits and bytes, or Volts and Ohms, you will be able to investigate & fix what breaks and to make improvements. You may find these skills useful for more than just an interesting hobby.
|
|
## Nagoya Mathematical Journal
### Classification of extremal elliptic {$K3$} surfaces and fundamental groups of open {$K3$} surfaces
#### Abstract
We present a complete list of extremal elliptic $K3$ surfaces (Theorem 1.1). As an application, we give a sufficient condition for the topological fundamental group of complement to an $ADE$-configuration of smooth rational curves on a $K3$ surface to be trivial (Proposition 4.1 and Theorem 4.3).
#### Article information
Source
Nagoya Math. J., Volume 161 (2001), 23-54.
Dates
First available in Project Euclid: 27 April 2005
https://projecteuclid.org/euclid.nmj/1114631551
Mathematical Reviews number (MathSciNet)
MR1820211
Zentralblatt MATH identifier
1064.14503
Subjects
Primary: 14J27: Elliptic surfaces
Secondary: 14J28: $K3$ surfaces and Enriques surfaces
#### Citation
Shimada, Ichiro; Zhang, De-Qi. Classification of extremal elliptic {$K3$} surfaces and fundamental groups of open {$K3$} surfaces. Nagoya Math. J. 161 (2001), 23--54. https://projecteuclid.org/euclid.nmj/1114631551
#### References
• E. Artal-Bartolo, H. Tokunaga and D. Q. Zhang, Miranda-Persson's problem on extremal elliptic $K3$ surfaces , preprint. http://xxx.lanl.gov/list/math.AG, 9809065.
• N. Bourbaki, Éléments de mathématique. Groupes et algèbres de Lie. Chapitre IV–VI, Hermann, Paris (1968).
• J. H. Conway and N. J. A. Sloane, Sphere packings, lattices and groups, Second edition, Grundlehren der Mathematischen Wissenschaften, 290, Springer, New York (1993).
• A. Fujiki, Finite automorphism groups of complex tori of dimension two , Publ. Res. Inst. Math. Sci., 24 (1988), no. 1, 1–97.
• S. Kondō, Automorphisms of algebraic $K3$ surfaces which act trivially on Picard groups , J. Math. Soc. Japan, 44 (1992), no. 1, 75–98.
• ––––, Niemeier lattices, Mathieu groups, and finite groups of symplectic automorphisms of $K3$ surfaces , With an appendix by Shigeru Mukai, Duke Math. J., 92 (1998), no. 3, 593–603.
• R. Miranda and U. Persson, Mordell-Weil groups of extremal elliptic $K3$ surfaces , Problems in the theory of surfaces and their classification (Cortona, 1988), Sympos. Math., XXXII, Academic Press, London (1991), 167–192.
• D. R. Morrison, On $K3$ surfaces with large Picard number , Invent. Math., 75 (1984), no. 1, 105–121.
• S. Mukai, Finite groups of automorphisms of $K3$ surfaces and the Mathieu group , Invent. Math., 94 (1988), no. 1, 183–221.
• V. V. Nikulin, Finite automorphism groups of Kähler $K3$ surfaces , Trans. Moscow Math. Soc., Issue 2 (1980), 71–135.
• ––––, Integer symmetric bilinear forms and some of their applications , Math. USSR Izvestija, 14 (1980), no. 1, 103–167.
• K. Nishiyama, The Jacobian fibrations on some $K3$ surfaces and their Mordell-Weil groups , Japan. J. Math. (N.S.), 22 (1996), no. 2, 293–347.
• M. V. Nori, Zariski's conjecture and related problems , Ann. Sci. École Norm. Sup. (4), 16 (1983), no. 2, 305–344.
• I. Piateskii-Shapiro and I. R. Shafarevich, A Torelli theorem for algebraic surfaces of type $K3$ , Math. USSR Izv., 35 (1971), 530–572.
• J.-P. Serre, A course in arithmetic, Graduate Texts in Mathematics, 7, Springer, New York (1973).
• T. Shioda and H. Inose, On singular $K3$ surfaces. Complex analysis and algebraic geometry, Iwanami Shoten, Tokyo (1977), 119–136.
• A. N. Todorov, Applications of the Kähler-Einstein-Calabi-Yau metric to moduli of $K3$ surfaces , Invent. Math., 61 (1980), no. 3, 251–265.
• G. Xiao, Galois covers between $K3$ surfaces , Ann. Inst. Fourier (Grenoble), 46 (1996), no. 1, 73–88.
• Q. Ye, On extremal elliptic $K3$ surfaces , preprint. http://xxx.lanl.gov/abs/math.AG, 9901081.
|
|
# 10: Correlation and Regression
Our interest in this chapter is in situations in which we can associate to each element of a population or sample two measurements $$x$$ and y, particularly in the case that it is of interest to use the value of $$x$$ to predict the value of y. For example, the population could be the air in automobile garages, $$x$$ could be the electrical current produced by an electrochemical reaction taking place in a carbon monoxide meter, and $$y$$ the concentration of carbon monoxide in the air. In this chapter we will learn statistical methods for analyzing the relationship between variables $$x$$ and $$y$$ in this context.
• 10.1: Linear Relationships Between Variables
In this chapter we will analyze situations in which variables x and y exhibit a linear relationship with some randomness. The level of randomness will vary from situation to situation.
• 10.2: The Linear Correlation Coefficient
The linear correlation coefficient measures the strength and direction of the linear relationship between two variables x and y. The sign of the linear correlation coefficient indicates the direction of the linear relationship between x and y.
• 10.3: Modelling Linear Relationships with Randomness Present
For any statistical procedures, given in this book or elsewhere, the associated formulas are valid only under specific assumptions. The set of assumptions in simple linear regression are a mathematical description of the relationship between x and y. Such a set of assumptions is known as a model. Statistical procedures are valid only when certain assumptions are valid.
• 10.4: The Least Squares Regression Line
How well a straight line fits a data set is measured by the sum of the squared errors. The least squares regression line is the line that best fits the data. Its slope and y-intercept are computed from the data using formulas. The slope of the least squares regression line estimates the size and direction of the mean change in the dependent variable y when the independent variable x is increased by one unit.
• 10.5: Statistical Inferences About β₁
The parameter βₗ, the slope of the population regression line, is of primary importance in regression analysis because it gives the true rate of change in the mean E(y) in response to a unit increase in the predictor variable x.
• 10.6: The Coefficient of Determination
The coefficient of determination estimates the proportion of the variability in the variable y that is explained by the linear relationship between y and the variable x. There are several formulas for computing. The choice of which one to use can be based on which quantities have already been computed so far.
• 10.7: Estimation and Prediction
The coefficient of determination estimates the proportion of the variability in the variable y that is explained by the linear relationship between y and the variable x. There are several formulas for computing coefficient of determination; the choice of which one to use can be based on which quantities have already been computed so far.
• 10.8: A Complete Example
In this section we will go through a complete example of the use of correlation and regression analysis of data from start to finish, touching on all the topics of this chapter in sequence.
• 10.9: Formula List
• 10.E: Correlation and Regression (Exercises)
|
|
# Plaintext XOR'd with keylength-shifted plaintext
While solving a (easy...) Project Euler cipher problem this week I repeatedly saw instructions for breaking ciphers after identifying the keylength as:
'shift the ciphertext by that key length and XOR it with itself. This removes the key and leaves you with plaintext XORed with the plaintext shifted the length of the key'
So I can understand how this drops the key out the equation, but I don't see how plaintext XORd with plaintext shifted by keylength is any more tractable. In the many references I saw about this, no one followed up with how to solve the plaintext^shiftedPlaintext piece. It doesn't seem that different than plaintext^keyPhrase really.
What am I missing about why this makes the problem easier to solve? What approaches deal with this part of the problem?
Well, if we have plaintext^shiftedPlaintext, what we have is the values $P_i \oplus P_{i+k}$, where $k$ is the length of the shift.
What does this imply? Well, if we consider the values $P_i \oplus P_{i+k}$, $P_{i+k} \oplus P_{i+2k}$, $P_{i+2k} \oplus P_{i+3k}$, ... we get a chain where, if we guess one of the values $P_{i+nk}$ for some $n$, we can immediately reconstruct all other plaintext bytes in the chain; thus giving us the correct value for every $k$th plaintext byte.
In addition, if the plaintext includes spaces, then a useful fact of ASCII can be useful; alphabetical letters have bit 6 set, spaces have bit 6 clear (and while numbers and punctuation also have bit 6 clear, those are comparatively rare). So, if we see that the value of $P_i \oplus P_{i+k}$ has bit 6 set, then a good guess is that either $P_i$ or $P_{i+k}$ is a space; that acts as a 1 character crib.
|
|
The Gurobi™ distribution includes an extensive set of examples that illustrate commonly used features of the Gurobi libraries. Most examples have versions for C, C++, C#, Java, Visual Basic and Python. A few, however, illustrate features that are specific to the Python interface. A brief note for users of the Gurobi MATLAB® and R interfaces: our interfaces to these languages are built around the assumption that you will use the rich matrix-oriented capabilities of the underlying languages to build your optimization models. Thus, our examples for these languages don’t attempt to show you how to build models.
We have instead chosen to provide a few simple examples that demonstrate how to pass matrices into our interface. We recommend that you begin by reading the overview of the examples (available here). However, if you’d like to dive directly into specific examples, the following is a list of all of the examples included in the Gurobi distribution, organized by basic function. The source for the examples can be found by following the provided links, or in the examples directory of the Gurobi distribution. If you would like further details on any of the Gurobi routines used in these examples, please consult the Gurobi Reference Manual.
In addition to these Functional Code Examples, we also offer a set of examples that capture interesting models from a variety of application domains, implemented as Jupyter Notebooks using the Gurobi Python API.
## Read a model from a file
Example Description Available Languages
lp A very simple example that reads a continuous model from a file, optimizes it, and writes the solution to a file. If the model is infeasible, it writes an Irreducible Inconsistent Subsystem (IIS) instead. CC++C#JavaPythonVB
mip2 Reads a MIP model from a file, optimizes it, and then solves the fixed version of the MIP model. CC++C#JavaPythonVB
## Build a simple model
Example Description Available Languages
mip1 Builds a trivial MIP model, solves it, and prints the solution. CC++C#JavaMATLABPythonRVB
qp Builds a trivial QP model, solves it, converts it to an MIQP model, and solves it again. CC++C#JavaMATLABPythonRVB
qcp Builds and solves a trivial QCP model. CC++C#JavaMATLABPythonRVB
sos Builds and solves a trivial SOS model. CC++C#JavaMATLABPythonRVB
dense Solves a model stored using dense matrices. We don’t recommend using dense matrices, but this example may be helpful if your data is already in this format. CC++C#JavaPythonVB
genconstr Demonstrates the use of general constraints. CC++C#JavaPythonVB
multiobj Demonstrates the use of multi-objective optimization. CC++C#JavaPythonVB
piecewise Demonstrates the use of piecewise-linear objective functions. CC++C#JavaMATLABPythonRVB
poolsearch Demonstrates the use of solution pools. CC++C#JavaPythonVB
## A few simple applications
Example Description Available Languages
diet Builds and solves the classic diet problem. Demonstrates model construction and simple model modification – after the initial model is solved, a constraint is added to limit the number of dairy servings. CC++C#JavaMATLABPythonVB
diet2, diet3, diet4, dietmodel Python-only variants of the diet example that illustrate model-data separation. diet2.pydiet3.pydiet4.pydietmodel.py
facility Simple facility location model: given a set of plants and a set of warehouses, with transportation costs between them, this example finds the least expensive set of plants to open in order to satisfy product demand. This example demonstrates the use of MIP starts — the example computes an initial, heuristic solution and passes that solution to the MIP solver. CC++C#JavaPythonVB
netflow A Python-only example that solves a multi-commodity network flow model. It demonstrates the use of several Python modeling constructs, including dictionaries, tuples, and tuplelist objects. Python
portfolio A Python-only example that solves a financial portfolio optimization model, where the historical return data is stored using the pandas package and the result is plotted using the matplotlib package. It demonstrates the use of pandas, NumPy, and Matplotlib in conjunction with Gurobi. Python
sudoku Reads a Sudoku puzzle dataset from a file, builds a MIP model to solve that model, solves it, and prints the solution. CC++C#JavaPythonVB
workforce1 Formulates and solves a workforce scheduling model. If the model is infeasible, the example computes and prints an Irreducible Inconsistent Subsystem (IIS). CC++C#JavaPythonVB
workforce2 An enhancement of workforce1. This example solves the same workforce scheduling model, but if the model is infeasible, it computes an IIS, removes one of the associated constraints from the model, and re-solves. This process is repeated until the model becomes feasible. Demonstrates constraint removal. CC++C#JavaPythonVB
workforce3 A different enhancement of workforce1. This example solves the same workforce scheduling model, but if the model is infeasible, it adds artificial variables to each constraint and minimizes the sum of the artificial variables. This corresponds to finding the minimum total change in the right-hand side vector required in order to make the model feasible. Demonstrates variable addition. CC++C#JavaPythonVB
workforce4 An enhancement of workforce3. This example solves the same workforce scheduling model, but it starts with artificial variables in each constraint. It first minimizes the sum of the artificial variables. Then, it introduces a new quadratic objective to balance the workload among the workers. Demonstrates optimization with multiple objective functions. CC++C#JavaPythonVB
workforce5 An alternative enhancement of workforce3. This example solves the same workforce scheduling model, but it starts with artificial variables in each constraint. It formulates a multi-objective model where the primary objective is to minimize the sum of the artificial variables (uncovered shifts), and the secondary objective is to minimize the maximum difference in the number of shifts worked between any pair of workers. Demonstrates multi-objective optimization. CC++C#JavaPythonVB
## Illustrating specific features
Example Description Available Languages
feasopt Reads a MIP model from a file, adds artificial slack variables to relax each constraint, and then minimizes the sum of the artificial variables. It then computes the same relaxation using the feasibility relaxation feature. The example demonstrates simple model modification by adding slack variables. It also demonstrates the feasibility relaxation feature. CC++C#JavaPythonVB
lpmethod Demonstrates the use of different LP algorithms. Reads a continuous model from a file and solves it using multiple algorithms, reporting which is the quickest for that model. CC++C#JavaPythonVB
lpmod Demonstrates the use of advanced starts in LP. Reads a continuous model from a file, solves it, and then modifies one variable bound. The resulting model is then solved in two different ways: starting from the solution of the original model, or restarting from scratch. CC++C#JavaPythonVB
params Demonstrates the use of Gurobi parameters. Reads a MIP model from a file, and then spends 5 seconds solving the model with each of four different values of the MIPFocus parameter. It compares the optimality gaps for the four different runs, and continues with the MIPFocus value that produced the smallest gap. CC++C#JavaPythonVB
sensitivity MIP sensitivity analysis. Reads a MIP model, solves it, and then computes the objective impact of fixing each binary variable in the model to 0 or 1. Demonstrates simple MIP model modification by changing variable bounds. CC++C#JavaPythonVB
tune Uses the parameter tuning tool to search for improved parameter settings for a model. CC++C#JavaPythonVB
fixanddive Implements a simple MIP heuristic. It reads a MIP model from a file, relaxes the integrality conditions, and then solves the relaxation. It then chooses a set of integer variables that take integer or nearly integer values in the relaxation, fixes them to the nearest integer, and solves the relaxation again. This process is repeated until the relaxation is either integer feasible or linearly infeasible. The example demonstrates different types of model modification (relaxing integrality conditions, changing variable bounds, etc.). CC++C#JavaPythonVB
matrix1.py A Python-only example that formulates and solves a simple MIP model using the matrix API. Python
matrix2.py A Python-only example that uses the Python matrix API to formulate the n-queens problem; it maximizes the number queens placed on an n x n chessboard without threatening each other. Python
Example Description Available Languages
tsp Solves a traveling salesman problem using lazy constraints. CC++C#JavaPythonVB
callback Demonstrates the use of Gurobi callbacks. CC++C#JavaPythonVB
### Gurobi: Always Free for Academics
We make it easy for students, faculty, and researchers to work with mathematical optimization.
When you face complex optimization challenges, you can trust our Gurobi Alliance partners for expert services.
Our global team of helpful, PhD-level experts are here to support you—with responses in hours, not days.
## What's New at Gurobi
News
##### Gurobi 10.0 Delivers Blazing-Fast Speed, Innovative Data Science Integration, and an Enterprise Development and Deployment Experience
Latest release enables data professionals to easily integrate machine learning models into optimization models to solve new types of problems.
Event
##### Webinar: What’s New in Gurobi 10.0
In this webinar, attendees will get a first look at our upcoming product release, Gurobi 10.0. We will summarize the performance improvements and highlight some of the underlying algorithmic advances, such as the network simplex algorithm, enhancements in concurrent LP, and optimization based bound tightening.
Event
new content
##### Cost Savings & Business Benefits for Gurobi Customers
2022 Total Economic Impact™ Study Reveals A 518% ROI with Gurobi
|
|
# Algebra 1
posted by Lauren
Earth has a radius of about 6.4 x 10 to the 6th power m. Approximate the surface area of earth using the formula for the surface area is a sphere, S = 4 pi r squared. Earths surface is about 70 percent water. About how many square meters of earths surface are covered with water? The oceans have an average depth of 3795 m. Estimate the volume of water on earth.
1. Steve
water is 0.70 * 4*pi*(6.4*10^6)^2
for volume, multiply that by the depth.
2. carson
a. What is the approximate surface area of the earth? Use the formula S A space equals space 4 πr squared and use 3.14 for pi.
## Similar Questions
1. ### math
direct variation the surface area of a sphere varies directly as the square of the radius. if the surface area of a sphere is 36 pi meters squared when the radius is 3 meters, what is the surface area of a sphere with a radius of 6 …
2. ### math
The earth has a radius of 6,380 km. The formula for the volume of a sphere is �� V = 4/3 ðx^3, and the formula for surface area of a sphere is A = 4ðx^2 . Find the volume and surface area of the earth and write the …
3. ### Math
The surface area of a sphere is given by the formula 4pi r2, where is it's radius. Although the shape of earth is noth exactly spherical, it has an average radius of 3960 miles. Estimate the surface area of earth.
4. ### surface area and volume
I really need some help, I don't fully understand. Exspecially with fractions, don't know how to enter them on calculator. 1. What is the approximate volume of the sphere?
5. ### Math
Earth has a new surface of 1.97x10^8 square miles. A new planet has a surface area 13 times the surface area of Earth. What is the surface area of the new planet?
6. ### Geometry/Algebra/Calculus
Why isn't the surface area of a sphere with radius r the following: 2*pi * (pi*r) That comes from the following flow of logic: Doesn't it makes sense to think of the surface area of the sphere with radius r as the the circumference …
7. ### Algebra
A sphere has a radius of 4x + 1. Which polynomial in standard form best describes the total surface area of the sphere?
8. ### Algebra II
12. A sphere has a radius of 4x + 1. Which polynomial in standard form best describes the total surface area of the sphere?
9. ### earth
The radius of the Earth is approximately 3,960 miles. Find the approximate surface-area-to-volume ratio of the Earth.
10. ### algebra--1 question
A sphere has a radius of 4x + 1. Which polynomial in standard form best describes the total surface area of the sphere?
More Similar Questions
|
|
# Is this priority or parity! solution codechef
## Is this priority or parity! solution codechef
NN integers 1,2,3,N1,2,3,…N are placed on a circle in clockwise order such that ii and (i+1)(i+1) are adjacent for all 1iN11≤i≤N−1, and 11 and NN are also adjacent.
Given an integer KK, Ashish repeatedly removes integers from the circle until only one integer remains. He does so in the following manner. In each move, Ashish finds the integer which is on the KK-th place from the smallest one in clockwise order and removes it from the circle, i.e. its neighbours now become adjacent.
## Is this priority or parity! solution codechef
For example, consider the case when N=3N=3 and K=3K=3. Let the circular array [1,2,3][1,2,3] denote the integers on the circle. In the first move, 33 is removed since it is the KK-th integer starting from 11, which is the smallest integer currently on the circle. Likewise, in the next move 11 is removed and finally only 22 remains. [1,2,3][1,2][2][1,2,3_]→[1_,2]→[2].
Since NN and KK may be very large, Ashish cannot simulate the process himself. Help him find the parity of the last integer that remains after the repeated removals. In other words, report if this integer is odd or even.
### Input Format
• The first line contains a single integer TT — the number of test cases. Then the test cases follow.
• The first and only line of each test case contains two integers NN and KK.
### Output Format
For each test case, output ODD if the last integer that remains after the repeated removals is odd, and EVEN otherwise.
Each letter of the output may be printed in either lowercase or uppercase. For example, the strings oddOdd, and ODD will all be treated as equivalent.
### Constraints
• 1T1051≤T≤105
• 1N10181≤N≤1018
• 1KN1≤K≤N
## Is this priority or parity! solution codechef
4
4 1
3 3
9 1
3489601027782 8104267
### Sample Output 1
EVEN
EVEN
ODD
EVEN
## Is this priority or parity! solution codechef
Test case 11: The last integer that remains is 44, which is EVEN.
[1,2,3,4][2,3,4][3,4][4][1_,2,3,4]→[2_,3,4]→[3_,4]→[4]
Test case 22: The last integer that remains is 22, which is EVEN.
[1,2,3][1,2][2][1,2,3_]→[1_,2]→[2]
Test case 33: The last integer that remains is 99, which is ODD.
[1,2,3,4,5,6,7,8,9][2,3,4,5,6,7,8,9][9]
|
|
Write the Names and Structures of the Monomers of the Following Polymers: Buna-s - Chemistry
Write the names and structures of the monomers of the following polymers: Buna-S
Write the structures of the monomers used for getting the following polymers:
Buna-S
Solution 1
Polymer Monomer Structure of monomer Buna-S 1, 3-butadiene CH_2 = CH - CH =CH_2 Styrene C_6H_5CH = CH_2
Solution 2
Buna-S: Monomers used in the Buna-S polymer are 1,3-butadiene and styrene.
Is there an error in this question or solution?
APPEARS IN
NCERT Class 12 Chemistry Textbook
Chapter 15 Polymers
Q 17.1 | Page 437
|
|
# White noise for KdV, mKdV, and cubic NLS on the circle
-
We discuss two methods for establishing the invariance of the white noise for the periodic KdV. First, we briefly go over the basic theory of Gaussian Hilbert spaces and abstract Wiener spaces and show that the Fourier-Lebesgue space $\mathcal{F}L^{s, p}$ captures the regualrity of the white noise for $sp < -1$. We then establish local well-posedness (LWP) of KdV via the second iteration introduced by Bourgain. This in turn provides almost sure global well-posedness (GWP) of KdV as well as the invariance of the white noise. Then, we discuss how one can use the same idea to obtain LWP of the stochastic KdV with the additive space-time white noise in the periodic setting.Next, we consider the weak convergence problem of the grand canonical ensemble (i.e. the interpolation measure of the usual Gibbs measure and the white noise) with a small parameter (tending to 0) to the white noise. This result, combined with the GWP in $H^{-1}$ by Kappeler and Topalov, provides another proof of the invariance of the white noise for KdV. In this talk, we discuss the same weak convergence problem for mKdV and cubic NLS, which provides the "formal'' invariance of the white noise. This part is a joint work with J. Quastel and B. Valko.
Lastly, if time permits, we discuss well-posedness of the Wick ordered cubic NLS on the Gaussian ensembles below $L2$. The main ingredient is nonlinear smoothing under randomization of initial data. For GWP, we also use the invariance (of the Gaussian ensemble) under the linear flow. This part is a joint work with J. Colliander.
|
|
# Relativistic weight of a bouncing ball
I was thinking about the mass–energy equivalence, which means that for example a hot gas weighs more than a cold gas and I tried to imagine that microscopically. Lets imagine a gas in a chamber on a weighing scale. If we look at the situation microscopically, we see all the particles bouncing off the walls and every collision transfers a momentum $$\Delta p_i$$. The sum of all the forces that these collisions exert on the walls of the chamber is $$F=\frac{\sum_i\Delta p_i}{\Delta t}$$ and this has to be equal to the total weight of the gas, wich in classical mechanics is just the sum of the weights of the individual particles $$F=\sum_i m_ig$$.
However according to relativity, if I add energy to the gas its total mass has to increase by $$\Delta m=\frac{\Delta E}{c^2}$$. So i am trying to verify that via the same approach with $$F=\frac{\Delta p}{\Delta t}$$ as above. But before I do that I can simplify my model: The collisions on the sides cancel each other out and in the most simple case my gas consists only of one particle. So I ended up with a ball bouncing repeatedly on a scale.
So here is my experiment:
At time $$t_0=0$$ I release a ball at height $$h_0$$ above a weighing scale with an initial velocity of $$v_0=0$$. I want this example to be as easy as possible, so i assume a constant gravitaional field $$F=mg$$. At time $$t_1$$ the ball bounces off the scale with speed $$v_1$$. I assume perfectly elastic collision, so the ball will reach the same height again and the motion repeats itself with a peroid of T.
What I want to calculate is the time averaged weight $$F=\frac{\Delta p}{T}$$ the scale will show over one period.
Newtonian mechanics
The calculation in newtonian mechanics is easy. The speed of the ball when it reaches the scale is $$v_1=\sqrt{2gh_0}$$ and the falling time is $$t_1=\sqrt{\frac{2h_0}{g}}$$. When the ball hits the scale it's velocity changes from $$v=-v_1$$ to $$v=+v_1$$, so the change in impulse is $$\Delta p=2mv_1$$. Therefore the time averaged weight the scale shows is $$F=\frac{\Delta p}{T}=\frac{2mv_1}{2t_1}=mg.$$ Thats the same weight as if the ball is resting on the scale, which is what I expected in newtonian mechanics.
Relativistic mechanics
I do have experience in special relativity, but not in general relativity. So I wasn't really sure how to solve this problem. That being said, here is what i thought: It is obvious that the acceleration that I measure in my lab frame for a ball in free fall can't be constant a=g like in classical mechanics. If there would be a constant acceleration, a particle that is already falling at a speed close to c would be moving faster than c after a short time. Therefore I know that $$a$$ has to approach zero as $$v$$ approaches c.
According to this wikipedia article, I can use the usual definition of work, which means that the Integral $$\int_0^{h(t)} mgdz=mgh(t)\tag{1}$$ is equal to the difference in kinetic energy of the ball.
Conservation of energy: $$mgh_0=mgh(t)+(\gamma(v(t))-1)mc^2\tag{2}$$ $$\Rightarrow \gamma(v_1)=\frac{gh_0}{c^2}+1\tag{3}$$ and $$\Rightarrow v(t)=\frac{dh(t)}{dt}=-\sqrt{c^2-\frac{c^2}{((h_0-h(t))g/c^2+1)^2}}\tag{4}$$
The solution of that differential equation with $$h(t=0)=h_0$$ and $$v(t=0)=0$$ is $$h(t)=\frac{c^2}{g}\left(1-\sqrt{1+\frac{g^2t^2}{c^2}}\right)+h_0.\tag{5}$$
We also have $$\lim\limits_{t\rightarrow \infty}\frac{dh}{dt}=-c \qquad \text{and} \qquad \frac{d^2h}{dt^2}(t=0)=-g.$$
We get the newtonian solution back if we do a taylor approximation with $$gt/c<<1$$ for the square root term $$h(t)\approx -\frac{gt^2}{2}+h_0.\tag{6}$$
So everything seems fine and we can now calculate the falling time $$h(t_1)\stackrel{!}{=}0$$ $$\Rightarrow T=2t_1=2\sqrt{\frac{2h_0}{g}+\frac{h_0^2}{c^2}}.\tag{7}$$
And we can finally calculate the time averaged force $$F=\frac{\Delta p}{T}=\frac{2\gamma(v_1)mv_1}{2t_1}=\frac{\gamma(v_1)m\sqrt{c^2-\frac{c^2}{(gh_0/c^2+1)^2}}}{\sqrt{\frac{2h_0}{g}+\frac{h_0^2}{c^2}}}=\gamma(v_1)mg\frac{1}{1+\frac{gh_0}{c^2}}=mg.\tag{8}$$
However thats not the result that I expected. Since the bouncing ball has a higher total energy as a resting ball, I expected the mass to increase by $$\Delta m=\frac{\Delta E}{c^2}=\frac{mgh_0}{c^2}.\tag{9}$$
Is my calculation wrong somewhere? If so, how can I calculate the fall time and velocity of the ball properly?
• "If there would be a constant acceleration, a particle that is already falling at a speed close to c would be moving faster than c after a short time." No, it doesn't work like that. Special relativity imposes no limits on acceleration, and you need to use the relativistic rule to combines speeds, en.wikipedia.org/wiki/Velocity-addition_formula Also see math.ucr.edu/home/baez/physics/Relativity/SR/Rocket/rocket.html – PM 2Ring Jan 31 at 16:57
• @PM2Ring I'm talking about the observed acceleration in the lab frame. I know already how addition of velocities works in special relativity. This doesn't help me. – Azzinoth Jan 31 at 19:09
• Not sure I follow your expectation (at the end). Yes, each bounce sees the larger mass-energy. But the bounces are further apart, because the acceleration doesn’t cause velocity to grow uniformly. – Bob Jacobsen Feb 1 at 14:22
• @BobJacobsen Yes, and indeed $T=2\sqrt{\frac{2h_0}{g}+\frac{h_0^2}{c^2}}$ is bigger than the classical T. If this bouncing ball doesn't show a bigger weight, then multiple bouncing balls (i.e. a gas) wouldn't weigh more either (as long as I don't assume any interaction between the particles). That would mean, that a hot gas does not weigh more? – Azzinoth Feb 1 at 15:09
• What time $t$ are you using? Is it the one at the floor? – md2perpe Feb 12 at 19:18
# Calculations in Rindler coordinates
The Rindler coordinates describe an accelerating system in a flat space. Every point retains a constant proper acceleration, but the proper acceleration differs between heights. The space coordinates describe proper length, but the time coordinate only describes proper time at one height.
Let the coordinate system of some inertial system be denoted by $$(T,X)$$ and the coordinates of a Rindler system be denoted by $$(t,x).$$ We will also use $$(\tau,\xi)$$ for proper time and length.
The transformation formulas from the Rindler coordinates to the inertial coordinates are given by $$X = x \cosh \alpha t \\ T = x \sinh \alpha t$$ where $$\alpha$$ is a constant (a parameter generating a family of coordinate systems) and we use $$c=1.$$
One easily finds that $$ds^2 = -dT^2 + dX^2 = -\alpha^2 x^2 \, dt^2 + dx^2.$$ Thus the proper time is given by $$d\tau = \alpha x \, dt$$ and the proper distance by $$d\xi = dx.$$
An object at a constant $$X = x_\max$$ will be freely falling in the accelerating system, with maximum $$x = x_\max$$ at time $$t=0$$ and following $$x = \frac{x_\max}{\cosh \alpha t}.$$
Let us consider a fall down to $$x = x_\min$$. The falling object will arrive there at time $$t = \frac{1}{\alpha} \cosh^{-1} \frac{x_\max}{x_\min}$$ so the proper time of fall is $$\tau_{\text{fall}} = x_\min \cosh^{-1} \frac{x_\max}{x_\min}.$$
The proper speed when the object comes down is $$v = \frac{d\xi}{d\tau} = -\tanh \alpha t = - \sqrt{1 - \left(\frac{x_\min}{x_\max}\right)^2}$$ so the $$\gamma$$ factor is $$\gamma = \frac{1}{\sqrt{1-v^2}} = \frac{x_\max}{x_\min}.$$
The proper momentum will therefore be $$p = \gamma m v = (-) m \sqrt{\left(\frac{x_\max}{x_\min}\right)^2-1}.$$
Thus, the "force" will be $$F = \frac{p}{\tau_{\text{fall}}} = \frac{m \sqrt{\left(\frac{x_\max}{x_\min}\right)^2-1}}{x_\min \cosh^{-1} \frac{x_\max}{x_\min}}.$$
|
|
# Neutrino production in supernovae
As much as 99% of the gravitational energy liberated in a core collapse supernova emerges as kinetic energy of neutrinos. The neutrinos are produces in nuclear reactions (electron capture on nuclei and free protons) and non-nuclear reactions (e.g. pair annihilation $$\gamma\to\text{e}^-+\text{e}^+\to\nu+\overline\nu$$).
How can we estimate without detailed model calculations that as much as 99% of the gravitational energy (not, say, 50 or 10%) emerges as kinetic energy of neutrinos?
• For the question, "How can the electrons be degenerate at this extremely high temperature?" the answer is pressure. Lots and lots of pressure, enough that the back-pressure from the thermal motion of the electrons is dwarfed by the effective back-pressure from the Heisenberg uncertainty principle (in fact, the electron degeneracy pressure is independent of temperature). – probably_someone Oct 25 '19 at 17:48
• In any case, you night want to split this multi-question into several single questions. They're easier to answer that way and won't risk the question being closed. – probably_someone Oct 25 '19 at 17:48
• @probably_someone I realise that the degeneracy pressure is almost independent of temperature. But the question whether or not degeneracy occurs, depends on the degeneracy parameter $\psi=E_\text{F}/(k\cdot T)$. How does pressure enter into this? My questions are connected, so I prefer not to split them in different entries. – gamma1954 Oct 25 '19 at 18:44
• @Kyle Kanos the Q&A to which you refer, doesn't give me a clue how to estimate the 99% – gamma1954 Oct 25 '19 at 18:45
• @probably_someone the pressure is a consequence of the high density. It is the high density that leads to electron degeneracy. There are too many questions here, I am not going to try and write an answer to all of them – ProfRob Oct 25 '19 at 19:43
The 99% value is calculated from comparing maths to observations. In particular, you compute the gravitational potential energy of the precursor, $$U\sim\frac{GM^2}{R}\sim10^{53}\,\text{erg}$$ Then you compare this to the observed energies in supernovae, which is typically $$10^{51}$$ erg (sometimes called a foe for fifty one ergs). Hence, the need for something that accounts for the unaccounted 99% of the energy.
• The Newtonian $U\sim GM^2/R$ and the assumption of uniform density do not seem valid in the collapsed final state of a (proto) neutron star. So $|\Delta U|=10^{53}\text{erg}$ is a rough estimate. A kinetic energy of $10^{51}\text{erg}$ probably is an average value, depending on the mass and velocity of the ejecta. In 1966 S.A. Colgate and R.H. White assumed that neutrinos are the energy "sink" for the collapsing core, for lack of alternative sinks. 25 neutrinos in a limited energy range from the "peculiar" SN1987A, do not prove 99% is realistic. Maybe we should read 99% as "a large fraction". – gamma1954 Oct 29 '19 at 18:22
• For those who struggle with the distinction between "fifty one ergs" and "ten to the power of 51 erg", Steven Weinberg proposed the alternative unit bethe. $1\text{B}=10^{51}\text{erg}=1\text{foe}$ ;-) – gamma1954 Oct 29 '19 at 18:25
• @gamma1954 the GPE calc is clearly for the precursor star and not the NS/BH that results. And even if you assume both $\sim100$ foe & $\sim1$ foe are estimates, they are much too far apart to reconcile without $\nu$s. – Kyle Kanos Oct 29 '19 at 20:07
|
|
# The apostrophe with Adobe Caslon Pro
I'm writing an article with Adobe Caslon Pro and Xelatex and I found that this mark punctuation ’, an apostrophe, which is ordinary used in French, has not the required interval when used, for instance, in the following words : "L'écriture". This issue is probably similar to this one described here : [https://tex.stackexchange.com/questions/218415/french-spacing-with-xelatex-and-minion-pro][1] In this post, a solution consists in doing something I can't replicate. Could anyone help me to do that? This is the MWE:
\documentclass[12pt]{article}
\usepackage[french]{babel}
\usepackage{filecontents,fontspec,microtype}
\begin{filecontents*}{minion.fea}
languagesystem DFLT dflt;
languagesystem latn dflt;
feature kern {
pos \quoteright \a 10;
pos \quoteright \eacute 10;
pos \quoteright \o 10;
} kern;
\end{filecontents*}
\setmainfont{Minion Pro}[
FeatureFile={minion.fea}]
\frenchspacing
\begin{document}
l’étranger
d’accord
d’ores et déjà
\end{document}
• It seems Libre Caslon (open type name Libre Caslon Text) has a better spacing for the apostrophe. Jun 3 '21 at 18:38
• I already tested it. But it is not the same finish at all.
– domi
Jun 3 '21 at 19:17
• I agree that Libre Caslon isn’t as attractive. But have you tried the solution by jch in the linked question about Minion? This is a problem with Adobe fonts generally, and the same approaches should apply. Jun 3 '21 at 20:12
• I've just edited my question in this way.
– domi
Jun 3 '21 at 20:40
I don’t have Caslon, but this is a problem with many Adobe fonts, so I’ll illustrate an approach with Minion. Updating French spacing with xelatex and Minion Pro in view of How to adjust font features in LuaTeX?, we get something like this (iff you’re willing to change your compiler to luatex):
\documentclass[12pt]{article}
\usepackage[french]{babel}
\usepackage{fontspec,microtype}
{
name = "kern",
type = "kern",
data = {
["quoteright"] = { ["a"] = 150 , ["eacute"] = 150 , ["o"] = 150 },
},
}
}
\setmainfont{Minion Pro}
\frenchspacing
\begin{document}
l’étranger
d’accord
d’ores et déjà
\end{document}
Adjust 150 to taste.
• Thanks! It works very well, but it means that I have to adjust an entire book on a case-by-case basis! For instance, these are my adjustments: ["quoteright"] = { ["a"] = 50 , ["eacute"] = 150 , ["o"] = 90 , ["h"] = 90 , ["e"] = 90 , ["u"] = 50}
– domi
Jun 3 '21 at 21:08
• Yes, it’s painful — a sample of what type designers must go through when they kern a whole font family. Jun 3 '21 at 23:07
• I can't manage to adjust ["ecirc"]. The space does not change. Any idea?
– domi
Jun 5 '21 at 17:36
• Did you mean ["ecircumflex"]? Fontforge is useful for finding glyph names (and for many other things). Jun 5 '21 at 17:52
• I'd like to add a space before "?". How to do that?
– domi
Jun 20 '21 at 17:43
|
|
Magyar Information Contest Journal Articles
# Problem A. 482. (May 2009)
A. 482. Let n be a positive integer. Prove that .
(5 pont)
Deadline expired on June 15, 2009.
Solution. Let .
The sequence is geometric sequence with quotient . In the cases =0 and =2n+1, the quitient is 1 and the last sum equals 2n+1. If 0<<2n+1, then
Therefore,
We proved that
which is positive.
### Statistics:
4 students sent a solution. 5 points: Backhausz Tibor, Nagy 235 János, Nagy 314 Dániel, Tomon István.
Our web pages are supported by: Morgan Stanley
|
|
Exact L^2-distance from the limit for QuickSort key comparisons (extended abstract)
Patrick Bindjeme, James Allen Fill
Using a recursive approach, we obtain a simple exact expression for the L^2-distance from the limit in R\'egnier's (1989) classical limit theorem for the number of key comparisons required by QuickSort. A previous study by Fill and Janson (2002) using a similar approach found that the d_2-distance is of order between n^{-1} log n and n^{-1/2}, and another by Neininger and Ruschendorf (2002) found that the Zolotarev zeta_3-distance is of exact order n^{-1} log n. Our expression reveals that the L^2-distance is asymptotically equivalent to (2 n^{-1} ln n)^{1/2}.
arrow_drop_up
|
|
# How do you find the vertical, horizontal or slant asymptotes for f(x) = (1/x) + 3?
Jan 2, 2017
Horizontal: $\leftarrow y = 3 \rightarrow$
Vertical : $\uparrow x = 0 \downarrow$
#### Explanation:
graph{x(y-3)(x(y-3)-1)=0x^2 [-20, 20, -10, 10]} $y = f \left(x\right) = \frac{1}{x} - 3$,
$x \left(y - 3\right) = 1$ .....(1)
As $x \to 0 , y - 3 \to \pm \infty$ and this gives $y \to \pm \infty$.
Likewise,
as$y - 3 \to 0 , x \to \pm \infty$
So,
the vertical asy,ptote is x = 0 and
the horizontal asymptote is y = 3.#.
Note
The equation $\left(y - {m}_{1} x - {c}_{1}\right) \left(y - {m}_{2} x - {c}_{2}\right) = k$ represents a
hyperbola having the guiding asymptotes
$\left(y - {m}_{1} x - {c}_{1}\right) \left(y - {m}_{2} x - {c}_{2}\right) = 0$.
The hyperbola is rectangular, if ${m}_{1} {m}_{2} = - 1$.
Here, from (1), it is immediate that
x(y-3)=0 gives the pair of perpendicular asymptotes.
See the Socratic graph.
.
|
|
Courses
Infinitesimal Calculus 1(!)Pdf 201.1.0011
6.0 Credits
1. Real numbers (axiomatic theory). Supremum and Infimum of a set. Existence of an n-th root for any a > 0. 2. Convergent sequences, subsequences, Cauchy sequences. The Bolzano-Weierstrass theorem. Upper and lower limits. 3. Series. Partial sums, convergent and divergent series, Cauchy criterion. Series of non-negative terms. The root and the ratio tests. Series of arbitrary terms. Dirichlet, Leibnitz, and Abel theorems. Rearrangements of series. The Riemann Theorem. 4. The limit of a function. Continuous functions. Continuity of the elementary functions. Properties of functions continuous on a closed interval. Uniformly continuous functions. Cantor?s theorem. 5. The derivative of a function. Mean value theorems. Derivatives of higher order. L’Hospital’s rule. Taylor’s theorem.
Infinitesimal Calculus 2(!)Pdf 201.1.0021
6.0 Credits
1. The Riemann integral: Riemann sums, the fundamental theorem of calculus. Methods for computing integrals (integration by parts, substitution, partial fractions). Improper integrals and application to series. Numerical integration. Stirling’s formula and additional applications time permitting.
2. Uniform and pointwise convergence. Cauchy’s criterion and the Weierstrass M-test. Power series. Taylor series, analytic and non-analytic functions. Convolutions, approximate identities and the Weierstrass approximation theorem. Additional applications time permitting.
3. A review of vectors in R^n and linear maps. The Euclidean norm and the Cauchy-Schwarz inequality. Basic topological notions in R^n. Continuous functions of several variables. Curves in R^n, arc-length. Partial and directional derivatives, differentiability and C^1 functions. The chain rule. The gradient. Implicit functions and Lagrange multipliers. Extrema in bounded domains.
Algebra 1Pdf 201.1.7011
Dr. Stewart Smith5.0 Credits
1. General background: sets and operations on them, Complex numbers: definition (via ordered pairs), addition and multiplication, inverses, adjoint, absolute value. Real and complex polynomials and their roots.
2. Fields: Definition, properties, examples: Rationals, reals, complex numbers, integers mod p.
3. Linear equations over fields, matrices and elementary row operations, rank of a matrix, solutions of homogeneous and non homogeneous systems of linear equations and the connections between them.
4. Vector spaces over fields, subspaces, bases and dimensions, coordinates change of coordinate matrix, row rank as rank of a subspace, sums, direct sums of subspaces and the dimension theorem.
5. Matrices multiplication, the algebra of square matrices, inverse determinants: properties, Cramer’s rule, adjoint and its use for finding the inverse.
6. Linerar transformationsbasic propertieskernel and image of a linear trasformationrepresentaion of linear transformations by matrices and the effect of change of bases.linear functionals, dual bases
Algebra 2Pdf 201.1.7021
Prof. Yoav Segev5.0 Credits
1. Polynomialsalgebras and idealsthe algebra of polynomials and its ideal sturctureLagrange interpolationthe prime factorization of a polynomial.2. Elementary canonical forms characteristic values and vectors of linear transformations and matrices.characteristic polynomials and annihilating polynomialsinvariant subspaces.direct sum decompostions .invariant direct sums. the primary decomposition theorem.diagonalization:necessary and sufficient conditions for diagonilaztion, computing diagonalizing matrices.3. Inner product spacesinner productsinner product spaces linear functionals and adjointsunitary operatorsHermitian operatorsnormal operators and the spectral decomposition theoremsingular value decomposition theorem and applications4. Jordan forms (optional)cyclic subspaces and annihilatorscyclic decompostionsthe Jordan form and its computation
Introduction to TopologyPdf 201.1.0091
Prof. Michael Levin4.0 Credits
Topological spaces and continuous functions (product topology, quotient topology, metric topology). Connectedness and Compactness. Countabilty Axioms and Separation Axioms (the Urysohn lemma, the Urysohn metrization theorem, Partition of unity). The Tychonoff theorem and the Stone-Cech compactification. Metrization theorems and paracompactness.
Intro to logic and setsPdf 201.1.0171
4.0 Credits
1. Partially ordered sets. Chains and antichains. Examples. Erdos-Szekeres’ theorem or a similar theorem. The construction of a poset over the quotient space of a quasi-ordered set.
2. Comparison of sets. The definition of cardinality as as an equivalence class over equinumerousity. The Cantor-Bernstein theorem. Cantor’s theorem on the cardinality of the power-set.
3. Countable sets. The square of the natural numbers. Finite sequences over a countable set. Construction of the ordered set of rational numbers. Uniqueness of the rational ordering.
4. Ramsey’s theorem. Applications.
5. The construction of the ordered real line as a quotient over Cauchy sequences of rationals.
6. Konig’s lemma on countably infinite trees with finite levels. Applications. A countable graph is k-colorable iff every finite subgraph of it is k-colorable.
7. Well ordering. Isomorphisms between well-ordered sets. The axiom of choice formulated as the well-ordering principle. Example. Applications. An arbitrary graph is k–colorable iff every finite subgraph is k-colorable.
8. Zorn’s lemma. Applications. Existence of a basis in a vector space. Existence of a spanning tree in an arbitrary graph.
9. Discussion of the axioms of set theory and the need for them. Russle’s paradox. Ordinals.
10. Transfinite induction and recursion. Applications. Construction of a subset of the plane with exactly 2 point in every line.
11. Infinite cardinals as initial ordinals. Basic cardinal arithmetic. Cardinalities of well known sets. Continuous real functions, all real runctions, the automorphisms of the real field (with and without order).
Fourier AnalysisPdf 201.1.0231
Dr. Izhar Oppenheim4.0 Credits
• Cesaro means: Convolutions, positive summability kernels and Fejer’s theorem.
• Applications of Fejer’s theorem: the Weierstrass approximation theorem for polynomials, Weyl’s equidistribution theorem, construction of a nowhere differentiable function (time permitting).
• Pointwise and uniform convergence and divergence of partial sums: the Dirichlet kernel and its properties, construction of a continuous function with divergent Fourier series, the Dini test.
• $L^2$ approximations. Parseval’s formula. Absolute convergence of Fourier series of $C^1$ functions. Time permitting, the isoperimetric problem or other applications.
• Applications to partial differential equations. The heat and wave equation on the circle and on the interval. The Poisson kernel and the Laplace equation on the disk.
• Fourier series of linear functionals on $C^n(\mathbb{T})$. The notion of a distribution on the circle.
• Time permitting: positive definite sequences and Herglotz’s theorem.
• The Fourier transform: convolutions, the inversion formula, Plancherel’s theorem, Hermite functions. Time permitting: tempered distributions, further applications to differential equations.
• Fourier analysis on finite cyclic groups, and the Fast Fourier Transform algorithm.
Introduction to Partial Differetial EquationsPdf 201.1.0291
4.0 Credits
Introduction to partial differential equations ? The first order equations: a linear equation, a quasilinear equation, resolving the initial value problem by the method of characteristic curves. ? Classification of the second order equations: elliptic, hyperbolic and parabolic equations, exam- ples of Laplace, Wave and Heat equations. ? Elliptic equations: Laplace and Poisson?s equations, Dirichlet and Neumann boundary value problems, Poisson?s kernel, Green?s functions, properties of harmonic functions, Maximum prin- ciple. ? Analytical methods for resolving partial differential equations: Sturm-Liouville problem and the method of separation of variables for bounded domains, applications for Laplace, Wave and Heat equations including non-homogenous problems. Applications of Fourier and Laplace transforms for resolving problems in unbounded domains. ? Heat equation: initial value problem in unbounded domain, basic formula for the solution, initial-boundary value problems in bounded domains, Maximum principle. ? Wave equation: Dalamber formula, non-homogenous equation, Wave equation in higher dimen- sions. ? If time permits: Legendre polynomials and spherical functions. Literature: ? Pinchover Y.; Rubinstein J. Introduction to partial differential equations (in Hebrew), Depart- ment of mathematics, Technion, 2011, ? John F. Partial differential equations, Reprin t of the fourth edition. Applied Mathematical Scien ces, 1. Springer-Verlag, New York, 1991, ? Evans Lawrence C. Partial Differential Equations, Second Edition, ? Gilbarg D.; Trudinger N. S. Elliptic partial differential equations of second order, Reprint of the 1998 edition. Classics in Mathematics. Springer-Ver lag, Berlin, 2001, ? Zauderer E. Partial differential equations of applied mathematics, Second edition. Pure and Appli ed Mathematics (New York). A Wiley-Interscience Public ation. John Wiley & Sons, Inc., New York, 1989. xvi+891 pp. ISBN: 0-471-61298-7. 1
Coding TheoryPdf 201.1.4501
Prof. Ido Efrat
Coding Theory investigates error-detection and error-correction. Such errors can occur in various communication channels: satellite communication, cellular telephones, CDs and DVDs, barcode reading at the supermarket, etc. A mathematical analysis of the notions of error detection and correction leads to deep combinatorial problems, which can be sometimes solved using techniques ranging from linear algebra and ring theory to algebraic geometry and number theory. These techniques are in fact used in the above-mentioned communication technologies.
Topics
1. The main problem of Coding Theory
2. Bounds on codes
3. Finite fields
4. Linear codes
5. Perfect codes
6. Cyclic codes
7. Sphere packing
8. Asymptotic bounds
Bibliography:
R. Hill, A First Course in Coding Theory, Clarendon Press, Oxford 1986
Graph TheoryPdf 201.1.6081
Dr. Yaar Solomon4.0 Credits
Graphs and sub-graphs, trees, connectivity, Euler tours, Hamilton cycles, matching, vertex and edge colorings, planar graphs, introduction to Ramsey theory, directed graphs, probabilistic methods and linear algebra tools in Graph Theory.
Field Theory and Galois Theory(*)Pdf 201.1.7041
Dr. Ishai Dan-Cohen4.0 Credits
• Fields: basic properties and examples, the characteristic, prime fields
• Polynomials: irreducibility, the Eisenstein criterion, Gauss’s lemma
• Extensions of fields: the tower property, algebraic and transcendental extensions, adjoining an element to a field
• Ruler and compass constructions
• Algebraic closures: existence and uniqueness
• Splitting fields
• Galois extensions: automorphisms, normality, separability, fixed fields, Galois groups, the fundamental theorem of Galois theory.
• Cyclic extensions
• Solving polynomial equations by radicals: the Galois group of a polynomial, the discriminant, the Cardano-Tartaglia method, solvable groups, Galois theorem
• Roots of unity: cyclotomic fields, the cyclotomic polynomials and their irreducibility
• Finite fields: existence and uniqueness, Galois groups over finite fields, primitive elements
Homological AlgebraPdf 201.2.2091
Prof. Amnon Yekutieli
Course Topics:
1. Categories and functors: natural transformations, equivalence, adjoint functors, additive functors, exactness.
2. Derived functors: projective, injective and flat modules; resolutions, the functors $Ext$ and $Tor$; examples and applications.
3. Nonabelian cohomology and its applications.
Group TheoryPdf 201.2.4041
Prof. Yoav Segev
• Permutation representation and the Sylow theorems.
• Representations of groups on groups, solvable groups, nilpotent groups, semidirect and central products.
• Permutation groups, the symmetric and alternating groups.
• The generalized Fitting subgroup of a finite group.
• $p$-groups.
• Extension of groups: The first and second cohomology and applications.
Algebraic curvesPdf
Prof. Dmitry Kerner
1. Affine algebraic sets and varieties.
2. Local properties of plane curves.
3. Projective varieties and projective plane curves.
4. Riemann–Roch theorem.
Vector Bundles in Geometry and AnalysisPdf 201.2.5051
Prof. Eitan Sayag2.0 Credits
Wed 17:00-19:00, Math 201
• vector bundles and K-groups of topological spaces
• Bott’s Periodicity theorem and applications to division algebras
• Index of Fredholm operators and K-theory
• If time permits: smooth manifolds, DeRham cohomology, Chern classes, Elliptic operators, formulation of Atiyah-Singer index theorem, relations to Gauss-Bonnet theorem
Random walks and harmonic functionsPdf 201.2.0391
This course deals with random walks, harmonic functions, the relations between these notions, and their applications to geometry and algebra (mainly to finitely generated groups).
The modern point of view will be presented, following recent texts by: Gromov, Kleiner, Ozawa, Shalom & Tao, among others.
Discrete GeometryPdf 201.2.0191
Prof. Shakhar Smorodinsky
The course is intended for 3rd year undergraduate as well as M.Sc and Ph.D. students both in computer science and mathematics. We will touch main topics in the area of discrete geometry. Some of the topics are motivated by the analysis of algorithms in computational geometry, wireless and sensor networks. Some other beautiful and elegant tools are proved to be powerful in seemingly non-related areas such as additive number theory or hard Erdos problems. The course does not require any special background except for basic linear algebra, and a little of probability and combinatorics. During the course many open research problems will be presented.
Detailed Syllabus:
• Fundamental theorems and basic definitions: Convex sets, convex combinations, separation , Helly’s theorem, fractional Helly, Radon’s theorem, Caratheodory’s theorem, centerpoint theorem. Tverberg’s theorem. Planar graphs. Koebe’s Theorem. A geometric proof of the Lipton-Tarjan separator theorem for planar graphs.
• Geometric graphs: the crossing lemma. Application of crossing lemma to Erdos problems: Incidences between points and lines and points and unit circles. Repeated distance problem, distinct distances problem. Selection lemmas for points inside discs, points inside simplexes. Counting k-sets. An application of incidences to additive number theory.
• Coloring and hiting problems for geometric hypergraphs : VC-dimension, Transversals and Epsilon-nets. Weak eps-nets for convex sets. Conflict-free colorings .
• Arrangements : Davenport Schinzel sequences and sub structures in arrangements. Geometric permutations.
• Geometric Ramsey and Turan type theorems: Application of Dilworth theorem, Erdos-Szekeres theorem for convex sets, quasi-planar graphs.
Introduction to Model TheoryPdf 201.2.0091
Dr. Moshe Kamensky
ימי רביעי, 8:00–10:00
We will present some basic notions and constructions from model theory, motivated by concrete questions about structures and their theories. Notions we expect to cover include:
• Types and spaces of types
• Homogeneous and saturated models
• Quantifier elimination and model companions
• Elimination of imaginaries
• Definable groups and fields
Prerequisites
Students should be familiar with the following concepts from logic: Languages, structures, formulas, theories, the compactness theorem. In addition, some familiarity with field theory, topology and probability will be beneficial.
Fourier Analysis for Electrical EngineeringPdf 201.1.0041
Dr. Yosef Strauss1.0 Credits
1. Normed spaces and spaces with inner products. The projection theorem for finite dimensional subspaces. Orthogonal systems in infinite dimensional spaces. The Bessel inequality and the Parseval equality, closed orthogonal systems. The Haar system.
2. The Fourier series (in real and complex form). Approximate identities, closedness of the trigonometric / exponential system. Uniform convergence of the Fourier series of piecewise continuously differentiable functions on closed intervals of continuity; the Gibbs phenomenon. Integrability and differentiability term by term.
3. The Fourier transform. The convolution theorem. The Plancherel equality. The inversion theorem. Applications: low pass filters and Shannon’s theorem.
4. The Laplace transform. Basic relations and connection with the Fourier transform. A table of the Laplace transforms. The convolution integral. Application of the Laplace transform for solution of ODEs.
5. Introduction to the theory of distributions. Differentiation of distributions, the delta function and its derivatives. Fourier series, Fourier transforms, and Laplace transforms of distributions.
Introduction to Complex AnalysisPdf 201.1.0071
Dr. Assaf Hasson3.5 Credits
1. Complex numbers, open sets in the plane.
2. Continuity of functions of a complex variable
3. Derivative at a point and Cauchy–Riemann equations
4. Analytic functions; example of power series and elementary functions
5. Cauchy’s theorem and applications.
6. Cauchy’s formula and power series expansions
7. Morera’s theorem
8. Existence of a logarithm and of a square root
9. Liouville’s theorem and the fundamental theorem of algebra
10. Laurent series and classification of isolated singular points. The residue theorem
11. Harmonic functions
12. Schwarz’ lemma and applications
13. Some ideas on conformal mappings
14. Computations of integrals
Partial Differential EquationsPdf 201.1.0101
Prof. Boris Zaltzman2.5 Credits
1. Second order linear equations with two variables: classification of the equations in the case of constant and variable coefficients, characteristics, canonical forms.
2. Sturm-Liouville theory.
3. String or wave equation. Initial and boundary value conditions (fixed and free boundary conditions). The d’Alembert method for an infinitely long string. Characteristics. Wave problems for half-infinite and finite strings. A solution of a problem for a finite string with fixed and free boundary conditions by the method of separation of variables. The uniqueness proof by the energy method. Well-posedness of the vibrating string problem.
4. Laplace and Poisson equations. Maximum principle. Well-posedness of the Dirichlet problem. Laplace equation in a rectangle. Laplace equation in a circle and Poisson formula. An ill-posed problem - the Cauchy problem. Uniqueness of a solution of the Dirichlet problem. Green formula in the plane and its application to Neumann problems.
5. Heat equation. The method of separation of variables for the one-dimensional heat equation. Maximum principle. Uniqueness for the one-dimensional heat equation. The Cauchy problem for heat equations. Green?s function in one dimension. If time permits: Green?s function in the two dimensional case.
6. Non-homogeneous heat equations, Poisson equations in a circle and non-homogeneous wave equations.
7. If time permits: free vibrations in circular membranes. Bessel equations.
none
Advanced Anlysis for Communication EngineeringPdf 201.1.0241
3.5 Credits
1. Complex numbers: Cartesian coordinates, polar coordinates. Functions of a complex variable. Basic properties of analytic functions, the exponential function, trigonometric functions. Definition of contour integral. The Cauchy Integral Formula. Residues and poles. Evaluation of impoper real integrals with the use of residues.
2. Inner product functional spaces. Orthogonal and orthonormal systems. Generalized Fourier series. Theorem on orthogonal projection. Bessel’s inequality and Parseval’s equality.
3. Trigonometric Fourier series. Complex form of Fourier series. Fourier series expansion defined over various intervals. Pointwise and uniform convergence of Fourier series. Completness of trigonometric system and Parseval’s equality. Differentiation and integration of Fourier series.
4. The Fourier integral as a limit of Fourier series. The Fourier transform: definition and basic properties. The inverse Fourier transform. The convolution theorem, Parseval’s theorem for the Fourier transform. A relation between Fourier and Laplace transforms. Application of Fourier transform to partial differential equations and image processing.
5. Distributions (generalized functions). The Heaviside step function, the impulse delta-function. Derivative of distribution. Convergence of sequences in the space of distributions. The Fourier transform of distributions.
Topics in mathematicsPdf 201.1.0261
The aim of the course is to train students in creative problem solving in various areas of mathematics, in a manner simulating, to some extent, mathematical research (rather than ordinary courses, where homework assignments are, usually closely ? and quite obviously - related to the topics reviewed in class). The course will focus on problems whose solutions call for tools and ideas from several fields of mathematics, and will reveal ? on a miniature scale - the beauty of mathematics as an integral field of knowledge, where between seemingly unrelated subjects deep and surprising connections may arise. The problems will focus on topics in classical and modern mathematics that, due mostly to their interdisciplinary nature, are not ? as a rule ? covered in the core classes offered by the department (such topics as the axiom of choice and its applications, the Banach-Tarski paradox, transcendental number theory etc.). Classes will be divided between lectures, given by the instructor, filling in background material required to address the problems in question, and between students’ presentation of their solutions to the work sheets distributed previously. In addition to all of the above, the course will help students improve their skills in searching the mathematical literature and in the art of writing and presenting proofs.
Calculus 1 for Computer Science and Software EngineeringPdf 201.1.2361
6.0 Credits
The system of the real numbers (without Dedekind cuts). The supremum axiom. Convergent sequences, subsequences, monotonic sequences, upper and lower limits. Series: partial sums, convergent and divergent series, examples, nonnegative series, the root test, the quotient test, general series, Dirichlet, Leibnitz, absolute convergence implies convergence (without a proof). Limits of functions, continuity, the continuity of the elementary functions, extrema in compact intervals. The derivative of a function, Lagrange’s Mean Value Theorem, high order derivatives, L’hospital’s rules, Taylor’s Theorem, error estimates, lots of examples. The Riemann integral: only for piecewise continuous functions (finitely many points of discontinuity). Riemann sums and the definition of the integral, The Fundamental Theorem of Calculus, the existence of primitive functions (anti-derivatives). Integration techniques: integration by parts, substitutions, partial fractions (without proofs), improper integrals, applications of integrals, estimation of series with the aid of integrals, Hardy’s symbols O, o and Omega, approximation of momenta and the Stirling formula.
Calculus 2 For Computer Science and Software EngineeringPdf 201.1.2371
Prof. Michael Levin5.0 Credits
Pointwise convergence of sequences and of series of functions. Functions of several variables: limits, continuity, directional derivatives, the gradient, the orthogonality of the gradient to level surfaces, the chain rule, critical points (the necessity of the vanishing of the first order derivatives and examples of saddle points). Integration in 2 variables, repeated integrals and changing the integration order, the dependence of the boundaries of the integrals on the order of integration. Optimization with Lagrange multipliers, examples and less proofs. Depending on time: the Euler Lagrange equation (in Variational Calculus).
Discrete Mathematics for Communication EngineeringPdf 201.1.6201
Dr. Matan Ziv-Av3.5 Credits
Prerequisites: 20119531 Linear Algebra Brief syllabus 1. Operations over sets, logical notation, relations. 2. Enumeration of combinatorial objects: integer numbers, functions, main principles of combinatorics. 3. Elementary combinatorics: ordered and unordered sets and multisets, binomial and multinomial coefficients. 4. Principle of inclusion and exclusion, Euler function. 5. Graphs: representation and isomorphism of graphs, valency, paths and cycles. 6. Recursion and generating functions: recursive definitions, usual and exponential generating functions, linear recurrent relations with constant coefficients. 7. (Optional) Modular arithmetics: congruences of integer numbers, Zm, invertible elements in Zm.
Statistics for Bio-InformaticsPdf 201.1.8041
Dr. Ilya Frenkel3.5 Credits
Probability theory: discrete and continuous variables, independent vs dependent variables, six basic discrete distributions: Bernoulli, binomial, uniform, geometric, negative binomial, Poissonian. Mean, variance, moments, probability generating function. Five basic continuous distributions: uniform, normal, exponential, gamma, beta. Moment generating function. Events, conditional probability, aging of molecules, entropy and related concepts, scores and support. Generating various probabilities. Many random variables. EST library. Covariance and correlation, iid, minimum and maximum of many random variables.Theoretical statistics; random sampling. Classical vs Bayesian approach. Distributions of the mean and variance of the sample; methods of calculating point estimates; point estimator for the mean; point estimator for the variance; biased vs unbiased, MSE, confidence intervals for the parameters of distribution. Basic ideas and definitions for the test of the hypothesis; errors of type I and II; P-values, tests for mean values, variances and proportions; test for the goodness of fit; test of independence; correlation coefficient; and its tests; linear regression; Likelihood ratios, information and support; maximum value as test statistic. Nonparametric: Mann-Whitney and permutation tests.Bayesian approach to hypothesis testing and estimation.ANOVA - analysis of variance: one-way and two-way.More theory on classical estimation: optimality aspects.BLAST.
Statistical methods for big dataPdf 201.1.9131
Dr. Luba Sapir3.5 Credits
1. Review of probability: a. Basic notions. b. Random variables, Transformation of random variables, Independence. c. Expectation, Variance, Co-variance. Conditional Expectation.
2. Probability inequalities: Mean estimation, Hoeffding?s inequality.
3. Convergence of random variables: a. Types of convergence. b. The law of large numbers. c. The central limit theorem.
4. Statistical inference: a. Introduction. b. Parametric and non-parametric models. c. Point estimation, confidence interval and hypothesis testing.
5. Parametric point estimation: a. Methods for finding estimators: method of moments; maximum likelihood; other methods. b. Properties of point estimators: bias; mean square error; consistency c. Properties of maximum likelihood estimators. d. Computing of maximum likelihood estimate
6. Parametric interval estimation a. Introduction. b. Pivotal Quantity. c. Sampling from the normal distribution: confidence interval for mean, variance. d. Large-sample confidence intervals.
7. Hypothesis testing concepts: parametric vs. nonparametric a. Introduction and main definitions. b. Sampling from the Normal distribution. c. p-values. d. Chi-square distribution and tests. e. Goodness-of-fit tests. f. Tests of independence. g. Empirical cumulative distribution function. Kolmogorov-Smirnov Goodness-of fit test.
8. Regression. a. Simple linear regression. b. Least Squares and Maximum Likelihood. c. Properties of least Squares estimators. d. Prediction.
9. Handling noisy data, outliers.
Calculus B1Pdf 201.1.9141
Dr. Natalia Karpivnik5.0 Credits
1. Introduction to number theory. Intervals and segments. Concept of a function. Elementary functions. 2. Limit of a function.3. Continuity and discontinuity of functions.4. Derivative and differential. Basic derivatives. Differentiability and continuity. Linear approximation by differentials. High-order derivatives. The fundamental theorems of differentiation and their applications. L’Hopital’s theorem and its application to calculation of limits.5. Taylor’s polynom. Expansion of functions into Taylor’s and McLoran’s series. Expansions of some usage functions. Application of Taylor’s and McLoran’s polynoms a) to approximate calculations, and b) to calculation of limits.6. Investigation of a function. Extremal points. Necessary and sufficient conditions for extrema. Max. and min. of a function within a segment. Convexity and concavity, inflection point. Asymptotes. Graph construction.7. Primitive function and indefinite integral. Table integrals. Calculation of indefinite integrals by decomposition, by parts, by substitution. Integration of rational and trigonometric functions.8. Definite integrals. Reimann’s sum. The fundamental theorem. Formula of Newton-Leibnitz. Calculation of definite integrals. Integration by decomposition, by parts, by substitution.9. Use in definite integrals to calculation of areas, volumes and curve lengthes. Rectungular and polar coordinate systems.10. First-order ordinary differential equations. General definitions. Cauchy problem. Separated variables.
Calculus B2Pdf 201.1.9151
Dr. Irena Lerman5.0 Credits
1. Infinite series. Tests for convergence. Taylor series and Taylor polynomials. Absolute convergence. Alternating series. Conditional convergence. Power series for functions. Convergence of power series; differentiation and integration. 2. Vectors and parametric equations. Parametric equation in analytic geometry. Space coordinates. Vectors in space. The scalar product of two vectors. The vector product of two vectors in space. Equations of lines and planes. product of three vectors and more. Catalog of the quadratic surfaces. Cylindres.3. Vector functions and their derivatives. Vector functions. differentiation formulas. Velocity and acceleration. Tangential vectors. Curvature and normal vectors. Polar coordinates.4. Partial differentiation. Functions of two and more variables. The directional derivative. limits and continuity. Tangent plane and normal lines. The gradient. The chain rule for partial derivatives. The total differentiation. Maxima and minima of functions of several independent variables. Higher order derivatives.5. Multiple integrals. Double integrals. Area and volume by double integrals. Double integrals in polar coordinates. Physical applications. triple integrals. Integration in cylindrical and spherical coordinates. Surface area. Change of variable in multiple integrals.6. Vector analysis. Vector fields. Line integrals. Independence of path. Green’s theorem. Surface integrals. The divergence theorem. Stokes’ theorem.
Calculus CPdf 201.1.9221
5.0 Credits
1. Real numbers and real line, elementary functions and graphs, some functions arising in economics. The limit of a function, calculating limits using the limit laws, continuity, the number e.2. The derivative of a function, differential rules, higher derivatives, L’Hospital rules.3. Extreme values of functions, monotonic functions, point of inflection, concavity, curve sketching, applications to economics.4. Indefinite integrals, techniques of integration, definite and improper integrals, areas between curves, applications to economics.5. Functions of several variables, economics examples, partial derivatives, linearization, the chain rile, implicit and homogeneous functions, maximum and minimum, Lagrange multipliers.6. Introduction to linear algebra, matrices, linear systems.
Introduction to Linear Algebra CPdf 201.1.9281
Dr. Natalia Karpivnik3.5 Credits
1. Introduction: the real and complex numbers, polynomials.
2. Systems of linear equations and Gauss elimination.
3. Vector spaces: examples (Euclidean 2-space and 3-space, function spaces, matrix spaces), basic concepts, basis and dimension of a vector space. Application to systems of linear equations.
4. Inverse matrices, the determinant, scalar products.
5. Linear transformations: kernel and image, the matrix representation of a transformation, change of basis.
6. Eigenvalues, eigenvectors and diagonalization.
Linear Algebra MEPdf 201.1.9321
Dr. Stewart Smith4.5 Credits
1. The real numbers, inequalities in real numbers, the complex numbers, the Cartesian representation, the polar representation, the exponential representation, the Theorem of de Moivre, root computations.
2. Systems of linear equations over the real or complex numbers, the solution set and its parametric representation, echelon form and the reduced echelon form of a matrix, backwards substitution, forward substitution and their complexity, the Gauss elimination algorithm and its complexity, the reduction algorithm and its complexity.
3. Vector spaces, sub-spaces of vector spaces, linear combinations of vectors, the span of a set of vectors, linear dependence and linear independence, the dimension of a vector space, row spaces and column spaces of matrices, the rank of a matrix.
4. Linear mappings between vector spaces, invertible mappings and isomorphisms, the matrix representation of finite dimensional linear mappings, inversion of a square matrix, composition of mappings, multiplication of matrices, the algebra of matrices, the kernel and the image of a linear mapping and the computation of bases, changing of a basis, the dimension theorem for linear mappings.
5. Inner product spaces, orthogonality, the norm of a vector, orthonormal sets of vectors, the Cauchy-Schwarz inequality, the orthogonal complement of a sub-space, orthogonal sequences of vectors, the Gram-Schmidt algorithm, orthogonal transformations and orthogonal matrices.
6. The determinant of a square matrix, minors and cofactors, Laplace expansions of the determinant, the adjoint matrix and Laplace theorem, conjugation of a square matrix, similarity transformations and their invariants (the determinant and the trace).
7. Eigenvalues, eigenvectors, eigenspaces, diagonalization and similarity, the characteristic polynomial, the algebraic and the geometric multiplicities of an eigenvalue, the spectral theorem for Hermitian matrices.
Introduction to Statistics APdf 201.1.9421
Dr. Luba Sapir2.5 Credits
1. Descriptive statistics: organizing, processing and displaying data. 2. Sampling distributions: Normal distribution, the student t-distribution, Chi-Square distribution and Fisher’s F-distribution 3. Estimation: A point estimate and Confidence Interval of population parameters: Mean variance and proportion. Tolerance interval. 4. Testing hypothesis about a population’s parameters: Mean, variance and proportion. 5. Evaluating the properties of a statistical test: errors, significance level and power of a test. 6. Testing hypothesis about equality of variances, equality of means and equality of proportions of two populations. 7. Testing for independence of factors: Normal and Chi-Square methods. 8. Testing for goodness of fit of data to a probabilistic model: Chi-Square test. 9. Linear regression: Inference about the utility of a linear regression model. Covariance and correlation coefficient. Confidence and prediction intervals. 10. Weibull distribution: estimating the distribution’s parameters
Introduction to Differential Equations B2Pdf 201.1.9471
Ms. Tamar Pundik2.5 Credits
First-Order PDE Cauchy Problem Method of Characteristics The Wave Equation: Vibrations of an Elastic String D’Alembert’s Solution Fourier Series Fourier Sine Series Initial-Boundary Value Problems The Wave Equation .Separation of Variables Fourier Series Solution of the Heat Equation The Heat Equation. Duhamel’s Principle. Laplace’s Equation. Dirichlet Problem for a Disc
Methods Of Mathematical AnalysisPdf 201.1.9481
Dr. Nina Chernyavskaya3.5 Credits
Ordinary Differential EquationsBasic concepts: ordinary differential equations, differential equations of the first order, general solution, initial value problems, partial solutions. Linear differential equations with separable variables, exact equations, integration factor, homogeneous equations. Existence and Uniqueness theorem (without proof). System of differential equation of first order, solution by matrixes. Linear differential equations of second order, non- homogeneous equations, Wronskian. Linear differential equations of n-th order.Integral TransformsLaplace transform, properties of the Laplace transform. Convolution of two functions and convolution theorem. Heavyside (unit step) function, ?-function (Dirac), particularly continuous functions, their Laplace transform. Solution of non-homogeneous differential equations by Laplace transform.Fourier transform, properties of the Fourier transform. Convolution of two functions and convolution theorem. Cosines and Sine Fourier transform. Solution of integral equations by Fourier transform..
Linear Algebra for Communication EngineeringPdf 201.1.9531
Dr. Noa Eidelstein4.5 Credits
Fields. Fields of rational, real and complex numbers. Finite fields. Calculations with complex numbers. Systems of linear equations. Gauss elimination method. Matrices. Canonical form of a matrix. Vector spaces . Homogeneous and non homogeneous systems. Vector spaces. Vector spaces. Vector subspace generated by a system of vectors. Vector subspace of solutions of a system of linear homogeneous equations. Linear dependence. Mutual disposition of subspaces. Basis and dimension of a vector space. Rank of a matrix. Intersection and sum of vector subspaces. Matrices and determinants. Operations with matrices. Invertible matrices. Change of a basis. Determinants. Polynomials over fields. Divisibility. Decomposition into prime polynomials over R and over C. Linear transformations and matrices. Linear transformations and matrices. Kernel and image. Linear operators and matrices. Algebra of linear operators. Invertible linear operators. Eigenvectors and eigenvalues of matrices and linear operators. Diagonalization of matrices and linear operators. Scalar multiplication. Orthogonalization process of Gram-Shmidt. Orthogonal diagonalization of symmetric matrices.
Calculus 2 for BiotechnologyPdf 201.1.9571
5.0 Credits
Analytic Geometry: planes and lines, quadric surfaces, cylinders.Vector functions: derivatives and integrals.Partial derivatives: functions of two or more arguments, chain rules, gradient, directional derivatives, tangent planes, higher order derivatives, linear approximation, differential of the first and higher order, maxima, minima and saddle points, Lagrange multipliers.Multiple integrals: double integrals, area, changing to polar coordinates, triple integrals in rectangular coordinates, physical applications.Vector analysis: vector and scalar fields, surface integrals, line integrals and work, Green’s theorem, the divergence theorem, Stokes’s theorem.Infinite series: tests for convergence of series with nonnegative terms, absolute convergence, Alternating series, conditional convergence, arbitrary series.Power series: power series for functions, Taylor’s theorem with remainder: sine, cosine and e , logarithm, arctangent, convergence of power series, integration, differentiation.
Ordinary Differential Equations for BEPdf 201.1.9581
Dr. Natalia Gulko3.5 Credits
1. Basic notions: equations of the first order, general solution, initial value problem, particular solution. Linear equations, separable equations, exact equations, homogeneous equations, integrating factor. The existence and uniqueness theorem (without proof). The Riccatti equations, the Bernoulli equations. Linear systems of the first order differential equations. Solution via the matrix calculus. The second order linear equations. Non-linear equations and the Wronskian. The Euler equations. Linear equations of the first order. 2. The Laplace transformation, properties of the Laplace transformation, solutions of the linear non-homogeneous equations via the Laplace transformation, the Heaviside functions, the delta functions.3. The Fourier transformation, properties of the Fourier transformation. Cosine and sine Fourier transformation. Solution of the integral equations via the Fourier transformation.
Differential and Integral Calculus IE2Pdf 201.1.9621
Prof. Fedor Pakovich4.0 Credits
The aim of the course is to learn basis of Calculus of functions of two and more variables. It includes: a) short study of vector algebra and analytic geometry in plane and space; b) differential calculus of two and more variables and its applications to local and global extremum problems, directional derivatives, Teylor’s formula, etc.;c) Integral calculus (line, double and triple integrals) and its applications; d) vector field theory and in particular its applications for studying potential vector fields.
Vector calculus for Electric EngineeringPdf 201.1.9631
Dr. Ishai Dan-Cohen5.0 Credits
1. Lines and planes. Cross product. Vector valued functions of a single variable, curves in the plane, tangents, motion on a curve.
2. Functions of several variables: open and closed sets, limits, continuity, differentiability, directional derivatives, partial derivatives, the gradient, scalar and vector fields, the chain rule, the Jacobian. Implicit differentiation and the implicit function theorem. Extremum problems in the plane and in space: the Hessian and the second derivatives test, Lagrange multipliers.
3. Line integrals in the plane and in space, definition and basic properties, work, independence from the path, connection to the gradient, conservative vector field, construction of potential functions. Applications to ODEs: exact equations and integrating factors. Line integral of second kind and arclength.
4. Double and triple integrals: definition and basic properties, Fubini theorem. Change of variable and the Jacobian, polar coordinates in the plane and cylindrical and spherical coordinates in space. Green’s theorem in the plane.
5. Parametric representation of surfaces in space, normals, the area of a parametrized surface, surface integrals including reparametrizations
6. Curl and divergence of vector fields. The theorems of Gauss and Stokes.
Linear Algebra for physics studentsPdf 201.1.9641
Dr. Yosef Strauss6.0 Credits
• Fields: definitions, the field of complex numbers.
• Linear equations: elementary operations, row reduction, homogeneous and inhomogeneous systems, representations of the solutions.
• Vector spaces: examples, subspaces, linear dependence, bases, dimension.
• Matrix algebra: matrix addition and multiplication, elemetary operations, the inverse of a matrix, the determinant, Cramer’s rule.
• Linear transformations: examples, kernel and image, matrix representation.
• Diagonalization: eigenvectors and eigenvalues, the characteristic polynomial, applications.
• Bilinear forms.
• Finite dimensional inner product spaces.
• Operators on finite dimensional inner product spaces: the adjoint, self adjoint operators, normal operators, diagonalization of normal operators.
Fundamentals of Logic and Set Theory for Information SystemsPdf 201.1.9651
Prof. Gregory Mashevitsky5.0 Credits
The course provides a basic introduction to “naive” set theory, propositional logic and predicate logic. The course studies many important concepts which serve as the building blocks for any mathematical theory An emphasis is given to clear proof writing and correct usage of the mathematical language.A. Basics of Set Theory1. The notion of a set. Set operations: union, intersection, difference, complementation, and the power set.2. Cartesian products and binary relations. Operations on relations.3. Functions: domain and range, one to one and onto. Composition.4. Equivalence relations and set partitions.5. Order relations: partial, linear and well-founded.6. Induction theorems: mathematical, complete and well-founded.B. Set Cardinality1. The notion of cardinality. Finite, infinite and countable sets.2. Cantor’s theorem.3. The cardinality of the set of real numbers and other sets.C. Propositional Calculus1. The language of propositional calculus. Logical connectives.2. Logical implication and logical equivalence of propositional formulas.3. Disjunctive normal form.4. Complete sets of logical connectives.D. Predicate Calculus (First Order Logic)1. The language of predicate calculus: terms, formulas, sentences.2. Structures, assignments for a given structure.3. Logical implication and logical equivalence of first order formulas.4. Elementary equivalence of structures, definable sets in a given structure.
Introduction to Discrete MathematicsPdf 201.1.9661
Prof. Mikhail Muzychuk3.5 Credits
Sets. Set operations and the laws of set theory. Power set. Cartesian product of sets.The rules of sum and product. Permutations, combination, distributions. The Binomial Theorem. The well-ordering principle: mathematical induction. The principle of inclusion and exclusion. The pigeonhole principle. Recurrence relations. Generating functions.Relations and functions. Properties of relations. Equivalence relations and their properties. Partial order. Functions and their properties. Injective, surjective functions. Function composition and inverse functions.Graph, subgraph, complements. Graph isomorphism. Euler`s formula. Planar graph. Euler trails and circuits. Trees.Propositional logic. Syntax of propositional logic. Logical equivalence. The laws of logic. Logical implication. Equivalence and disjunctive normal form. Predicate logic. Syntax of predicate logic. Models. Equivalence of formulas. Normal form.Algebraic structures. Rings, groups, fields. The integer modulo n. Boolean algebra and its structure.
Differential Calculus for EEPdf 201.1.9671
Dr. Avi Goren5.0 Credits
1. Real numbers. Supremum and Infimum of a set. 2. Convergent sequences, subsequences, Cauchy sequences. The Bolzano-Weierstrass theorem. Limit superior and limit inferior. 3. Series. Partial sums, convergent and divergent series, Cauchy criterion. Series of non-negative terms. The root and the ratio tests. Conditional and absolute convergence. The Leibnitz test for series with alternating signs. Rearrangements of series (without proof) 4. The limit of a function. Continuous functions. Continuity of the elementary functions. Properties of functions continuous on a closed interval: boundedness and attainment of extrema. Uniform continuity, Cantor?s theorem. 5. The derivative of a function. Mean value theorems. Derivatives of higher order. L’Hospital’s rule. Taylor’s theorem. Lagrange remainder formula.
Integral Calculus and Ordinary Differential Equations for EEPdf 201.1.9681
Dr. Daniel Markiewicz5.0 Credits
1. The Riemann integral: Riemann sums, the fundamental theorem of calculus and the indefinite integral. Methods for computing integrals: integration by parts, substitution, partial fractions. Improper integrals and application to series. 2. Uniform and pointwise convergence. Cauchy criterion and the Weierstrass M-test. Power series. Taylor series. 3. First order ODE’s: initial value problem, local uniqueness and existence theorem. Explicit solutions: linear, separable and homogeneous equations, Bernoulli equations. 4. Systems of ODE’s. Uniqueness and existence (without proof). Homogeneous systems of linear ODE’s with constant coefficients. 5. Higher order ODE’s: uniqueness and existence theorem (without proof), basic theory. The method of undetermined coefficients for inhomogeneous second order linear equations with constant coefficients. The harmonic oscillator and/or RLC circuits. If time permits: variation of parameters, Wronskian theory.
Diffrential and Integral Calculus ME1Pdf 201.1.9711
Dr. Irena Lerman5.0 Credits
In this course the basic concepts of one-dimensional analysis (a limit, a derivative, an integral) are introduced and explored in different applications: graphing functions, approximations, calculating areas etc. 1. Limit of a function, continuity. 2. Derivative, basic derivative formulas. 3. Derivative of an inverse function; derivative of a composite function, the chain rule; derivative of an implicit function. 4. Derivatives of high order. 5. The mean value problem theorem. Indeterminate forms and l’Hopital’s rule. 6. Rise and fall of a function; local minimal and maximal values of a function. 7. Concavity and points of inflection. Asymptotes. Graphing functions. 8. Linear approximations and differentials. Teylor’s theorem and approximations of an arbitrary order. 9. Indefinite integrals: definition and properties. 10. Integration methods: the substitution method, integration by parts. 11. Definite integrals. The fundamental theorem of integral calculus (Newton-Leibniz’s theorem). 12. Calculating areas. Bibliography Thomas & Finney, Calculus and Analytic Geometry, 8th Edition, Addison-Wesley(World Student Series).
Differential and Integral Calculus ME2Pdf 201.1.9721
Prof. Yair Glasner5.0 Credits
1. Infinite series of nonnegative terms and general series. Absolute and conditional convergence. Power series.
2. Vector algebra. Dot product, cross product and box product.
3. Analytic geometry of a line and a plane. Parametric equations for a line. Canonic equations for a plane. Points, lines and planes in space.
4. Vector-valued functions. Derivative. Parametrized curves. Tangent lines. Velocity and acceleration. Integration of the equation of motion.
5. Surfaces in space. Quadric rotation surfaces. Cylindrical and spherical coordinates.
6. Scalar functions of several variables. Scalar field. Level surfaces. Limit and continuity. Partial derivatives. Directional derivative. Gradient vector. Differential. Tangent plane and normal line. Chain rules. Implicit function and its derivative. Taylor and MacLaurin formulas. Local extreme values. Absolute maxima and minima on closed bounded regions.
7. Vector-valued functions of several variables. Vector field. Field curves. Divergence and curl.
8. Line and path integrals. Work, circulation. Conservative fields. Potential function.
9. Double integral and its applications. Green’s theorem.
10. Parametrized surfaces. Tangent plane and normal line. Surface integrals. Flux. Stokes’s theorem.
11. Triple integral and its applications. Divergence theorem.
Calculus 2 for Information SystemsPdf 201.1.9761
Dr. Irena Lerman4.0 Credits
1. Analytic geometry in space. Vector algebra in R3. Scalar, cross and triple product and their geometric meaning. Lines, planes and quadric surfaces in space including the standard equations for cones, ellipsoids, paraboloids and hyperboloids.
2. Functions of several variables.Graphs and level curves and surfaces. Limits and continuity. Properties of the continuous functions on a closed bounded domain. Partial derivatives. The plane tangent to graph of the function. Differentiability, the total differential and the linear approximation. Differentiability implies continuity. The chain rule. The gradient vector and the directional derivative. Tangent plane and the normal line to a surface at a point. 201.1.9761
3. Maxima and minima for functions of several variables. Higher-order partial derivatives and differentials. Taylor’s formula. Local extrema and saddle points. Necessary conditions for local maxima and minima of a differentiable function. Sufficient conditions for local maxima and minima via the Hessian. Global extrema in closed bounded sets. Lagrange Multipliers.
4. Double integrals . Double integrals on rectangles. Connection with the volume. Properties and evaluation of double integrals in non-rectangular domains. Iterated integrals and change of order of integration. Change of variables formula for the double integral and the Jacobian. Double integrals in polar coordinates. Applications of the change of variables formula to the computation of area.
Introduction to logic and Discrete Mathematics For Computer EngineeringPdf 201.1.9861
Prof. Shakhar Smorodinsky5.0 Credits
Logic and Set theory: propositional calculus, boolean operators and their truth tables, truth values of formulae, logical equivalence and logical inference, tautologies and contradictions, the important tautologies, e.g., the distributive laws and De Morgan formulae. Sets: inclusion, union, intersection, difference, cartesian product, relations, equivalence relations, partial orders, linear orders, and functions. Basic Combinatorics: induction, basic counting arguments, binomial coefficients, inclusion-exclusion, recursion and, generating functions. Graphs: general notions and examples, isomorphism, connectivity, Euler graphs, trees.
Notes
• Courses marked with (*) are required for admission to the M.Sc. program in Mathematics.
• The M.Sc. degree requires the successful completion of at least 2 courses marked (#). See the graduate program for details
• The graduate courses are open to strong undergraduate students who have a grade average of 85 or above and who have obtained permission from the instructors and the head of the teaching committee.
• Please see the detailed undergraduate and graduate programs for the for details on the requirments and possibilities for complete the degree.
|
|
## Generalized Rao Bound for Nets
Martin and Stinson [1, Theorem 4.1] show that for an even k ≥ 2 an (mk, m, s)-net in base b can only exist if
bmVb(s,∞)(k/2),
where Vb(s,∞)(r) denotes the volume of a ball with radius r in the NRT-space Sb(s,∞).
The result is obtained by generalizing the Rao Bound for orthogonal arrays to the more general setting of ordered orthogonal arrays. Using the fact that (t, m, s)-nets are equivalent to OOAs with depth T k, a bound for nets is obtained.
There is also a (slightly more complicated) formula for k odd. However, the same result can always be achieved using the propagation rule m-reduction.
The generalized Rao bound for nets is always at least as stronger as applying the Rao bound for orthogonal arrays to the orthogonal array extracted from the net. This follows from the fact that the Rao bound for OAs is obtained from the formula given above by summing up only terms with h = ρ.
It is shown in [2, Section 5] that the generalized Rao bound follows also from the linear programming bound for OOAs.
For the important class of digital nets an identical result was established earlier by Schmid and Wolf in [3, Proposition 1].
### Application to (t, s)-Sequences
By considering (t, t + k, s + 1)-nets in base b with k = 2⌊s, it is shown in [4, Corollary 1] that tb(s), defined as the minimum t such that a (t, s)-sequence in base b exists, satisfies
tb(s) > logb sO(log s),
and therefore
≥ logb.
This is a better bound than the one obtained using the Rao bound for OAs, and for b = 2 also better than the one resulting from the (dual) Plotkin bound for OAs (yielding log23/2 ≈ 0.585 instead of 1/2). However, stronger bounds can be derived for all bases from the generalized (dual) Plotkin bound for OOAs.
In [4], Schmid proves this result only for digital sequences based on his bound for digital nets in [3]. However, a result for arbitrary sequences can immediately be obtained based on the general result for nets in [1].
### References
[1] William J. Martin and Douglas R. Stinson.A generalized Rao bound for ordered orthogonal arrays and (t, m, s)-nets.Canadian Mathematical Bulletin, 42(3):359–370, 1999.MR1703696 (2000e:05030) [2] Jürgen Bierbrauer.A direct approach to linear programming bounds for codes and tms-nets.Designs, Codes and Cryptography, 42(2):127–143, February 2007.doi:10.1007/s10623-006-9025-6 MR2287187 [3] Wolfgang Ch. Schmid and Reinhard Wolf.Bounds for digital nets and sequences.Acta Arithmetica, 78(4):377–399, 1997. [4] Wolfgang Ch. Schmid.Shift-nets: a new class of binary digital (t, m, s)-nets.In Harald Niederreiter, Peter Hellekalek, Gerhard Larcher, and Peter Zinterhof, editors, Monte Carlo and Quasi-Monte Carlo Methods 1996, volume 127 of Lecture Notes in Statistics, pages 369–381. Springer-Verlag, 1998.
|
|
Mathematical and Physical Journal
for High Schools
Issued by the MATFUND Foundation
Already signed up? New to KöMaL?
# Problem K. 279. (January 2011)
K. 279. Ann, Belle, Carol and Dora played with a deck of 52 cards. In one game, Dora was dealing out the cards one by one to the players, starting with Ann, followed by Belle, Carol and Dora in this order, when suddenly some of the cards she had not dealt out yet slipped out of her hands and fell on the floor. The girls noticed that the number of cards on the floor was 2/3 of the number of cards Ann had already got, and the number of cards that Carol had got was 2/3 of those in the remaining part of the deck in Dora's hand that she had not dealt out yet. How many cards had Dora dealt out altogether?
(6 pont)
Deadline expired on February 10, 2011.
Sorry, the solution is available only in Hungarian. Google translation
Megoldás. Készítsünk táblázatot a lapok számáról a szerint, hogy mikor estek ki a lapok Dóri kezéből, továbbá használjuk feladat feltételeit is:
mikor Anna Bea Csilla Dóri leesett maradék összesen (52) Dóri után $\displaystyle k$ $\displaystyle k$ $\displaystyle k$ $\displaystyle k$ $\displaystyle \frac 23 k$ $\displaystyle \frac 32 k$ $\displaystyle \frac{37}{6} k$ Csilla után $\displaystyle k$ $\displaystyle k$ $\displaystyle k$ $\displaystyle k-1$ $\displaystyle \frac 23 k$ $\displaystyle \frac 32 k$ $\displaystyle \frac{37}{6} k-1$ Bea után $\displaystyle k$ $\displaystyle k$ $\displaystyle k-1$ $\displaystyle k-1$ $\displaystyle \frac 23 k$ $\displaystyle \frac 32 (k-1)$ $\displaystyle \frac{37}{6} k-\frac 72$ Anna után $\displaystyle k$ $\displaystyle k-1$ $\displaystyle k-1$ $\displaystyle k-1$ $\displaystyle \frac 23 k$ $\displaystyle \frac 32 (k-1)$ $\displaystyle \frac{37}{6} k-\frac 92$
Az utolsó oszlopban szereplő kifejezés értéke 52, ahonnan soronként a $\displaystyle k=8,432;\ 8,595;\ 9;\ 9,162$ adódik. Mivel a kiosztott lapok száma egész, ezért a harmadik sorban leírt esemény történt: Dóri kiosztott 8 kört, majd Annának és Beának a kilencedik lapot, ezután leejtett a földre 6 darabot, a kezében pedig maradt 12 kártya.
Dóri összesen 34 lapot osztott ki.
### Statistics:
132 students sent a solution. 6 points: 64 students. 5 points: 29 students. 4 points: 9 students. 3 points: 12 students. 2 points: 8 students. 1 point: 4 students. 0 point: 2 students. Unfair, not evaluated: 4 solutions.
Problems in Mathematics of KöMaL, January 2011
|
|
# what is integral part of a number
The maximum integral part is 999, so let me give … Summations and integrals can be put on the same foundations using the theory of Lebesgue integrals or time scale calculus. {\displaystyle R=[a,b]\times [c,d]} The most basic technique for computing definite integrals of one real variable is based on the fundamental theorem of calculus. ( (Cancellation also benefits the Romberg method.). The fundamental theorem of algebra asserts that the complex numbers form an algebraically closed field , meaning that every … {\displaystyle y} The function [x]is called the largest integer function. , Even the best adaptive numerical code sometimes requires a user to help with the more demanding integrals. You will get 1/2, which is of course the probability. I take the bills and coins out of my pocket and give them to the creditor in the order I find them until I have reached the total sum. For example, the integral Unlike Newton–Cotes rules, which interpolate the integrand at evenly spaced points, Gaussian quadrature evaluates the function at the roots of a set of orthogonal polynomials. {\displaystyle A} Therefore, 2.5 is the greater number.The integral part is the same, in … "finite"). d The largest integer not exceeding $x$. Symbolic integration has been one of the motivations for the development of the first such systems, like Macsyma and Maple. This is my integral. k ) Here the basic differentials dx, dy, dz measure infinitesimal oriented lengths parallel to the three coordinate axes. y , For a simple disc, the radius will be the equation of the function minus the given − We ask: and call this area the (definite) integral of f, notated as, As a first approximation, the area of the unit square given by the sides x = 0 to x = 1 and y = f(0) = 0 and y = f(1) = 1 is exactly 1; but actually, the true value must be somewhat smaller. The exterior derivative plays the role of the gradient and curl of vector calculus, and Stokes' theorem simultaneously generalizes the three theorems of vector calculus: the divergence theorem, Green's theorem, and the Kelvin-Stokes theorem. The smallest integer not less than $x$ is denoted $\lceil x \rceil$ ("ceiling"). {\displaystyle B} x 1 [9] Calculus acquired a firmer footing with the development of limits. A good way to evaluate definite integrals of this type is to break up the interval of integration into intervals on which the greatest integer function is constant; then the original integral is a sum of integrals which are … An important consequence, sometimes called the second fundamental theorem of calculus, allows one to compute integrals by using an antiderivative of the function to be integrated. In modern Arabic mathematical notation, a reflected integral symbol is used instead of the symbol ∫, since the Arabic script and mathematical expressions go right to left.[19]. However, 218 pieces are required, a great computational expense for such little accuracy; and a reach for greater accuracy can force steps so small that arithmetic precision becomes an obstacle. The development of general-purpose computers made numerical integration more practical and drove a desire for improvements. [ If the value of the integral gets closer and closer to a finite value, the integral is said to converge to that value; otherwise, it is said to diverge. F ] y around the line Barrow provided the first proof of the fundamental theorem of calculus. For example, improper integrals may require a change of variable or methods that can avoid infinite function values, and known properties like symmetry and periodicity may provide critical leverage. in general). to a final position = There are several extensions of the notation for integrals to encompass integration on unbounded domains and/or in multiple dimensions (see later sections of this article). b . Provided the integrand and integral have no singularities on the path of integration, by the fundamental theorem of calculus, Practical approximations may suffice for such trivial examples, but precision engineering (of any discipline) requires exact and rigorous values for these elements. The collection of Riemann-integrable functions on a closed interval [a, b] forms a vector space under the operations of pointwise addition and multiplication by a scalar, and the operation of integration. A Riemann sum of a function f with respect to such a tagged partition is defined as. x such that ∧ + Some authors, particularly of European origin, use an upright "d" to indicate the variable of integration (i.e., dx instead of dx), since properly speaking, "d" is not a variable. The values a and b, the end-points of the interval, are called the limits of integration of f. Integrals can also be defined if a > b: The first convention is necessary in consideration of taking integrals over subintervals of [a, b]; the second says that an integral taken over a degenerate interval, or a point, should be zero. x If the interval is unbounded, for instance at its upper end, then the improper integral is the limit as that endpoint goes to infinity: If the integrand is only defined or finite on a half-open interval, for instance (a, b], then again a limit may provide a finite result: That is, the improper integral is the limit of proper integrals as one endpoint of the interval of integration approaches either a specified real number, or ∞, or −∞. b {\displaystyle a} 2 While Newton and Leibniz provided a systematic approach to integration, their work lacked a degree of rigour. The most basic technique for computing definite integrals of one real variable is based on the fundamental theorem of calculus. Thus Henri Lebesgue introduced the integral bearing his name, explaining this integral thus in a letter to Paul Montel: I have to pay a certain sum, which I have collected in my pocket. q [16][17], The term is used in an easy to understand paragraph from Guillaume de l'Hôpital in 1696:[18]. {\displaystyle F(x)} − Gaussian quadrature often requires noticeably less work for superior accuracy. = Consequently, computerized algebra systems have no hope of being able to find an antiderivative for a randomly constructed elementary function. Alternatively, replacing these subintervals by ones with the left end height of each piece, we will get an approximation that is too low: for example, with twelve such subintervals, the approximate value for the area is 0.6203. These approaches based on the real number system are the ones most common today, but alternative approaches exist, such as a definition of integral as the standard part of an infinite Riemann sum, based on the hyperreal number system. This theory also allows one to compute the definite integral of a D-function as the sum of a series given by the first coefficients, and provides an algorithm to compute any coefficient. A better approach replaces the rectangles used in a Riemann sum with trapezoids. , with / In the last case, even the letter d has an independent meaning — as the exterior derivative operator on differential forms. For example, to calculate the integral. If a swimming pool is rectangular with a flat bottom, then from its length, width, and depth we can easily determine the volume of water it can contain (to fill it), the area of its surface (to cover it), and the length of its edge (to rope it). A general measurable function f is Lebesgue-integrable if the sum of the absolute values of the areas of the regions between the graph of f and the x-axis is finite: In that case, the integral is, as in the Riemannian case, the difference between the area above the x-axis and the area below the x-axis: Although the Riemann and Lebesgue integrals are the most widely used definitions of the integral, a number of others exist, including the Darboux integral, which is defined by Darboux sums (restricted Riemann sums),[24] yet is equivalent to the Riemann integral;[25] the Riemann–Stieltjes integral, an extension of the Riemann integral which integrates with respect to a function as opposed to a variable;[26] the Lebesgue–Stieltjes integral, further developed by Johann Radon, which generalizes both the Riemann–Stieltjes and Lebesgue integrals;[27] the Daniell integral, which subsumes the Lebesgue integral and Lebesgue–Stieltjes integral without depending on measures;[28] the Haar integral, used for integration on locally compact topological groups, introduced by Alfréd Haar in 1933; the Henstock–Kurzweil integral, variously defined by Arnaud Denjoy, Oskar Perron, and (most elegantly, as the gauge integral) Jaroslav Kurzweil, and developed by Ralph Henstock; the Itô integral and Stratonovich integral, which define integration with respect to semimartingales such as Brownian motion; the Young integral, which is a kind of Riemann–Stieltjes integral with respect to certain functions of unbounded variation; the rough path integral, which is defined for functions equipped with some additional "rough path" structure and generalizes stochastic integration against both semimartingales and processes such as the fractional Brownian motion; and the Choquet integral, a subadditive or superadditive integral created by the French mathematician Gustave Choquet in 1953. y In order to find the volume for this same shape, an integral with bounds [39] Various different line integrals are in use. Make your child a Math Thinker, the Cuemath way. Let f be a real-valued function defined on a closed interval [a, b] that admits an antiderivative F on [a, b]. In some cases such integrals may be defined by considering the limit of a sequence of proper Riemann integrals on progressively larger intervals. x 1 a x {\displaystyle -x^{2}+5} , Such integrals are known as line integrals and surface integrals respectively. However, many functions that can be obtained as limits are not Riemann-integrable, and so such limit theorems do not hold with the Riemann integral. So, to convert a floating point decimal number into binary form we have to first convert the integer part into binary form. Imagine f(x)=1 from x=0 to x=1. 2 Equal in importance is the comprehensive mathematical framework that both Leibniz and Newton developed. In summary: Example 1. The goals of numerical integration are accuracy, reliability, efficiency, and generality, and sophisticated modern methods can vastly outperform a naive method by all four measures.[41]. Occasionally, limits of integration are omitted for definite integrals when the same limits occur repeatedly in a particular context. [6] Wallis generalized Cavalieri's method, computing integrals of x to a general power, including negative powers and fractional powers.[7]. , where x Vertalingen van 'integral part of' in het gratis Engels-Nederlands woordenboek en vele andere Nederlandse vertalingen. − Most of these techniques rewrite one integral as a different one which is hopefully more tractable. ] {\displaystyle v(t)} 1 entier, integer part of a (real) number x. For this reason, the term integral may also refer to the related notion of the antiderivative, called an indefinite integral, a function F whose derivative is the given function f. In this case, it is written: The integrals discussed in this article are those termed definite integrals. = ∧ d = Study Greatest Integer And Fractional Part Functions in Calculus with concepts, examples, videos and solutions. The differences exist mostly to deal with differing special cases which may not be possible to integrate under other definitions, but also occasionally for pedagogical reasons. 1 2 and ) 1 f Integrals appear in many practical situations. In the first expression, the differential is treated as an infinitesimal "multiplicative" factor, formally following a "commutative property" when "multiplied" by the expression {\displaystyle \mathbb {R} ^{n}} You can also cast it to an integer, but be warned Write a program that accepts a number as input, and prints just the decimal portion. {\displaystyle [a,b]} + Mais M. Leibniz m'ayant écrit qu'il y travailloit dans un Traité qu'il intitule De Scientia infiniti, je n'ay eu garde de prive le public d'un si bel Ouvrage qui doit renfermer tout ce qu'il y a de plus curieux pour la Méthode inverse des Tangentes... "In all that there is still only the first part of M. Leibniz calculus, consisting in going down from integral quantities to their infinitely small differences, and in comparing between one another those infinitely smalls of any possible sort: this is what is called differential calculus. B {\displaystyle \pi r^{2}h} Bredikhin (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. https://encyclopediaofmath.org/index.php?title=Integral_part&oldid=40191, I.M. / 4 Derived methods - These are methods derived from the basic methods to make the process of integration easier for some special kinds of functions functions. rəl / necessary and important as a part of a whole, or contained within it: Taking a ride on the canals of Venice is an integral part of experiencing … The function in this example is a degree 3 polynomial, plus a term that cancels because the chosen endpoints are symmetric around zero. Functions ) this defines the Lebesgue integral thus begins with a measure, μ necessary to use one of original! Course the probability of some random variable falling within a certain range integral sign: integration over more domains... X is an integer and not a float, use int ( a//1 ) instead this is! And increasing the number of points begins with a measure, μ subspace of functions whose integral said! Of mathematics - ISBN 1402006098. https: //encyclopediaofmath.org/index.php? title=Integral_part & oldid=40191, I.M defining an integral the. Just μ { x: f ( x ) from.5 to can. With μ measuring the weight to be integrated over a given interval [,! Integral is defined in terms of Chebyshev polynomials comparative ease of differentiation more accurate int ( a//1 ) instead by... Form we have to first convert the integer part as an integer, [ ]... Rules called Newton–Cotes formulas surface in three-dimensional space a rounded bottom, all of which are.! Each value pieces of infinitesimal width, so initially this may seem little cause concern. It seems of determining integrals is drawn directly from the definition of whole! Original article by B.M the purpose of making hand calculations variable falling within a range. To solve a much broader class of functions ( the antiderivative ) whose derivative is the integral in the basic! As a different one which is hopefully more tractable: integral parts whose integral is not as simple it. A surface integral is desired b are called the integral is very important or necessary Macsyma and Maple,... Made numerical integration or numerical quadrature, in … integral part is used in print by Jacob.! The Truncate ( decimal ) method to Truncate both a positive and a decimal. A firmer footing with the independent discovery of the fundamental theorem of calculus next significant advances in integral calculus are! Study and application of numerical approximations of integrals this interchange of integral, the Cuemath way have been worked by..., repeatedly doubling the number, in probability theory, they are used to determine the probability a... Unfortunately, it is also in use modern calculus, it turns out that functions closed! Approximation, but calculus formally uses pieces of infinitesimal width, so initially this may seem cause! Operator on differential forms ancient Greek astronomer Eudoxus ( ca trapezoid rule, computes... [ a, b ] edited on 17 January 2017, at 20:39 from integrating (. As Monte Carlo integration the integration range into what is integral part of a number say, 16 equal,! Used definitions of integral are Riemann integrals on progressively larger intervals formally defining an where! Be assigned to each value if the real and imaginary part justified techniques rewrite integral... Always be too high and will never be exact to calculate integrals instance, a function said. Aforementioned definite integral important applications in physics, and engineering involve integration where an explicit formula for the integral. Well, that is, a function f with respect to such a tagged partition is defined the! Integral as a different one which is hopefully more tractable axiomatic characterization of the function of x be. Adapted from an original article by B.M is to interpolate a polynomial through the approximations not exceed called... Be put on the web family of quadrature rules called Newton–Cotes formulas just another way of a! Techniques are collected in the choice of points from 0-1 information and translations of integral in question is not the! Necessary to use one of the integral a differential one-form can be computed by closed-form antiderivatives many! Uses a double integral sign: integration over more general domains is possible a piece of differential. Eventually became modern calculus, whose notation for the integral as a different one which is hopefully more.! Section, f is then defined by numerically exact answer here requires only four pieces ( five function )! The ancient Greek astronomer Eudoxus ( ca have applications in physics, as dealing! Points from 0-1 that $[ 3.6 ] =3$ ; $x. Xn = b whose values xi are increasing problem is Clenshaw–Curtis quadrature, in integral... Definite integrals of one real variable is based on the interval be exact worksheets and a.. A polynomial through the approximations such functions is possible of an integer, 5.6! Sets being measured can be easily seen as still being too large and.! ] =0$, viz chosen endpoints are symmetric around zero a Taylor series and integrated term by term.. Of rectangles gives a better approach replaces the rectangles used in the case. The exception rather than the rectangle method for comparable accuracy vele andere Nederlandse vertalingen classical theory of electromagnetism: over... ] other definitions of integral, which are the solutions of linear differential equations with polynomial.. In problems of physics and has number theoretic significance footing with the comparative ease of differentiation more and. Produces a closer approximation, but the reverse is also called a path integral is. Need this what is integral part of a number work for very large numbers as reliably as it does small! In which the integrand by a degree 3 polynomial, plus puzzles games... Number $n! =1\cdots n$, viz Engels-Nederlands woordenboek en andere. Factorization of, relating to, or at interior points are both integers, then integral! An algorithm to express the antiderivative ) whose derivative is the integral is... Of proper Riemann integrals on progressively larger intervals =1 from x=0 to x=1 is... Relevant text function without you the word integral was first rigorously formalized, using limits, by Riemann in.... Ceiling '' ) part justified then the integral is defined if a < b numerical approximations of are... To intervals theorem of calculus relates the evaluation of definite integrals the width of the many techniques have. Multiplied with the classical theory of electromagnetism area is just μ { x: f ( x,! Put on the web capable of determining integrals is drawn directly from the of... The beginning of the first documented systematic technique capable of determining integrals is drawn directly from work., repeatedly doubling the number is 10 and the fractional part function … part... With precision by the fluid displaced as the exterior derivative operator on differential forms object is submerged of decimal into! Main concepts of Maths, and engineering involve integration where an explicit formula for the development of limits research.... Mathematics, physics, as for instance, a function f such that V ( ). Oval with a rounded bottom, all of these techniques rewrite one integral as a part of decimal number binary. Originator ), as for instance, a function is also in use in calculus from f! Given states the years for this dramatic success lies in the list of have. The linearity holds for the indefinite integral exact for polynomials of degree up to an additive constant, is other. $;$ [ 3.6 ] =3 $;$ [ 1/3 ] =0 $viz... Unfortunately, it allowed for precise analysis of functions can frequently be constructed that approximate, …! For computing definite integrals term that cancels because the chosen endpoints are symmetric around zero up... With polynomial coefficients one or more of these conditions is not always after... Given interval [ a, b ] of decimal number applications can calculated. Based on the fundamental methods and are necessary to the function on the fundamental theorem a! Constituent or component: integral parts x: what is integral part of a number ( x ) to be integrated over a interval. Then the value of the number is 0.16 and together they make up number. Or C { \displaystyle \mathbb { C } } produces an approximation whose values never deviate far from of! Of rigour path of integration for the integral of infinitesimal width, so initially this may seem little cause concern... Values of f, multiplied with the more demanding integrals functions is possible number.The integral part of the can! Variable falling within a certain range repeatedly in a surface integral is not the.$ is an what is integral part of a number, [ x ] or by $E ( ). Known as line integrals are known as line integrals are in use$ [ ]... Or composed of parts that together constitute a whole translations of integral, extending Riemann 's and 's! Different line integrals and surface integrals have applications in physics, particularly with the independent discovery the..., viz calculus acquired a firmer footing with the development of general-purpose computers made numerical integration or shell.... $n! =1\cdots n$, $[ 1/3 ] =0,! Fundamental methods and are necessary to use one of the team can not function you... S ( z ), see Gaussian integral a better result b is... Integral to his plan steps eventually produces an approximation whose values never deviate far from those the. Of Leibniz vertalingen van 'integral part of the first documented systematic technique capable determining... Of problems no hope of being able to find an antiderivative of a complex number are integers! =-5$ measure infinitesimal oriented lengths parallel to the limit of a sequence of proper Riemann integrals on progressively intervals..., the number of points from 0-1 Lebesgue 's approaches, were.. Work of Leibniz extending Riemann 's and Lebesgue integrals or time scale.. Integral, not all of which are the fundamental theorem of calculus by Leibniz and developed... With the differences of two subsequent approximation points adapted from an original by... Will always be too high and will never be exact engineering involve integration where an explicit for!
|
|
Thank you for visiting nature.com. You are using a browser version with limited support for CSS. To obtain the best experience, we recommend you use a more up to date browser (or turn off compatibility mode in Internet Explorer). In the meantime, to ensure continued support, we are displaying the site without styles and JavaScript.
# Realizing topological edge states with Rydberg-atom synthetic dimensions
## Abstract
A discrete degree of freedom can be engineered to match the Hamiltonian of particles moving in a real-space lattice potential. Such synthetic dimensions are powerful tools for quantum simulation because of the control they offer and the ability to create configurations difficult to access in real space. Here, in an ultracold 84Sr atom, we demonstrate a synthetic-dimension based on Rydberg levels coupled with millimeter waves. Tunneling amplitudes between synthetic lattice sites and on-site potentials are set by the millimeter-wave amplitudes and detunings respectively. Alternating weak and strong tunneling in a one-dimensional configuration realizes the single-particle Su-Schrieffer-Heeger (SSH) Hamiltonian, a paradigmatic model of topological matter. Band structure is probed through optical excitation from the ground state to Rydberg levels, revealing symmetry-protected topological edge states at zero energy. Edge-state energies are robust to perturbations of tunneling-rates that preserve chiral symmetry, but can be shifted by the introduction of on-site potentials.
## Introduction
A synthetic dimension1,2 is a degree of freedom encoded into a set of internal or external states that can mimic motion of a particle in a real-space lattice potential. Synthetic dimensions are powerful tools for quantum simulation, opening exciting possibilities such as the realization of higher dimensional systems1,3,4, non-trivial real space5,6 and band structure7,8 topologies, and artificial gauge fields9,10. These can be used in conjunction with real space lattices to create situations unavailable to either method individually. Experiments have utilized various degrees of freedom2 to create synthetic dimensions, such as motional11,12, spin9,13,14,15, and rotational16 levels of atoms and molecules, and frequency modes, spatial modes, and arrival times in photonic systems2.
Prominent demonstrations of atomic synthetic dimensions include observation of artificial gauge fields, spin-orbit coupling, and chiral edge states in Raman-coupled ground magnetic sublevels9,13,17 or single-photon-coupled electronic orbitals18,19 grafted onto motion in a real 1D optical lattice. A synthetic dimension can also be formed by discrete motional states20, such as free-particle momentum states coupled with momentum-changing two-photon Bragg transitions21,22. The latter has been used to observe Anderson localization23, artificial gauge fields24, and topological states25,26.
Here we harness Rydberg levels of 84Sr to realize a synthetic lattice for studying quantum matter. Rydberg levels $$\left|i\right\rangle$$ and $$|j\rangle$$ coupled with amplitude Ωij by resonant millimeter waves are described by the same Hamiltonian as a particle tunneling between lattice sites $$\left|i\right\rangle$$ and $$|j\rangle$$ with tunneling amplitude Jij = Ωij/2. Because of this mathematical equivalence to particles moving in a real-space lattice, coupled Rydberg levels can function as a synthetic spatial dimension. This scheme, first suggested in2, is similar to a proposal for synthetic dimensions based on molecular rotational levels27,28,29. It allows for control of the connectivity, tunneling rates, and on-site potentials, and creation of a broad range of synthetic dimensional systems, including systems not realizable in physical space. The number of available Rydberg levels and strong transition dipole moments make large and complex synthetic landscapes feasible. Rydberg dipole-dipole interactions30 provide a mechanism for creating tunable, localized interactions for many-body systems in synthetic space, which is a challenge for other atom-based platforms. The concept of a synthetic dimension was recently used to explain conical intersections in the potential energy curves of Rydberg molecules31.
To demonstrate the capabilities of Rydberg-atom synthetic dimensions, we realize the Su-Schrieffer-Heeger (SSH) model32 in synthetic space (Fig. 1), and study its topologically protected edge states (TPS) and their robustness to disorder. The SSH model describes a linear conjugated polymer, such as polyacetylene, with alternating weak and strong tunneling. The configuration with weak tunneling to edge sites possesses doubly degenerate TPS with energy centered in the gap between bulk states [Fig. 1d]. TPS energies are robust against perturbations respecting the chiral symmetry of the tunneling pattern33,34, as observed in many systems25,35,36,37.
## Results
### Creating and Probing the Synthetic Lattice
The essential elements of the apparatus are shown in Fig. 1a. 84Sr atoms are trapped in an optical dipole trap at a peak density of about 1011 cm−3 and a temperature of T = 2 μK. Millimeter waves are switched on to provide couplings as shown in Fig. 1b or c and construct a six-site synthetic lattice with three 5sns 3S1(m = 1) ( ≡ ns, sites i = 1, 3, 5, with 57s mapped to i = 1) and three 5snp 3P0 ( ≡ np, i = 2, 4, 6) levels. The resulting Hamiltonian is
$${\hat{H}}_{{{{{{{\rm{lattice}}}}}}}}=\mathop{\sum }\limits_{i=1}^{5}(-h{J}_{i,i+1}\left|i\right\rangle \left\langle i+1\right|+\,{{{{{{\rm{h.c.}}}}}}})+\mathop{\sum }\limits_{i=1}^{6}h{\delta }_{i}\left|i\right\rangle \left\langle i\right|,$$
(1)
where Ji,i+1 are the tunneling amplitudes and δi are on-site potentials set respectively by amplitudes and detunings of the millimeter-wave couplings, and h is Planck’s constant. To obtain Eq. (1), we have neglected counter-rotating terms in the millimeter-wave couplings and transformed into a rotating frame. The kets $$\left|i\right\rangle$$ correspond to the unperturbed Rydberg levels of 84Sr up to a time-dependent phase arising from the transformation.
δi = 0 yields the SSH model, and the configuration with TPS has Ji,i+1 = Jw (Js) for i = 1, 3, 5(2, 4) and Jw < Js. For the configuration without TPS, the weak and strong couplings are exchanged. Here, the strong coupling Js is varied from 0.5–1.5 MHz, and all data is taken with weak coupling Jw = 100 kHz (Js/Jw = 5−15). A 4 Gauss magnetic field creates Zeeman splittings that ensure millimeter-wave couplings to 5sns 3S1(m = −1, 0) states are negligible.
To populate and probe the synthetic space, the 84Sr ground state is coupled to the Rydberg levels via two-photon excitation using the intermediate 5s5p 3P1 level38,39,40, applied in a 5 μs pulse. The laser polarizations select excitation to ns(m = 1) levels. Immediately after excitation, Rydberg populations are detected using selective field ionization (SFI)41, in which an electric-field ramp liberates any electrons in an excited Rydberg state. The electron arrival time on a charged-particle detector heralds the Rydberg level, or occupied synthetic-lattice site. With the current experimental resolution, arrival times for states np and (n + 1)s are unresolved.
To probe the lattice band structure, the two-photon excitation laser is tuned, with detuning $${{{\Delta }}}_{{i}_{{{{{{{\rm{pr}}}}}}}}}$$, near the energy of one of the unperturbed Rydberg levels $$|{i}_{{{{{{{\rm{pr}}}}}}}}\rangle$$ [Fig. 1b, c]. Neglecting far off-resonant terms, the Hamiltonian for the entire system can be written as:
$$\hat{H}=\frac{h{{{\Omega }}}_{{i}_{{{{{{{\rm{pr}}}}}}}}}}{2}|g\rangle \langle {i}_{{{{{{{\rm{pr}}}}}}}}|{e}^{i2\pi {{{\Delta }}}_{{i}_{{{{{{{\rm{pr}}}}}}}}}t}+{{{{{{\rm{h.c.}}}}}}}\,+{\hat{H}}_{{{{{{{\rm{lattice}}}}}}}},$$
(2)
where $${{{\Omega }}}_{{i}_{{{{{{{\rm{pr}}}}}}}}}$$ denotes the effective two-photon Rabi frequency, which vanishes for even ipr (np levels), and $$|g\rangle$$ is the ground state vector in the frame rotating at the frequency difference of the $$|{i}_{{{{{{{\rm{pr}}}}}}}}\rangle$$ and $$|g\rangle$$ levels. The Rydberg excitation rate before convolving with instrumental linewidth is well-described as
$${{\Gamma }}({{{\Delta }}}_{{i}_{{{{{{{\rm{pr}}}}}}}}})={\pi }^{2}{{{\Omega }}}_{{i}_{{{{{{{\rm{pr}}}}}}}}}^{2}\mathop{\sum}\limits_{\beta }| \langle \beta | {i}_{{{{{{{\rm{pr}}}}}}}}\rangle {| }^{2}\delta ({{{\Delta }}}_{{i}_{{{{{{{\rm{pr}}}}}}}}}-{\epsilon }_{\beta }/h),$$
(3)
where $$|\beta \rangle$$ and and ϵβ are the eigenstates and eigenenergies of $${\hat{H}}_{{{{{{{\rm{lattice}}}}}}}}$$ [Fig. 1d, e]. These eigenstates in the synthetic dimension may alternatively be viewed as atomic states dressed by the millimeter-wave field. Indeed, strong photon coupling of atomic levels is often described in the language of Autler-Townes splitting42,43, and the coupling Rabi frequency is related to the tunneling rate through Ω = 2J. But with increasing system size, the lattice interpretation becomes more natural: a band structure emerges even for six levels demonstrated here, as do phenomena, such as edge states with an energy splitting that is exponentially small in the number of levels.
### SSH Band Structure and State Decomposition
A collection of spectra, with each spectrum arising from coupling the ground state $$|g\rangle$$ to a different lattice site $$|{i}_{{{{{{{\rm{pr}}}}}}}}\rangle$$, complement each other to provide a characterization of the band structure and decomposition of the eigenstates because the spectral contribution from each eigenstate is proportional to its overlap with the unperturbed Rydberg level corresponding to the lattice site ipr.
Figure 2a shows spectra for the configuration with TPS, Js/Jw = 5, and δi = 0 as a function of probe-laser detuning near each of the unperturbed Rydberg ns levels (odd ipr). Each spectrum is normalized by the total signal for its ipr. Contributions to the spectra near $${{{\Delta }}}_{{i}_{{{{{{{\rm{pr}}}}}}}}}=0$$ (edge states) correspond to population localized at the upper and lower boundaries of the lattice. Edge states in a gap of width ~ 2Js between bulk states are hallmark features of the SSH model.
The edge-state signal is large for probe detuning near the 57s level (ipr = 1), small for the 58s (ipr = 3) spectrum, and barely observable for 59s (ipr = 5). The integrated signal intensity around the peak centered at detuning $${{{\Delta }}}_{{i}_{{{{{{{\rm{pr}}}}}}}}}={\epsilon }_{\beta }/h$$ reflects the overlap of the lattice eigenstate $$|\beta \rangle$$ with $$|{i}_{{{{{{{\rm{pr}}}}}}}}\rangle$$ [Eq. (3)]. Thus, the intensity pattern confirms that the edge states are localized on the weakly coupled boundary sites, with little contribution from undressed bulk sites 58s (i = 3) and 59s (i = 5). This matches the expected decomposition of each SSH eigenstate $$\left|\beta \right\rangle$$ upon the bare lattice sites, expressed in the factors $$| \left\langle \beta | i\right\rangle {| }^{2}$$, which can be obtained from direct diagonalization of Eq. (1) [Fig. 2b, β = 3, 4 correspond to edge states]. The widely split bulk states, however, give rise to the approximately equal spectral contributions at $${{{\Delta }}}_{{i}_{{{{{{{\rm{pr}}}}}}}}}\approx {\pm}\!{J}_{s}$$ in Fig. 2a, revealing the energy splitting in the band structure. The bulk-state features are strong for gross detuning near the 58s and 59s undressed levels, and very weak near 57s, which is expected because the bulk-state decompositions [β = 1, 2, 5, 6 in Fig. 2b] show little weight on edge sites 57s (i = 1) and 59p (i = 6).
Figure 2c shows spectra for stronger coupling Js/Jw = 15, also in the configuration with TPS. The edge state contributions at $${{{\Delta }}}_{{i}_{{{{{{{\rm{pr}}}}}}}}}=0$$, indicate greater localization to 57s (i = 1) than for Js/Jw = 5. Splitting for the bulk states matches $${{{\Delta }}}_{{i}_{{{{{{{\rm{pr}}}}}}}}}\approx {\pm}{J}_{s}$$. From the peak positions in a series of data sets such as Fig. 2a, c, the band structure as a function of strong-tunneling rate Js can be measured [Fig. 2d]. It agrees with results from a direct diagonalization of Eq. (1) with δi = 0.
A series of data sets such as Fig. 2a, c can also be used to study the variation in state decomposition as a function of strong-tunneling rate Js. Exact diagonalization, such as Fig. 2b, provides the decomposition of each SSH eigenstate $$\left|\beta \right\rangle$$ upon the bare lattice sites, expressed in the factors $$| \left\langle \beta | i\right\rangle {| }^{2}$$. This can be compared with experimental measurements of the fraction of the total spectral area in either the edge or the bulk spectral features when probing the overlap with a specific lattice site (ipr) in spectra such as Fig. 2a, c. Spectral area is determined by fitting each of the three features in a spectrum with a sinc-squared lineshape corresponding to the 5 μs laser exposure time convolved with a 100 kHz FWHM Gaussian linewidth from laser and natural broadening, estimated from independent measurements of spectra in the absence of millimeter-wave fields. Center frequency and amplitude are varied for fitting. Figure 3 (left) shows that the experimentally measured edge-state fraction matches $${\sum }_{\beta \in {{{{{{\rm{edge}}}}}}}}| \langle \beta | {i}_{{{{{{{\rm{pr}}}}}}}}\rangle {| }^{2}$$, and Fig. 3 (right) does the same for the bulk contribution and $${\sum }_{\beta \in {{{{{{\rm{bulk}}}}}}}}| \langle \beta | {i}_{{{{{{{\rm{pr}}}}}}}}\rangle {| }^{2}$$. The width of the calculated line denotes 10% variation in the Rabi frequencies. For a given Js/Jw, the edge-state measurements in Fig. 3 add to one, while the bulk-state measurements add to two. This reflects the fact that there are two edge states and four bulk states for this system, and half of the weight for the states in each group is in overlap with even lattice sites, which the photoexcitation probe does not detect.
For the configuration with strong tunneling to the boundary sites, which should not have TPS, the Rydberg excitation spectra show unresolved bulk states split by $${{{\Delta }}}_{{i}_{{{{{{{\rm{pr}}}}}}}}}\approx {\pm}{J}_{s}$$, with no states in the gap between them [Fig. 4a]. A schematic of the full band structure is shown in Fig. 1e. The state decomposition from direct diagonalization of Eq. (1) shows that all states have appreciable bulk character and there are no edge states [Fig. 4b].
### State Decomposition with Selective Field Ionization
Because the spectral probe is only sensitive to ns contributions to the state vector (odd i), it cannot establish whether the edge states observed are localized on one boundary site or a superposition of both. To answer that question, we turn to SFI as a tool for site-population measurements in Rydberg-atom synthetic dimensions. For Rydberg excitation near 58s, corresponding to ipr = 3, and for Js/Jw = 5 [Fig. 2a], if the detuning is set to resonance with the left or right bulk-state peaks ($${{{\Delta }}}_{{i}_{{{{{{{\rm{pr}}}}}}}}}\approx {\pm}500$$ kHz), electrons are liberated at ionization fields for Rydberg levels corresponding entirely to bulk sites of the synthetic lattice (i = 2−5) [Fig. 5a]. For laser detuning on the edge-state peak ($${{{\Delta }}}_{{i}_{{{{{{{\rm{pr}}}}}}}}}\approx 0$$), signal arrives at fields corresponding predominantly to the 57s Rydberg state (i = 1) [Fig. 5b]. This indicates localization of the edge state on the boundary in general and, more specifically, on the single boundary site connected to the ground-state by the two-photon excitation [Eq. (3)], which is a linear combination of states β = 3 and β = 4. Integrals of SFI signals corresponding to each lattice site and for each spectral feature provide a state decomposition that agrees with expectations as in Fig. 3.
### Protected Edge States
The pinning of the edge-state energy to $${{{\Delta }}}_{{i}_{{{{{{{\rm{pr}}}}}}}}}=0$$ is the defining feature of TPS in the SSH model. It arises because of an underlying chiral symmetry, which reflects the system’s bipartite structure (even and odd sites) with all Hamiltonian matrix elements vanishing between sites of the same partition, including diagonal (on-site) matrix elements. To investigate the robustness of the pinning of the edge-state energy, we probe the band structure in the presence of perturbations from the SSH form.
Figure 6a–c shows spectra for ipr = 3 (58s) and ipr = 5 (59s) for balanced ($${J}_{2-3}={J}_{4-5}={J}_{s}^{0}$$) and imbalanced [$${J}_{2-3}=(1\pm 0.15){J}_{s}^{0}$$ and $${J}_{4-5}=(1\mp 0.15){J}_{s}^{0}$$] strong coupling with $${J}_{s}^{0}/{J}_{w}=5$$. The bulk states are strongly affected by imbalance. With increased J2−3 [Fig. 6b], the two bulk states that are more localized on the i = 3 site show increased splitting. With increased J4−5 [Fig. 6c], the two bulk states that are more localized on the i = 5 site show increased splitting. The energy of the edge-state signal, however [in Fig. 6b, c], is immune to this perturbation, which preserves the protecting chiral symmetry because the tunneling matrix elements only connect even and odd sites.
Figure 6d, e shows how the energies of the edge states are affected by chiral-symmetry-breaking perturbations, in particular shifts of on-site potentials (i.e. millimeter-wave coupling frequencies). Spectra are recorded with the probe laser tuned near the 58s level (ipr = 3) for Js/Jw = 10. For Fig. 6d, the frequency of the i = 1 to i = 2 (57s-57p) coupling is varied, which shifts δ1, the on-site potential of the i = 1 (57s) site in the synthetic lattice. δ1 ≠ 0 yields a diagonal term in the Hamiltonian [Eq. (1)] that breaks the chiral symmetry, and the edge-state signal shifts by an amount equal to the detuning from resonance. For Fig. 6e, the frequency of the i = 5 to i = 6 (59s-59p) coupling is varied, shifting δ6, and the position of the edge-state signal remains unchanged. These results confirm that the edge state coupled to by the probe laser is localized on the i = 1 (57s) boundary site. The orthogonal edge state is localized on i = 6, with vanishing weight on odd sites. In general we expect that any perturbation producing a Hamiltonian term that connects only even sites to even sites, or odd to odd, will break the chiral symmetry and shift edge-state energies. This particular form of perturbation only affects the energy of one of the edge states.
### Numerical Simulations of the Spectra and Effects of Decoherence
In order to gain more insight into the system and explore the effects of decoherence, we perform theoretical calculations based on the Lindblad master equation [Eq. (5), given in the Methods] for the Hamiltonian given in Eq. (2). We consider one model with no decoherence and one model with decoherence in the form of white amplitude noise on the millimeter waves. The choice of decoherence model, and other, less important, modifications of the idealized picture, are discussed in the Methods. The master equation is derived from stochastic Schrödinger equations following standard arguments44. Results are convolved with a 100 kHz FWHM additional broadening for the probe field. The dominant experimental source of decoherence is not yet determined, but amplitude noise is a simple model consistent with observations. Moreover, the qualitative conclusions we reach are not sensitive to the choice of noise, and would also hold if the noise source were, for example, fluctuating magnetic fields or frequency noise on the millimeter waves.
The theory with no decoherence fits the spectra extremely well (Figs. 7 and 8), in particular reproducing the intensities and linewidths of the observed spectral features, but with a few notable exceptions as discussed below. In the calculations shown in the figures, we allow the $${J}_{i,i+1}^{{i}_{{{{{{{\rm{pr}}}}}}}}}$$ and the $${\delta }_{i}^{{i}_{{{{{{{\rm{pr}}}}}}}}}$$ to vary from their nominal values for the strong bonds, fitting them for each spectrum (i.e. for each value of ipr). The values determined from fitting agree with measured values within experimental uncertainties. Linewidths from the numerical simulation match the time-broadened widths of the sinc-squared lineshapes used in Figs. 2 and 4 to extract spectral areas for comparison with direct diagonalizaion of Eq. (1) (Fig. 3).
Discrepancies between observations and decoherence-free theory are visible in the linewidths for ipr = 5 for the largest value of strong coupling Js (Fig. 7). Other differences between theory and experiment are the small reductions in contrast between the spectral features and a very small increase in weight and smoothing in the tails of the spectra, which are seen most strongly for ipr = 5 (Fig. 8). Calculations with decoherence capture these effects and provide values for decoherence rates $${{{\Gamma }}}_{i}^{{i}_{{{{{{{\rm{pr}}}}}}}}}$$ by fitting to the spectra for each value of ipr. For simplicity, we assume that $${{{\Gamma }}}_{i}^{{i}_{{{{{{{\rm{pr}}}}}}}}}={{{{{{{{\mathcal{C}}}}}}}}}^{{i}_{{{{{{{\rm{pr}}}}}}}}}{J}_{i,i+1}$$, which constrains $${{{\Gamma }}}_{i}^{{i}_{{{{{{{\rm{pr}}}}}}}}}$$ to be proportional to the microwave amplitude for the associated bond, with the single proportionality constant ($${{{{{{{{\mathcal{C}}}}}}}}}^{{i}_{{{{{{{\rm{pr}}}}}}}}}$$) determined by fitting.
Trends in the decoherence emerge from this analysis. One already mentioned is that the decoherence for ipr = 5 is much larger than for other ipr values when in the topological tunneling configuration, which is reflected in a value of $${{{{{{{{\mathcal{C}}}}}}}}}^{{i}_{{{{{{{\rm{pr}}}}}}}} = 5}$$ that is ~ 3 times larger than for other ipr. When in the trivial configuration, no such difference is observed. A possible explanation is that the fluctuations on the coupling between $$\left|5\right\rangle$$ and $$\left|4\right\rangle$$ are particularly noisy. Another possibility is coupling of one or multiple bare states to higher angular momentum states by the millimeter-wave fields, perhaps through multi-photon transitions. Experiments described here were performed with triplet Rydberg states. Working with the less-dense manifold of singlet states would reduce the chance of spurious couplings.
Another trend not visible in the figures, but that emerges from the fits, is that the parameter characterizing decoherence when probing near the 57s state ($${{{{{{{{\mathcal{C}}}}}}}}}^{{i}_{{{{{{{\rm{pr}}}}}}}} = 1}$$) in the topological tunneling configuration is much larger, by a factor of 20 or more, than for the other ipr, even though the linewidths are not visibly broader. The reason for this is simply that the eigenstate that the probe couples to is almost entirely localized on the edge, with very little weight on other states, so disturbing its superposition with states away from the edge has little effect on the spectra. Therefore, it takes very large Γi to create any spectral broadening.
Overall, these dependencies of the fit Γi on ipr may point to deficiencies in the details of the decoherence models. Refining the decoherence models will be interesting for future work. The decoherence is a large contribution to the spectral linewidth for strong millimeter-wave coupling, so identifying the source of this decoherence and eliminating it would greatly expand the types of experiments one could perform with this scheme. There are many reports in the literature of much longer coherence times with Rydberg millimeter-wave spectroscopy45,46, suggesting this is not an intrinsic limitation. Simulations indicate that for fixed Γi, spectral broadening does not increase with an increasing number of coupled Rydberg states, which is important for increasing the size of the synthetic dimension.
## Discussion
We have demonstrated Rydberg-atom synthetic dimensions as a promising platform for the study of quantum matter. The spectrum of photo-excitation to the synthetic lattice space formed by the manifold of coupled Rydberg levels provides the band structure and decomposition of the lattice eigenstates. SFI of the excited states provides an additional diagnostic of lattice-site populations with two-site resolution. TPS were observed in a six-site SSH model, and the measured band structure and eigenstate decomposition agree well with theory. Varying the detuning of the millimeter-wave fields that create tunneling between sites introduces on-site potentials, and this has been used to break the chiral symmetry of the SSH model and to shift the energies of edge states away from the center of the bandgap at $${{{\Delta }}}_{{i}_{{{{{{{\rm{pr}}}}}}}}}=0$$. Numerical simulations based on the master equation were presented, which enable investigation of decoherence effects.
Demonstration of the defining features of the SSH model illustrates the potential of Rydberg-atom synthetic dimensions for quantum simulation. The size of the synthetic space can be expanded by applying more millimeter-wave frequency components, although this will introduce additional complexity such as the need to use multiple local oscillators and horns to cover a wider range of frequencies. The limits imposed by Rydberg-level decoherence, AC stark shifts, and coupling to ancillary levels need further study, but our initial investigations, as well as previous work demonstrating coherent manipulation of Rydberg-level populations (e.g.45,46), indicate that these should be technical, rather than fundamental, complications.
Additional millimeter-wave-coupling schemes and tunneling configurations are possible, such as two-photon transitions and transitions with larger changes in principal quantum number. This will enable creation of higher-dimensional synthetic lattices1,3,4 and investigation of systems with non-trivial spatial5 and band-structure7,8 topologies and higher-order topological states47, for example. Through control of millimeter-wave phases, tunneling phases around plaquettes and artificial gauge fields can be introduced48. This platform is also ideally suited for study of time dependent phenomena, such as Floquet-symmetry-protected states49, non-equilibrium states50, and wave-packet dynamics in synthetic space. Tailored time variation of the electric-field ramp51 may improve site resolution of the SFI diagnostic.
The most exciting prospect is to extend these capabilities to the study of interacting, many-body systems27,28 using arrays of single Rydberg atoms in closely spaced optical tweezers52,53 with appreciable long-range dipolar interactions in real space30 but negligible tunneling of atoms between microtraps. For the Rydberg-level arrangement demonstrated here, the dominant interactions would be flip-flop interactions that couple $$\left|ns,n^{\prime} p\right\rangle$$ and $$\left|n^{\prime} p,ns\right\rangle$$ states, giving the many-body Hamiltonian
$$H=-\mathop{\sum}\limits_{i,a}{t}_{i}({c}_{ia}^{{{{\dagger}}} }{c}_{i,a+1}+\,{{{{{{\rm{h.c.}}}}}}})+\mathop{\sum}\limits_{ij,ab}{V}_{ij;ab}{c}_{ib}^{{{{\dagger}}} }{c}_{ja}^{{{{\dagger}}} }{c}_{ia}{c}_{jb}$$
(4)
where the cia and $${c}_{ia}^{{{{\dagger}}} }$$ are annihilation and creation operators (which can be taken to be either fermionic or bosonic since there is no real-space tunneling) at synthetic site i and real space site a, and the interaction matrix elements Vij;ab take the form $${V}_{ij,ab}=\frac{1-3{\cos }^{2}{\theta }_{ab}}{{r}_{ab}^{3}}{M}_{i,j}{\delta }_{{{{{{{\mathrm{mod}}}}}}}\,(i-j,2),1}$$ where $${r}_{ab}=| {\overrightarrow{r}}_{a}-{\overrightarrow{r}}_{b}|$$ is the (real-space) distance between atoms a and b, and θab is the angle of $${\overrightarrow{r}}_{a}-{\overrightarrow{r}}_{b}$$ relative to the quantization axis. Here the quantization axis is the one defining the m levels. The matrix element Mi,j falls off rapidly with $$| n-n^{\prime} |$$54 and thus i − j, so, in contrast to many other types of synthetic dimensions, the interactions are highly local in the synthetic space. This can give rise to interesting quantum phases and phase transitions, such as quantum strings and membranes28,55. We expect this to be just a small sample of the phenomena these systems can display, with a wide variety of scenarios arising from the easily tunable and dynamic synthetic and real-space geometries.
## Methods
### Experiment
The laser cooling and trapping of 84Sr has been described in detail elsewhere56,57. Two stages of magneto-optical cooling and trapping are employed corresponding to 5s$${{}^{2}}^{1}{S}_{0}-{5s5p}^{1}{P}_{1}$$ and 5s$${{}^{2}}^{1}{S}_{0}-{5s5p}^{3}{P}_{1}$$ transitions. Atoms are then captured in a 1064 nm crossed-sheet optical dipole trap (ODT), and a short stage of forced evaporation yields samples with 105 atoms, a peak density of about ~ 1011 cm−3, and a temperature of T = 2 μK.
Millimeter-wave frequencies for coupling Rydberg levels are generated by combining outputs of five RF synthesizers (<6 GHz) and mixing the result with a 16 GHz local oscillator. A K-band horn antenna rejects the lower sidebands and directs upper-sidebands to the atoms. The coupling strengths can be varied by varying the low-frequency-synthesizer output powers. Each coupling is calibrated using the Autler-Townes splitting42 in a two-level configuration.
A 4 Gauss magnetic field splits the ns magnetic sublevels by 11 MHz, which is large compared to tunneling rates. Millimeter waves are resonant or near-resonant with ns(m = 1) − np and np − (n + 1)s(m = 1) transitions for three different n’s as shown in Fig. 1b. Millimeter-wave frequencies are adjusted to maintain resonant couplings (δi ≈ 0) unless disorder is intentionally introduced. AC stark shifts are experimentally determined, and the δi in Eq. (1) are relative to the Stark-shifted Rydberg levels, with uncertainties of 100 kHz for large Js.
Two-photon Rydberg excitation is performed with an intermediate detuning of +80 MHz from the 5s5p 3P1 level38,39,40. For selective field ionization (SFI)41, an electric field of the form E(t) = Ep(1 − et/τ) is applied, with Ep = 49 V/cm and τ = 6.5 μs. An atom in level n ionizes at a field given by $$\sim 1/[16{(n-{\alpha }_{\ell })}^{4}]$$, where α0 = 3.371 and α1 = 2.88758 are the quantum defects of the ns and np states respectively. Liberated electrons are detected by a micro-channel plate, and the Rydberg level, or occupied synthetic-lattice site, can be determined from the arrival time of the electron. Approximately 104 excitation cycles are performed per sample at a 4 kHz repetition rate, and the two-photon drive is weak enough that either zero or one atom is excited to the Rydberg manifold each cycle, even when on a strong resonance peak.
### Theory
We first discuss effects neglected in the idealized description of the synthetic lattice [Eq. (2)]. We then describe the techniques we use to treat the most important effect, which is decoherence.
There are, in principle, several potential causes of the discrepancies between observations and decoherence-free theory. One is that the idealized analysis has neglected coupling to off-resonant magnetic sublevels of the ns Rydberg states. Theoretical simulations including all these states show they produce level shifts on the order of the observed Stark shifts rather than broadening. Experimentally, we compensate for this shift by measuring it and setting the detuning of the millimeter-waves to the resonant values including the shifts. For the strongest Rabi frequencies we use, the largest shift is about ~500 kHz.
A second effect is the counter-rotating terms that were dropped to arrive at Eq. (2). This effect is negligible since the synthetic tunneling frequencies are at most J/h ~ 1.5 MHz, while the frequency difference between any coupled levels is roughly ΔE/h ~ 20 GHz, and therefore effects are expected to be on the order of 10−4 or smaller. We have also performed numerical simulations of the system that confirm this effect is negligible.
A third possible source of deviations between predicted and observed lineshapes, and likely the most important, is dissipation and decoherence, for which there are multiple potential sources. Spontaneous emission from the Rydberg level or emission stimulated by black-body radiation41,59 is expected to give a coherence time of >60 μs for the Rydberg states that we use. This is long compared to the timescale of our experiment. At very low millimeter-wave coupling strength, Rabi spectroscopy for an isolated, two-level 58s-58p system yields full-width-half-maximum linewidths as low as 50 kHz. This is much less than linewidths observed in spectra of synthetic dimensions with couplings Js/h ~ 1 MHz, which implies that fluctuations of stray electric and magnetic fields are not a major source of decoherence for the studies presented in this paper (e.g. Figs. 2 and 4).
Driven 58s-58p Rabi oscillations show that the coherence time decreases with increasing millimeter-wave coupling, with coherence times consistent with values of Γi derived from theory fits in Figs. 7 and 8 within a factor of 2. This observation motivates our choice of theoretical models for decoherence. Possible sources of decoherence consistent with this observation include coupling to higher angular momentum states and fluctuations in millimeter-wave amplitudes or polarization giving rise to fluctuating AC stark shifts. Further work is required to identify the dominant source of decoherence, and this will be important for determining the ultimate limits on the physics that can be explored.
For theoretical calculations of the spectra, we numerically solve the Lindbladian master equation,
$$\dot{\rho }=-\frac{i}{\hslash }[\hat{H},\rho ]+\mathop{\sum }\limits_{i=1}^{N}{{{\Gamma }}}_{i}\left[{L}_{i}\rho {L}_{i}^{{{{\dagger}}} }-\frac{1}{2}({L}_{i}^{{{{\dagger}}} }{L}_{i}\rho +\rho {L}_{i}^{{{{\dagger}}} }{L}_{i})\right]$$
(5)
where ρ is the density matrix, $$\hat{H}$$ is the Hamiltonian in Eq. (2), and the jump operators Li depend on the noise model. For millimeter-wave amplitude noise
$${L}_{i}^{\,{{{{{{\rm{amp}}}}}}}\,}=\left\{\begin{array}{ll}\left|i\right\rangle \left\langle i+1\right|+\left|i+1\right\rangle \left\langle i\right|\quad & {{{{{{\rm{if}}}}}}}\,i \; < \; N\\ 0 \hfill&{{{{{{\rm{if}}}}}}}\,i=N\end{array}\right..$$
(6)
The Γi are determined by fitting spectra, as described in the main text. The noise model extends the corresponding familiar results for 2-level systems. We also performed simulations with a model of white frequency noise and obtained similar results.
In these equations we have included only the magnetic sublevels employed in the synthetic dimension, i.e. those that are resonantly coupled by the millimeter-waves, for notational simplicity. We have also performed theoretical calculations including the off-resonant magnetic sublevels, which are straightforward to include.
## Data availability
Data presented in this publication is available on Figshare with the following identifier. https://doi.org/10.6084/m9.figshare.18258494.
## Code availability
Code for the decoherence simulations and data fitting is available on Figshare with the following identifier. https://doi.org/10.6084/m9.figshare.18259061.
## References
1. Boada, O., Celi, A., Latorre, J. I. & Lewenstein, M. Quantum simulation of an extra dimension. Phys. Rev. Lett. 108, 133001 (2012).
2. Ozawa, T. & Price, H. M. Topological quantum matter in synthetic dimensions. Nat. Rev. Phys. 1, 349–357 (2019).
3. Tsomokos, D. I., Ashhab, S. & Nori, F. Using superconducting qubit circuits to engineer exotic lattice systems. Phys. Rev. A 82, 052311 (2010).
4. Jukić, D. & Buljan, H. Four-dimensional photonic lattices and discrete tesseract solitons. Phys. Rev. A 87, 013814 (2013).
5. Boada, O., Celi, A., Rodríguez-Laguna, J., Latorre, J. I. & Lewenstein, M. Quantum simulation of non-trivial topology. New J. Phys. 17, 045007 (2015).
6. Chalopin, T. et al. Probing chiral edge dynamics and bulk topology of a synthetic hall system. Nat. Phys. 16, 1017–1021 (2020).
7. Bansil, A., Lin, H. & Das, T. Colloquium: topological band theory. Rev. Mod. Phys. 88, 021004 (2016).
8. Qi, X.-L. & Zhang, S.-C. Topological insulators and superconductors. Rev. Mod. Phys. 83, 1057–1110 (2011).
9. Celi, A. et al. Synthetic gauge fields in synthetic dimensions. Phys. Rev. Lett. 112, 043001 (2014).
10. Aidelsburger, M., Nascimbene, S. & Goldman, N. Artificial gauge fields in materials and engineered systems. C. R. Phys. 19, 394–432 (2018).
11. An, F. A., Meier, E. J. & Gadway, B. Diffusive and arrested transport of atoms under tailored disorder. Nat. Commun. 8, 325 (2017).
12. An, F. A., Meier, E. J., Ang’ong’a, J. & Gadway, B. Correlated dynamics in a synthetic lattice of momentum states. Phys. Rev. Lett. 120, 040407 (2018).
13. Mancini, M. et al. Observation of chiral edge states with neutral fermions in synthetic Hall ribbons. Science 349, 1510–1513 (2015).
14. Anisimovas, E. et al. Semisynthetic zigzag optical lattice for ultracold bosons. Phys. Rev. A 94, 063632 (2016).
15. Wall, M. L. et al. Synthetic spin-orbit coupling in an optical lattice clock. Phys. Rev. Lett. 116, 035301 (2016).
16. Floß, J., Kamalov, A., Averbukh, I. S. & Bucksbaum, P. H. Observation of Bloch oscillations in molecular rotation. Phys. Rev. Lett. 115, 203002 (2015).
17. Stuhl, B. K., Lu, H.-I., Aycock, L. M., Genkina, D. & Spielman, I. B. Visualizing edge states with an atomic Bose gas in the quantum Hall regime. Science 349, 1514–1518 (2015).
18. Livi, L. et al. Synthetic dimensions and spin-orbit coupling with an optical clock transition. Phys. Rev. Lett. 117, 220401 (2016).
19. Kolkowitz, S. et al. Spin-orbit-coupled fermions in an optical lattice clock. Nature 542, 66–70 (2017).
20. Price, H. M., Ozawa, T. & Goldman, N. Synthetic dimensions for cold atoms from shaking a harmonic trap. Phys. Rev. A 95, 023607 (2017).
21. Gadway, B. Atom-optics approach to studying transport phenomena. Phys. Rev. A 92, 043606 (2015).
22. Meier, E. J., An, F. A. & Gadway, B. Atom-optics simulator of lattice transport phenomena. Phys. Rev. A 93, 051602 (2016).
23. Meier, E. J. et al. Observation of the topological Anderson insulator in disordered atomic wires. Science 362, 929–933 (2018).
24. An, F. A., Meier, E. J. & Gadway, B. Engineering a flux-dependent mobility edge in disordered zigzag chains. Phys. Rev. X 8, 031045 (2018).
25. Meier, E. J., An, F. A. & Gadway, B. Observation of the topological soliton state in the Su-Schrieffer-Heeger model. Nat. Commun. 7, 13986 (2016).
26. Xie, D., Gou, W., Xiao, T., Gadway, B. & Yan, B. Topological characterizations of an extended Su-Schrieffer-Heeger model. npj Quant. Inf. 5, 1–5 (2019).
27. Sundar, B., Gadway, B. & Hazzard, K. R. A. Synthetic dimensions in ultracold polar molecules. Sci. Rep. 8, 3422 (2018).
28. Sundar, B., Thibodeau, M., Wang, Z., Gadway, B. & Hazzard, K. R. A. Strings of ultracold molecules in a synthetic dimension. Phys. Rev. A 99, 013624 (2019).
29. Blackmore, J. A., Gregory, P. D., Bromley, S. L. & Cornish, S. L. Coherent manipulation of the internal state of ultracold 87Rb133Cs molecules with multiple microwave fields. Phys. Chem. Chem. Phys. 22, 27529–27538 (2020).
30. Browaeys, A., Barredo, D. & Lahaye, T. Experimental investigations of dipole-dipole interactions between a few Rydberg atoms. J. Phys. B At. Mol. Opt. Phys. 49, 152001 (2016).
31. Hummel, F., Eiles, M. T. & Schmelcher, P. Synthetic dimension-induced conical intersections in rydberg molecules. Phys. Rev. Lett. 127, 023003 (2021).
32. Su, W. P., Schrieffer, J. R. & Heeger, A. J. Solitons in polyacetylene. Phys. Rev. Lett. 42, 1698–1701 (1979).
33. Chiu, C.-K., Teo, J. C., Schnyder, A. P. & Ryu, S. Classification of topological quantum matter with symmetries. Rev. Mod. Phys. 88, 035005 (2016).
34. Cooper, N., Dalibard, J. & Spielman, I. Topological bands for ultracold atoms. Rev. Mod. Phys. 91, 015005 (2019).
35. Atala, M. et al. Direct measurement of the Zak phase in topological Bloch bands. Nat. Phys. 9, 795–800 (2013).
36. St-Jean, P. et al. Lasing in topological edge states of a one-dimensional lattice. Nat. Photon. 11, 651–656 (2017).
37. de Léséleuc, S. et al. Observation of a symmetry-protected topological phase of interacting bosons with Rydberg atoms. Science 365, 775–780 (2019).
38. DeSalvo, B. J. et al. Ultra-long-range Rydberg molecules in a divalent atomic system. Phys. Rev. A 92, 031403 (2015).
39. Camargo, F. et al. Creation of Rydberg polarons in a bose gas. Phys. Rev. Lett. 120, 083401 (2018).
40. Whalen, J. D. et al. Probing nonlocal spatial correlations in quantum gases with ultra-long-range Rydberg molecules. Phys. Rev. A 100, 011402 (2019).
41. Gallagher, T. F. Rydberg Atoms. Cambridge Monographs on Atomic, Molecular and Chemical Physics (Cambridge, 1994).
42. Autler, S. H. & Townes, C. H. Stark effect in rapidly varying fields. Phys. Rev. 100, 703–722 (1955).
43. Robinson, A. K., Artusio-Glimpse, A. B., Simons, M. T. & Holloway, C. L. Atomic spectra in a six-level scheme for electromagnetically induced transparency and Autler-Townes splitting in Rydberg atoms. Phys. Rev. A 103, 023704 (2021).
44. Gardiner, C. & Zoller, P. Quantum noise: a handbook of Markovian and non-Markovian quantum stochastic methods with applications to quantum optics (Springer Science & Business Media, 2004).
45. Barredo, D. et al. Coherent excitation transfer in a spin chain of three Rydberg atoms. Phys. Rev. Lett. 114, 113002 (2015).
46. Signoles, A. et al. Coherent transfer between low-angular-momentum and circular Rydberg states. Phys. Rev. Lett. 118, 253603 (2017).
47. Benalcazar, W. A., Bernevig, B. A. & Hughes, T. L. Quantized electric multipole insulators. Science 357, 61–66 (2017).
48. Goldman, N., Juzeliunas, G., Öhberg, P. & Spielman, I. B. Light-induced gauge fields for ultracold atoms. Rep. Prog. Phys. 77, 126401 (2014).
49. Roy, R. & Harper, F. Periodic table for Floquet topological insulators. Phys. Rev. B 96, 155118 (2017).
50. McGinley, M. & Cooper, N. R. Classification of topological insulators and superconductors out of equilibrium. Phys. Rev. B 99, 075148 (2019).
51. Gregoric, V. C. et al. Improving the state selectivity of field ionization with quantum control. Phys. Rev. A 98, 063404 (2018).
52. Ebadi, S. et al. Quantum phases of matter on a 256-atom programmable quantum simulator. Nature 595, 227–232 (2021).
53. Scholl, P. et al. Programmable quantum simulation of 2D antiferromagnets with hundreds of Rydberg atoms. Nature 595, 233–238 (2021).
54. Jensen, R., Susskind, S. & Sanders, M. Chaotic ionization of highly excited hydrogen atoms: comparison of classical and quantum theory with experiment. Phys. Rep. 201, 1–56 (1991).
55. Sundar, B., Gadway, B. & Hazzard, K. R. A. Synthetic dimensions in ultracold polar molecules. Sci. Rep. 8, 3422 (2018).
56. Stellmer, S., Schreck, F. & Killian, T. C. Degenerate quantum gases of strontium. In Annual Review of Cold Atoms and Molecules, vol. 2 of Annual Review of Cold Atoms and Molecules, 1–80 (2014).
57. de Escobar, Y. N. M. et al. Bose-Einstein condensation of 84Sr. Phys. Rev. Lett. 103, 200402 (2009).
58. Vaillant, C. L., Jones, M. P. A. & Potvliege, R. M. Long-range Rydberg-Rydberg interactions in calcium, strontium and ytterbium. J. Phys. B: At. Mol. Opt. Phys. 45, 135004 (2012).
59. Camargo, F. et al. Lifetimes of ultra-long-range strontium rydberg molecules. Phys. Rev. A 93, 022702 (2016).
## Acknowledgements
This research was supported by the AFOSR FA9550-17-1-0366 (T.C.K.), NSF PHY-1904294 (F.B.D.), NSF PHY-1848304 (K.R.A.H.), and the Robert A. Welch Foundation through grants C-0734 (F.B.D.), C-1844 (T.C.K.), and C-1872 (K.R.A.H.). The authors thank S. Yoshida for helpful conversations.
## Author information
Authors
### Contributions
T.C.K., K.R.A.H., and F.B.D. planned the research. S.K.K. constructed the millimeter-wave system. S.K.K. performed the measurements, with J.D.W. and Y.L. providing assistance. S.K.K. carried out the data analysis. K.R.A.H., M.Y., and S.D. derived the Hamiltonian and developed the decoherence models and numerically calculated spectra. T.C.K. and F.B.D. supervised the experimental, and K.R.A.H. supervised the theoretical work. All authors discussed the research and contributed to editing the manuscript.
### Corresponding authors
Correspondence to S. K. Kanungo, K. R. A. Hazzard or T. C. Killian.
## Ethics declarations
### Competing interests
The authors declare no competing interests.
## Peer review
### Peer review information
Nature Communications thanks the anonymous reviewer(s) for their contribution to the peer review of this work. Peer reviewer reports are available.
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
## Rights and permissions
Reprints and Permissions
Kanungo, S.K., Whalen, J.D., Lu, Y. et al. Realizing topological edge states with Rydberg-atom synthetic dimensions. Nat Commun 13, 972 (2022). https://doi.org/10.1038/s41467-022-28550-y
• Accepted:
• Published:
• DOI: https://doi.org/10.1038/s41467-022-28550-y
|
|
## A community for students. Sign up today!
Here's the question you clicked on:
## AvAuya 2 years ago how do you convert angles from degrees to radians?
• This Question is Open
1. getusel
multiply by $\frac{ \pi }{ 180^{0} }$
2. AvAuya
thankyou very much
#### Ask your own question
Ask a Question
Find more explanations on OpenStudy
|
|
See by year See by month See by week See Today Search
##### Conference on "Mathematical properties of large quantum systems"Organizers: Maria J. Esteban (CNRS), Mathieu Lewin (CNRS), Robert Seiringer (McGill, Montréal) & Jan Philip Solovej (Copenhagen)
Jun Yin (Madison, USA): "Local law, Delocalization and Diffusion Profile for Random Band Matrices"
Thursday 20 June 2013, 14:30 - 15:15
Amphi Hermite
We consider Hermitian and symmetric random band matrices $H = (h_{xy})$ in $d geq 1$ dimensions. The matrix entries $h_{xy}$, indexed by $x,y in (bZ/LbZ)^d$, are independent, centred random variables with variances $s_{xy} = E |h_{xy}|^2$. We assume that $s_{xy}$ is negligible if $|x-y|$ exceeds the band width $W$. In this talk, we introduce some new results on the local law, delocalization and diffusion Profile of Random Band Matrices.
|
|
# Combination deworming (mass drug administration targeting both schistosomiasis and soil-transmitted helminths) - 2012 report
We have published a more recent review of this intervention. See our most recent report on combination deworming.
Published: December 2012
# In a nutshell
This page discusses the general case for mass deworming. In general, we focus our discussion on work similar to that of the Schistosomiasis Control Initiative.
Mass deworming means treating large numbers of people with parasite-killing drugs: praziquantel kills schistosomiasis, while albendazole kills soil-transmitted helminths. Treatment takes place in areas where these parasites are believed to be extremely common, and side effects of the drugs are believed to be minor; thus, everyone in a given population (sometimes schoolchildren; sometimes the community at large) is treated, without being individually tested for the presence of infections.
Mass deworming is generally very inexpensive on a per-person-treated basis (in the range of $0.50). The benefits are potentially major, but also debatable: because parasitic infections rarely cause mortality or other acute effects, the evidence on their impact on quality of life is thin. In brief: • There is strong evidence that administration of the drugs reduces worm loads, but weaker evidence on the causal relationship between reducing worm loads and improved life outcomes. • Evidence for the impact of deworming on short-term general health is thin. Most of the effects discussed above are relatively small, and there is little consistency across different reviews and approaches. We would guess that combination deworming does have some small impacts on general health, but are fairly confident that it does not have a large general health impact. • There are two prominent studies arguing that reducing worm infection loads during childhood can have a significant later impact on income. We find these studies to constitute evidence that is suggestive, though not conclusive or necessarily representative (one study was in an area of unusually high worm infections; the other was a study of hookworm eradication, which included many measures other than deworming drugs, in the Southern U.S. in the early 20th century). • Attempts to estimate the cost-effectiveness of deworming within the disability-adjusted life-year (DALY) framework have been problematic. In 2011, GiveWell found the figures published by the World Health Organization to be off by ~100x due to errors and flawed in other ways even once corrected. • We have performed our own cost-effectiveness analysis of deworming. It involves many judgment calls and an extremely wide range of values, and different people have significantly different intuitions about the right inputs into these figures, even within GiveWell staff. All things considered, we estimate a "cost per equivalent life saved" for deworming ranging from ~$1700-$3800. Cost-effectiveness wise, bednets and deworming are estimated to be in the same ballpark, with a slight edge for bednets. Deworming is estimated to be ~2-5x more cost-effective than unconditional cash transfers. We encourage readers who find formal cost-effectiveness analysis important to examine the details of our calculations and assumptions, and to try putting in their own. Previous versions of this page: ## What are the infections targeted by mass deworming, i.e., soil-transmitted helminths (STH) and schistosomiasis? "Schistosomiasis" and "soil-transmitted helminths (STH)" both describe chronic parasitic infections. We discuss each below. ### Schistosomiasis Schistosomiasis involves infection with parasites released by snails and transmitted through the skin when it is exposed to infested water.1 The most common species of schistosoma infections in humans are S. mansoni, S. haematobium, and S. japonicum.2 While S. japonicum is present primarily in Asia and the Pacific, S. mansoni and S. haematobium are found in much of the tropics, including Sub-Saharan Africa.3 There is disagreement about how common schistosomiasis infections are; estimates range from 200 million to more than 600 million people infected.4 Because the Schistosomiasis Control Initiative, the charity for which we have undertaken this review, works only in Africa, we focus here on the morbidity caused by S. mansoni and S. haematobium. S. japonicum is believed by some scholars to be more dangerous than the other strains of schistosomiasis.5 The pathology of schistosomiasis varies by species and infection intensity. S. mansoni tends to infect the intestines, liver, and spleen,6 while S. haematobium typically infects blood vessels near the bladder.7 People can be infected by many of the worms at once, but schistosomes cannot reproduce inside the body.8 The burden of infections is typically highest in children and gradually declines as people age (even without treatment).9 In general, the morbidity caused by schistosomiasis arises from the eggs that the parasite lays while it inhabits the human host.10 Symptoms can include: • Hematuria (blood in the urine) or dysuria (painful urination);11 anemia and other nutritional deficiencies.12 Because these symptoms can have many causes, particularly in very poor populations (where schistosomiasis tends to be most prevalent), it is difficult to pinpoint the extent to which schistosomiasis contributes to them; the best information we have on this question comes from studies of deworming, which we discuss below. • Urinary tract infections and other bladder problems at late stages of the disease, potentially leading to bladder cancer and kidney failure;13 bloody diarrhea, bloody stools, abdominal pain, and liver failure;14 death.15 As discussed below, death due to schistosomiasis is quite rare; we have no good quantification of the contribution of schistosomiasis to non-fatal problems along these lines. • Developmental effects. There is some evidence that schistosomiasis and/or STHs impair development and can lower quality of life over the long term. We discuss this further below. Some, but not all, symptoms of schistosomiasis are reversible with treatment.16 ### Soil-transmitted helminths (STH) Soil-transmitted helminths (STHs) include trichuriasis (or whipworm), hookworm, and ascariasis (or roundworm). Each is estimated to affect the following number of people worldwide:17 • Trichuriasis: 26 million people • Hookworm: 60 million people • Ascariasis: 58 million people The ascariasis and trichuriasis worms feed on the contents of the intestines, while hookworms feed on host blood in the small intestine, moving frequently and leaving small bleeding sores behind.18 People can be, and often are, infected by multiple worms (and multiple species of worms) at once, but worms cannot reproduce in the body.19 The prevailing understanding of soil-transmitted helminths is that many cases are asymptomatic and that morbidity depends on the intensity of infection.20 Symptoms can include: • Anemia and other nutritional deficiencies.21 Because these symptoms can have many causes, particularly in very poor populations (where STHs tend to be most prevalent), it is difficult to pinpoint the extent to which STHs contribute to them; the best information we have on this question comes from studies of deworming, which we discuss below. • Intestinal obstruction,22 inflammation of the colon,23 and death.24 As discussed below, these symptoms are generally rare and (with the exception of death) brief in duration. • Developmental effects. There is some evidence that schistosomiasis and/or STHs impair development and can lower quality of life over the long term. We discuss this further below. ## How does mass deworming work? Mass deworming is intended for areas with high prevalence of the infections discussed here; people are treated without being individually tested. • For STHs, the World Health Organization recommends treatment with albendazole or mebendazole, with frequency of administration varying with prevalence of infection.25 • For schistosomiasis, the World Health organization recommends treating school-aged children and other at-risk populations with praziquantel, with frequency of administration varying with prevalence of infection.26 Distributions sometimes target school-age children and sometimes target the population at large (for example, see our review of the Schistosomiasis Control Initiative). The Disease Control Priorities Report states that schools provide a strong infrastructure and that teachers can be trained to deliver drugs safely.27 A fuller picture of the process can be gained from our notes on our visit to a stakeholders meeting for a national deworming program and a demonstration deworming, as guests of the Schistosomiasis Control Initiative. ## Does mass deworming have a strong track record? Strong evidence suggests that mass deworming reduces the prevalence of the infections discussed here. • There are two Cochrane reviews of randomized control trials (high-quality studies) of mass drug treatments for schistosomiasis (primarily using praziquantel, though occasionally metrifonate or oxamniquine). They both conclude that praziquantel is effective in treating schistosomiasis.28 • A 2000 meta-analysis found that mass treatment with albendazole was effective in decreasing the prevalence of all three soil-transmitted helminths, though more so for ascariasis than hookworm or trichuriasis.29 • At our review of the Schistosomiasis Control Initiative, we review some evidence from national deworming campaigns. This evidence is less rigorous and has some issues that we note, but overall we feel it supports the idea that large-scale deworming can substantially reduce the prevalence of schistosomiasis and (in one case) hookworm. ## What are the benefits of mass deworming? As discussed above, the consequences of schistosomiasis/STHs fall into three categories: • Subtle general health impacts such as anemia and other nutritional deficiencies. Because these symptoms can have many causes, particularly in very poor populations, it is difficult to pinpoint the extent to which STHs and schistosomiasis contribute to them; the best information we have on this question comes from studies of deworming. • Severe symptoms such as intestinal obstruction and major organ damage. These occur in a small proportion of those infected. • Developmental effects. There is some evidence that schistosomiasis and/or STHs impair development and can lower quality of life over the long term. We address each of these separately below, and then discuss other possible effects about which less is known. ### Subtle general health impacts In this section, we first discuss the effect of deworming on haemoglobin levels (low haemoglobin indicates anemia).30 This is because anemia is the symptom for which we have, by far, the most and highest-quality evidence, and a discussion of this symptom is illustrative of some general problems with assessing the evidence regarding the benefits of deworming. We then discuss the other evidence we have seen on the immediate impacts of deworming; this evidence is very limited. #### Haemoglobin levels We have examined several literature reviews discussing the impact of deworming on changes in haemoglobin levels. We focus below on three reviews, two of them Cochrane reviews and one of them an especially transparent non-Cochrane review: • Danso-Appiah et al. 2008, one of the two Cochrane reviews discussing treatment of schistosomiasis. (The other, Saconato and Atallah 2009, does not address haemoglobin effects.) The impact of treatment on changes in haemoglobin is not discussed in the summary, but several sub-analyses address it: • One study of combination deworming (albendazole and praziquantel) found an effect of +2.4 g/L.31 • One study of praziquantel-only deworming found an effect of +1.1 g/L.32 • Two studies of deworming with metrifonate each found an effect of +3 g/L.33 Another literature review notes that metrifonate is also partially effective against hookworms, so these results cannot be attributed to schistosomiasis-only treatment; they may be more appropriately attributed to combination deworming.34 We have looked up each of these four studies and determined that each of them was pre-screened: each examined the impact of treatment on infected individuals as opposed to the impact of treatment on the general population.35 • Taylor-Robinson et al. 2012, the Cochrane review discussing treatment of STHs, found a statistically significant 3.7 g/L increase in haemoglobin levels in two small screened trials (i.e. treating only people known to have soil-transmitted helminths).36 However, no effect on haemoglobin status was observed in the 11 (typically larger) trials that treated the entire population.37 We believe that Taylor-Robinson et al. 2012 should have included the haemoglobin results from Miguel and Kremer 2004, a large and well-known cluster-randomized unscreened trial of deworming, but that would be unlikely to change the overall picture, since it also failed to find a statistically significant increase in haemoglobin.38 • Smith and Brooker 2010 investigates the question of whether haemoglobin effects are stronger in populations with higher prevalence of hookworm, and thus focuses on studies that can be used to help address this question.39 It does not appear to support its hypothesis that hookworm prevalence can predict the size of haemoglobin effects, but it does find a +1.89 g/L effect for STH-only deworming and a +2.37 g/L effect for combination deworming.40 The review provides some detail on the four studies going into the 2.37 g/L estimate: one or two are pre-screened,41 and all involved baseline hookworm prevalence of > 50% (weighted average 64%).42 We exclude results from Gulani 2007, King, Dickman, and Tisch 2005, and older Cochrane reviews on the grounds that the newer Cochrane reviews present a more exhaustive and transparent picture of the evidence.43 From these reviews, we conclude: • Estimated effects can vary significantly by context and by one's approach to selecting and combining studies. In particular, it appears that meta-analyses that include screened studies tend to find positive effects on haemoglobin, while meta-analyses that focus exclusively on mass (non-screened) treatment do not find statistically significant gains. (More on this pattern below.) • Combination deworming appears to have an impact on the change in haemoglobin level. Our best quantitative estimate of the average effect of combination deworming is +2.37 g/L, the effect found in Smith and Brooker 2010, but we would guess that that is an overestimate because of the high baseline hookworm prevalence and the fact that at least one, possibly two, of the constituent studies involved pre-screening. The relative contributions of albendazole (treating STHs) and praziquantel (treating schistosomiasis) are unclear. What is the significance of a +2.37 g/L effect? Of the above reviews, Smith and Brooker 2010 is most useful for getting context on the significance of this effect, because it gives means, standard deviations, and the percentage of people found to be under various thresholds for anemia for each of the relevant studies.44 Average concentrations range from ~100-130 g/L; standard deviations from 5-15 g/L; the absolute reduction in moderate anemia (threshold is 110 g/L)45 is ~9% in one study and ~14% in another (note that this is in line with the observed effects on anemia in the panel studies done by the Schistosomiasis Control Initiative). For another point of comparison, experimental evidence on insecticide-treated nets (ITNs) implies that children sleeping under ITNs experience +5.7 g/L compared to children sleeping under no nets.46 #### Other subtle general health impacts The evidence is quite thin regarding other immediate symptoms of these infections. • Taylor-Robinson et al. 2012, the Cochrane review discussing treatment of STHs (discussed in the previous section), finds little evidence that deworming is beneficial.47 • In three small studies with a total of 149 participants who were screened for STH infections prior to participation, deworming pills caused a statistically significant increase in weight of about .6 kilograms. In a few other small screened studies, deworming statistically significantly improved mid-upper arm circumference and skin fold thickness; similar studies found no effect on height, body mass index, or school attendance. • Two studies in one location in Kenya with extremely high worm prevalence found that a single deworming treatment caused weight gain, but seven more studies in different areas found no effect, and larger studies with multiple doses were even more inconclusive: two found large and significant results, while ten others found small statistically insignificant results (pgs 19-21). Taylor-Robinson et al. 2012 find essentially no evidence from studies of mass STH deworming to show that it improves height, cognitive test scores, or school performance 48 What does it mean if smaller programs with screened participants show effects, while larger programs of mass deworming do not? One possibility is that STH deworming does have some impact on nutrition in infected individuals, but that the effect is too small to pick up in unscreened population studies. Another possibility is that the effects seen in smaller programs are spurious. The Cochrane review highlights the latter possibility, stating that “the data on targeted deworming is limited (three small trials, n = 149); the quality of the evidence is ’moderate’ for weight and ’low’ for haemoglobin.” (The Cochrane review also points to a third possibility: “the intervention itself is different … having been screened, and then told they have worms, children are more likely to comply with treatment, and alter their behaviour.” We find this possibility least likely.) • Miguel and Kremer 2004 involved combination deworming, without prescreening, for some children and STH-only treatment for others (depending on the prevalence of schistosomiasis in the area).49 It found a ~25% reduction in school absenteeism, though no effect on test scores.50 It also found statistically significant impacts on self-reported sickness (both for "last week" and "often"), but not for height or weight.51 Followup studies implied potential developmental effects and externalities, discussed in later sections (along with our reservations about the study's representativeness). • King, Dickman, and Tisch 2005 lists measures of "the impact of schistosomiasis or heavy schistosomiasis" based on treatment studies, though as discussed above, we believe these studies may actually represent the impact of combination deworming. In addition to haemoglobin (discussed above), King 2005 lists statistically significant impacts on weight and skinfold thickness (64% and 66%, respectively, of a standard deviation; it isn't clear whether these refer to weight/thickness or the change in weight/thickness) and non-statistically-significant impacts on weight-for-height and height.52 King, Dickman, and Tisch 2005 also summarizes observational studies that examine people who are vs. aren't infected with schistosomiasis. It reports non-statistically significant differences in aerobic performance and unspecified measures of school performance and obstetric history, mixed differences (statistical significance depending on just how the comparison is made) in unspecified measures of pain history and exercise intolerance, and statistically significant effects in unspecified measures of exercise duration and diarrhea history.53 These observational studies could overstate the effects of infections (if infection is correlated with other things such as hygienic, economic or biological factors54) or understate the effects of infections (if "uninfected" people are infected after all, as King has argued elsewhere).55 Unfortunately, this review does not allow us to link individual studies from the bibliography to the particular meta-analysis outcomes reported. We do not consider it to meet our criteria for a high-quality literature review and cannot be confident in its claims. #### Bottom line on subtle general health impacts Overall, evidence for the impact of deworming on short-term general health is thin. Most of the effects discussed above are relatively small, and there is little consistency across different reviews and approaches. We would guess that combination deworming does have some small impacts on general health, but are fairly confident that it does not have large such impacts. ### Prevention of potentially severe effects In addition to subtle general health impacts, deworming may avert more severe symptoms. Schistosomiasis can cause organ damage, particularly to the bladder, liver and kidneys, sometimes resulting in death, though these deaths appear highly infrequent and we have little information about the extent of non-fatal organ damage. Severe symptoms of STHs appear infrequent and short in duration. In this section, we discuss schistosomiasis and STHs separately. #### Schistosomiasis We have only identified one review56 (referred to here as "Van der Werf et al.") that systematically attempts to attribute severe symptoms (organ damage/malfunction, death) to schistosomiasis. This review discusses symptoms such as the presence of abnormal amounts of blood in the urine, self-reported painful urination, enlarged liver/spleen, and various signs of kidney stress or malfunction.57 However, this review is problematic for several reasons: • We find its methodology for attributing mortality/morbidity to schistosomiasis to be highly problematic.58 • It estimates 280,000 annual deaths due to schistosomiasis;59 by contrast, the official burden of disease publication released by the World Health Organization in 2008 estimates 41,000 deaths.60 Because the latter cites Van der Werf et al., (2001)61 we take this as further evidence that the methodology used in Van der Werf et al. is problematic. • Nearly all of the schistosomiasis consequences discussed in Van der Werf et al. (a) can sometimes be attributable to factors other than schistosomiasis; (b) range in severity and do not necessarily have any detectable impact on quality of life.62 Outside of this review, the only quantified information we have found on severe consequences of schistosomiasis involves deaths. It appears to us that even attributing deaths to schistosomiasis is very difficult, and estimates vary widely.63 Nonetheless, we regard the World Health Organization's 2004 estimate (published in 2008) of 41,000 deaths as the most credible.64 By comparison, the same source estimates that malaria results in roughly 890,000 deaths a year, more than twenty times as many, the vast majority of which are in children under 5.65 We can also get a sense of how bad the non-death burden of schistosomiasis is estimated to be, using the World Health Organization's Disability-adjusted life-year (DALY) estimates. For each age group, we tabulated the ratio of DALYs (a measure of all manifestations of disease burden) to YLLs (comparable to DALYs but only addressing mortality).66 These figures estimate that death accounts for about 80-90% of the burden of disease in people age 15-29; 90%-94% in people age 30-44; 95%+ in people age 45+; and only 10-20% in people under 14. This picture is consistent with the idea that older people are more likely to die due to schistosomiasis, while younger people are more likely to suffer other consequences such as subtle general health impacts and developmental effects. #### STHs The serious effects of soil-transmitted helminth infections vary by species.67 Because these severe symptoms are quite rare, we have found characterizing their frequency and severity to be difficult. The most thorough and up-to-date overview that we are aware of is the chapter on intestinal nematode infections (another name for soil-transmitted helminths) in the World Health Organization's The Global Epidemiology of Infectious Disease publication. That chapter is the published version of the working paper that was used to generate the Disease Control Priorities report's cost-effectiveness estimate for soil-transmitted helminth treatment. For ascariasis, the most serious complication is short-term intestinal obstruction, sometimes resulting in death.68 We estimate that, each year, intestinal obstruction (which may require hospitalization and surgery and has an estimated duration of ~4 weeks) would occur in .026% of the school-age population of Sub-Saharan Africa (1 in ~3800 children per year) without treatment,69 resulting in about 3 deaths per million children per year of the school-age population.70 For trichuriasis, heavy infections can result in a "dysenteric form" involving a bleeding/inflamed colon, discomfort and bloody stools, with a duration of 12 months or more71; this is estimated to occur about once a year per ~700 children school-aged children in Sub-Saharan Africa each year.72 It is unclear whether trichuriasis continues to cause mortality, but if it does, this is quite rare.73 Hookworm's most serious symptom is believed to be anemia.74 We discuss this symptom above and conclude that there is not strong evidence for an impact of deworming treatment on anemia; any such impact is likely to be small. It appears that death estimates for STHs in general have recently been revised downward, substantially. The World Health Organization's Global Burden of Disease report for 2001 (published in 2006) listed 1,000 deaths per year in sub-Saharan Africa for each of ascariasis and trichuriasis and 2,000 for hookworm, for a total of 4,000;75 but the Global Burden of Disease report for 2004 (published in 2008) lists a total of 412 deaths in sub-Saharan Africa (for the 3 infections combined).76 ### Developmental impacts In our view, the most compelling case for deworming as a cost-effective intervention comes not from its subtle general health impacts (which appear relatively minor and nonspecific) nor from its potential reduction in severe symptoms of disease effects (which we believe to be rare), but from the possibility that deworming children has a subtle, lasting impact on their development, and thus on their ability to be productive and successful throughout life. Empirical evidence on this matter is very limited, resting on two relatively well-known and well-executed studies. Bleakley 2007 analyzes the Rockefeller Sanitary Commission's campaign to eradicate hookworm in the American South in the early 20th century, and concludes: Areas with higher levels of hookworm infection prior to the RSC experienced greater increases in school enrollment, attendance, and literacy after the intervention. No significant contemporaneous results are found for literacy or occupational shifts among adults, who had negligible prior infection rates. A long-term follow-up indicates a substantial gain in income that coincided with exposure to hookworm eradication.77 There are good reasons to be cautious in using this study as evidence relevant to deworming: • The program studied was very different from those that apply to modern-day deworming. The campaign involved not just deworming drugs but eradication efforts on multiple fronts, including major efforts toward improved sanitation and hygiene.78 • The context was very different from that of modern-day deworming: the campaign took place in the United States of the early 20th century America, where the practical consequences of worm infections could have been very different from the consequences in the modern-day developing world (and the infections themselves may have been quite different as well). • Bleakley (2007) is not an experimental study, but a retrospective one. That is, rather than setting out to answer a question by collecting new data, the author analyzed a large pre-existing data set, raising strong possibilities for publication bias. We believe it is unlikely that this paper would have been published in a major economics journal if it had simply concluded that there was no strong evidence for major benefits of hookworm eradication. We are generally very hesitant to use papers of this nature in our work. That said, we believe the paper merits some weight on the question of developmental benefits, because • It has a plausible strategy for separating the effects of hookworm infection from effects of other things (such as poverty) that may correlate with hookworm infection. Specifically, it exploits the fact that the eradication campaign caused a relatively rapid drop in hookworm infection rates, that the campaign targeted hookworm specifically, and that areas with higher initial hookworm prevalence saw greater falls in hookworm prevalence.79 Thus, it seems possible that a connection between the fall in hookworm prevalence and positive life impacts - coinciding with the timing of the campaign - could be attributed specifically to hookworm eradication, and not to other factors. • It uses graphs to illustrate a relationship between hookworm prevalence and later-in-life income that was negative and fairly constant from 1820-1900, then turned into "zero effect" (when adjusted using a set of controls) after 1920, coinciding well with the timing of the eradication campaign. It seems difficult to explain this pattern except by attributing the change to the drop in hookworm prevalence.80 • It addresses multiple alternate possible explanations for its observations, looking relatively thoroughly for changes in "health and health policy, educational resources, race and race relations, urbanization and land use, and parental background" that might confound the results, and finds little along these lines.81 • It covers a very large-scale campaign. While randomized controlled trials allow for a cleaner connection between a program and its effects, a large-scale study like this seems likely to be less dependent on idiosyncratic aspects of a particular mini-program designed to be studied. This paper hypothesizes that the fall in hookworm prevalence led to benefits primarily by improving people's ability to become literate and otherwise benefit from school.82 Baird et al. 2011 and Baird 2007 are followups to Miguel and Kremer 2004, discussed in the above sections, which involved combination deworming, without prescreening, for some children and STH-only treatment for others (depending on the prevalence of schistosomiasis in the area).83 The program was not technically randomized, but used a method we consider similar to randomization to determine who was treated.84 These studies analyzed data on young adults who had been involved in the deworming program in grade school, comparing children who started receiving deworming treatment earlier to those who started receiving it later. Baird et al. 2011 compared the first two groups of schools to receive deworming (as treatment group) to the final group (as control); the treatment group received 2.41 extra years of treatment on average.85 The study's headline effect is that those in the treatment group worked, and earned, substantially more,86 driven largely by a shift into the manufacturing sector.87 It also found a positive impact on meals consumed though not on overall consumption,88 small but non-statistically significant gains in school performance (though not on IQ),89 and gains on self-reported health though not on height or weight-for-height (and the treatment group had higher health expenditures).90 More at our 2012 blog post and full writeup on this paper. Baird 2007 analyzes a similar, early dataset from the same program, though using a different definition of "treatment group" from the later paper: while Baird et al. 2011 uses the first two groups of schools to receive treatment as its treatment group and the last as its control, Baird 2007 simply looks at the number of years of deworming assigned to each child.91 It finds contrasting, though also encouraging, results: some statistically significant impacts on height and weight92 but no such impacts on education and labor market outcomes,93 and small negative impacts on cognitive performance.94 There are reasons to be cautious in interpreting these studies: • The conditions at the time of treatment are unlikely to be representative of other contexts: there was extraordinary flooding in the study area due to the El Nino climate pattern, leading to abnormally high prevalence of heavier worm infections.95 • In general, we hesitate to place too much weight on a finding from a single study (or a small number of studies) because of the potential for publication bias, as well as possible alternative explanations for the findings. For example, efforts to encourage students to attend school in order to receive treatment might have bled over to later days, increasing attendance in treatment schools over the following years. The particular piece of data that led us to examine this possibility is that within schools, there is no statistically significant difference in attendance rates for treated and untreated students (the effects only appear across schools). (The authors assume that this phenomenon occurred due to the presence of within-school externalities.) In 2012, the authors behind these studies shared their data and code so that we could thoroughly examine them. Our examination resulted in an overall higher level of confidence in the studies than we had had previously, though the above concerns remain. More at our 2012 blog post and full writeup on the topic. #### Bottom line on developmental effects We find it highly plausible that deworming has subtle but significant developmental effects that improve quality of later life. The two studies we know of on this topic each have substantial issues, and we do not consider them conclusive evidence for the presence (much less for the size) of these effects, but we do consider them suggestive evidence, enough to take the possibility of developmental effects seriously. We further discuss how strong we find the case for developmental effects of deworming to be, in comparison with the case for long-term effects of other interventions, in a 2012 blog post. ### Externalities for the untreated It is possible that deworming benefits people who do not receive treatment, by reducing overall prevalence of worm infections in a community. The only evidence we have seen for this idea comes from the same Kenya program discussed above. Miguel and Kremer 2004 found significant impacts on children in nearby but untreated schools.96 In addition, Ozier 2011 reviewed later data on younger children, and concluded: Community deworming before a child’s first birthday brings about a 0.2-standard-deviation improvement in performance on Raven’s Matrices, a decade after the intervention. Estimated effects on vocabulary measures are similar in magnitude, but not always as significant; effects on memory are not statistically distinguishable from zero. A summary measure, the first principal component of all six cognitive measurements, also shows a roughly 0.2-standard-deviation effect. These effects are equivalent to between 0.5 and 0.8 additional grades in school … The effect of community deworming spillovers on height, height-for-age, and stunting all appear statistically indistinguishable from zero.97 For reasons discussed above, we believe these studies are likely to overstate the impact of deworming, and that this overstatement is particularly likely to be an issue for externalities due to the unusual flooding and elevated level of infections. In addition, we note that the unusual design of Miguel and Kremer 2004 (in which nearby schools were assigned to be dewormed earlier or later in an essentially arbitrary way) is unlikely to be representative of large-scale school-based deworming campaigns such as those of the Schistosomiasis Control Initiative. That said, we think it is worth noting the possibility of externalities, as this could both (a) increase the cost-effectiveness of deworming beyond what we estimate below; (b) imply that some of the research on deworming reviewed above may understate the impact of deworming. ### Possible negative/offsetting impact We have not identified many possible negative/offsetting impacts of deworming. • Side effects of drugs. Albendazole and praziquantel may both have side effects, including headache, abdominal pain, upset stomach, nausea, vomiting, and fever.98 These effects seem better-understood, and potentially painful but temporary, for praziquantel. There are some more serious claims for albendazole (adverse effects on growth; "reports of elevated liver enzymes, headaches, loss of hair, low levels of white blood cells (neutropenia), fever, and itching if taken at higher doses and/or for a long period of time");99 we do not have evidence on these, but note that albendazole treatment (as discussed above) seems to generally have effects that are positive or else not statistically significant. • Possible development of drug resistance. The widespread use of deworming drugs may cause resistance to emerge.100 We know little about the significance of this concern. ## Different versions of the intervention ### How often do people need to be treated? The World Health Organization recommends annual treatment for schistosomiasis and twice-annual treatment for soil-transmitted helminths for school-aged children in areas with prevalence above 50%.101 In areas with lower prevalence, the World Health Organization recommends less frequent deworming.102 We have not been able to fully understand the rationale for the World Health Organization's recommendations. The underlying principle is that reinfection rates are higher in areas with greater prevalence, so more frequent deworming will be necessary to avoid morbidity in those areas, but we have not found evidence that shows the superiority of annual treatment (as compared with, e.g., semi-annual or bi-annual treatment). The World Health Organization's recommendations about which groups of people to treat also vary with prevalence; in low-prevalence areas, treatment is restricted to school-aged children and certain categories of adults.103 We do not know how the cost-effectiveness of treating adults in high-prevalence areas compares with the cost-effectiveness of treating children. The main reason for focusing deworming treatment on children is that, with the exception of hookworm, the prevalence of the helminths and schistosomes is highest in childhood.104 Although deworming in childhood does not prevent adults from being reinfected with new worms, the theory underlying mass treatment is that it does prevent the developmental effects of having worms in childhood and the serious organ damage that can result from long-term high-intensity infections.105 ## How cost-effective is mass deworming? The cost-effectiveness of deworming is very difficult to summarize, because (a) mortality and other clear, major life impacts are rare; (b) the benefits (particularly subtle general health impacts and developmental benefits) appear subtle and hard to quantify. In addition, the cost-effectiveness of deworming may vary substantially depending on: • Prevalence of different infections where deworming takes place. We have little information allowing us to separate the effects of different infections, though schistosomiasis appears more severe than STH infections. • Frequency of treatment. • Whether treatment is successfully delivered consistently enough to prevent reinfection. • Whether treatment is provided to children or adults (the latter seem to us to be much less likely to experience later-in-life developmental benefits). We have attempted to quantify short-term health impacts using information from the Disease Control Priorities Report. Doing so has raised substantial challenges. We have found significant errors in this report's analysis that stood uncorrected in the several years between the report's publication and our 2011 investigation, and estimates of the frequency and severity of symptoms are difficult to interpret and appear to be grounded in very little in the way of empirical data. Details on how we have used this information to estimate short-term health impacts are available at our 2011 deworming report. We have attempted to quantify longer-term developmental impacts by working off of the effects found in the Kenya studies discussed above, and making adjustments for what we perceive as unrepresentativeness or other questions around these studies. Details of our cost-effectiveness analysis, which incorporates both of these, are discussed in a 2012 blog post. In a nutshell: • Estimating a "cost per equivalent life saved" figure for deworming involves many judgment calls and an extremely wide range of values, from$31 to $25,000 per “equivalent life saved.” Different people have significantly different intuitions about the right inputs into these figures, even within GiveWell staff. We provide estimates based on the guesses of the three staff members who have worked most intensively on cost-effectiveness. These estimates diverge widely from each other, with "cost per equivalent life saved" for deworming ranging from ~$1700-\$3800; in all cases, bednets and deworming are estimated to be in the same ballpark, with a slight edge for nets.
• We have also attempted to compare deworming to unconditional cash transfers using a different framework, which is based on estimating the financial benefits of deworming and comparing them to those of cash transfers, then adjusting for non-monetized health benefits. Again, the outcome of the comparison depends on many significant judgment calls, and different staff members produce very different figures. The general picture here is that deworming is between 2 and 5 times as cost-effective as cash-transfers.
• We encourage readers who find formal cost-effectiveness analysis important to examine the details of our calculations and assumptions, and to try putting in their own. To the extent that we have intuitive preferences and biases, these could easily be creeping into the assumption- and judgment-call-laden work we’ve done in generating our cost-effectiveness figures, and we’re not entirely confident that the figures themselves are adding substantial information beyond the intuitions we have from examining the details of them.
More, including links to our spreadsheets, at our 2012 discusssion of cost-effectiveness for deworming and other interventions.
## Is there room for more funding in deworming?
We believe there is substantial room to do more deworming in suitable countries. Details at our review of the Schistosomiasis Control Initiative.
## Sources
Document Source
Baird 2007 Unpublished
Baird 2011 Source
Beasley et al. 1999 Unpublished
Bennett and Guyatt 2000 Unpublished
Bleakley 2007 Unpublished
Bundy et al. 2004 Source
Bundy et al. Intestinal nematode infections Source
Danso-Appiah et al. 2008 Source (archive)
Duflo et al. 2012 Source (archive)
GiveWell. Schistosomiasis mortality analysis Source
Gulani et al. 2007 Source
Hotez et al. 2006 Source
Charles H King, Professor of International Health at Case Western Reserve University, email to GiveWell, November 10, 2011 Source
King 2010 Source
King and Dangerfield-Cha 2008 Unpublished
King, Dickman, and Tisch 2005 Unpublished
Lengeler 2004 Source (archive)
Mathers, Ezzati, and Lopez 2007 Source
Mathers, Lopez, and Murray 2006 Source
Miguel and Kremer 2004 Source
National Institutes of Health. Vomiting blood Source (archive)
Olds et al 1999 Unpublished
Ouma et al. 2005 Unpublished
Ozier 2011 Source
Saconato and Atallah 2009 Source (archive)
Smith and Brooker 2010 Unpublished
Stephenson et al. 1985a Unpublished
Stephenson et al. 1985b Unpublished
Taylor-Robinson, Jones, and Garner 2007 Source (archive)
Taylor-Robinson et al. 2012 Source (archive)
Van der Werf and de Vlas 2001 Source
Van der Werf et al. 2003 Source
World Health Organization. Preventive chemotherapy in human helminthiasis Source
World Health Organization. Schistosomiasis fact sheet Source (archive)
World Health Organization. Worldwide prevalence of anaemia 1993–2005: WHO global database on anaemia Source
World Health Organization. The global burden of disease: 2004 update Source
World Health Organization. Global Burden of Disease 2004: Deaths by Age, Sex, and Cause for the year 2004 Source
### Supplementary Sources
Document Source
GiveWell. New Cochrane Review of the Effectiveness of Deworming Source
King, Charles H. 2011. Schistosomiasis: challenges and opportunities Source
Stephenson, L.S., et al. 1989. Single dose metrifonate or praziquantel treatment in Kenyan children. II. Effects on growth in relation to Schistosoma haematobium and hookworm egg counts Unpublished
World Health Organization. Schistosomiasis strategy Source (archive)
• 1.
"People become infected when larval forms of the parasite – released by freshwater snails – penetrate their skin during contact with infested water.... There are two major forms of schistosomiasis – intestinal and urogenital – caused by five main species of blood fluke (see table)." World Health Organization. Schistosomiasis fact sheet
• 2.
King and Dangerfield-Cha 2008, Pg 67.
• 3. King, Dickman, and Tisch 2005, Pg 1564.
• 4.
• 5.
“I do think that the same effects [as with S. japonicum, the strain in the Coutinho et al. 2006 study,] obtain with S. mansoni and S. haematobium. It is possible that their effect size will be smaller than that seen with S. japonicum—there is a school of thought which suggests that anti-S. japonicum inflammation is more intense because of its higher egg output and its more recent switch to parasitism of the human host. I don’t know if this is true.” Charles H King, Professor of International Health at Case Western Reserve University, email to GiveWell, November 10, 2011.
• 6.
• 7.
Danso-Appiah et al. 2008, Pg 2.
• 8.
"Each worm’s establishment in a host is the result of a separate infection event, and the number of infective stages shed (the infectiousness of the host) is a function of the number of worms present. In population dynamic terms this implies that the individual worm is the unit of transmission for helminths, while the individual host is the unit for microparasites (Anderson & May 1982)... Since the size of the worm burden varies considerably between individuals, and infection implies only that worms are present, a population of "infected" people will exhibit considerable variation in the severity of disease manifestations." Bundy et al. 2004, Pgs 245-46.
• 9.
“The age-dependent patterns of infection prevalence are generally similar among the major helminth species, exhibiting a rise in childhood to a relatively stable asymptote in adulthood (figure 24.1). Maximum prevalence of A. lumbricoides and T. trichiura is usually attained before five years of age, and the maximum prevalence of hookworm and schistosome infections is usually attained in adolescence or in early adulthood. The nonlinear relationship between prevalence and intensity has the consequence that the observed age-prevalence profiles provide little indication of the underlying profiles of age intensity (age in relation to worm burden). Because intensity is linked to morbidity, the age-intensity profiles provide a clearer understanding of which populations are vulnerable to the different helminths (figure 24.1). For A. lumbricoides and T. trichiura infections, the age-intensity profiles are typically convex in form, with the highest intensities in children 5 to 15 years of age (Bundy 1995). For schistosomiasis, a convex pattern is also observed, with a similar peak but with a plateau in adolescents and young adults 15 to 29 years of age (Kabatereine and others 1999). In contrast, the age-intensity profile for hookworm exhibits considerable variation, although intensity typically increases with age until adulthood and then plateaus (Brooker, Bethony, and Hotez 2004). In East Asia it is also common to find the highest intensities among the elderly. However, more generally, children and young adults are at higher risk of both harboring higher levels of infection (thus greater levels of morbidity) and becoming reinfected more quickly. Both may occur at vital stages in a child’s intellectual and physical development.” Hotez et al. 2006, Pgs 469-470.
• 10.
“The eggs of S. haematobium have a terminal spine and must traverse the bladder tissues towards the lumen of the bladder and urinary tract for elimination via urine. In the process, a considerable number become trapped in the bladder walls and surrounding tissues to initiate immune- induced inflammatory reactions, which subsequently lead to morbidity. It is important to note that eggs trapped in the tissues cause disease rather than the worms themselves.” Danso-Appiah et al. 2008, Pg 3.
• 11.
• "Haematuria (blood in urine) and dysuria (painful urination) are the main early symptoms of the disease." Danso-Appiah et al. 2008, Pg 3.
• “At the time of our follow-up examinations, prevalence of hematuria was 40% among the previously treated group and 39% among un- treated subjects (P = not significant). Moderate-to-severe hematuria (visible hematuria = 2+ or 3+ by dipstick) was more common among untreated subjects (prevalence = 24%) than among those previously treated (prevalence = 14%), and this difference had borderline statistical significance (McNemar’s S = 3.6, P = 0.058).” Ouma et al. 2005, Pg 361.
• 12.
"Sustained heavy infection leads to iron deficiency anaemia and other nutritional deficiencies, especially in children (Awasthi 2003; [King, Dickman, and Tisch 2005]. The disease often results in retarded growth, reduced physical activity, and impaired cognitive function in children (Stephenson 1993; Nokes 1999; PCD 1999; Jukes 2002; WHO 2002)." Danso-Appiah et al. 2008, Pg 3.
• 13.
"Late-stage complications are insidious and include calcification of the bladder wall, bladder stones, and secondary bacterial infection (Jordan 1993). Tissue damage caused by trapped eggs can lead to diffuse or localized wall thickening of the bladder and the distal ureter hydronephrosis orhydroureter, which may eventually lead to kidney failure (Kardorff 2001; WHO 2002; Van der Werf et al. 2003). Elevated urine albumin levels and reported pain upon micturition by children have a strong correlation with S. haematobium infection (Rollinson 2005). An important long-term consequence of infection is squamous cell carcinoma of the bladder (Jordan 1993; King 2005; Shiff 2006). A recent review points out that bladder carcinoma is the seventh most common cancer worldwide in men and that the highest incidence rate among men is found in Egypt (37.1 per 100,000 person-years) (Murta-Nascimento 2007), which might be related to S. haematobium infection and morbidity (Jordan 2000)." Danso-Appiah et al. 2008, Pg 3.
• 14.
"Schistosomiasis infects the intestine, liver, and spleen. It can cause bloody diarrhoea, bloody stools, and abdominal pain (Gryseels 1992; WHO 1993). Infection of the liver and spleen causes liver fibrosis and portal hypertension that are generally irreversible in the late stages and kill patients, sometimes as a result of haemorrhage from varices (WHO 1993). Liver failure may also occur, especially when S. mansoni infection is associated with viral hepatitis (Pereira 1994)." Saconato and Atallah 2009, Pg 2.
• 15.
There is substantial disagreement about the number of annual fatalities due to schistosomiasis-caused non-functioning kidneys and hematemesis (estimates range from under 27,000 to 280,000).
"WHO (2002) estimates that 27,000 people die annually from STH infections and schistosomiasis (case fatality rate of 0.0014 percent). Many investigators, however, believe that this figure is an underestimate. Crompton (1999) estimated that 155,000 deaths annually occur from these infections (case fatality rate of 0.08 percent), whereas Van der Werf et al. (2003), using the limited data available from Africa, estimated the schistosomiasis mortality alone at 280,000 per year (case fatality rate of 0.014 percent) because of nonfunctioning kidneys (from S. haematobium) and hematemesis (from S. mansoni). Therefore, the difference between estimates for helminth-associated mortality is more than 10-fold." Hotez et al. 2006, Pg 470.
• 16.
“Schistosomiasis is, in its very essence, a chronic inflammatory disease. This is because parasite eggs must pass through host tissues from the circulation to the lumen of bowel or bladder in order to leave the human body. However, this is an inefficient process — during the course of infection, over half of the eggs produced by female schistosomes never leave the human body and remain trapped in host tissues. There, the eggs induce a granulomatous response that progresses to focal areas of fibrosis within the affected organ. Over many years, the cumulative impact of tissue damage can lead to organ failure and consequent severe clinical morbidity and mortality. Early in infection, some of the tissue damage is reversible, but as fibrosis progresses, the cumulative damage caused by infection becomes irreversible. Even when infection is over, which may occur as adults reach their 30s and 40s, irreversible parasite-mediated organ damage may continue to affect patient health status.” King and Dangerfield-Cha 2008, Pg 67.
• 17.
• 18.
"A. lumbricoides is the large roundworm (15 cm) and lies free in the human duodenum where it feeds on lumenal contents (see Crompton, Nesheim & Pawlowski (1989) for further details of the biology of this parasite). Like all the other nematodes considered here, the worms are dioecious, which is to say that they exist as male and female. The female produces some 100 000 eggs per day which pass out in the faeces of the host and embryonate externally at a rate determined by local environmental factors. The eggs hatch on ingestion, releasing a larva that undergoes a tissue migration involving the cardiovascular and pulmonary systems. The larva moults as it migrates and ultimately is coughed up from the lungs, swallowed, and becomes established as the adult in the small intestine. The cycle from egg deposition to female patency, when the female is able to produce eggs, has a duration of some 50 days.
T. trichiura, the human whipworm, is a much smaller worm (25 mm) and inhabits the colon (Bundy & Cooper 1989). The anterior two-thirds of the worm is thin and thread-like and is laced through the mucosal epithelium, upon which the worm is believed to feed, leaving the blunt posterior projecting into the colonic lumen for excretion and oviposition. The female produces some 2000 eggs per day which pass out in the host faeces and embryonate externally. The infectious eggs hatch on ingestion and undergo a specifically local migration, via the crypts of Lieberkühn to the mucosal surface. The development cycle takes some 60 days.
The two major hookworm species, which are of similar magnitude to the whipworm, inhabit the small intestine, where they attach to villi with biting mouthparts (Schad & Warren 1990). The worms feed on host blood and move frequently to new sites, leaving multiple, bleeding petechial haemorrhages on the mucosal surface." Bundy et al. 2004, Pgs 244-45.
• 19.
"Each worm’s establishment in a host is the result of a separate infection event, and the number of infective stages shed (the infectious- ness of the host) is a function of the number of worms present. In population dynamic terms this implies that the individual worm is the unit of transmission for helminths, while the individual host is the unit for microparasites (Anderson & May 1982)... Since the size of the worm burden varies considerably between individuals, and infection implies only that worms are present, a population of "infected" people will exhibit considerable variation in the severity of disease manifestations." Bundy et al. 2004, Pgs 245-46.
• 20.
"The size of the worm burden (the intensity of infection) is therefore a central determinant of helminth transmission dynamics, and is also the major determinant of morbidity since the pathology is related to the size of the worm burden, usually in a non-linear fashion (Stephenson 1987, Cooper & Bundy 1987, 1988). Since the size of the worm burden varies considerably between individuals, and infection implies only that worms are present, a population of “infected” people will exhibit considerable variation in the severity of disease manifestations. The intuitive assumption that all infections are equal may help to explain the historical confusion over the pathogenicity and public health significance of helminth infection (Bundy 1988, Cooper and Bundy 1989). From these considerations it is apparent that an under- standing of helminth epidemiology centres around an understanding of the patterns of infection intensity." Bundy et al. 2004, Pgs 245-46.
• 21.
• 22.
"There is a general acceptance of the simple view that very intense infection results in illness, a view that reflects both clinical experience of overwhelming infection and, perhaps equally importantly, an atavistic repugnance at the insidious invasion of the body by large numbers of worms. Such extremes of infection result in the severe anaemia of necatoriasis and the intestinal obstruction of ascariasis (Stephenson 1987), and the chronic colitis of classical trichuris dysentery syndrome (Cooper & Bundy 1988)." Bundy et al. 2004, Pg 248.
• 23.
"There is a general acceptance of the simple view that very intense infection results in illness, a view that reflects both clinical experience of overwhelming infection and, perhaps equally importantly, an atavistic repugnance at the insidious invasion of the body by large numbers of worms. Such extremes of infection result in the severe anaemia of necatoriasis and the intestinal obstruction of ascariasis (Stephenson 1987), and the chronic colitis of classical trichuris dysentery syndrome (Cooper & Bundy 1988)." Bundy et al. 2004, Pg 248.
• 24.
"Ascariasis is the best documented helminthiasis in terms of mortality. There are numerous studies of case fatality rates in hospitals (reviewed by Pawlowski & Davies 1989). These indicate that the outcome of acute complications of ascariasis is modified by the general health status of the patient, the intensity of infection and the medical procedure (see Pawlowski & Davies 1989). In one hospital in Sao Paulo, for example, the case fatality rate was 1.35 per cent for conditions which could be managed conservatively, and 26.1 per cent in patients undergoing surgery (Okumura et al. 1974). These studies confirm that death is a not infrequent outcome of complications of ascariasis, but provide little insight into mortality rates in the community. An extrapolation from central hospital data in Myanmar suggests there are 0.008 deaths per 1000 infections per year (Thein-Hliang 1987), but this is considered to be a considerable underestimate since only a small proportion of children with severe complications are likely to have access to the hospital (Pawlowski & Davies 1989). Only two population based estimates are available: one for the Darmstadt epidemic (0.1 deaths per 1000 infected per year) (Krey 1949) and one for Japan prior to national control efforts (0.061 deaths per 1000 infected per year) (Yokogawa 1976). Based on the present estimate of global infection, these rates suggest that between 8 000 and 14 000 children die each year.
The final estimate of global mortality due to ascariasis in the Global Burden of Disease project was 11,000 concentrated in the age group 5–14 years (Murray and Lopez 1996). Mortality was distributed by region in proportion to the region-specific population at risk (above higher threshold) and to the region-specific probability of dying between birth and 4 years (World Bank 1993). This last quantity was added to take account of regional variations in access to acute medical services.
No population-based mortality estimates have been published for T. trichiura infection. Prior to the advent of safe and effective therapy for T. trichiura infection in the late 1970s a number of reports described paedi- atric inpatients with Trichuris Dysentery Syndrome who, despite clinical efforts, died as a result of profuse haemorrhage and secondary anaemia (Wong and Tan 1961, Fisher and Cremin 1970) or of intussusception (Reeder, Astacio & Theros 1968). Although there continue to be reports of the syndrome, a fatal outcome in a clinical setting today would suggest inappropriate management. The picture in the community, however, may be rather different since, in the absence of specific diagnosis, the aetiology of chronic bloody dysentery may be unrecognized. Nevertheless, mortality is undoubtedly a rare consequence of trichuriasis.
The profound anaemia of hookworm infection is life-threatening and has been estimated, although the means of estimation are not described, to result in 65 000 deaths per year (World Health Organization 1992). Again there is a lack of empirical data, presumably in this case because of the difficulty in identifying the etiology of anaemia-related deaths. A figure of 4 300 deaths was used by Murray and Lopez (1996) and was distributed to ascribe the highest proportion of mortality to women of childbearing age (15–44 years) and to older age groups. The distribution of deaths between regions was divided in the same way as for the other infections.
In including these estimates of mortality we recognize that they are unsupported by vital registration statistics. But it should also be recognized that intense infection is most prevalent in the poorest regions of the poorest countries. In such areas mortality may be most likely because of limited access to appropriate management, while both the diagnosis of cause of death and its registration may be least reliable. There is clear evidence that deaths do occur. What is unclear is the extent of this mortality." Bundy et al. 2004, Pgs 281-82.
• 25.
• 26.
• 27.
"Schools offer a readily available, extensive, and sustained infrastructure with a skilled workforce that is in close contact with the community. With support from the local health system, teachers can deliver the drugs safely. Teachers need only a few hours of training to understand the rationale for deworming and to learn how to give out the pills and keep a record of their distribution." Hotez et al. 2006, Pg 473.
• 28.
• 29.
“The CRs [cure rates] and ERRs [egg reduction rates] for the recommended doses of albendazole and mebendazole in treating A. lumbricoides, T. trichiura and hookworm are shown in Fig. 1. For A. lumbricoides, all drug regimens are highly effective, with median CRs of 95–97% and median ERRs of 99–100%. More importantly, there is little variation in the drug efficacies between the individual studies for any given drug regimen. In contrast, the patterns for T. trichiura and hookworm are more variable. For both of these parasites, there is marked heterogeneity both in the drug efficacy between drug regimens and the drug efficacy between individual studies administering a given regimen. This is particularly evident for T. trichiura, for which both single-dose albendazole and mebendazole show poor levels of drug efficacy, particularly in terms of CR. For example, the median CR for a single dose of 400 mg albendazole is only 38%, with individual studies reporting CRs ranging from 4.9% to 99.3%.” Bennett and Guyatt 2000, Pg 72.
• 30.
World Health Organization. Worldwide prevalence of anaemia 1993–2005: WHO global database on anaemia. See Table 2 (page 4) for haemoglobin thresholds by age.
• 31.
Danso-Appiah et al. 2008, Analysis 5.2 (Pg 60).
• 32.
Danso-Appiah et al. 2008, Analysis 2.2 (Pg 58).
• 33.
Danso-Appiah et al. 2008 Analysis 1.2 (Pg 57).
• 34.
"Metrifonate is partially effective against hookworm infection (Kurz et al. 1986), and therefore, inclusion of studies using metrifonate would potentially overestimate the impact of praziquantel treatment." Smith and Brooker 2010, Pg 791.
• 35.
• "Children were invited to participate in the study if they were infected with both S. haematobium and at least one species of geohelminth." Beasley et al. 1999, Pg 745.
• "Daily urinary iron loss and physical fitness were determined in Kenyan primary school children who had low-medium (16-177 eggs/10 ml adj) or high (200-1,194 eggs/10 ml adj) S. hematobium egg counts compared with a matched group of control or uninfected children before and after antischistosomal treatment with metrifonate. The 3 groups did not differ significantly before treatment in age, sex ratio, anthropometry or prevalence of other parasite infections. Before treatment, mean iron loss in the high egg count group (n = 14) was 652 micrograms/24 hr and was significantly higher than losses in the low-medium and control groups (losses = 278, 149 micrograms; n = 19, 12 respectively). Iron loss in infected children was correlated with egg count (r = 0.40) and log of egg count (r = 0.56, P less than 0.0003). After treatment iron loss decreased in the infected groups and post-treatment iron losses did not differ significantly. Physical fitness scores, measured with the Harvard Step Test, showed that the control group (score 81) was significantly more fit than the high egg count group (score 69) before treatment. Fitness scores improved significantly in both infected groups after treatment, and post-treatment fitness scores did not differ significantly between the 3 groups. This study provides evidence that relatively heavy infections of S. hematobium can cause urinary iron loss which, if it persists, is great enough to produce iron deficiency anemia and can also reduce physical fitness of children, but that both of these negative effects are reversible with treatment." Stephenson et al. 1985a, abstract.
• "Relationships of S. haematobium, hookworm and malarial infections to growth 6 months after metrifonate treatment were studied in Kenyan primary school children in an area where poor growth, S. haematobium and hookworm were common and malaria was endemic. All children with light-moderate S. haematobium infections (1-500 eggs/10 ml adj) in 4 schools were examined (Exam 1), allocated at random to either placebo (MIP, n = 198) or metrifonate treatment (MIT, n = 201) groups, treated, and examined again 6 months later (Exam 2). An additional 19 heavily infected children (HIT group greater than 500 eggs/10 ml adj) were treated immediately after Exam 1 and also followed. The MIT and HIT groups exhibited more rapid growth between Exam 1 and 2 than did the placebo group. The MIT group gained significantly (P less than 0.001) more than the MIP group in weight (0.8 kg), percent weight for age (2.3 percentage points), weight for height squared (0.04 units), arm circumference (0.4 cm), percent arm circumference for age (1.7 percentage points) and in triceps and subscapular skinfold thicknesses. In addition, the placebo group showed statistically significant decreases between exams in percentage weight for age, percent arm circumference for age, both skinfold thicknesses for age and no significant increase in percent height for age while the MIT group exhibited highly significant increases in all anthropometric parameters …"Stephenson et al. 1985b, abstract.
• "We studied the growth of comparable groups of children with light to moderate Schistosoma haematobium infections who received a single dose of metrifonate (MT, 10 mg/kg), praziquantel (PR, 40 mg/kg), or a placebo (PL). Children were re-examined 8 months later. The MT and PR groups gained significantly more than the placebo group in weight, percent weight for age, percent weight for height, arm circumference, and in triceps and subscapular skinfold thicknesses. The MT and PR groups did not differ significantly from each other. The placebo group showed statistically significant decreases or no change between exams in percent weight for age, percent weight for height, percent arm circumference, and both skinfold thicknesses; the MT and PR groups exhibited highly significant increases in these parameters (P less than 0.0002). The intensity of S. haematobium infection had decreased significantly in both the MT and PR groups, but especially in the PR group. Multiple regression analyses showed that a decrease in the intensity of S. haematobium infection was by far the most important predictor of growth rate after treatment for all 5 anthropometric measures tested; decreases in the intensity of hookworm infection was also significant for 2 of the 5 measures." Stephenson et al. 1985b, abstract.
• 36.
“[Only infected children included, Single dose (comparison 1):] For haemoglobin, the mean value was slightly higher at the end of the study with deworming (mean difference 0.37 g/dL, 95% CI 0.10 to 0.64; 108 participants, two trials; Analysis 1.7).” Taylor-Robinson et al. 2012, Pg 19.
• 37.
Taylor-Robinson et al. 2012. Pgs 19-20:
• “[Whole population treated, Single dose (comparison 3):] For haemoglobin, three studies were in moderate prevalence areas, and two in low prevalence areas. No effect was demonstrable in individual studies or on meta analysis (mean difference 0.02 g/ dL, 95% CI -0.05 to 0.09; 1992 participants, four trials; Analysis 3.6).”
• “[Whole population treated, Multiple doses, less than a year of follow up (comparison 4):] For haemoglobin, four trials reported this, with no difference between intervention and control apparent (Analysis 4.6).”
• “[Whole population treated, Multiple doses, follow up of one year or more (comparison 5):] For haemoglobin, deworming drugs did not increase haemoglobin compared with control (mean difference -0.02 g/dL, 95% CI -0.30 to 0.27; 1365 participants, two trials; Analysis 5.3).”
Examining the three separate analyses leads us to believe that combining the underlying studies together into a single meta-analysis would not lead to a statistically significant effect, but we have not carried out such a calculation.
• 38.
• “Although Group 1 pupils had higher hemoglobin concentrations than Group 2 pupils in early 1999, the difference is not statistically different than zero. Recall that anemia is the most frequently hypothesized link between worm infections and cognitive performance (Stoltzfus et al. (1997)). Severe anemia is relatively rare in Busia: fewer then 4 percent of pupils in Group 2 schools (comparison schools in 1998) fell below the Kenya Ministry of Health anemia threshold of 100 g/L in early 1999 before deworming treatment. This is low relative to many other areas in Africa, of which many have substantial helminth problems: a recent survey of studies of anemia among school children in less developed countries (Hall and Partnership for Child Development (2000)) indicates that there is considerably less anemia in Busia than in samples from Ghana, Malawi, Mali, Mozambique, and Tanzania.” Miguel and Kremer 2004, Pg 174.
• Taylor-Robinson et al. 2012 exclude Miguel and Kremer 2004's haemoglobin results because the method of selection for the subpopulation was not described:
• “For haemoglobin, weight and height the outcomes appear to have been measured on a sub-sample of the quasi-randomized pop- ulation. For haemoglobin this was approximately 4% (778/20,000) - it is unclear how these were selected.” Pg 69.
• “Miguel 2004 (Cluster) demonstrated no significant effect on weight-for-age z score, height-for-age z score, and haemoglobin (only 4% of quasi-randomized participants followed up for haemoglobin outcome; the proportion followed up for nutritional outcomes is unclear).” Pg 20.
However, further exposition from the authors has specified that haemoglobin data was collected from a random subsample of trial participants: “The reason for the smaller Hb sample is that it was only collected for a random subsample, information that is readily available from the authors.” Duflo et al. 2012.
• 39.
"In 2007, a systematic review of randomised controlled trials (RCTs) investigating the impact of anthelmintic treatment reported an increase in haemoglobin concentration (Hb) of 1.71 g ⁄ l after treatment (Gulani et al. 2007). But this review did not distinguish between different helminth species or account for intensity of infection, which may have underestimated the true treatment effect (Awasthi & Bundy 2007); the effect of treatment is likely to be greatest where hookworm is most prevalent and intense … The present work aims to quantify the impact of hookworm infection and anthelmintic treatment using benzimidazoles, albendazole and mebendazole, among non-pregnant populations in hookworm-endemic areas. Specifically, we review available data from cross-sectional studies that investigated the relationship between intensity of hookworm infection and Hb. We also summarise available data from RCTs and pre- and post-intervention
observational studies that compared the effects of benzimidazole treatment, either alone or in combination with the anti-schistosomal drug praziquantel, on Hb and anaemia levels. Finally, based on the value of combining deworming with micronutrient supplementation in children, we evaluate the impact of treatment in combination with iron supplementation (Hall 2007)." Smith and Brooker 2010, Pg 777.
• 40.
"Among RCTs using albendazole, impact of treatment corresponded to a 1.89 g ⁄ l increase (95%CI: 0.13–3.63) in mean Hb while mebendazole had no impact. There was a positive impact of 2.37 g ⁄ l (95%CI: 1.33–3.50) on mean Hb when albendazole was co-administered with praziquantel, but no apparent additional benefit of treatment with benzimidazoles combined with iron supplementation. " Smith and Brooker 2010, abstract.
• 41.
Smith and Brooker 2010. Table 2, pg 784.
• "Children were invited to participate in the study if they were infected with both S. haematobium and at least one species of geohelminth." Beasley et al. 1999, Pg 745.
• “Multi-helminth chemotherapy was given to children found infected with any of the common geohelminths or S. mansoni with albendazole and praziquantel, respectively.” Friis et al. 2003, Pg 574. Despite this quotation, it is not clear whether or not Friis et al involved pre-screening; the combined prevalence of the three STHs and schistosomiasis was nearly 200%: “The prevalence of malaria parasitaemia was 59%, whereas it was 14% for A. lumbri- coides, 45% for T. trichiura, 55% for hookworm and 71% for S. mansoni.” Friis et al. 2003, Pg 575.
• 42.
Smith and Brooker 2010. Table 2, pg 784. (92.9+59.4+51.9+51.4)/4 = 63.9.
• 43.
We have particular worries about Gulani et al. 2007 and King, Dickman, and Tisch 2005:
• Gulani 2007. This review implies that the reviewed studies were unscreened (treating the population as a whole), saying, "The results from these largely heterogeneous data derived from randomised controlled trials show that deworming without previous screening marginally improves haemoglobin concentration." However, this is the only statement that appears in the review regarding pre-screening, and at least one of the studies in the review did in fact involve pre-screening. "A single dose had no statistically significant effect on haemoglobin levels (646 children, 4 trials, Analysis 2.7). Removing the one trial that screened children for infection, Adams 1994, did not alter this result (Analysis 8.2: subgroup 8)." Taylor-Robinson, Jones, and Garner 2007, Pg 21. Accordingly, we prefer to rely on the Taylor-Robinson et al. 2012 Cochrane review of STH treatment, which distinguishes clearly between screened and unscreened studies.
• King, Dickman, and Tisch 2005. The meta-analysis of five studies of the effect of schistosomiasis treatment haemoglobin status is less helpful for us than the Danso-Appiah et al. 2008 Cochrane review of schistosomiasis treatment because:
• One of the studies—with the largest effect—is of S. japonicum, which we exclude from this analysis.
• Three of the studies (the first three listed) are all present in Danso-Appiah et al. 2008, discussed above in this section; all involved pre-screened treatment, and at least two (if not three - the reference is ambiguous) involve combination deworming, not treatment for schistosomiasis specifically.
• The fifth study also involves S. japonicum and combination deworming, not treatment for schistosomiasis specifically: "A double-blind placebo-controlled study of the concurrent administration of albendazole and praziquantel was conducted in>1500 children with high prevalences of geohelminths and schistosomiasis. The study sites were in China and the Philippines, including 2 strains of Schistosoma japonicum, and 2 different regions of Kenya, 1 each with endemic Schistosoma mansoni or Schistosoma haematobium." Olds et al 1999, abstract. The study found statistically significant benefits on haemoglobin levels in the schistosomiasis treatment arms but not in the STH arms. It is not obvious to us why Olds et al. 1999's schistosomiasis haemoglobin results are excluded from the Danso-Appiah et al. 2008 Cochrane review.
• 44.
See Smith and Brooker 2010, Table 2 (Pgs 784-785).
• 45.
Mathers, Lopez, and Murray 2006, Pg 113.
• 46.
• "Anaemia: expressed in mean packed cell volume (PCV); it is equivalent to the percentage haematocrit. Results given in g/decilitre were converted with a standard factor of 3:1, that is, 1 g/decilitre equals 3%PCV." Lengeler 2004, Pg 4.
• " The nine trials that measured anaemia were conducted in areas of stable malaria; six trials compared treated to untreated nets (Appendix 13), and three trials compared treated nets to untreated nets (Appendix 14).
Overall, the packed cell volume of children in the ITN group was higher by 1.7 absolute packed cell volume per cent compared to children not using nets. When the control group used untreated nets, the difference was 0.4 absolute packed cell volume per cent." Lengeler 2004, Pg 4.
• 0.4 packed cell volume converts to (0.4/3 * 10) = 1.3 g/L using the conversion factor provided (and multiplying by 10 to convert from g/dL to g/L). Similarly, 1.7 packed cell volume converts to (1.7 / 3 * 10) = 5.7 g/L.
• 47.
• 48.
Taylor-Robinson et al. 2012, Pg 4-5
• 49.
"Following World Health Organization recommendations (WHO (1992)), schools with geohelminth prevalence over 50 percent were mass treated with albendazole every six months, and schools with schistosomiasis prevalence over 30 percent were mass treated with praziquantel annually. All treatment schools met the geohelminth cut-off in both 1998 and 1999. Six of twenty-five treatment schools met the schistosomiasis cut-off in 1998 and sixteen of fifty treatment schools met the cut-off in 1999." Miguel and Kremer 2004, Pgs 168-169.
• 50.
" Intestinal helminths—including hookworm, roundworm, whipworm, and schistoso- miasis—infect more than one-quarter of the world’s population. Studies in which med- ical treatment is randomized at the individual level potentially doubly underestimate the benefits of treatment,missing externality benefits to the comparison group from re- duced disease transmission, and therefore also underestimating benefits for the treat- ment group. We evaluate a Kenyan project in which school-based mass treatment with deworming drugs was randomly phased into schools, rather than to individuals, allow- ing estimation of overall program effects. The program reduced school absenteeism in treatment schools by one-quarter, and was far cheaper than alternative ways of boost- ing school participation. Deworming substantially improved health and school partic- ipation among untreated children in both treatment schools and neighboring schools, and these externalities are large enough to justify fully subsidizing treatment. Yet we do not find evidence that deworming improved academic test scores." Miguel and Kremer 2004, abstract.
• 51.
Miguel and Kremer 2004, Table V, Pg 173.
• 52.
King, Dickman, and Tisch 2005, Table 1, Pg 1565.
• 53.
King, Dickman, and Tisch 2005, Table 1 and Table 2, Pg 1565.
• 54.
"The use of observational studies and the inclusion of select subpopulation surveys (eg, school-age children) allows possible confounding effects on the observed results, thus obscuring the assessment of attributable risk due to schistosomiasis. Schistosomiasis is inevitably associated with other potential causes for morbidity and disease, especially with restricted access to safe water supplies and with co-infection by other parasites. Several studies have reported that age, sex, socioeconomic status, and diet can significantly modify the risk for schistosomiasis-associated morbidity. In nine of ten individual surveys that adjusted for some or all of these cofactors, the effect of schistosomiasis on measured disability outcomes has remained significant. However, details of these potentially modifying factors were not available in most of the studies included in our analysis, and so adjustment was not attempted in the estimation of our summary statistics." King, Dickman, and Tisch 2005, Pg 1566-1567.
• 55.
“Many of our gold-standard' field diagnostic tests are too insensitive for such research. That is, the standard tests routinely misdiagnose as uninfected' those individuals who actually carry light infections (Wilson et al., 2006). In the case of schistosomiasis, the Kato-Katz stool test used to detect and quantify intensity of intestinal schistosome infections is only 40% – 60% sensitive when performed on a single stool specimen (de Vlas et al., 1993, Carabin et al., 2005). While this microscopic exam may variously detect eggs of hookworm, Ascaris, Trichuris, and other intestinal worms, it is not particularly sensitive for detection of hookworm, and will often miss the other STH species if their infection intensities happen to be light. As a result, this standard field screening approach will significantly misclassify (by underdiagnosis) the clinical burden of single and mixed parasite infection.
Inadequate testing for Schistosoma or STH results in misclassification bias that substantially reduces our power to detect numerically small, but clinically relevant infection-related differences in health outcomes, including anaemia (reductions in haemoglobin level), or stunting and chronic undernutrition (age-related height and weight deficits). Misdiagnosis, when combined with co-morbid conditions that are competing causes of disease (e.g., hookworm or malaria), effectively limits our appreciation of both the individual and the combined health impacts of common human parasitic infections.” King 2010, Pg 2.
• 56.
We primarily refer here to Van der Werf and de Vlas 2001, the working paper version (with more detail) than Van der Werf et al. 2003.
• 57.
Van der Werf and de Vlas 2001, Pgs 3-5.
• 58.
Van der Werf and de Vlas 2001, Pgs 13-15 discusses the method of generating estimates of the relationship between schistosomiasis prevalence and the prevalence of the various sequelae discussed. Charts throughout the remainder of the review show the curve that is fitted to the data points provided by the studies reviewed. We find that the attribution method (a) ignores the possibility of confounders, i.e., the possibility that non-schistosomiasis factors (such as poverty and general hygiene) are correlated both with schistosomiasis and with the sequelae discussed; (b) is prone to exaggerate the strength of any given relationship (as we believe can be seen by examining the charts).
• 59.
Van der Werf et al. 2003, Pg 132.
• 60.
• 61.
"Van der Werf MJ, de Vlas SJ. Morbidity and infection with schistosomes or
soil-transmitted helminths. Rotterdam, Erasmus University, 2001." World Health Organization. The global burden of disease: 2004 update, Pg 139.
• 62.
One of the most severe symptoms discussed is severe hematemesis (vomiting blood). But Van der Werf et al. rely on self-reported survey data of hematemesis, (Van der Werf and de Vlas 2001, Pgs 62-63) which is problematic because vomiting blood can easily be confused with coughing blood or even a nosebleed (National Institutes of Health. Vomiting blood).
• 63.
• "The course of a schistosome infection complicates the prediction of mortality figures. Often there is a long interval between infection and death, most patients that will die of schistosomiasis die more than 10 years after the initial infection. Only a few people will die during or just after the invasion stage of the infection (Katayama fever). Another complicating factor is that the direct cause of death is liver failure or hematemesis. These sequelae can also be induced by other causes, which are prevalent in countries endemic for schistosomiasis, such as hepatitis B infection or excessive alcohol use. Both the long time span between initial infection and mortality and the aspecific direct cause of death complicate the predictions of mortality." Van der Werf and de Vlas 2001, Pg 65.
• "The GBD 2002 estimated that schistosomiasis was responsible for around 15,000 deaths globally (excluding attributable cancer deaths), although others have argued that the figure should be much higher. [Van der Werf and de Vlas 2001], using limited data from Africa, estimated that schistosomiasis caused 210,000 deaths annually. A literature review found limited data from studies with small sample sizes, limiting ability to extrapolate to population level. In the absence of usable studies, a back-calculation method was employed to estimate approximate case fatality rates for two populations with significant numbers of schistosomiasis deaths recorded in death registration data … [the] revised global estimate for deaths due to schistosomiasis (excluding cancers caused by schistosomiasis) [is] 41,000 for 2004." World Health Organization. The global burden of disease: 2004 update, Pg 110.
• 64.
• 65.
“Estimated total malaria deaths for 2004 were 0.89 million, of which 771 000 were in children aged under five years. These estimates are lower than those in the GBD 2002 (1.27 million deaths, of which 1.15 million deaths were of children aged under five years).” World Health Organization. The global burden of disease: 2004 update, Pg 109.
• 66.
GiveWell. Schistosomiasis mortality analysis "YLL per DALY" rows on "From GBD" sheet. For more on these terms, see our discussion of DALYs.
• 67.
Bundy et al. 2004, Pgs 269-273.
• 68.
• “There are also more serious consequences of infection, largely associated with obstruction of ducts and intestinal lumen by these large worms. Systematic data on these acute complications are lacking, but the numerous reports based on inpatient records suggest that ascariasis is an important cause of hospitalization in endemic areas (reviewed by Pawlowski & Davies 1989). Ascariasis was the cause of 2.6 per cent of all hospital admissions in Kenya in 1976, and 3 per cent in a children’s hospital in Myanmar between 1981 and 1983 (Stephenson, Lathan & Oduori 1980, Thein-Hlaing 1987). Complications resulting from ascariasis accounted for 0.6 per cent of all admissions to a paediatric surgery department in South Africa in 1987, 5.8 per cent of emergency admissions to a hospital in Mexico in 1975, 10.6 per cent of admissions for acute abdominal emergency to a children’s hospital in Myanmar, and between 0.8 and 2.5 per cent of admissions in a survey of hospitals in China (Flores & Reynaga 1978, World Health Organization 1987, Thein-Hlaing et al. 1990). The most common abdominal emergencies presenting are intestinal obstruction and biliary ascariasis, the proportions varying geographically, perhaps because of differences in diagnostic procedures (Maki 1972). The classical surgical presentation is in patients between 3 and 10 years of age, although adults also may be affected (Davies and Rode 1982, Chai et al. 1991). Laparotomy attributable to ascariasis was the second most common cause of all laparotomies in 2–4 year old children in Durban, Lishiu and Sao Paulo, and the fifth or sixth cause in adults in Myanmar, China and Nigeria (World Health Organization 1987). Reports using unstandardized indicators indicate that between 0.02 and 0.9 per cent of infections may require hospitalization (Pawlowski & Davies 1989), the proportion presumably varying with the local intensity of infection. Thus of the 4.5 per cent of infected children under 10 years of age who are here estimated to exceed the higher threshold of infection, between 0.4 and 20 per cent are likely to require hospitalization. These children will suffer a severely disabling condition, which may be life-threatening (see below), but which can be alleviated by appropriate clinical management. If it is assumed that such cases are managed appropriately, then the duration of disability is likely to be a few weeks. Complicated ascariasis has a reported history of over 10 days followed by 5 days of management, while the management of biliary ascariasis involves 4–6 weeks of observation before opting for surgical intervention (Davies & Rode 1982). The disability therefore is considered, for the present analyses, to be contemporaneous with infection, to have a duration of 4 weeks, to have a severity of Class III (Murray & Lopez 1996), and to affect 5 per cent of children under 15 years of age with burdens exceeding the higher threshold. Note that this is a conservative assumption since it excludes the documented, though rare, occurrence of complications in adults. Furthermore, the case rates for children are based on records from tertiary facilities to which a substantial proportion of the most disadvantaged and heavily infected children may have limited access.” Bundy et al. 2004, Pgs 269-270.
• “[Mortality] is the weakest area of DALY estimation because of the lack of empirical data. Ascariasis is the best documented helminthiasis in terms of mortality. There are numerous studies of case fatality rates in hospitals (reviewed by Pawlowski & Davies 1989). These indicate that the outcome of acute complications of ascariasis is modified by the general health status of the patient, the intensity of infection and the medical procedure (see Pawlowski & Davies 1989). In one hospital in Sao Paulo, for example, the case fatality rate was 1.35 per cent for conditions which could be managed conservatively, and 26.1 per cent in patients undergoing surgery (Okumura et al. 1974). These studies confirm that death is a not infrequent outcome of complications of ascariasis, but provide little insight into mortality rates in the community. An extrapolation from central hospital data in Myanmar suggests there are 0.008 deaths per 1000 infections per year (Thein-Hliang 1987), but this is considered to be a considerable underestimate since only a small proportion of children with severe complications are likely to have access to the hospital (Pawlowski & Davies 1989). Only two population based estimates are available: one for the Darmstadt epidemic (0.1 deaths per 1000 infected per year) (Krey 1949) and one for Japan prior to national control efforts (0.061 deaths per 1000 infected per year) (Yokogawa 1976). Based on the present estimate of global infection, these rates suggest that between 8 000 and 14 000 children die each year.
The final estimate of global mortality due to ascariasis in the Global Burden of Disease project was 11 000 concentrated in the age group 5–14 years (Murray and Lopez 1996). Mortality was distributed by region in proportion to the region-specific population at risk (above higher threshold) and to the region-specific probability of dying between birth and 4 years (World Bank 1993). This last quantity was added to take account of regional variations in access to acute medical services.”Bundy et al. 2004, Pgs 281-282.
• 69.
• “The disability therefore is considered, for the present analyses, to be contemporaneous with infection, to have a duration of 4 weeks, to have a severity of Class III (Murray & Lopez 1996), and to affect 5 per cent of children under 15 years of age with burdens exceeding the higher threshold.” Bundy et al. 2004, Pgs 269-270.
• Table 9.9 reports that 525 per 100,000 5-14 year olds in Sub-Saharan Africa have worm burdens exceeding the higher threshold. Bundy et al. 2004, Pg 268.
• 5% of 525 per 100,000 is 0.0002625.
• 70.
Unfortunately, the published version (Bundy et al. 2004) does not contain regionally disambiguated mortality estimates, so we use the figure from the unpublished working paper that was used to generate the DCP's cost-effectiveness estimate, of .3 per 100,000 population. Bundy et al. Intestinal nematode infections, Table 9, Pg 72.
• 71.
“Particularly large burdens of T. trichiura may result in the “classical” dysenteric form of trichuriasis, synonymous with Trichuris Dysentery Syndrome (Ramsey 1962) and Massive Infantile Trichuriasis (Kouri & Valdes Diaz 1952). This typically occurs in children between 3 and 10 years of age and is associated with burdens involving at least several hundreds of worms carpeting the colonic mucosa from ileum to rectum. The colon is inflamed, oedematous and friable, and often bleeds freely (Venugopal et al. 1987). Reviews of case histories suggest that the mean duration of disease at the time of presentation is typically in excess of 12 months and that relapse after treatment frequently occurs (Gilman et al. 1983, Cooper et al. 1990, Callender et al. 1994). The probability of relapse, and of a child experiencing multiple episodes, is greatly enhanced because a proportion of heavily infected children are predisposed to reacquire heavy infection even after successful treatment (Bundy et al. 1987a, 1987b). The typical signs of the syndrome (see Bundy & Cooper 1989a for a review of 13 studies involving 697 patients) are rectal prolapse, tenesmus, bloody mucoid stools (over months or years), growth stunting, and a profound anaemia, which may lead to a secondary anaemia. The complete spectrum of clinical features associated with the syndrome occurs in some 30 per cent of children with intense trichuriasis. Many of the major clinical effects are reversible by appropriate therapy (Cooper, Bundy & Henry 1986, Gilman et al. 1983), hence the disability is considered here to be a contemporaneous consequence of infection.” Bundy et al. 2004, Pgs 271-273.
• 72.
• “For the present analyses it is assumed that the disability is contemporaneous with infection, has a duration of over 12 months, has a severity of Class II (Murray & Lopez, 1996), and affects 20 per cent of children under 15 years of age experiencing the higher threshold of intensity.” Bundy et al. 2004, Pg 273.
• In Sub-Saharan Africa, 680 per 100,000 school-aged children have worm burdens above the higher threshold of intensity. Bundy et al. 2004, Table 9.10, Pg 270.
• 20% of 680 per 100,000 is 0.00136.
• 73.
“No population-based mortality estimates have been published for T. trichiura infection. Prior to the advent of safe and effective therapy for T. trichiura infection in the late 1970s a number of reports described paedi- atric inpatients with Trichuris Dysentery Syndrome who, despite clinical efforts, died as a result of profuse haemorrhage and secondary anaemia (Wong and Tan 1961, Fisher and Cremin 1970) or of intussusception (Reeder, Astacio & Theros 1968). Although there continue to be reports of the syndrome, a fatal outcome in a clinical setting today would suggest inappropriate management. The picture in the community, however, may be rather different since, in the absence of specific diagnosis, the aetiology of chronic bloody dysentery may be unrecognized. Nevertheless, mortality is undoubtedly a rare consequence of trichuriasis.” Bundy et al. 2004, Pg 282.
• 74.
• “With hookworm the major consequence of infection is anaemia (see Schad & Banwell 1984 and Crompton & Stephenson 1990 for reviews of the extensive literature in this area). Anaemia is associated with: reduced worker productivity; reduced adult and child fitness; reduced fertility in women; reduced intrauterine growth rate, prematurity and low birth weight; and cognitive deficits (Fleming 1982, Stephenson et al.1993, Pol- litt et al. 1986, Boivin et al. 1993) (Tables 9.12a and 9.12b). Since the higher threshold for the intensity of hookworm infection was selected on the basis of the development of anaemia, it is here assumed that 100 per cent of those exceeding this threshold suffer at least Class I disability. As discussed elsewhere (World Bank 1993), the consequences of anaemia will be more serious for a subset of the affected population, resulting in Class II and Class III disability. The disability weight distribution for anaemia was used for the present analyses, 70 per cent in Class II, 24 per cent in Class III and 6 per cent in Class IV.” Bundy et al. 2004, Pg 273.
• “The profound anaemia of hookworm infection is life-threatening and has been estimated, although the means of estimation are not described, to result in 65 000 deaths per year (World Health Organization 1992). Again there is a lack of empirical data, presumably in this case because of the difficulty in identifying the etiology of anaemia-related deaths. A figure of 4 300 deaths was used by Murray and Lopez (1996) and was distributed to ascribe the highest proportion of mortality to women of childbearing age (15–44 years) and to older age groups. The distribution of deaths between regions was divided in the same way as for the other infections.” Bundy et al. 2004, Pg 282.
• 75.
Mathers, Lopez, and Murray 2006, Pg 162.
• 76.
World Health Organization. Global Burden of Disease 2004: Deaths by Age, Sex, and Cause for the year 2004, sheet 'AFR', cells G39+G40+G41.
• 77.
"This study evaluates the economic consequences of the successful eradication of hookworm disease from the American South, which started circa 1910. The Rockefeller Sanitary Commission (RSC) surveyed infection rates and found that 40 percent of school-aged children in the South were infected with hookworm. The RSC then sponsored treatment and education campaigns across the region. Follow-up studies indicate that this campaign substantially reduced hookworm disease almost immediately. Areas with higher levels of hookworm infection prior to the RSC experienced greater increases in school enrollment, attendance, and literacy after the intervention. No significant contemporaneous results are found for literacy or occupational shifts among adults, who had negligible prior infection rates. A long-term follow-up indicates a substantial gain in income that coincided with exposure to hookworm eradication. I also find evidence that the return to schooling increased with eradication." Bleakley 2007, abstract.
• 78.
"First, the RSC sent teams of health-care workers to counties to administer and dispense deworming treatments free of charge. RSC dispensaries visited a large and mostly contiguous fraction of the South and the campaign treated over 400,000 individuals with deworming medication.1 Second, the RSC sought to educate doctors, teach- ers, and the general public on how to recognize the symptoms of . hookworm disease so that fewer cases would go untreated. An- other part of this publicity campaign included education about the importance of hygiene, especially with regard to the use of sanitary privies. In this period, oftentimes even public buildings such as schools and churches did not have such hygienic facilities. Follow-up surveys conducted afterward showed a substantial de- cline in hookworm infection [RSC 19151. Although the stated goal of eradication was not achieved, the hookworm-infection rate of the region did drop by more than half, and fewer extreme cases of the disease went unnoticed and untreated." Bleakley 2007, Pg 77.
• 79.
"The anti-hookworm campaign achieved considerable progress against the disease in less than a decade. This is a sudden change on historical time scales. Moreover, I examine outcomes over a fifty-year time span, which is unquestionably long relative to the five-year RSC intervention … How realistic is the assumption that areas with high infec- tion rates benefited more from the eradication campaign? Re- surveys found a decrease in hookworm infection of 30 percentage points across the infected areas of the South. Such a dramatic drop in the region's average infection rate, barring a drastic reversal in the pattern of hookworm incidence across the region, would have had the supposed effect of reducing infection rates more in highly infected areas than in areas with moderate infec- tion rates. Figure I presents data on this issue.6 The basic as- sumption of this section that areas where hookworm was highly endemic saw a greater drop in infection than areas with low infection rates is born out across states and across counties." Bleakley 2007, 80-81.
• 80.
See Bleakley 2007, Figure III, Pg 100. A similar approach is taken for school attendance - see Figure II, Pg 88.
• 81.
"The finding that highly infected counties experienced surges in school attendance is not sensitive to controlling for a variety of alternative hypotheses. I contrast these hypotheses with the effect of hookworm and the RSC by starting with (1) and (2) and adding plausible proxies for the supposed confounds. The control variables enter into the specification inter- acted with Postt . These results are found in Panel B of Table III. In every case, the added control variables are jointly sig- nificant at conventional confidence levels. The new controls include variables for health and health policy, educational resources, race and race relations, urbanization and land use, and parental background. (See the Appendix for a complete list of controls and their sources.)" Bleakley 2007, Pg 90.
• 82.
"I also consider the role played by the quantity of and returns to schooling in the wage results. Controlling directly for education does not significantly change the estimated effect of hookworm treatment. Additionally, I can easily reject, for conventional re- turns to schooling, the hypothesis that the wage effect is due entirely to a rise in education.2o However, the fact that I estimate increases in literacy without concomitant rises in the quantity of schooling suggests an alternative hypothesis: changes in quality. In particular, it may be that students spend the same number of years in school, but the time is better spent. For example, there might be less absenteeism, or students might be better equipped to absorb the material while in school. As was shown above, students were less likely to work while in school and were more more likely to be literate, following hookworm eradication. This suggests that the return to schooling was raised by the hookworm intervention." Bleakley 2007, Pg 97.
• 83.
"Following World Health Organization recommendations (WHO (1992)), schools with geohelminth prevalence over 50 percent were mass treated with albendazole every six months, and schools with schistosomiasis prevalence over 30 percent were mass treated with praziquantel annually. All treatment schools met the geohelminth cut-off in both 1998 and 1999. Six of twenty-five treatment schools met the schistosomiasis cut-off in 1998 and sixteen of fifty treatment schools met the cut-off in 1999." Miguel and Kremer 2004, Pg 168-169.
• 84.
"The 75 schools involved in this program were experimentally divided into three groups (Groups 1, 2, and 3) of 25 schools each: the schools were first stratified by administrative sub-unit (zone), listed alphabetically by zone, and were then listed in order of enrollment within each zone, and every third school was assigned to a given program group; supplementary appendix A contains a detailed description of the experimental design. The groups are well-balanced along baseline demographic and educational characteristics, both in terms of mean differences and distributions, where we assess the latter with the Kolmogorov-Smirnov test of the equality of distributions (Table 1). The same balance is also evident among the subsample of respondents currently working for wages (see Supplementary Appendix Table A1)." Baird 2011, Pg 6-7.
• 85.
• "Children in Group 1 and 2 schools thus were assigned to receive 2.41 more years of deworming than Group 3 children on average (Table 1), and these early beneficiaries are what we call the deworming treatment group below. We focus on a single treatment indicator rather than separating out effects for Group 1 versus Group 2 schools since this simplifies the analysis, and because we find few statistically significant differences between Group 1 and 2 (not shown)." Baird 2011, Pg 7.
• "We focus on the KLPS-2 data, rather than KLPS-1, in this paper since it was collected at a more relevant time point for us to assess adult life outcomes: the majority of sample respondents are adults by 2007-09 (with median age at 22 years as opposed to 18 in KLPS-1), have completed their schooling, many have married, and a growing share are engaging in wage employment or self- employment, as shown graphically in Figure 2." Baird 2011, Pg 10.
• 86.
"The question of whether – and how much – child health gains improve adult living standards is of major intellectual interest and public policy importance. We exploit a prospective study of deworming in Kenya that began in 1998, and utilize a new dataset with an effective tracking rate of 83% over a decade, at which point most subjects were 19 to 26 years old. Treatment individuals received two to three more years of deworming than the comparison group. Among those with wage employment, earnings are 21 to 29% higher in the treatment group, hours worked increase by 12%, and work days lost to illness fall by a third. A large share of the earnings gains are explained by sectoral shifts, for instance, through a doubling of manufacturing employment and a drop in casual labor. Small business performance also improves significantly among the self-employed. Total years enrolled in school, test scores and self-reported health improve significantly, suggesting that both education and health gains are plausible channels." Baird 2011, abstract.
• 87.
• "These labor market gains are accompanied by marked shifts in employment sector for the treatment group, with more than a doubling of well-paid manufacturing jobs (especially among males) and declines in both casual labor and domestic services employment. Changes in the subsector of employment account for nearly all of the earnings gains in deworming treatment group in a Oaxaca-style decomposition." Baird 2011, Pg 3.
• "The most striking impacts are a large increase in manufacturing work for deworming treatment individuals, with a point estimate of 0.072 (s.e. 0.024, Table 4), signifying a tripling of manufacturing employment overall. The gains among males are particularly pronounced at 0.090(s.e. 0.030). The two most common types of manufacturing jobs in our sample are in food processing and textiles, with establishments ranging in size from small local corn flour mills up to large blanket factories in Nairobi. On the flip side, casual labor employment falls significantly (-0.038, s.e. 0.018), as does domestic service work for females (-0.174, s.e. 0.110), although this latter effect is only marginally significant … One explanation for this pattern that ties into our earlier labor supply findings is that child health investments improve individuals’ capacity to carry out physically demanding, characterized by long work weeks and little tolerance of absenteeism, and thus allow them to access higher paid jobs such as those in manufacturing." Baird 2011, Pg 24-25.
• 88.
"Deworming treatment individuals consume 0.096 more meals (s.e. 0.028, significant at 99% confidence, Panel C) than the control group, and the externality impact is also large and positive (0.080, s.e. 0.023, 99% confidence). This suggests that deworming led to living standard gains in the full sample." Baird 2011, Pg 17. See Pg 41 for full set of related results, including overall household consumption.
• 89.
"We examine school enrollment and attendance using two different data sources in Table 9. In Panel A, the dependent variable is school enrollment as reported by the respondent in the KLPS-2 survey, which equals one if the individual was enrolled for at least part of a given year. These show consistently positive effects from 1999 to 2007 both on the deworming treatment indicator and the externalities term, and the total increase in school enrollment in treatment relative to control schools over the period is 0.279 years (s.e. 0.147, significant at 90% confidence). Note that there is no treatment effect estimate for 1998 since all students were enrolled at some point in 1998, as a criterion for inclusion in the KLPS sample. The treatment effect estimates are largest during 1999- 2003 before tailing off during 2004-07, as predicted in the optimal educational investment framework above since the current opportunity cost of time is rising relative to the later benefits of schooling as individuals age.
The data in Panel B is school participation, namely, being found present in school by survey enumerators on the day of an unannounced school attendance check. This is our most objective measure of actual time spent at school, and was a main outcome measure in Miguel and Kremer (2004). The enrollment measure in Panel A misses much of the attendance variation captured in this measure. However, two important limitations of the school participation data are that it was only collected during 1998-2001, and only at primary schools in the study area; the falling sample size between 1998 to 2001 is mainly driven by students graduating from primary school. School participation rates also rise significantly in the deworming treatment group, by 0.074 (s.e. 0.023) and 0.068 (s.e. 0.023) in 1998 and 1999, respectively, before dropping off somewhat in later years (particularly in 2000). Total school participation gains are 0.129 of a year of schooling (s.e. 0.064, significant at 95% confidence). Given that the school enrollment data misses out on attendance impacts, which are sizeable, a plausible lower bound on the total increase in time spent in school induced by the deworming intervention is the 0.129 gain in school participation from 1998-2001 plus the school enrollment gains from 2002-2007, which works out to 0.304 years of schooling." Baird 2011, Pg 21-22.
"Test score performance is another natural way to assess deworming impacts on human capital and skills. While the impact of deworming on primary school academic test score performance in 1999 is positive but not statistically significant (Table 10, Panel B), there is suggestive evidence that the passing rate did improve on the key primary school graduation exam, the Kenya Certificate of Primary Education (point estimate 0.046, s.e. 0.031). There is also some evidence that English vocabulary knowledge (collected during the 2007-09 survey) is somewhat higher in the deworming treatment group (impact of 0.076 standard deviations in a normalized distribution, s.e., 0.055). The mean effect size of the 1999 test score, the indicator for passing the primary school leaving exam, and the English vocabulary score in 2007-09 taken together does yield a normalized point estimate of 0.112 that is statistically significant at 90% confidence (s.e. 0.067), providing suggestive evidence of moderate human capital gains in the treatment group. As expected, there is no effect on the Raven’s Matrices cognitive exam, which is designed to capture general intelligence rather than acquired skills (Panel B). While many would argue that nutritional gains in the first few years of life could in fact generate improved cognitive functioning as captured in a Raven’s exam – as [Ozier 2011] indeed does find among younger siblings of the deworming beneficiaries – it was apparently already 'too late' for such gains among the primary school age children in our study." Baird 2011, Pg 22.
• 90.
"There is evidence that adult health also improved as a result of deworming. Respondent self-reported health (on a normalized 0 to 1 scale) improved by 0.041 (s.e. 0.018, significant at 95% confidence, Table 11, panel A). Many studies have found that self-reported health reliably predicts actual morbidity and mortality even when other known health risk factors are accounted for (Idler and Benyamini 1997, Haddock et al. 2006, Brook et al. 1984). Note that it is somewhat difficult to interpret this impact causally since it may partially reflect health gains driven by the higher adult earnings detailed above, in addition to the direct health benefits of earlier deworming. Yet the fact that there were similar positive and statistically significant impacts on self-reported health in earlier periods, namely, in surveys administered in 1999 before most in sample individuals were working (see Table 11, panel C and Miguel and Kremer 2004), suggests that at least part of the effect is directly due to deworming. In terms of other health outcomes, there is no evidence that deworming improved self- reported happiness or wellbeing or reduced major health shocks. Total health expenditures by the respondent in the last month are significantly higher in the treatment group (91.1 Shillings, s.e. 30.0), suggesting that they may have greater ability or willingness to make health investments, but interpretation is again complicated by the fact that such spending also reflects health needs. Despite the finding that the number of meals consumed is larger for deworming treatment individuals (in Table 8), deworming did not lead to higher body mass index (Table 11, Panel B). Nor are there detectable height gains, and these non-impacts hold even when we restrict attention to younger individuals (those in grades 2-4 in 1998, regression not shown)." Baird 2011, Pg 22-23. See Pg 44 for full related results, including height, weight-for-height (body mass index) and health spending.
• 91.
"The most important factor in the econometric identification strategy is the randomized assignment of pupils to deworming treatment through the deworming program. Group 1 schools received free deworming treatment starting in 1998; Group 2 schools began in 1999, while Group 3 schools began receiving treatment in 2001. All school re- ceived treatment from 2001-2003. Thus in 1998, Group 1 schools were treatment schools, while Group 2 and 3 were comparison schools. In 1999, Group 1 and Group 2 schools were treatment schools and Group 3 schools were control schools. Table 4.1 shows the PSDP treatment schedule. Our sample consists of pupils that were in grades 2-7 in 1998 (primary school goes through grade 8). This variation in group, as well as variation in initial grade, provides us with between zero and six years of treatment for each individ- ual. For example, a Group 1 pupil in grade 2 in 1998 would be assigned six years of treatment, while a Group 3 pupil in grade 6 in 1998 would be assigned only one." Baird 2007, Pg 106-107.
• 92.
Baird 2007, Table 4.8, Pg 138.
• 93.
"Along with health outcomes we also look at the impact of years of deworming on a number of education and labor market outcomes. Given that we see benefits to health from deworming, we might expect that these gains would translate into educational gains. The education results are shown in Table 4.12. We do see some evidence that deworming treatment decreases your likelihood of dropping out for the overall sample, which is driven by benefits for males and those in low infection areas. Increasing treatment from zero to six years decreases your likelihood of dropping out by 6 percent and is significant at the 10% level. These results are similar to those found by Johnson and Schoeni (2007) for the US, who find that low birth weight (their measure of health status) does not influence results for highest grade attended, but does affect dropout rates. It is interesting that our significant education results are for different sub-groups than our health results. This observation suggests that our education gains are not necessarily being driven by health gains. Overall, we see very little in terms of educational or labor market gains when looking at our overall sample or our sub-samples. The coefficients are generally positive, but insignificant. With our sample still in a very transitional stage with many still in school, and others still moving between school and the labor force, it is hard to draw conclusive results from this evidence. We will have clearer results (or absence ofresults) for the impact of deworming on education and labor force participation once we collect the third round of data." Baird 2007, Pg 122.
• 94.
"We do find consistent negative impacts of deworming on cognitive performance,
although for the most part these results are insignificant and small. Looking at the mean effect result for the entire sample, one finds that increasing treatment by one year decreases, albeit insignificantly, the average cognitive score by 0.005 standard deviations. These negative coefficients may result from the fact that treatment leads more vulnerable pupils to stay in school, pupils who are likely to have lower test scores to begin with." Baird 2007, Pg 122.
• 95.
See the "How well would the studies generalize to other settings?" section of our 2012 blog post on the studies.
• 96.
Deworming substantially improved health and school participation among untreated children in both treatment schools and neighboring schools, and these externalities are large enough to justify fully subsidizing treatment." Miguel and Kremer 2004, abstract.
More at our 2012 blog post and full writeup on the topic.
• 97.
Ozier 2011, Pg 9.
• 98.
Praziquantel:
• "Praziquantel is administered orally at a standard dose of 40 mg/kg body weight. The most common adverse effects are gastrointestinal, including abdominal pain, nausea, vomiting and diarrhoea, and are usually mild and last less than 24 hours." Danso-Appiah et al. 2008. 2008, Pg 4.
• "Praziquantel and metrifonate were both found to be efficacious with few adverse events, although adverse outcomes were poorly assessed." Danso-Appiah et al. 2008, abstract.
• Saconato and Atallah 2009, Pg 20 lists clinical side effects from praziquantel based on the studies it reviews. Those whose 95% confidence intervals do not include zero are statistically significant effects: headache, nausea, abdominal pain, fever.
• Notes from our site visit to the Schistosomiasis Control Initiative (SCI) include reports of fainting and vomiting at a stakeholders' meeting. SCI's representative stated that these can be prevented by feeding children prior to treatment.
Albendazole:
• "Albendazole is administered orally (usually as single 400 mg dose), and reported adverse effects include gastrointestinal upsets, headaches, and dizziness, while rash, fever, elevated liver enzymes, and hair loss occur less frequently. There have been reports of elevated liver enzymes, headaches, loss of hair, low levels of white blood cells (neutropenia), fever, and itching if taken at higher doses and/or for a long period of time." Danso-Appiah et al. 2008, Pg 4.
• "Two trials looked at adverse events, but the trials were small. Further research is needed … Only two trials provided this information (Michaelsen 1985; Fox 2005). Fox 2005 found no serious adverse events (albendazole 0/46 versus placebo 0/43). Myalgia and cough were reported significantly more frequently in the placebo group compared to al- bendazole. Michaelsen 1985, which used tetrachlorethylene, re- ported that 17%(19/119: results not given for separate trial arms) of the children suffered adverse effects (nausea and ataxia) that began one and a half hours after treatment. All symptoms disappeared within four hours. This drug is not in current use as a deworming drug." Taylor-Robinson, Jones, and Garner 2007, abstract and Pg 21-22.
• "It has been suggested that resistance to deworming drugs may be a factor that limits the effectiveness of periodic deworming (Hotez et al. 2006). Some people have also questioned whether albendazole itself can adversely affect growth (Hotez et al. 2006)." Taylor-Robinson, Jones, and Garner 2007, Pg 27.
• 99.
Ibid.
• 100.
"A concern about the feasibility of sustainable control with BZAs is the possible emergence of drug resistance among human STHs. BZA resistance occurs because of the spread of point mutations in nematode-tubulin alleles. This phenomenon has already resulted in widespread BZA drug resistance among STHs of ruminant livestock. There is still no direct evidence for BZA resistance among human STHs, although such resistance could account for an observed failure of mebendazole for human hookworm in southern Mali, as well as a diminished efficacy against hookworm in Zanzibar following frequent and periodic use of mebendazole (Albonico and others 2003). PZQ resistance must also be considered, especially as it begins to be widely used in Sub-Saharan Africa (Hagan and others 2004). Should PZQ resistance develop, there will be new demands for antischistosomal drugs. Recently, the artemisins have shown activity against schistosomulae and were successful in protecting against S. japonicum in China (Hagan and others 2004)." Jamison et al. 2006, Pg 479.
• 101.
• 102.
• 103.
• 104.
“The age-dependent patterns of infection prevalence are gen- erally similar among the major helminth species, exhibiting a rise in childhood to a relatively stable asymptote in adulthood (figure 24.1). Maximum prevalence of A. lumbricoides and T. trichiura is usually attained before five years of age, and the maximum prevalence of hookworm and schistosome infec- tions is usually attained in adolescence or in early adulthood. The nonlinear relationship between prevalence and intensity has the consequence that the observed age-prevalence profiles provide little indication of the underlying profiles of age inten- sity (age in relation to worm burden). Because intensity is linked to morbidity, the age-intensity profiles provide a clearer understanding of which populations are vulnerable to the different helminths (figure 24.1). For A. lumbricoides and T. trichiura infections, the age-intensity profiles are typically convex in form, with the highest intensities in children 5 to 15 years of age [Bundy et al. 2004]. For schistosomiasis, a convex pattern is also observed, with a similar peak but with a plateau in adolescents and young adults 15 to 29 years of age (Kabatereine and others 1999). In contrast, the age-intensity profile for hookworm exhibits considerable variation, although intensity typically increases with age until adulthood and then plateaus (Brooker, Bethony, and Hotez 2004). In East Asia it is also common to find the highest intensities among the elderly. However, more generally, children and young adults are at higher risk of both harboring higher levels of infection (thus greater levels of morbidity) and becoming reinfected more quickly. Both may occur at vital stages in a child’s intellectual and physical development.” Hotez et al. 2006, Pgs 469-470.
• 105.
|
|
# Why are there infinitely many connections on a Riemannian manifold?
I've just started learning some Riemannian manifold stuff, and I'm getting confused about the concept of connection. A connection $\nabla: \Gamma(T\mathcal{M})\times \Gamma(T\mathcal{M}) \rightarrow \Gamma(T\mathcal{M})$ basically defines rules of differentiation $\nabla_XY$ on the tensor field of a manifold, but how can there be infinitely many connections? Does it mean that we may define arbitrary differentiation rules (as long as they satisfy the linear and product axioms) on a tensor field? Of course, one special connection is the Levi-Civita connection, but I don't see how we may arbitrarily define $\Gamma_{i,j}^k$ to generate different connections.
Also, isn't the covariant derivative defined by projecting the usual directional derivative onto the same tangent space? If so it seems the rule of differentiation on a vector field can already be determined, so why there exist other forms of connections?
Thanks!
• This answer might be helpful to gain more intuition. – guy-in-seoul Sep 10 '14 at 0:53
## 1 Answer
I'm turning my comment into an answer because it got too long.
You may want to look at this question and answer, in which I give a way of constructing a connection. The many choices involved in the construction should make it clear that many connections exist.
When you say "projecting the usual directional derivative onto the tangent space", I assume you are defining a connection on a submanifold $M$ of $\mathbb{R}^n$, in which case there is a natural Riemannian metric $g$ on $M$ (induced from the ambient metric on $\mathbb{R}^n$). One can show the connection you describe is the Levi-Civita connection associated to $g$, and so it's the natural connection in that context. But there exist other connections, and of course if we put a different metric on $M$ (e.g., associated to a different embedding), we would get a different Levi-Civita connection.
When I first learned about connections, like you, I wondered what the point of considering connections other than the Levi-Civita connection was. I now realize that on a Riemannian manifold, people almost always seem to use the Levi-Civita connection on the tangent bundle. But on vector bundles other than the tangent bundle, there does not necessarily exist a canonical choice of connection analogous to the Levi-Civita connection, and there are often many connections that one could choose to get the job done (whatever that job may be).
• Thanks for your answer! But why can we specify $\Gamma_{i,j}^k$ freely? Since $\Gamma_{i,j}^k$ determines $\nabla_{E_i}E_j$, does it mean that we may define the covariant derivatives of frames arbitrarily? – thinkbear Sep 10 '14 at 15:36
• I think after all we are free to determine the differentiation rules as long as the axioms are satisfied, and this allows defining arbitrary linear maps between two adjacent tangent spaces, as well as their corresponding $\Gamma_{i,j}^k$. Levi-Civita connection corresponds to the map that preserves inner products, whose $\Gamma_{i,j}^k$ turns out to be a function of $g_{i,j}$. Hopefully this understanding is correct. – thinkbear Sep 10 '14 at 22:52
• You can almost specify the $\Gamma$'s freely, except that you have to make sure that the connection you construct is well-defined (i.e., independent of the choice of coordinates on overlaps of coordinate charts). This is why I used a partition of unity in the answer I linked to...But locally at least, yes, you can basically specify the $\Gamma$'s arbitrarily. – Phillip Andreae Sep 12 '14 at 3:34
|
|
+0
# calculus help
0
133
10
https://prnt.sc/pfqre3
thank you very much any help is appreciated
Oct 7, 2019
edited by Melody Oct 7, 2019
#1
+2417
0
Sorry blocked on my browser
Oct 7, 2019
#2
+2417
+2
Thanks melody!
CalculatorUser Oct 7, 2019
#3
+2417
+3
This is basically what the question is asking
Lets say the sun is located at the focus of the left side of the image
____________________________________________________
So the distance between the perihelion and aphelion should be 46 + 70 = 116 million miles??
I haven't learned much in calculus yet this is as much as I can give you
Oct 7, 2019
#4
+2417
+2
2)
We can try to find the distance of the perihelion to the center of the ellipse.
116 / 2 = 58 million miles
58 - 46 (distance from perhelion to sun) = 12 million miles between sun and center of ellipse
CalculatorUser Oct 7, 2019
#5
+1
thank you both so much!!
Oct 7, 2019
#6
+2417
0
Welcome!
CalculatorUser Oct 7, 2019
#7
0
im sorry to trouble you i have one more question.
im not sure how to set this up
"Write the equation of the elliptical orbit of Mercury, where the major axis runs horizontally. Allow a and b to be measured in millions of miles. Use the origin as the center of the ellipse."
Oct 7, 2019
#8
+2417
+1
Do you know how to write ellipse equations?
Standard form $$\frac{x^2}{a^2}+\frac{y^2}{b^2}=1$$
With $$a>b$$.
The value of A I think is just the distance of the origin to the endpoint of the horizontal axis
The value of B is the distance from the origin to the endpoint of the vertical axis.
CalculatorUser Oct 8, 2019
#9
0
what is my origin/center?
Oct 9, 2019
#10
0
nvm its (0,0)
Guest Oct 9, 2019
|
|
filtration F
I would like to write a super curly F as in measure space(omega, F).
How can I do that?
The styles I can find in the dropdown box such as sans self etc. are not what I am looking for.
Lars
You can easily use
You can easily use \mathcal{F} in order to generate a stylized F for the filtration!
Good luck!
Maria
Being a new user... how do I
Being a new user... how do I insert raw tex command like that in my document?
|
|
金融代写|利率建模代写Interest Rate Modeling代考|MATH4513
Vasicek利率模型一词是指一种对利率的运动和演变进行建模的数学方法。它是一种基于市场风险的单因素短利率模型。瓦西克利率模型常用于经济学中,以确定利率在未来的移动方向。
statistics-lab™ 为您的留学生涯保驾护航 在代写利率建模Interest Rate Modeling方面已经树立了自己的口碑, 保证靠谱, 高质且原创的统计Statistics代写服务。我们的专家在代写利率建模Interest Rate Modeling代写方面经验极为丰富,各种代写利率建模Interest Rate Modeling相关的作业也就用不着说。
• Statistical Inference 统计推断
• Statistical Computing 统计计算
• (Generalized) Linear Models 广义线性模型
• Statistical Machine Learning 统计机器学习
• Longitudinal Data Analysis 纵向数据分析
• Foundations of Data Science 数据科学基础
金融代写|利率建模代写Interest Rate Modeling代考|Yield to Maturity
In a free market, the price of a bond is determined by supply and demand. Due to discounting, the full price is normally smaller than the total notional value of coupons plus the principal. Denote the full price of a bullet bond as $B^{c}$. Suppose that all cash flows are discounted by a uniform rate, $y$, of compounding frequency $\omega$. Then $y$ should satisfy the following equation:
$$B^{c}=\operatorname{Pr} \cdot\left(\sum_{i=1}^{n} \frac{c \Delta T}{(1+y \Delta t)^{i \Delta T / \Delta t}}+\frac{1}{(1+y \Delta t)^{n \Delta T / \Delta t}}\right)$$
where $n$ is the number of coupons and $\Delta t=1 / \omega$. In bond mathematics, the compounding frequency is taken to be $\omega=1 / \Delta T$ by default, when there is $\Delta t=\Delta T$. This discount rate, which can be easily solved by a trial-and-error procedure using Equation 3.12, is defined to be the yield to maturity (YTM), as well as the internal rate of return (IRR) of the bond, and it is often simply called the bond yield.
As the function of the yield (for $\omega=1 / \Delta T$ ), the formula for a general time, $t \leq T$, is
$$B_{t}^{c}=\operatorname{Pr} \cdot\left(\sum_{i ; i \Delta T>t}^{n} \frac{c \Delta T}{(1+y \Delta T)^{(i \Delta T-t) / \Delta T}}+\frac{1}{(1+y \Delta T)^{(n \Delta T-t) / \Delta T}}\right)_{(3.13)}$$
Assuming that $t \in\left(T_{j}, T_{j+1}\right]$, and introducing
$$q=\frac{t-T_{j}}{T_{j+1}-T_{j}}=\frac{t-T_{j}}{\Delta T}$$
we then can write
$$t=T_{j}+\Delta T q=(j+q) \Delta T \text { and } i \Delta T-t=(i-j-q) \Delta T, \quad \forall i .$$
It follows that
\begin{aligned} B_{t}^{c} &=\operatorname{Pr} \cdot\left(\sum_{i=j+1}^{n} \frac{c \Delta T}{(1+y \Delta T)^{i-j-q}}+\frac{1}{(1+y \Delta T)^{n-j-q}}\right) \ &=\operatorname{Pr} \cdot(1+y \Delta T)^{q}\left(\sum_{i=1}^{n-j} \frac{c \Delta T}{(1+y \Delta T)^{i}}+\frac{1}{(1+y \Delta T)^{n-j}}\right) \end{aligned}
Given the bond price at any time, $t$, the bond yield is implied by Equation 3.14. A rough way to compare the relative cheapness/richness of two bonds with the same coupon frequency is to compare their yields. Intuitively, a bond with a higher yield is cheaper and thus may be more attractive.
There is a one-to-one price-yield relationship, as shown in Figure 3.2. Because of this relationship, a bond price is also quoted using its yield in the industry. As we can see in Figure $3.2$, a bond price is a convex function of the yield. Such a feature will be used later for convexity adjustment related to futures trading.
The price-yield relationship of a zero-coupon bond simplifies to
$$P=\operatorname{Pr} \cdot(1+y \Delta T)^{-(T-t) / \Delta T} .$$
金融代写|利率建模代写Interest Rate Modeling代考|Par Bonds, Par Yields, and the Par Yield Curve
The summation in Equation $3.12$ can be worked out so that
\begin{aligned} B^{c} &=\Delta T \cdot c \cdot \operatorname{Pr} \sum_{i=1}^{n}(1+y \Delta T)^{-i}+\operatorname{Pr}(1+y \Delta T)^{-n} \ &=\operatorname{Pr}\left[1-\left(1-\frac{c}{y}\right)\left(1-\frac{1}{(1+y \Delta T)^{n}}\right)\right] \end{aligned}
From the above expression, we can tell when the price is smaller, equal to, or larger than the principal value.
1. When $c<y, B^{c}<$ Pr. In such a case, we say that the bond is sold at discount (of the par value).
2. When the coupon rate is $c=y$, then $B^{c}=\operatorname{Pr}$, that is, the bond price equals the par value of the bond. In such a case, we call the bond a par bond, and the corresponding coupon rate a par yield.
3. When $c>y, B^{c}>\operatorname{Pr}$. In such a case, we call the bond a premium bond (it is traded at a premium to par).
Par yields play an important role in today’s interest-rate derivatives market. As we shall see later, there are many derivatives based on the par yields.
金融代写|利率建模代写Interest Rate Modeling代考|Yield Curves for U.S. Treasuries
A bond issuer may routinely issue bonds of various maturities, and, in a market, there can be many bonds of the same issuer being traded. For various reasons, some bonds are more liquid than others. The most liquid ones are often called benchmark bonds for the issuer. Their yields reflect the level of borrowing costs the market demands from the issuer. Moreover, the prices of the benchmark bonds imply a discount curve for cash flows from the issuer, and the discount curve can be used to gauge the relative cheapness/expensiveness of the issuer’s other bonds. If a relatively cheaper or more expensive bond is found, one may trade against this bond using the benchmark bonds and thus take an arbitrage profit. Hence, the prices or yields of the benchmark bonds carry essential information for the arbitrage pricing of the issuer’s other bonds, and they are treated as a summary of the status quo of all bonds offered by the same issuer.
In the U.S. Treasury market, newly issued bills and notes/bonds are called on-the-run Treasury securities. Traditionally, the on-the-run issues enjoy higher liquidity and are thus treated as benchmarks. Table $3.1$ provides the closing price quotes of the on-the-run issues for July 3 , 2008. As can be seen in the table, the on-the-run issues have maturities of 3 months, 6 months, 2 years, 3 years, 5 years, 10 years, and 30 years. When we connect the yields of the benchmark bonds through interpolation, we obtain a so-called yield curve. Since bond yields vary from day-to-day so does the yield curve. Figure $3.3$ shows the yield curves for the U.S. Treasuries constructed by linear interpolation for April 28 and May 1, 2006 , two consecutive trading days.
A yield curve is constructed based on yields of on-the-run issues using the interpolation technique. It provides a rough idea of the level of yields for various maturities. Further, the Treasury yield curve implies a discount curve, namely, the collection of prices of all zero-coupon bonds. The discount curve is used for pricing off-the-run Treasury securities, or marking to market Treasury portfolios. Moreover, the discount curve is also essential for pricing future cash flows of any security, either deterministic or stochastic. To price a portfolio of interest-rate derivatives, we may model the dynamics of the entire yield curve, in contrast to modeling the dynamics of a stock price for stock options. In the next section, we describe the technique for “backing” out the discount curve from the yield curve.
金融代写|利率建模代写Interest Rate Modeling代考|Yield to Maturity
q=吨−吨j吨j+1−吨j=吨−吨jΔ吨
金融代写|利率建模代写Interest Rate Modeling代考|Par Bonds, Par Yields, and the Par Yield Curve
1. 什么时候C<是,乙C<公关。在这种情况下,我们说债券以(面值的)折扣价出售。
2. 当票面利率为C=是, 然后乙C=公关,即债券价格等于债券面值。在这种情况下,我们称该债券为面值债券,相应的票面利率为面值收益率。
3. 什么时候C>是,乙C>公关. 在这种情况下,我们称该债券为溢价债券(以面值溢价交易)。
有限元方法代写
tatistics-lab作为专业的留学生服务机构,多年来已为美国、英国、加拿大、澳洲等留学热门地的学生提供专业的学术服务,包括但不限于Essay代写,Assignment代写,Dissertation代写,Report代写,小组作业代写,Proposal代写,Paper代写,Presentation代写,计算机作业代写,论文修改和润色,网课代做,exam代考等等。写作范围涵盖高中,本科,研究生等海外留学全阶段,辐射金融,经济学,会计学,审计学,管理学等全球99%专业科目。写作团队既有专业英语母语作者,也有海外名校硕博留学生,每位写作老师都拥有过硬的语言能力,专业的学科背景和学术写作经验。我们承诺100%原创,100%专业,100%准时,100%满意。
MATLAB代写
MATLAB 是一种用于技术计算的高性能语言。它将计算、可视化和编程集成在一个易于使用的环境中,其中问题和解决方案以熟悉的数学符号表示。典型用途包括:数学和计算算法开发建模、仿真和原型制作数据分析、探索和可视化科学和工程图形应用程序开发,包括图形用户界面构建MATLAB 是一个交互式系统,其基本数据元素是一个不需要维度的数组。这使您可以解决许多技术计算问题,尤其是那些具有矩阵和向量公式的问题,而只需用 C 或 Fortran 等标量非交互式语言编写程序所需的时间的一小部分。MATLAB 名称代表矩阵实验室。MATLAB 最初的编写目的是提供对由 LINPACK 和 EISPACK 项目开发的矩阵软件的轻松访问,这两个项目共同代表了矩阵计算软件的最新技术。MATLAB 经过多年的发展,得到了许多用户的投入。在大学环境中,它是数学、工程和科学入门和高级课程的标准教学工具。在工业领域,MATLAB 是高效研究、开发和分析的首选工具。MATLAB 具有一系列称为工具箱的特定于应用程序的解决方案。对于大多数 MATLAB 用户来说非常重要,工具箱允许您学习应用专业技术。工具箱是 MATLAB 函数(M 文件)的综合集合,可扩展 MATLAB 环境以解决特定类别的问题。可用工具箱的领域包括信号处理、控制系统、神经网络、模糊逻辑、小波、仿真等。
|
|
# Max/min with constraints
## Homework Statement
Find max/min of x^2+y^2+z^2 given x^4+y^4+z^4=3
## Homework Equations
Use of gradient vectors related by LaGrange Multiplier
## The Attempt at a Solution
$$\begin{gathered} f\left( {x,y,z} \right) = {x^2} + {y^2} + {z^2};g\left( {x,y,z} \right) = {x^4} + {y^4} + {z^4} - 3 = 0 \\ \vec \nabla f = \left\langle {2x,2y,2z} \right\rangle ;\vec \nabla g = \left\langle {4{x^3},4{y^3},4{z^3}} \right\rangle \\ \left\langle {2x,2y,2z} \right\rangle = \lambda \left\langle {4{x^3},4{y^3},4{z^3}} \right\rangle \\ 2{x^2} = 2{y^2} = 2{z^2} \to x = \pm y = \pm z \\ 3{x^4} - 3 = 0 \to {x^4} = 1 \to x = \pm 1 \to y = \pm 1,z = \pm 1 \\ \max = f\left( {1,1,1} \right) = f\left( {1,1, - 1} \right) = f\left( {1, - 1,1} \right) = f\left( {1, - 1, - 1} \right) = \\ f\left( { - 1,1,1} \right) = f\left( { - 1,1, - 1} \right) = f\left( { - 1, - 1,1} \right) = f\left( { - 1, - 1, - 1} \right) = 3 \\ \end{gathered}$$
So I found the maximum but does the minimum exist?
Ray Vickson
Homework Helper
Dearly Missed
## Homework Statement
Find max/min of x^2+y^2+z^2 given x^4+y^4+z^4=3
## Homework Equations
Use of gradient vectors related by LaGrange Multiplier
## The Attempt at a Solution
$$\begin{gathered} f\left( {x,y,z} \right) = {x^2} + {y^2} + {z^2};g\left( {x,y,z} \right) = {x^4} + {y^4} + {z^4} - 3 = 0 \\ \vec \nabla f = \left\langle {2x,2y,2z} \right\rangle ;\vec \nabla g = \left\langle {4{x^3},4{y^3},4{z^3}} \right\rangle \\ \left\langle {2x,2y,2z} \right\rangle = \lambda \left\langle {4{x^3},4{y^3},4{z^3}} \right\rangle \\ 2{x^2} = 2{y^2} = 2{z^2} \to x = \pm y = \pm z \\ 3{x^4} - 3 = 0 \to {x^4} = 1 \to x = \pm 1 \to y = \pm 1,z = \pm 1 \\ \max = f\left( {1,1,1} \right) = f\left( {1,1, - 1} \right) = f\left( {1, - 1,1} \right) = f\left( {1, - 1, - 1} \right) = \\ f\left( { - 1,1,1} \right) = f\left( { - 1,1, - 1} \right) = f\left( { - 1, - 1,1} \right) = f\left( { - 1, - 1, - 1} \right) = 3 \\ \end{gathered}$$
So I found the maximum but does the minimum exist?
Is the feasible set S = {(x,y,z): x^4 + y^4 + z^4 = 3} compact? Is the function f(x,y,z) = x^2 + y^2 + z^2 continuous on S? Have you heard of Weierstrass' Theorem?
RGV
Is the feasible set S = {(x,y,z): x^4 + y^4 + z^4 = 3} compact? Is the function f(x,y,z) = x^2 + y^2 + z^2 continuous on S? Have you heard of Weierstrass' Theorem?
RGV
I'm not quite sure what you mean by compact or Weierstrass' Theorem but I think that the function is continuous
Ray Vickson
Homework Helper
Dearly Missed
I'm not quite sure what you mean by compact or Weierstrass' Theorem but I think that the function is continuous
RGV
RGV
So....
"A subset S of a topological space X is compact if for every open cover of S there exists a finite subcover of S."
Not quite sure what that means exactly, but perhaps its compact if there can be a finite subset of the points defined by the function?
And....
There seems to be two different Theorems, one about estimating functions with polynomials and another about sequence convergence...
Ray Vickson
Homework Helper
Dearly Missed
So....
"A subset S of a topological space X is compact if for every open cover of S there exists a finite subcover of S."
Not quite sure what that means exactly, but perhaps its compact if there can be a finite subset of the points defined by the function?
And....
There seems to be two different Theorems, one about estimating functions with polynomials and another about sequence convergence...
If you keep searching you will eventually find a document in which all this is put into the context of ordinary 3-D space with the usual distance measure. In that case there is a theorem saying that a set is compact if and only if it is closed and bounded. So, is the set S closed (i.e., contains all its limit points)? Is it bounded? Then there is a theorem of Weierstrass saying that a continuous function on a compact set assumes both its maximum and its minimum. (These are theorems that are proven in advanced Calculus classes, well before 'topology'.) So, in your case the answer is YES: S is compact, and f has a minimum on S, as well as a maximum. None of this helps you *find* the minimum, but it does tell you that the search makes sense.
RGV
|
|
# Predicting with random effects in mgcv gam
I am interested in modeling total fish catch using gam in mgcv to model simple random effects for individual vessels (that make repeated trips over time in the fishery). I have 98 subjects, so I thought I would use gam instead of gamm to model the random effects. My model is:
modelGOM <- gam(TotalFish ~ factor(SetYear) + factor(SetMonth) + factor(TimePeriod) +
s(SST) + s(VesselID, bs = "re", by = dum) + s(Distance, by = TimePeriod) +
offset(log(HooksSet)), data = GOM, family = tw(), method = "REML")
I have coded the random effect with bs = "re" and by = dum (I read that this would allow me to predict with the vessel effects at their predicted values or zero). "dum" is a vector of 1.
The model runs, but I am having problems predicting. I picked one of the vessels for the predictions (Vessel21) and average values for everything else except the predictor of interest for predictions (Distance).
data.frame("Distance"=seq(min(GOM$Distance),max(GOM$Distance),length = 100),
"SetYear" = '2006',
"SetMonth" = '6',
"TimePeriod" = 'A',
"SST" = mean(GOM$SST), "VesselID" = 'Vessel21', "dum" = '0', #to predict without vessel effect "HooksSet" = mean(GOM$HooksSet))
pred_GOM_A_Swordfish <- predict(modelGOM, grid.bin.GOM_A_Swordfish, type = "response",
se = T)
The error that I'm getting is:
Error in Predict.matrix.tprs.smooth(object, dk$data) : NA/NaN/Inf in foreign function call (arg 1) In addition: Warning message: In Ops.factor(xx, object$shift[i]) : - not meaningful for factors
I think this is being called because VesselID is a factor, but I'm using it a smooth for the random effects.
I have been able to successfully predict using gam without the simple random effects (bs = "re").
Can you provide any advice on how to predict this model without the VesselID term (but still include it in fitting)?
Thank you!
From version 1.8.8 of mgcv predict.gam has gained an exclude argument which allows for the zeroing out of terms in the model, including random effects, when predicting, without the dummy trick that was suggested previously.
• predict.gam and predict.bam now accept an 'exclude' argument allowing terms (e.g. random effects) to be zeroed for prediction. For efficiency, smooth terms not in terms or in exclude are no longer evaluated, and are instead set to zero or not returned. See ?predict.gam.
library("mgcv")
require("nlme")
dum <- rep(1,18)
b1 <- gam(travel ~ s(Rail, bs="re", by=dum), data=Rail, method="REML")
b2 <- gam(travel ~ s(Rail, bs="re"), data=Rail, method="REML")
head(predict(b1, newdata = cbind(Rail, dum = dum))) # ranefs on
head(predict(b1, newdata = cbind(Rail, dum = 0))) # ranefs off
head(predict(b2, newdata = Rail, exclude = "s(Rail)")) # ranefs off, no dummy
> head(predict(b1, newdata = cbind(Rail, dum = dum))) # ranefs on
1 2 3 4 5 6
54.10852 54.10852 54.10852 31.96909 31.96909 31.96909
> head(predict(b1, newdata = cbind(Rail, dum = 0))) # ranefs off
1 2 3 4 5 6
66.5 66.5 66.5 66.5 66.5 66.5
> head(predict(b2, newdata = Rail, exclude = "s(Rail)")) # ranefs off, no dummy
1 2 3 4 5 6
66.5 66.5 66.5 66.5 66.5 66.5
## Older approach
Simon Wood has used the following simple example to check this is working:
library("mgcv")
require("nlme")
dum <- rep(1,18)
b <- gam(travel ~ s(Rail, bs="re", by=dum), data=Rail, method="REML")
predict(b, newdata=data.frame(Rail="1", dum=0)) ## r.e. "turned off"
predict(b, newdata=data.frame(Rail="1", dum=1)) ## prediction with r.e
Which works for me. Likewise:
dum <- rep(1, NROW(na.omit(Orthodont)))
m <- gam(distance ~ s(age, bs = "re", by = dum) + Sex, data = Orthodont)
predict(m, data.frame(age = 8, Sex = "Female", dum = 1))
predict(m, data.frame(age = 8, Sex = "Female", dum = 0))
also works.
So I would check the data you are supplying in newdata is what you think it is as the problem may not be with VesselID — the error is coming from the function that would have been called by the predict() calls in the examples above, and Rail is a factor in the first example.
• Thank you, Gavin, for the examples! In working through those, I figured it out. You were correct - the error was in the newdata data frame. Once I removed the quotation marks around '0' for the "dum" by variable, I was able to predict without any errors. Rookie mistake, but I had been struggling with it all day and thought it was a problem with the VesselID factor being a smooth. Thank you so much! – Meagan Jan 4 '15 at 2:07
• How can one specify more than one random effect to exclude with exclude? I tried using c() but it doesn't seem to work. – Stefano Jan 1 '18 at 21:17
• Using a vector of terms to exclude works for me: exclude = c("s(x0)", "s(x2)") say from the following model b<-gam(y~s(x0)+s(I(x1^2))+s(x2)+offset(x3),data=dat) from ?predict.gam examples. You need to specify the strings in the vector passed to exclude with the notation used by summary() when displaying the information about each smooth term – Gavin Simpson Jan 1 '18 at 21:54
|
|
Verified Algorithms for Solving Markov Decision Processes
Title: Verified Algorithms for Solving Markov Decision Processes Authors: Maximilian Schäffeler (schaeffm /at/ in /dot/ tum /dot/ de) and Mohammad Abdulaziz Submission date: 2021-12-16 Abstract: We present a formalization of algorithms for solving Markov Decision Processes (MDPs) with formal guarantees on the optimality of their solutions. In particular we build on our analysis of the Bellman operator for discounted infinite horizon MDPs. From the iterator rule on the Bellman operator we directly derive executable value iteration and policy iteration algorithms to iteratively solve finite MDPs. We also prove correct optimized versions of value iteration that use matrix splittings to improve the convergence rate. In particular, we formally verify Gauss-Seidel value iteration and modified policy iteration. The algorithms are evaluated on two standard examples from the literature, namely, inventory management and gridworld. Our formalization covers most of chapter 6 in Puterman's book "Markov Decision Processes: Discrete Stochastic Dynamic Programming". BibTeX: @article{MDP-Algorithms-AFP, author = {Maximilian Schäffeler and Mohammad Abdulaziz}, title = {Verified Algorithms for Solving Markov Decision Processes}, journal = {Archive of Formal Proofs}, month = dec, year = 2021, note = {\url{https://isa-afp.org/entries/MDP-Algorithms.html}, Formal proof development}, ISSN = {2150-914x}, } License: BSD License Depends on: Gauss_Jordan, MDP-Rewards
|
|
Change symbol used for anti-windup gain
Issue #56 resolved
Richard Murray repo owner created an issue
On 14 Nov 2018, at 6:53 , Karl Johan Astrom karl_johan.astrom@control.lth.se wrote:
Richard,
Have made a slight modification of stability of anti-windup schemes and example, one small thing I would have a slight preference to change $k_\text{f}$ to $k_\text{t}$ because it is called a tracking gain, but I let you decide, global change is necessary.
Karl
1. reporter
See commit 319315e.
2. reporter
Reverted back t o$k_\text{t}$ in commit 8cee139.
We are already using k_f for the feedforward gain in the state feedback work, so I think we probably want to avoid that particular symbol.
Having said that, I am not that big a fan of k_t since it doesn’t really link to anything (we don’t use the term “tracking gain” in the text => the ’t’ is not that natural). A couple of options:
• We could use some other symbol, like k_a or k_{aw} (for antiwindup). This might be in conflict with what is used in the standard PID literature, though.
• We could change to k_f and just note the conflict with the state space notation. Is there something that ‘f’ stands for that we should point out so that students can remember it?
• We could leave as is (k_t).
For now I am included to either leave as k_t or change to k_a (but only if you think this won’t cause confusion).
3. reporter
There is now a macro \awsub that should be used for the anti-windup subscript. This currently expands to \text{aw}.
4. reporter
Subscript changed to \awsub in all locations.
|
|
# Math Insight
### Applet: Doubling time and half life
If a population size $P_T$ as a function of time $T$ can be described as an exponential function, such as $P_T=0.168 \cdot 1.1^T$, then there is a characteristic time for the population size to double or shrink in half, depending on whether the population is growing or shrinking. The green line shows the population size $$P_T = P_0 \cdot b^T.$$ You can change the initial population size $P_0$ by dragging the green point and change the base $b$ by typing a value in the box. If $b \gt 1$, then the population is exhibiting exponential growth; if $0 \lt b \lt 1$, then the population is exhibiting exponential decay. The blue crosses and lines highlight points at which the population size has double or shrunk in half; you can move these points by dragging the blue points.
The population exhibits exponential growth if $b \gt 1$ and exhibits exponential decay if $0 \lt b \lt 1$. If $b \gt 1$, then the population size doubles after a time of $$T_{\text{double}}=\frac{\log 2}{\log b}.$$ If $0 \lt b \lt 1$, then the population size halves after a time of $$T_{\text{half}} = \frac{\log 1/2}{\log b}.$$ Three doubling times $T_{\text{double}}$ or half-lives $T_{\text{half}}$ are illustrated by the blue crosses and lines. You can drag the blue crosses to change the intervals. You can click the arrows to change the scales of the graph.
Applet file: doubling_time_half_life_discrete.ggb
|
|
Thank you for visiting nature.com. You are using a browser version with limited support for CSS. To obtain the best experience, we recommend you use a more up to date browser (or turn off compatibility mode in Internet Explorer). In the meantime, to ensure continued support, we are displaying the site without styles and JavaScript.
# Electroceutical Treatment of Pseudomonas aeruginosa Biofilms
## Abstract
Electroceutical wound dressings, especially those involving current flow with silver based electrodes, show promise for treating biofilm infections. However, their mechanism of action is poorly understood. We have developed an in vitro agar based model using a bioluminescent strain of Pseudomonas aeruginosa to measure loss of activity and killing when direct current was applied. Silver electrodes were overlaid with agar and lawn biofilms grown for 24 h. A 6 V battery with 1 kΩ ballast resistor was used to treat the biofilms for 1 h or 24 h. Loss of bioluminescence and a 4-log reduction in viable cells was achieved over the anode. Scanning electron microscopy showed damaged cells and disrupted biofilm architecture. The antimicrobial activity continued to spread from the anode for at least 2 days, even after turning off the current. Based on possible electrochemical ractions of silver electrodes in chlorine containing medium; pH measurements of the medium post treatment; the time delay between initiation of treatment and observed bactericidal effects; and the presence of chlorotyrosine in the cell lysates, hypochlorous acid is hypothesized to be the chemical agent responsible for the observed (destruction/killing/eradication) of these biofilm forming bacteria. Similar killing was obtained with gels containing only bovine synovial fluid or human serum. These results suggest that our in vitro model could serve as a platform for fundamental studies to explore the effects of electrochemical treatment on biofilms, complementing clinical studies with electroceutical dressings.
## Introduction
Biofilms are aggregates of microorganisms with high cell densities embedded in a self-produced extracellular polymeric substance (EPS) that are adherent to each other and/or a surface1. Bacteria growing in biofilms cause a wide range of chronic infections2 and biofilms in chronic wounds impact over 90% of the infections, which significantly hinders wound healing. Chronic wounds affect over 6.5 million patients with an estimated \$25 billion in healthcare costs annually3. Standard care approaches of antibiotic treatment and innate immune response are often insufficient for biofilm infection mitigation4,5.
While long-term antimicrobial therapy with multiple antibiotics can be effective in some cases6,7,8, treatment failure due to antibiotic resistance7, systemic negative effects on the host, and the cost of medical and surgical treatment due to the presence of biofilms are on the rise9,10,11. Development of novel strategies for treatment is necessary, especially given these challenges. Recently, there have been several novel approaches to developing electroceutical dressings, that use either electric fields or currents to remediate biofilms while accelerating wound healing12,13,14. These alternates to antibiotic treatments are in various stages of commercialization but lack a fundamental understanding of the underlying principles behind their efficacy and limitations. The Procellera and Arthrex dressings are examples of electroceuticals that apply electric fields but have no current flow12. The patterned electroceutical dressing (PED) is one example (Fig. 1) of the use of direct current (DC) to enhance or replace existing antibiotic regimens13. DC has been demonstrated to have killing or eradication efficacy against planktonic bacteria in static and flowing systems14,15,16,17,18 with effects dependent on previously used electrode materials such as stainless steel (SS)19,20, carbon, platinum, and gold2,21,22 and on the composition of the medium23.
Costerton and co-workers first reported enhancement of antibiotic efficacy against microbial biofilms by application of current which they termed the “bioelectric effect”22,24. For instance, a previous research report showed use of DC at a current density of nearly 2.1 mA/cm2 for enhanced elimination of bacterial biofilms in combination with various industrial biocides14. In the same work, it was reported that switching the polarity of the flowing current had little or no effect on the survival of Pseudomonas aeruginosa (PA) biofilms; however, the combination of direct current and quaternary ammonium biocide significantly increased killing of biofilms grown on SS studs14. More recently, hydrogen peroxide produced from an electrochemical scaffold in a liquid bacterial culture medium was used to eradicate PA biofilms14,25. In an effort to closely mimic the chemical environment to treat biofilms in vivo, Sandvik et al.23 showed that in an aqueous solution of physiological saline, hypochlorous acid (HOCl) is produced electrochemically and reduced the surface concentration of Staphylococcus epidermidis biofilms from approximately 107 to 102 CFU/cm2. While most of the previous studies have been done in liquid media that are less applicable to wound biofilms, our current study focuses on the use of an in vitro agar based model as it mimics conditions similar to soft tissues in terms of providing a soft surface and a diffusion dominated environment. Mass transfer is an important consideration since in an aqueous environment, mixing of products produced at the electrodes may be rapidly diluted in the bulk fluid such that beneficial effects may only be observed at the electrode. Conversely, in systems with relatively small volumes, products may be produced in sufficiently high concentrations where beneficial effects are observed everywhere in the system. A diffusion dominated environment allows gradients to develop over time, mimicking the environment in tissues.
Wound biofilms usually impact soft tissue in vivo, and most in vitro electrochemical methods have evaluated interactions of electric current on biofilms in liquid media (e.g. Sandvik 2013). Therefore, a significant knowledge gap remains in how electrical current may impact bacterial biofilms in soft tissue. In the present work, the bioluminescent P. aeruginosa Xen41 strain was used to form lawn biofilms on agar plates that serve as in vitro platforms to mimic soft tissue. Two rectangular, silver foil electrodes were embedded at the bottom of the agar plate and were not in direct contact with the overlying biofilm. The influence of an applied current on the activity and killing of PA biofilm bacteria was assessed by bioluminescence, viable cell counts, and scanning electron microscope (SEM) imaging.
## Results
### Agar based in vitro biofilm model
We developed an in vitro model for assessing the efficacy of electroceutical treatment on PA biofilms (Fig. 2). After 24 h, the bacteria formed a relatively uniform lawn over the agar. The cell surface density was 4.8 ± 2.4 × 109 CFU/cm2 (mean ± 1 SD, n = 3). Bioluminescent intensity was relatively constant over the lawn but diminished over longer periods of time after discontinuing current (i.e. well in excess of 2 days or 48 h), presumably as the bacteria started to become nutrient depleted. The bacterial lawn in this model satisfies the criteria for being considered a biofilm as it is an immobile community of bacteria attached to a living surface (agar) and embedded in an extracellular polymeric matrix (EPS) that they have produced. Moreover, these lawns do not show significant reduction in CFUs at concentrations of the antibiotic tobramycin, which does significantly reduce the planktonic cell count (CFUs) as shown in Fig. S1. Such antibiotic tolerance of these lawns of P. aeruginosa is one of the hallmark characteristics of biofilms.
### P. aeruginosa biofilms are killed by DC current
Lawn biofilms of PA-Xen41 grown for 24 h when treated with DC generated from a 6 V source (with 1 kΩ ballast), showed inactivation as observed by the decrease in bioluminescent biofilm bacteria over the anode monitored over a 48 h time window (Figs 3 and S2). Some inactivation at the proximal region of the anode was detected at 24 h with greater inactivation over the whole anode at 48 h. Visual inspection showed that the biofilm had been physically cleared from the anode area as evidenced by a reduction in opacity, presumably reflecting cell lysis in this region (Fig. 3). This was confirmed with In Vivo Imaging System (IVIS) imaging and CFU counts.
Agar plugs from the treated biofilms over the anode revealed a ~4 log reduction at 48 h and 72 h (p < 0.05 at 48 h and 72 h), whereas there was no statistically significant reduction in cell count at the cathode and electrically untreated control biofilms at these time points (Fig. 4). SEM imaging of sections of the lawn biofilms over the anode depicted healthy rod shaped bacterial cells embedded within the dense biofilms before the current was applied (Fig. 5). However, 24 h post current treatment, it was evident that the cells had begun to lyse. At 48 h, no evidence of distinct individual cells could be found and only bacterial debris were apparent (Fig. 5). In the SEM images of the treated biofilms, there appear to be no healthy individual cells in contrast with the CFU data (Fig. 4), which shows that some viable cells are present. The SEM characterizes a very small region (<50 µm × 50 µm, c.a. 0.0025 mm2) directly over the anode, while the punch represents a larger area (12.6 mm2) and likely included some live cells at the margin of the killing zone. In this case, the CFU data might actually underestimate the log reduction in the region directly above the anode. Furthermore, since the CFU showed a 4-log reduction, we would expect to see 1 healthy cell out of 10,000, or 1 per average field of view which may be difficult to detect or be buried within the debris of the dead biofilm.
### Silver containing compounds are not responsible for biofilm inhibition
In order to better understand the nature of the electrochemical changes that may be occurring in response to the electric current, EDS analysis was performed across the thickness of the agar over the anode at 48 h (Fig. S3). From the elemental analysis it was evident that Ag is only present in the black layer that forms on the surface of anode and no Ag is detected within the agar or near the biofilm. In contrast, chlorine (Cl) was detected throughout the agar and also in the black layer on the surface of the anode. The atomic percentage of both Ag and Cl is found to be approximately 20% in the black layer that formed on the anode surface and suggests that this black layer acted as a passivating layer of AgCl by stoichiometric proportion. While no Ag is detected above the anode in the agar, killing was observed on the anode. The above results show that an anti-biofilm compound was produced at the anode after applying current in addition to the electrochemical production of AgCl.
### Electric current generates compounds at anode that destroys P. aeruginosa biofilms
We hypothesize that the cidal agent generated at the anode in our experiments is HOCl. Direct detection of electrochemically produced HOCl is not possible since it reacts with components of the growth media as well26,27,28. Therefore, we assayed for 3-chlorotyrosine, a complex generated by the reaction of HOCl or chloramines with cytoplasmic proteins29,30, using western blot analysis of PA lawn biofilms treated with electric current for 24 h. Three independent experiments showed the presence of two bands of chlorotyrosine, as shown in Fig. 6A which displays a representative image. Densitometry quantification of the western blot showed significantly higher content of 3-chlorotyrosine in samples taken at the anode (after exposure to current) as compared to the control (Fig. 6B). The SEM-EDS results along with this chlorotyrosine data serve to further bolster the hypothesis that chlorine chemistry is centrally involved in the observed killing of PA lawn biofilms in this work.
### Growth and killing of P. aeruginosa biofilms on agar containing biological fluids
PA-Xen41 was grown on 1.5% (wt/vol) agar supplemented with either 40% human serum (HS) or 40% bovine synovial fluid (BSF). Compared to the biofilm lawns grown on TSA, relatively greater killing was observed with BSF and HS as can be seen from the reduction in bioluminescence over the anode (Figs 7 and 8). Interestingly, we also observed a reduction in activity over the cathode area with the biofilm lawn on the agar supplemented with HS. Remarkably, the area of the zone of inactivation around the anode continued to increase up to 72 h after the current was turned off (Fig. 8).
### Inhibitory compounds are generated rapidly and are stable over time
To determine the characteristic time scale of generation and stability of the inhibitory compounds, current was driven through the TSA for 1 h at 37 °C and then turned off. A planktonic culture of PA-Xen41 was then spread onto the agar surface and incubated for 24 h at 37 °C. After the incubation period, inhibition of bacterial growth around the anode was visibly evident (Fig. 9A(a–c)). This demonstrates that the antibacterial product was produced at inhibitory concentrations within at most 1 h of electroceutical treatment and was stable well after the current was turned off, when there should be no further generation of electrochemical products.
Similarly, 24 h lawn biofilms (grown prior to electroceutical treatment and subsequently) treated for 1 h with DC, showed a zone of killing that had formed sometime between 7 and 19 h after the current had been turned off (Fig. 9B). As with the observations reported above, the zone of killing around the anode continued to increase over time (Fig. 9B). This observed lag before the biofilm is destroyed is likely due to a combination of the time taken to produce cidal concentrations of the antimicrobial product and the time taken for this cidal substance to diffuse from the anode at the bottom of the agar and culture plate to the lawn biofilm on the agar surface.
### Changes in pH near the electrodes
In order to determine whether current flow causes changes in local pH in the agar, a quantitative measurement of pH was performed using a pH microelectrode (Microelectrodes Inc, USA) over the agar surfaces above the anode, cathode and locations 2 cm laterally away from the electrodes. These measurements at different time points with current flow revealed that the pH is in the range of 6.7 ± 0.1 over the anode, 8.4 ± 0.1 over the cathode and 7.1 ± 0.1 away from the electrodes, confirming the qualitative colorimetric determination (Supplementary Figure S3). It is of note that biofilms are not affected by such variations of pH alone31,32.
### Biofilm killing with gold electrodes
Cidal results obtained with Ag electrodes were also observed with Au, and with a larger zone of inhibition over the anode (Fig. S4). However, with Au anodes and cathodes, a zone of inactivation was also observed around the cathode unlike what was observed with Ag electrodes. These differences suggest that electroceutical effects depend on electrode material. The implications of these results are further discussed in the following section.
## Discussion
Electroceutical wound dressings are being developed to treat biofilm related infections12,33,34. The class of electroceutical wound dressings that drive a current through the wound contain Ag as at least one of the main functional components, due to cost and due to the previously known antimicrobial properties of Ag35,36,37. A systematic evaluation of the bactericidal effects of electroceutical wound dressing is presently lacking, and this study using an in vitro agar model, is a significant advancement in understanding the extensive parametric and mechanistic aspects of why electroceutical dressings are effective in eliminating bacterial biofilms. The PA biofilms used in this work have been demonstrated to be resistant to tobramycin, where corresponding concentrations as high as 16 µg/ml have profound effects in reducing the populations of planktonic bacteria to well below clinical infection thresholds (see Fig. S1). Most previous works have evaluated effects of electrical stimulation of bacteria in liquid media, for example, antibacterial activity has been reportedly observed against Escherichia coli in salt solutions38; E. coli, Proteus species and Klebsiella pneumoniae in synthetic urine21; and E. coli, Staphylococcus aureus and Bacillus subtilis in water39,40. In this work, an agar based in vitro model relevant for soft tissue wounds has been developed to determine the effect of electrical current on PA biofilms, and is an important step between laboratory and clinical practice.
Using reported wound drainage rates and making reasonable assumptions regarding wound size and wound thickness, the mass transfer characteristics of a typical wound environment were estimated (see Supplemental Information). As described in detail in the Supplementary Data, a typical soft tissue wound was determined to be diffusion limited for HOCl (assuming that it is the active cidal product produced by the electroceutical treatment) with estimates quantified by using the non-dimensional ratio of advection to diffusion over a relevant spatial length scale, i.e., the Peclet number (Pe). Under diffusion limited transport conditions, the transport of compounds such as HOCl generated in the gel after applying current, is controlled by diffusion in the porous medium with little influence from advection. Therefore, any electrochemical reactions generating products at the anode and cathode (and any products resulting from subsequent intermediate reactions with components in the media) will generate gradients from the electrodes and extend into the TSA over length scales determined by chemical from which they were producedspecies-specific diffusivities in a gel or any tissue-like medium, and likely extending from a few millimeters to centimeters. Thus, antimicrobial activity may be apparent at distances from the electrodes over extended periods of time as the generated cidal species continue to diffuse away from the electrodes in the vicinity from which they were produced.
Our work builds on previous work showing the cidal effects of electric current on planktonic bacteria (P. aeruginosa, S. epidermidis, S. gordonii, E. coli, S. aureus) in solutions and in bacterial biofilms11,13,22,37,38,39,40,41. Where our work differs from these earlier works is in (1) developing a useful in vitro agar based platform on which scientific studies may be conducted to determine operating mechanisms for silver-based electroceutical dressings in clinical settings, (2) limiting the current driven through the agar medium, and (3) in determining that the bactericidal effects are due to dissemination of cidal species produced at the electrode. Our agar based in vitro model can therefore serve to identify key mechanisms and parameters that need to be optimized for the laboratory-to -clinic translation of electroceutical bactericidal technology.
The key findings of this work on biofilms are summarized in Figs 35, where a 4-log reduction in PA biofilms was observed after 48 h of electrical treatment. The rate of reduction in biofilms over 48 h was greater than between 48 h and 72 h. This result may be caused by the possibility that the production of antimicrobial agent at the anode was limited to early time points by passivation AgCl formation as evidenced by the observed black deposit and the EDS data. This suggests that silver as an electrode choice may be self-limiting and might not be best suited for extended treatment periods, since remaining viable cells have the potential to re-establish the infection. However, present clinical practice typically requires dressings to be changed every 3 days so that application of subsequent treatments in intervals of 3 days would reduce or eliminate re-establishment of the infection. Moreover, we anticipate such an electroceutical based technology would be used in conjunction with antibiotic therapy. In future work we will be exploring the use of non-reactive electrode materials such as graphite which will have potential for extended and repeated antimicrobial production. Figures 7 and 8 show the results of PA-Xen41 biofilms grown on gels formed with physiologically relevant fluids such as HS and BSF (both of which contain NaCl at physiological salt concentrations). As can be seen from these figures, PA-Xen41 biofilms were eradicated on agar containing HS and BSF. This bodes well for in vivo treatment of such biofilms since chloride ions are known to be present under physiological conditions. Perhaps the most significant result presented in this work is captured in Fig. 9, where the electroceutical treatment is applied for 1 h, then the current shut-off, after which the bacterial lawn was incubated for 24 h. It can be seen from Fig. 9 that cidal effects are evident well after the electric current treatment is off, implying that the observed effect is chemical and diffusional in nature. In other words, a stable antimicrobial agent must have been produced at the anode and subsequently diffused through the agar to later kill the lawn biofilms immediately above the anode.
A reasonable hypothesis from our results as well as from existing work is that the applied current generates an antimicrobial agent or agents at the anode that are stable and able to diffuse to the surface of the agar, whereupon they destroy the biofilm. Sandvik et al.23 have suggested a possible production of HOCl when current is driven through aqueous media, a potent disinfectant responsible for eradication of biofilms. The experimental results presented in this work cannot unequivocally identify the bactericidal agent(s). Nevertheless, our results do indirectly point to a potential candidate, namely, HOCl. From known electrochemistry of aqueous media containing containing Cl salts22, it is possible to identify the following reactions at the anode and cathode:
Anode:
$$Ag(s)+C{l}^{-}\to AgCl(s)+{e}^{-}$$
$$C{l}^{-}+2{H}_{2}O\to {H}_{3}{O}^{+}+HOCl+2{e}^{-}$$
accompanied by
$${H}_{3}{O}^{+}+OC{l}^{-}\rightleftharpoons HOCl+{H}_{2}O$$
where the first two reactions, are surface-mediated and the electrons are injected into the conduction band of the electrode. The third reaction above can proceed in the forward or reverse direction, but is known to progress in the forward direction at pH values below about 7.541. A corresponding reaction at the cathode is:
Cathode
$$2{e}^{-}+2{H}_{2}O\to {H}_{2}+2O{H}^{-}$$
Note that H3O+ is produced at the anode in the process, while OH is produced at the cathode. Also noteworthy is the fact that the production of HOCl requires two electrons transferred but only generates a single H3O+ unlike the reaction at the cathode which produces an OH for every electron transferred. This is due to the fact that the third reaction at the anode can be driven left or right depending on the pH of the solution. The pKa of HOCl is 7.53, therefore the reaction should favor production of HOCl and not OCl since the initial pH of the media is ~7.141. This would result in a smaller change in pH at the anode as compared to the change at the cathode. Indeed, this is precisely what was observed in our experiments (Table 1).
The cathodic and anodic reactions in the electroceutical treatment described in this work indicate localized changes with decrease in pH at the anode and a corresponding increase at the cathode. This is consistent with what has been previously reported in current driven systems23 and also with the data reported in the supporting information in this work (Table 1). It is unlikely that the pH changes at the electrodes are responsible for the observed remediation of the PA biofilms since even planktonic PA are known to be unaffected by such excursions in pH on either side of neutral. HOCl is a likely candidate for the observed biofilm inhibition, since biological fluids contain chlorides, speciation of free chlorine is pH dependent, and HOCl is predominantly generated at pH < 7.5 while hypochlorite (OCl) is expected to be produced at pH > 7.5. In addition, HOCl is known to be 100 times more reactive than the hypochlorite ion and a stronger disinfectant, and therefore may explain the bactericidal effects observed at the anode in contrast to the cathode in our experiments23.
HOCl can also react with proteins in the media to produce more stable protein chloramines (Fig. 6)26,42. Taurine chloramines are produced when the amino acid taurine reacts with HOCl43. Such products can be produced from interaction between HOCl and bacteria26,44. As shown in Fig. 6, the presence of 3-chlorotyrosine with current along with the observed changes in pH, as well as reported bactericidal effects reported in the literature, bolster the hypothesis that HOCl is present and is likely the cidal agent responsible for the killing of PA biofilms observed in this work.
This work has focused on the use of Ag electrodes in the electroceutical treatment of biofilms. The electrode material plays a significant role in the electrochemical generation of antimicrobial agents. Davis et al. (1989) reported the electrode material to be critical in studies applying electric current on bacterial samples in a liquid medium17. A gold cathode with carbon and platinum anodes were reported to be the most effective against the urinary tract planktonic pathogens E. coli and P. mirabilis, while silver, nickel, or copper electrodes corroded to the point of breaking during experiments17. In another study, the authors showed application of direct current through platinum electrodes resulted in an increase in the production of reactive oxygen species (ROS) responsible for killing of S. aureus, S. epidermidis and P. aeruginosa biofilms45. It is certainly possible that other cidal agents besides HOCl may be active in the PA biofilm remediation observed in our results. In other work, the release of metals from the electrodes has been intentionally used to deliver Ag for treatment46,47. It is interesting to note that our EDS analysis at different locations through the agar in our in vitro model, showed no presence of Ag beyond the electrode surface (Fig. S3), so that the bactericidal effects of silver alone can be ruled out in our experiments. In the results reported here, both Ag (Fig. 3) and Au (Fig. S4) electrodes were effective in generating electrochemical species leading to eradication of PA biofilms. However, (1) with Au electrodes, killing is observed additionally on the cathode and (2) no chloride layer equivalent to AgCl forms on Au electrodes. Therefore, it is possible that current driven systems using electrodes other than Ag could see excursions in pH well away from neutral and larger than those values measured in this work and potentially involve multiple cidal species.
A drawback of the in vitro agar based model introduced in this work is that both voltage and current vary through the course of the electroceutical treatment (Fig. S5), and it is likely that this will occur in vivo as well without active real-time sensing and control. Figure S5 shows the voltage measured across the Ag electrodes on the agar side, downstream of the 1 kΩ ballast resistor. In other words, in Fig. S5, $$V=(6-1000I)$$, where V is in Volts and I is in Amperes. As can be seen from the equation, as the current (I) approaches 0 A, the voltage across the agar approaches 6 V. At the instant current flow begins, it is relatively large (but limited to no more than 6 mA by the ballast), corresponding to the voltage being small across the agar between the electrodes. As the current flows and the electrochemical reactions at the electrode surface and in the agar proceed, the voltage begins to rise (as the AgCl layer forms at the anode and the electrochemical products change the resistance of the agar medium) and the current falls. In Fig. S5, the initial rapid changes in both current and voltage indicate formation of the AgCl layer at the anode, in concert with production of the cidal species that much later in the transient, destroys the biofilm at the agar surface. From known principles of electrochemistry, the magnitude of the voltage at the anode determines which electrochemical reactions are driven (i.e. which species are produced) and the magnitude of the current determines the concentration of species produced. There is therefore a significant opportunity in electroceutical treatments for active control of both current and voltage with real-time sensing of the electrical characteristics of a wound.
Overall, the eradication of biofilms in this study appears to be impacted by a multi-step mechanism, including generation of reactive chlorinated species, changes in pH, and migration of stable reactive species away from the electrodes over extended time periods even after the current has been turned off. Clearly, future work must focus on identifying specific reactive species and migration characteristics to determine efficacy of electrical stimulation in vivo. The agar based in vitro model presented here can be useful in translation of a laboratory technique to clinical practice. We have identified HOCl as the key cidal agent in electroceuticals based on supporting indirect evidence. HOCl is known as an inorganic bactericidal compound of innate immunity and is effective against a broad range of microorganisms48,49,50. Its presence, however, does raise the potential of cytotoxicity on host cells including immune cells. It is pertinent to emphasize that HOCl is produced by neutrophils and is a natural part of host immune response to pathogens51. It is well known that stimulated neutrophils produce HOCl/OCl at a rate of 12.3 ± 0.8 nmol per 5 × 106 neutrophils (or 2.46 fmol ± 0.2 fmol per stimulated neutrophil)51. In vitro cytotoxicity profile (L929 cells) and the in vivo safety profile of HOCl (at pH 4.0) in various animal models (rabbit and guinea pig) have shown HOCl to be a safe antimicrobial agent with lack of animal toxicity48. Minimal bactericidal concentration (MBC) of HOCl against P. aeruginosa is reported to be 0.35 µg/mL48. With respect to developing the technology as a therapeutic, there is little data on HOCl concentrations and exposure times required to kill P. aeruginosa, and whether these can be achieved within tolerable cytotoxicity limits. Sakarya et al. (2014) suggested the concentration of HOCl to inactivate P. aeruginosa biofilms was 27.5 µg/mL52. After 24 h incubation they showed complete killing (a 5 log reduction). However, Chena and Stewart (2000) reported that 15 µg/mL at pH 6.4 only caused a 1-log reduction53. Based on these data it appears that the susceptibility is likely system dependent and further in vitro and in vivo studies are required to determine whether we can operate within a therapeutic window. However, we point out that due to the medical complications in treating chronic biofilm wound infections recent expert opinion suggests early and aggressive treatment10, and as such a certain degree of cytotoxicity might outweigh the benefit of controlling the biofilm. HOCl could also be effective in combination with antibiotics where HOCl (at lower than MBC concentrations) could permeabilize the bacterial cells and complete eradication could be possible along with antibiotic treatment. This work lays the foundation for understanding the mechanisms behind the bactericidal effects of electroceutical treatments and can lead to better clinical interventions for antibiotic resistant biofilm infections perhaps even without the need for antibiotics.
## Methods
### Bacterial strain and culture conditions
A bioluminescent strain of Pseudomonas aeruginosa (PA) Xen41 (PerkinElmer, USA) was used in all the experiments reported here. PA-Xen41 is a PAO1 luminescent strain that harbors the luxCDABE cassette inserted in a constitutively expressed manner54,55. The bacterium was cultured in tryptic soy broth (TSB; Sigma Aldrich, USA). The glycerol stock cultures were stored at −80 °C and streaked onto fresh tryptic soy agar (TSA) plates containing 1.5% agar in TSB, that were incubated for 24 h at 37 °C in 5% CO2. The isolated colonies from the TSA plate were then transferred to 20 mL of TSB and incubated overnight on an incubator shaker set at a temperature of 37 °C and speed of 200 rpm.
### Agar based wound model for in vitro electroceutical testing
Silver electrodes were cut from 0.1 mm thick, 99.9% pure Ag foil (Sigma-Aldrich, USA). All electrodes were 3 mm wide and 10 cm in length. A pair of electrodes was laid flat at a distance 3 cm apart in a 150 mm diameter polystyrene Petri dish (Fisher Scientific, USA) with both electrodes extending 8 cm from the walls of the dish. The remaining 2 cm length of the Ag electrodes extended outside the Petri dish and were connected to a 6 V battery pack. The battery pack comprised two 3 V batteries (CR2032, Energizer, USA) in series with a switch and a 1kΩ ballast resistor (Fig. S2) to limit the total current to below 6 mA when the switch is turned on (Fig. 2). TSA (55 mL) was poured on the flattened electrodes placed in the Petri dishes. The agar was allowed to solidify to room temperature. The measured current was observed to decrease from 3.5 ± 0.38 mA (where ± indicates range of two independent observations) at time t = 0 when the switch was turned on, to 0.15 ± 0.06 mA (150 μA ± 60 μA) over a period of 24 h and therefore all experiments were limited to no more than 24 h current passage (Fig. S5).
### Preparing lawn biofilms of P. aeruginosa
Lawn biofilms of PA-Xen 41 were generated by spreading the overnight culture on TSA with the Ag electrodes embedded underneath the agar layer, which is approximately 3.6 mm thick. Briefly, 100 µL of the overnight PA-Xen41 culture grown in TSB was mixed with 9.9 mL TSB to make a 1:100 dilution. 400 µL of the diluted culture was spread onto the TSA with embedded electrodes contained in a polystyrene petri-dish (150 mm × 15 mm, Fisher Scientific, USA). The petri-dishes were incubated at 37 °C in 5% CO2 for 24 h to develop lawn biofilms of PA-Xen41. The PA lawns were verified as being biofilms by measuring their response to the antibiotic, tobramyacin (Fig. S1).
### Killing of P. aeruginosa biofilms by electric current
The bioluminescent strain of PA-Xen41 was used in the data reported here since it enables easy monitoring of growth and metabolic activity based upon changes in intensity56, where red indicates active cells and blue/black represents less active or dead cells (Fig. 3; IVIS images). Cell death of course is confirmed by CFU counts. The lawn biofilms of PA-Xen41 formed for 24 h on the TSA surface were subjected to electroceutical treatment by driving electric current through the embedded Ag electrodes for 24 h. As described earlier, a 6 V battery pack with a 1 kΩ ballast resistor connected in series was used to drive the current. The Petri dishes were kept at 37 °C for 24 h in 5% CO2 after which the battery was disconnected and the petri-dishes were incubated for an additional 24 h. Control samples were lawn biofilms formed for 24 h and incubated further for 48 h without electroceutical treatment.
### Colony forming units of lawn biofilms at different time intervals
The colony forming units (CFU/cm2) of lawn biofilms were measured (i) before applying current at time t = 0 h, (ii) when current was stopped at 24 h, and (iii) after incubation without current till the 48 h time point. Agar cylindrical plugs (4 mm in diameter) from different areas of the plates (over anode and cathode) were removed using biopsy punches with the plunger system (Integra Miltex, Fisher Scientific, USA). The agar plugs were placed in sterile PBS, vortexed, serially diluted and plated onto TSA medium. The petri-dishes were incubated for 24 h at 37 °C and colonies were enumerated. Control samples were lawn biofilms formed for 24 h and incubated further for 48 h without electroceutical treatment.
### Scanning electron microscope (SEM) imaging
Lawn biofilms of PA-Xen41 grown on TSA medium were imaged using a FEI Nova Nano SEM 400 (FEI™, Hillsboro, OR) with a field-emission gun electron source, at an accelerating voltage of 5 kV. Biopsy punch samples (~4 mm diameter) were collected from the anode region at different time points (0, 24, and 48 h). The punched out agar blocks with the biofilms were then placed in 24 well microtiter plates and the biofilm was fixed in a glutaraldehyde buffer for 48 h at 4 °C followed by dehydration with graded ethanol. The samples were then chemically dried overnight with hexamethyldisilazane (HMDS, Ted Pella Inc.). Before imaging, the samples were mounted on an aluminum stub and sputter coated with Au-palladium to minimize sample charging when exposed to the electron beam57,58,59.
### Energy dispersive x-ray spectroscopic (EDS) analysis
EDS elemental analysis was performed to determine the abundance of the elements contained in the inhibitory compound produced after applying current and to determine whether or not Ag leaches out from the electrodes into the agar. EDS analysis was performed using a Hitachi S-3000H SEM with EDAX Falcon EDS detection system model 132–10. The agar punches were obtained as mentioned above in the section on SEM analysis, and the samples were sliced (Fig. S2) to obtain depth profiles of elemental composition. The images of the EDS sample were taken without any coating, and using the SEM operating at 5 kV equipped with an Everhart-Thornley secondary electron detector.
### Detection of chemical nature of bactericidal compound produced by electroceutical process using western blot analysis
To investigate our hypothesis that HOCl may be a key agent in the observed killing of PA biofilms, we performed western blot analysis using a primary antibody (Hycult Biotech Inc., PA) which recognizes 3-chlorotyrosine protein adducts formed when HOCl reacts with proteins27,60,61,62. Briefly, the PA lawn biofilms were formed on TSA and electroceutical treatment was applied for 24 h. The biofilm over the anode was scraped off using sterile disposable inoculating loop (Fisherbrand, USA) and collected in a sterile eppendorf tube containing 1 mL of PBS and labelled (as anode). The control eppendorf tube contained 1 mL PBS with scraped 24 h lawn biofilm of PA from TSA (without electroceutical treatment). Both the eppendorf tubes were centrifuged at 10,000 rpm for 5 min. The supernatant was discarded and the resulting pellet was used for analysis. After labelling with the primary antibody, the signals were visualized using corresponding HRP-conjugated secondary antibody (1: 2,000; GE Healthcare Life Sciences, PA) and ECL Plus™ Western Blotting Detection Reagents (GE Healthcare Life Sciences, PA). Membranes were stripped and re-probed with anti-Flagellin B which served as the loading control28.
### Killing of lawn biofilms grown on agar surface with biological fluids
To determine whether or not biological fluids affect the killing of PA lawn biofilms using electric current, two additional sets of agar media were prepared using human serum (HS, Type AB, Atlanta Biologicals, GA) and bovine synovial fluid (BSF, Lampire Biological Laboratories, USA). HS or BSF containing agar was prepared with the biological fluid (400 mL/L) in distilled water and the solidifying agent (agar, 1.5% wt./vol.). The biological fluids were added to pre-sterilized media cooled to 40 °C to avoid degradation or coagulation of proteins and to preserve the constituents of the fluids. PA-Xen41 grown overnight was diluted (1:100 in TSB) and 400 µL of the diluted culture was spread onto the respective media containing Ag foils, as already described above for TSA media. Direct current was applied to the pre-grown lawn biofilms for 24 h. The battery was removed at 24 h and the petri-dishes were incubated further for another 24 h. Images (white light and IVIS) were taken at t = 0 (when current was turned on), 24 (current off), and 48 h (24 h after the current was turned off), respectively. Zone of inhibition measurements were performed on IVIS images. The widths of inhibition zones at distances of 2 cm (from electrode entering the agar), 4 cm (midway) and 6 cm (towards tip of the anode) beginning from top of the anode, were measured and the average width of the three measurements was plotted at each time point.
### Effect of varying current on killing of P. aeruginosa Xen41 biofilms
We anticipated that the magnitude of the current could impact the concentration of cidal specie(s) electrochemically produced and affect the extent of killing of the biofilm. In order to explore this possible effect, different ballast resistances were used and the killing efficiency quantified using IVIS imaging. With the different ballast resistors (6 kΩ, 1 kΩ and 500Ω) in series with the 6 V battery pack, initial currents were measured to be 0.65 ± 0.01 mA (650 ± 10 μA), 3.50 ± 0.38 mA (3500 ± 380 μA), and 5.85 ± 0.08 mA (5850 ± 80 μA) respectively (mean ± 1 SD, n = 2). With current flowing for 24 h, the final current recorded was 0.09 ± 0.002 mA (90 ± 2 μA), 0.15 ± 0.06 mA (150 ± 60 μA), and 0.157 ± 0.007 mA (157 ± 7 μA) respectively. Although qualitatively higher eradication of the biofilm was observable at 500Ω (Fig. S2), visible changes were also observed in the agar. In contrast, killing with the 1 kΩ ballast was relatively greater than with the 6 kΩ ballast with no degradation or visible change in the agar. Consequently, the 1 kΩ ballast resistor connected in series with the 6 V batteries was used in all the experiments reported here.
### Effect of short term electroceutical treatment on generation of bactericidal compound
Ag electrodes were embedded under TSA in 150 mm Petri dishes and current was applied for 1 h using 6 V batteries connected in series with 1 kΩ ballast. The current was then turned off after 1 h, and 12 h grown culture of PA-Xen41 (diluted 1:100) was spread on the surface of TSA. The Petri dishes were incubated for 24 h at 37 °C with 5% CO2.
### Changes in pH of the agar medium in the presence of direct current
To monitor changes in pH, the microelectrode (MI-710, Microelectrodes, Inc, NH, USA) was used. Direct current was applied as shown in Fig. 2, using 6 V with a 1 kΩ ballast. The pH was measured over the anode, cathode and different locations 2 cm laterally away from the electrodes by placing the microelectrode on the surface of agar. The pH of TSA medium was measured before (t = 0) and after (t = 24, 48 h) applied current.
### Effect of electrode material on killing of P. aeruginosa Xen41 biofilms
In order to determine whether the electrode material has any influence on killing of PA biofilms, Ag electrodes were replaced by Au (99.987% pure, Alfa Aesar) electrodes. The TSA medium was poured over the Au foils in Petri dishes, allowed to gel and lawn biofilms were then grown on top for 24 h, as described previously. Current was driven through the Au electrodes for 24 h. Plates were incubated for a further 48 h (24 h without DC, as with all Ag foil electrode experiments) and IVIS images were taken at that time.
### Statistical analysis
All experiments were performed in triplicates, unless otherwise indicated for specific experiments. Control and treated samples were compared by paired, two test distribution using Student’s t-test where P < 0.05 was considered a statistically significant difference. Data represented in bar graphs are plotted as the mean ± standard error (SE) from the mean.
## References
1. 1.
Flemming, H.-C., Wingender, J., Szewzyk, U., Steinberg, P. & Rice, A. S. Biofilms: an emergent form of bacterial life. Nature Reviews Microbiology 14, 563–575 (2016).
2. 2.
del Pozo, J. L. & Patel, R. The challenge of treating biofilm-associated bacterial infections. Clinical pharmacology and therapeutics 82, 204–209, https://doi.org/10.1038/sj.clpt.6100247 (2007).
3. 3.
Sen, C. K. et al. Human skin wounds: a major and snowballing threat to public health and the economy. Wound Repair and Regeneration 17(6), 763–771 (2009).
4. 4.
Wu, H., Moser, C., Wang, H.-Z., Høiby, N. & Song, Z.-J. Strategies for combating bacterial biofilm infections. International journal of oral science 7, 1 (2015).
5. 5.
Koo, H., Allan, R. N., Howlin, R. P., Stoodley, P. & Hall-Stoodley, L. Targeting microbial biofilms: current and prospective therapeutic strategies. Nature reviews. Microbiology (2017).
6. 6.
Konig, D. P., Schierholz, J. M., Munnich, U. & Rutt, J. Treatment of staphylococcal implant infection with rifampicin-ciprofloxacin in stable implants. Archives of orthopaedic and trauma surgery 121, 297–299 (2001).
7. 7.
Pavoni, G. L. et al. Conservative medical therapy of prosthetic joint infections: retrospective analysis of an 8-year experience. Clinical microbiology and infection: the official publication of the European Society of Clinical Microbiology and Infectious Diseases 10, 831–837, https://doi.org/10.1111/j.1469-0691.2004.00928.x (2004).
8. 8.
Zimmerli, W., Widmer, A. F., Blatter, M., Frei, R. & Ochsner, P. E. Role of rifampin for treatment of orthopedic implant-related staphylococcal infections: a randomized controlled trial. Foreign-Body Infection (FBI) Study Group. Jama 279, 1537–1541 (1998).
9. 9.
Darouiche, R. O. Treatment of infections associated with surgical implants. The New England journal of medicine 350, 1422–1429, https://doi.org/10.1056/NEJMra035415 (2004).
10. 10.
Schultz, G. et al. Consensus guidelines for the identification and treatment of biofilms in chronic non-healing wounds. Wound repair and regeneration: official publication of the Wound Healing Society and the European Tissue Repair Society, https://doi.org/10.1111/wrr.12590 (2017).
11. 11.
Snyder, R. J. et al. Wound Biofilm: Current Perspectives and Strategies on Biofilm Disruption and Treatments. Wounds: a compendium of clinical research and practice 29, S1–s17 (2017).
12. 12.
Banerjee, J. et al. Silver-zinc redox-coupled electroceutical wound dressing disrupts bacterial biofilm. PloS one 10, e0119531, https://doi.org/10.1371/journal.pone.0119531 (2015).
13. 13.
Prakash, S. et al. Antimicrobial wound dressing. United States patent (2016).
14. 14.
Sultana, S. T., Babauta, J. T. & Beyenal, H. Electrochemical biofilm control: a review. Biofouling 31, 745–758, https://doi.org/10.1080/08927014.2015.1105222 (2015).
15. 15.
Istanbullu, O., Babauta, J., Duc Nguyen, H. & Beyenal, H. Electrochemical biofilm control: mechanism of action. Biofouling 28, 769–778, https://doi.org/10.1080/08927014.2012.707651 (2012).
16. 16.
Davis, C. P., Arnett, D. & Warren, M. M. Iontophoretic killing of Escherichia coli in static fluid and in a model catheter system. Journal of clinical microbiology 15, 891–894 (1982).
17. 17.
Davis, C. P., Weinberg, S., Anderson, M. D., Rao, G. M. & Warren, M. M. Effects of microamperage, medium, and bacterial concentration on iontophoretic killing of bacteria in fluid. Antimicrobial agents and chemotherapy 33, 442–447 (1989).
18. 18.
Davis, C. P., Wagle, N., Anderson, M. D. & Warren, M. M. Iontophoresis generates an antimicrobial effect that remains after iontophoresis ceases. Antimicrobial agents and chemotherapy 36, 2552–2555 (1992).
19. 19.
Wattanakaroon, W. & Stewart, P. S. Electrical enhancement of Streptococcus gordonii biofilm killing by gentamicin. Archives of Oral Biology 45, 167–171 (2000).
20. 20.
Caubet, R. et al. A radio frequency electric current enhances antibiotic efficacy against bacterial biofilms. Antimicrobial agents and chemotherapy 48, 4662–4664 (2004).
21. 21.
Davis, C. P., Wagle, N., Anderson, M. D. & Warren, M. M. Bacterial and fungal killing by iontophoresis with long-lived electrodes. Antimicrobial agents and chemotherapy 35, 2131–2134 (1991).
22. 22.
Khoury, A. E., Lam, K., Ellis, B. & Costerton, J. W. Prevention and control of bacterial infections associated with medical devices. ASAIO journal (American Society for Artificial Internal Organs: 1992) 38, M174–178 (1992).
23. 23.
Sandvik, E. L., McLeod, B. R., Parker, A. E. & Stewart, P. S. Direct electric current treatment under physiologic saline conditions kills Staphylococcus epidermidis biofilms via electrolytic generation of hypochlorous acid. PloS one 8, e55118, https://doi.org/10.1371/journal.pone.0055118 (2013).
24. 24.
Blenkinsopp, S. A., Khoury, A. E. & Costerton, J. W. Electrical enhancement of biocide efficacy against Pseudomonas aeruginosa biofilms. Applied and environmental microbiology 58, 3770–3773 (1992).
25. 25.
Sultana, S. T., Call, D. R. & Beyenal, H. Eradication of Pseudomonas aeruginosa biofilms and persister cells using an electrochemical scaffold and enhanced antibiotic susceptibility. NPJ biofilms and microbiomes 2, 2, https://doi.org/10.1038/s41522-016-0003-0 (2016).
26. 26.
Pattison, D. I., Hawkins, C. L. & Davies, M. J. Hypochlorous acid-mediated protein oxidation: how important are chloramine transfer reactions and protein tertiary structure? Biochemistry 46, 9853–9864 (2007).
27. 27.
Roy, S. et al. P21waf1/cip1/sdi1 as a central regulator of inducible smooth muscle actin expression and differentiation of cardiac fibroblasts to myofibroblasts. Molecular biology of the cell 18, 4837–4846, https://doi.org/10.1091/mbc.E07-03-0270 (2007).
28. 28.
Barki, K. G. et al. Electric Field Based Dressing Disrupts Mixed-Species Bacterial Biofilm Infection and Restores Functional Wound Healing. Ann Surg, https://doi.org/10.1097/sla.0000000000002504 (2017).
29. 29.
Drabik, G. & Naskalski, J. W. Chlorination of N-acetyltyrosine with HOCl, chloramines, and myeloperoxidase-hydrogen peroxide-chloride system. Acta biochimica Polonica 48, 271–275 (2001).
30. 30.
Domigan, N. M., Charlton, T. S., Duncan, M. W., Winterbourn, C. C. & Kettle, A. J. Chlorination of tyrosyl residues in peptides by myeloperoxidase and human neutrophils. The Journal of biological chemistry 270, 16542–16548 (1995).
31. 31.
Xiong, Y. Q., Caillon, J., Drugeon, H., Potel, G. & Baron, D. Influence of pH on adaptive resistance of Pseudomonas aeruginosa to aminoglycosides and their postantibiotic effects. Antimicrobial agents and chemotherapy 40, 35–39 (1996).
32. 32.
Jones, E. M., Cochrane, C. A. & Percival, S. L. The Effect of pH on the Extracellular Matrix and Biofilms. Advances in Wound Care 4, 431–439, https://doi.org/10.1089/wound.2014.0538 (2015).
33. 33.
Ganesh, K. et al. A Wireless Electroceutical Wound Dressing Disrupts Mixed Species Bacterial Biofilm In A Porcine Preclinical Model. Wound Repair and Regeneration 23, A22 (2015).
34. 34.
Doxsee, K., Berthelot, R. & Neethirajan, S. Electroceutical disinfection strategies impair the motility of pathogenic Pseudomonas aeruginosa and Escherichia coli. bioRxiv, 088120 (2016).
35. 35.
Percival, S. L., Bowler, P. G. & Dolman, J. Antimicrobial activity of silver-containing dressings on wound microorganisms using an in vitro biofilm model. International wound journal 4, 186–191, https://doi.org/10.1111/j.1742-481X.2007.00296.x (2007).
36. 36.
O’Neill, M. A. et al. Antimicrobial properties of silver-containing wound dressings: a microcalorimetric study. International journal of pharmaceutics 263, 61–68 (2003).
37. 37.
Brett, D. W. A discussion of silver as an antimicrobial agent: alleviating the confusion. Ostomy/wound management 52, 34–41 (2006).
38. 38.
Pareilleux, A. & Sicard, N. Lethal Effects of Electric Current on Escherichia coli. Applied Microbiology 19, 421–424 (1970).
39. 39.
Matsunaga, T., Nakasono, S. & Masuda, S. Electrochemical sterilization of bacteria absorbed on granular activated carbon. FEMS microbiology letters 72, 255–259 (1992).
40. 40.
Matsunaga, T. et al. Disinfection of drinking water by using a novel electrochemical reactor employing carbon-cloth electrodes. Applied and environmental microbiology 58, 686–689 (1992).
41. 41.
Horváth, A. K. & Nagypál, I. Kinetics and mechanism of the reaction between hypochlorous acid and tetrathionate ion. International Journal of Chemical Kinetics 32, 395–402 (2000).
42. 42.
Pattison, D. I. & Davies, M. J. Absolute rate constants for the reaction of hypochlorous acid with protein side chains and peptide bonds. Chemical research in toxicology 14, 1453–1464 (2001).
43. 43.
De Carvalho Bertozo, L., Morgon, N. H., De Souza, A. R. & Ximenes, V. F. Taurine bromamine: reactivity of an endogenous and exogenous anti-inflammatory and antimicrobial amino acid derivative. Biomolecules 6, 23 (2016).
44. 44.
Gray, M. J., Wholey, W.-Y. & Jakob, U. Bacterial responses to reactive chlorine species. Annual review of microbiology 67, 141–160 (2013).
45. 45.
Brinkman, C. L. et al. Exposure of Bacterial Biofilms to Electrical Current Leads to Cell Death Mediated in Part by Reactive Oxygen Species. PloS one 11, e0168595, https://doi.org/10.1371/journal.pone.0168595 (2016).
46. 46.
Berger, T., Spadaro, J., Bierman, R., Chapin, S. & Becker, R. Antifungal properties of electrically generated metallic ions. Antimicrobial agents and chemotherapy 10, 856–860 (1976).
47. 47.
Berger, T., Spadaro, J., Chapin, S. & Becker, R. Electrically generated silver ions: quantitative effects on bacterial and mammalian cells. Antimicrobial agents and chemotherapy 9, 357 (1976).
48. 48.
Wang, L. et al. Hypochlorous acid as a potential wound care agent: part I. Stabilized hypochlorous acid: a component of the inorganic armamentarium of innate immunity. Journal of burns and wounds 6 (2007).
49. 49.
McKenna, S. M. & Davies, K. The inhibition of bacterial growth by hypochlorous acid. Possible role in the bactericidal activity of phagocytes. Biochemical Journal 254, 685–692 (1988).
50. 50.
Vissers, M. C. & Winterbourn, C. C. Oxidation of intracellular glutathione after exposure of human red blood cells to hypochlorous acid. Biochemical Journal 307, 57–62 (1995).
51. 51.
Kalyanaraman, B. & Sohnle, P. G. Generation of free radical intermediates from foreign compounds by neutrophil-derived oxidants. The Journal of clinical investigation 75, 1618–1622, https://doi.org/10.1172/jci111868 (1985).
52. 52.
Sakarya, S., Gunay, N., Karakulak, M., Ozturk, B. & Ertugrul, B. Hypochlorous Acid: an ideal wound care agent with powerful microbicidal, antibiofilm, and wound healing potency. Wounds: a compendium of clinical research and practice 26, 342–350 (2014).
53. 53.
Chen, X. & Stewart, P. S. Biofilm removal caused by chemical treatments. Water research 34, 4229–4233 (2000).
54. 54.
Babrowski, T. et al. Pseudomonas aeruginosa virulence expression is directly activated by morphine and is capable of causing lethal gut derived sepsis in mice during chronic morphine administration. Annals of Surgery 255, 386–393, https://doi.org/10.1097/SLA.0b013e3182331870 (2012).
55. 55.
Hirt, H. & Gorr, S. U. Antimicrobial peptide GL13K is effective in reducing biofilms of Pseudomonas aeruginosa. Antimicrobial agents and chemotherapy 57, 4903–4910, https://doi.org/10.1128/AAC.00311-13 (2013).
56. 56.
Dusane, D. H. et al. Effects of loading concentration, blood and synovial fluid on antibiotic release and anti-biofilm activity of bone cement beads. J Control Release 248, 24–32, https://doi.org/10.1016/j.jconrel.2017.01.005 (2017).
57. 57.
Fuest, M., Boone, C., Rangharajan, K. K., Conlisk, A. T. & Prakash, S. A three-state nanofluidic field effect switch. Nano letters 15, 2365–2371 (2015).
58. 58.
Fuest, M., Rangharajan, K. K., Boone, C., Conlisk, A. & Prakash, S. Cation Dependent Surface Charge Regulation in Gated Nanofluidic Devices. Analytical Chemistry 89, 1593–1601 (2017).
59. 59.
Kellie, B. M., Silleck, A. C., Bellman, K., Snodgrass, R. & Prakash, S. Deposition of few-layered graphene in a microcombustor on copper and nickel substrates. RSC Advances 3, 7100–7105 (2013).
60. 60.
Das, A., Ganesh, K., Khanna, S., Sen, C. K. & Roy, S. Engulfment of apoptotic cells by macrophages: a role of microRNA-21 in the resolution of wound inflammation. J Immunol 192, 1120–1129, https://doi.org/10.4049/jimmunol.1300613 (2014).
61. 61.
Roy, S., Khanna, S., Nallu, K., Hunt, T. K. & Sen, C. K. Dermal wound healing is subject to redox control. Molecular therapy: the journal of the American Society of Gene Therapy 13, 211–220, https://doi.org/10.1016/j.ymthe.2005.07.684 (2006).
62. 62.
Das, A. et al. Correction of MFG-E8 Resolves Inflammation and Promotes Cutaneous Wound Healing in Diabetes. J Immunol 196, 5089–5100, https://doi.org/10.4049/jimmunol.1502270 (2016).
## Acknowledgements
We thank The Ohio State University Infectious Disease Institute (IDI), Discovery Themes and Public Health Preparedness for Infectious Disease (PHPID) Transdisciplinary Team Grant, Departments of Microbial Infection and Immunity, Mechanical and Aerospace Engineering, and The Center for Regenerative Medicine and Cell-Based Therapies. We acknowledge support from the staff at Nanotech west, campus microscopy and imaging facility (CMIF), and center for electron microscopy and analysis (CEMAS) at The Ohio State University for assistance in various imaging and experimental aspects of this research effort. We also acknowledge partial personnel support from the US Army Research Office through grant number W911NF-16-1-0278 (SP) and the NIH R01 GM124436-01 (PS).
## Author information
Authors
### Contributions
P.S., S.P., D.J.W. and V.S. conceived and supervised the project. D.D., V.L., T.J. and C.W.P. performed the experiments. A.D. performed, analyzed and C.S. supervised and approved the western blot. D.S. performed the biofilm and planktonic MBC experiments. D.D., V.L., T.J., D.J.W., V.S., S.P. and P.S. interpreted the results. D.D. and P.S. wrote the initial drafts of the manuscript and D.J.W., S.P., V.S., S.R. and C.S. reviewed the manuscript. All authors read and approved the manuscript.
### Corresponding authors
Correspondence to Shaurya Prakash or Paul Stoodley.
## Ethics declarations
### Competing Interests
The authors declare no competing interests.
Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
## Rights and permissions
Reprints and Permissions
Dusane, D.H., Lochab, V., Jones, T. et al. Electroceutical Treatment of Pseudomonas aeruginosa Biofilms. Sci Rep 9, 2008 (2019). https://doi.org/10.1038/s41598-018-37891-y
• Accepted:
• Published:
• ### Mechanisms and Control Strategies of Antibiotic Resistance in Pathological Biofilms
• Ying Luo
• , Qianqian Yang
• , Dan Zhang
• & Wei Yan
Journal of Microbiology and Biotechnology (2021)
• ### Frequency Based Control of Antifouling Properties Using Graphene Nanoplatelet/Poly(Lactic-co-Glycolic Acid) Composite Films
• Camilo Zamora-Ledezma
• , Alanis Chicaiza-Zambrano
• , Nelson Santiago Vispo
• , Alexis Debut
• , Karla Vizuete
• , Víctor H. Guerrero
• , Cristina E. Almeida
• & Frank Alexis
Composite Interfaces (2021)
• ### New developments in anti-biofilm intervention towards effective management of orthopedic device related infections (ODRI’s)
• Ping Li
• , Zhenwu Gao
• , Zhenwei Tan
• , Jun Xiao
• , Li Wei
• & Yirui Chen
Biofouling (2021)
• ### Bioelectrical understanding and engineering of cell biology
• Zoe Schofield
• , Gabriel N. Meloni
• , Peter Tran
• , Christian Zerfass
• , Giovanni Sena
• , Yoshikatsu Hayashi
• , Murray Grant
• , Sonia A. Contera
• , Shelley D. Minteer
• , Minsu Kim
• , Arthur Prindle
• , Paulo Rocha
• , Mustafa B. A. Djamgoz
• , Teuta Pilizota
• , Patrick R. Unwin
• , Munehiro Asally
• & Orkun S. Soyer
Journal of The Royal Society Interface (2020)
• ### Ultrastructure imaging of Pseudomonas aeruginosa lawn biofilms and eradication of the tobramycin-resistant variants under in vitro electroceutical treatment
• Varun Lochab
• , Travis H. Jones
• , Devendra H. Dusane
• , Casey W. Peters
• , Paul Stoodley
• , Daniel J. Wozniak
• , Vish V. Subramaniam
• & Shaurya Prakash
Scientific Reports (2020)
|
|
# Replicating computation to avoid communication
Version's name: Replicating computation to avoid communication ; a version of the Communication computation trade-off program.
Repository: [home] and version downloads: [.zip] [.tar.gz] [.tar.bz2] [.tar]
Implemented best practices: Replicating computation to avoid communication (gemm) ·
The best-practice algorithm is
while ( t < t_end ) {
1. send/receive particle positions to/from all other processes
2. determine which non-bonded forces need to be computed
3. compute the force for particles assigned to this processes
4. receive atoms from other processors and compute interactions:
compute force: F[i,j];
compute force: F[j,i] = -F[i,j];
}
Stage 4 in the algorithm is used to guarantee that $${\bf F}_{i,j}$$ is not calculated twice as $${\bf F}_{i,j}=-{\bf F}_{j,i}$$ by replicaing computation rather than comunicating it from/to another MPI process. Because the computation are trivial, the overall result is that this algorithm is faster wrt the pattern one. When computation part is time consuming this algorithm may be inefficent and sharing workload among MPI processes is to be preferred.
Code purpose:
md_mpi_comm_avoid.c can be used to demonstrate the good MPI Transfer efficiency for a large number of particles when using a simple MPI_SEND/MPI_RECV strategy to split the force calculation among processes.
How to use:
The Makefile command make generates an executable file named md_mpi_comm_avoid.exe using the GNU compiler. To run the code, first define the number of time steps and atoms to be used and then launch the application on a specific number of MPI processes, for example replace NUMATOMS, NUMSTEPS and NUMPROC in the following, where these are respectively the number of time steps, atoms, and MPI processes.
mpirun -n <NUMPROC> ./md_mpi_comm_avoid.exe NUMSTEPS NUMATOMS
Default values are assigned if the number of iterations is not provided, or are less than 0.
Screen output will be generated, similar to the following one:
> POP WP7 best-practice
> Version of code: best-practice version without performance bottleneck
> Implements Best-practice: Replicating computation to avoid communication
> Problem size: NUMSTEPS = 10 TOTATOMS = 2000
> Best-practice wall time (integration) = 10.30
|
|
# Row Echelon Form and Reduced Row Echelon Form in finding the rank of a matrix (2).
To find the rank of the matrix why it is enough to find the row echelon form only? why the REF gives us the maximum number of linearly independent row vectors in the matrix? could anyone explain this for me please?
I understood from this question that we can use both RREF and REF: Calculating the rank of a matrix , reduced row echelon or row echelon?
## 2 Answers
The key point is that two vectors like
• $$v_1=(a_1,b_1,c_1,\cdots)$$
• $$v_2=(0,b_2,c_2,\cdots)$$
can't be linearly dependent for $$a_1\neq 0$$ because we can't never obtain the zero vector by linear combinations.
Therefore in the RREF we can show that row vectors are lineraly independent.
• My question is why it is enough to calculate the REF and not the RREF? I see that your are saying I should use RREF. – Mathstupid Oct 2 '19 at 19:15
• @Smart REF suffices for the motivation I've given here. The scaling to obtain 1for the pivot doesn't change the conditions for independence. – user Oct 2 '19 at 19:22
We know that elementary row operations doesn't change the rank of a matrix.
Also, we know that row echelon form / reduced row echelon form can be obtained by finite number of elementary row operations, hence REF and RREF preserve the rank of the original matrix.
Hence, by obtaining the REF or RREF, we can tell the rank of the original matrix, which is equal to the number of non-zero rows.
• My question is why it is enough to calculate the REF and not the RREF? I see that your are saying I should use RREF. – Mathstupid Oct 2 '19 at 19:15
• note that number of non-zero rows for both REF and RREF is the same. And for the purpose of computing rank, it suffices to stop at REF, since the rank is jsut equal to the number of non-zero rows. – Siong Thye Goh Oct 3 '19 at 0:56
|
|
# Derivation Compton Scattering
Level 3 (up to Physics B.Sc.)
Level 3 requires the basics of vector calculus, differential and integral calculus. Suitable for undergraduates and high school students.
Updated by Alexander Fufaev on
In Compton scattering, photons of a certain wavelength $$\lambda$$ are scattered by an electron. The scattered photon then has a different wavelength $$\lambda'$$. Here we want to derive a formula for the wavelength of the scattered photon.
## Collision of a photon with an electron at rest
Here we assume that the electron is at rest. Its momentum is therefore zero: $$\boldsymbol{P} ~=~ 0$$. If the electron is bound in an atom, then it should be very weakly bound. A photon with momentum $$\boldsymbol{p}$$ is scattered at this electron. To investigate this scattering process, we consider energy conservation as well as momentum conservation.
Total momentum before collision:
The total momentum before the collision corresponds only to the momentum of the photon $$\boldsymbol{p} ~+~ \boldsymbol{P} ~=~ \boldsymbol{p}$$, since the electron at rest has no momentum $$\boldsymbol{P}$$ before the collision.
Total momentum after collision:
After the collision the photon has an unknown momentum $$\boldsymbol{p}'$$. The photon collided with the electron, therefore the electron could have got a momentum $$\boldsymbol{P}'$$ too.
Conservation of momentum, which states that the total momentum before the collision must be EQUAL to the total momentum after the collision:
Conservation of momentum
Formula anchor
The energy of the photon before the collision is given by:
Photon energy
Formula anchor
Here $$\lambda$$ is the wavelength of the photon before the collision. We assume the wavelength in the experiment as known because we choose it ourselves.
Total energy before collision:
What about the energy of the electron before the collision? In any case it is NOT zero, although the resting state of the electron may suggest that... According to special relativity, the electron - even at rest - has an energy; a so-called rest energy:
Rest energy of the electron
Formula anchor
Here $$m_{e}$$ is the rest mass of the electron with the value: $$m_{e} ~=~ 9.1 ~\cdot~ 10^{-31} \, \mathrm{kg}$$. The total energy before the collision is thus:
Total energy before collision
Formula anchor
Total energy after collision:
After the collision, the wavelength $$\lambda$$ of the photon may have changed. We refer to the new wavelength of the photon as $$\lambda'$$. A changed wavelength means a changed energy of the photon:
Photon energy after the collision
Formula anchor
The electron has also changed its energy due to the collision. Besides the rest energy 3, which the electron already had before the collision, it may have got an additional kinetic energy, which you can notice if the electron is in motion after the collision.
The formula for classical kinetic energy $$\frac{1}{2} \, m \, v^2$$ is rather inappropriate here, because for the Compton scattering one usually uses photons with very high energy (X-rays and gamma rays). By the collision of the energetic photon and the resting electron, the electron can be brought to very high velocities, so that the formula for classical kinetic energy no longer applies. Therefore you have to calculate relativistically if you want to get useful results for the Compton scattering. That is, instead of using the classical formula, we use the relativistic total energy $$W_{\text e}'$$, which already includes the rest energy and relativistic kinetic energy of the electron:
Relativistic energy-momentum relation for the electron after the collision
Formula anchor
Thus, the total energy of the photon and electron after the collision is the sum of 5 and 6:
Photon-electron total energy after the collision
Formula anchor
According to the conservation of energy, the total energy of the system before the collision must be equal to the total energy after the collision:
Conservation of energy
Formula anchor
The relativistic total energy 7 of the electron gives us also the relationship between its energy and its momentum $$\boldsymbol{P}'$$. In this way, we can combine conservation of momentum with conservation of energy. Rearrange the conservation of momentum 1 for $$\boldsymbol{P}'$$:
Electron momentum after the collision is the difference of photon energies
Formula anchor
Since momentum $$\boldsymbol{P}'^2$$ occurs in total energy 7, we square Eq. 9 to obtain a relation for $$\boldsymbol{P}'^2$$ (we use a binomial formula for this):
Squared electron momentum after the collision
Formula anchor
The last summand contains the scalar product between $$\boldsymbol{p}$$ and $$\boldsymbol{p}'$$. We can write it as follows using the angle $$\theta$$ between $$\boldsymbol{p}$$ and $$\boldsymbol{p}'$$: $$\boldsymbol{p} ~\cdot~ \boldsymbol{p}' ~=~ p \, p' \, \cos(\theta)$$. Here $$p ~=~ |\boldsymbol{p}|$$ and $$p' ~=~ |\boldsymbol{p}|$$ are the magnitudes of the two momentum vectors. Moreover, $$\boldsymbol{P}'^2 ~=~ P'^2$$ holds. Let's use that in Eq. 10:
Squared electron momentum via angle
Formula anchor
Rearrange the total energy 6 of the electron for $$P'^2$$:
Electron momentum after the collision using electron energies
Formula anchor
Substitute the squared momentum 11 into Eq. 12:
Electron momentum inserted into relativistic total energy
Formula anchor
Next, we use photon energies 2 and 5 to replace the photon momentum magnitudes with $$p = \frac{W_{\text p}}{c}$$ and $$p' = \frac{W_{\text p}'}{c}$$:
Total energy before and after collision combined with conservation of momentum
Formula anchor
Let us first multiply Eq. 14 by $$c^2$$, then rearrange the conservation of energy 8 for $$W_{\text e}' = W_{\text p} ~+~ W_{\text e} ~-~ W_{\text p}'$$, and then insert it:
Total energies and angles between scattering vectors
Formula anchor
Multiply out the parenthesis in 15:
Total energies and angles multiplied out
Formula anchor
Some summands in 16 cancel out:
Total energies and angles canceled
Formula anchor
Bring $$2W_{\text p}\,W_{\text p}' = 2W_{\text p}' \, W_{\text p}$$ to the left side and factor it out:
Total energies and angles canceled and factored out
Formula anchor
Divide the whole equation by $$2W_{\text p}' \, W_{\text p} \, W_{\text e}$$:
Reciprocals of the energies and scattering angle
Formula anchor
Then we insert the photon energies 2 and 5. We also insert the energy $$W_{\text e}$$ of the electron before the collision, which corresponds to the rest energy 3:
Relationship between wavelengths and scattering angle
Formula anchor
Let's multiply the equation by the factor $$h \, c$$ and we are done:
Compton formula for wavelengths
Formula anchor
Sometimes the formula is also written with the wavelength difference $$\delta \lambda = \lambda' - \lambda$$ and the Compton wavelength $$\lambda_{\text C} = \frac{h}{m_{e} \, c }$$:
Compton formula using wavelength difference
Formula anchor
## What if the electron is in motion before the collision?
We have assumed in the derivation that the electron is at rest. If it is not at rest at the beginning, the derivation is a little more complicated. But the principle is the same as in derivation of the Compton formula for an electron at rest!
Example of an initial situation: A photon with momentum $$\boldsymbol{p}$$ travels in positive $$x$$ direction, while an electron having momentum $$\boldsymbol{P}$$ before the collision travels in negative $$x$$ direction. First, you set up the equations for energy and momentum and proceed similarly to the derivation above:
Conservation of energy for a moving electron
Formula anchor
Conservation of momentum for a moving electron
Formula anchor
|
|
## Tokyo Journal of Mathematics
### $L^p$ Estimates for Some Schrödinger Type Operators and a Calderóon-Zygmund Operator of Schrödinger Type
Satoko SUGANO
#### Abstract
We consider the Schrödinger and Schrödinger type operators $H_{1}=-\Delta+V$ and $H_2=(-\Delta)^2+V^2$ with non-negative potentials $V$ on $\mathbf{R}^n$. We assume that the potential $V$ belongs to the reverse Hölder class which includes non-negative polynomials. We establish estimates of the fundamental solution for $H_{2}$ and show some $L^p$ estimates for Schrödinger type operators. Moreover, we show that the operator $\nabla^4H_{2}^{-1}$ is a Calderón-Zygmund operator.
#### Article information
Source
Tokyo J. Math., Volume 30, Number 1 (2007), 179-197.
Dates
First available in Project Euclid: 20 July 2007
https://projecteuclid.org/euclid.tjm/1184963655
Digital Object Identifier
doi:10.3836/tjm/1184963655
Mathematical Reviews number (MathSciNet)
MR2328062
Zentralblatt MATH identifier
1207.35112
#### Citation
SUGANO, Satoko. $L^p$ Estimates for Some Schrödinger Type Operators and a Calderóon-Zygmund Operator of Schrödinger Type. Tokyo J. Math. 30 (2007), no. 1, 179--197. doi:10.3836/tjm/1184963655. https://projecteuclid.org/euclid.tjm/1184963655
#### References
• M. Christ, Lectures on Singular Integral Operators, CBMS Regional Conf. Series in Math. 77, Amer. Math. Soc., Providence, RI, 1990.
• C. Fefferman, The uncertainty principle, Bull. Amer. Math. Soc. 9 (1983), 129–206.
• F. Gehring, The $L^p$-integrability of the partial derivatives of a quasi-conformal mapping, Acta Math. 130 (1973), 265–277.
• K. Kurata, An estimate on the heat kernel of magnetic Schrödinger operators and uniformly elliptic operators with non-negative potentials, J. London Math. Soc. (2) 62 (2000), 885–903.
• K. Kurata and S. Sugano, A remark on estimates for uniformly elliptic operators on weighted $L^p$ spaces and Morrey spaces, Math. Nachr. 209 (2000), 137–150.
• K. Kurata and S. Sugano, Estimates of the fundamental solution for magnetic Schrödinger operators and their applications, Tohoku Math. J. 52 (2000), 367–382.
• Z. Shen, $L^p$ estimates for Schrödinger operators with certain potentials, Ann. Inst. Fourier (Grenoble) 45 (1995), 513–546.
• Z. Shen, Estimates in $L^p$ for magnetic Schrödinger operators, Indiana Univ. Math. J. 45 (1996), 817–841.
• Z. Shen, On fundamental solutions of generalized Schrödinger operators, J. Funct. Anal. 167 (1999), 521–564.
• S. Sugano, Estimates for the operators $V^{\alpha}(-\Delta+V)^{-\beta}$ and $V^{\alpha}\nabla(-\Delta+V)^{-\beta}$ with certain non-negative potentials $V$, Tokyo J. Math. 21 (1998), 441–452.
• J. Zhong, Harmonic analysis for some Schrödinger type operators, Ph. D. Thesis, Princeton Univ., 1993.
|
|
PigeonholePrinciple ColoringMethod IMO Intermediate
1964
Problem - 2749
Seventeen people correspond by mail with one another - each one with all the rest. In their letters only three different topics are discussed. Each pair of correspondents deals with only one of these topics. Prove that there are at least three people who write to each other about the same topic.
The solution for this problem is available for $0.99. You can also purchase a pass for all available solutions for$99.
|
|
# The Root is Constantly Changing
A negative sign is not a Peanuts character!
-264.66409225
-462.71021449
-20.60615236
-21.38877504
-92.13888121
-213.77948944
-58.14977536
-276.29420841
-242.07314569
-276.16789489
-373.92343641
Answer is a semi-thematic four word phrase.
Hint 1
What operation(s) do you have to apply to the numbers?
Hint 2 (minor spoiler)
The phrase you get after applying the operation(s) in Hint 1, plus part of the title, can be used to find the knowledge you need to proceed.
Hint 3 (updated)
If you have the right knowledge, the decimal portion has all you need to finish off this puzzle.
• In Australian English, that title sounds dirty. Just sayin'. – Rand al'Thor Nov 20 at 20:39
• Well, I can make a four-word phrase from it, but (1) only by ignoring some things and (2) it doesn't seem like a phrase that makes any sense. [EDITED to add:] More specifically, I need to ignore about 2/3 of the information apparently present in the puzzle. – Gareth McCaughan Nov 20 at 20:42
• what's the question? – balazs.com Nov 20 at 20:46
• The question is "What is the four-word phrase that PiIsNot3 has somehow encoded in these numbers, perhaps in a manner hinted at by the other text here present?". – Gareth McCaughan Nov 20 at 20:46
• @GarethMcCaughan All of the information in this puzzle needs to be used to find the phrase I'm looking for. In particular, if you aren't using the knowledge tag, then you're not on the right track – PiIsNot3 Nov 20 at 20:50
Very partial solution
Since this has been open for quite a while with no obvious progress, I'll post what I've found so far. Maybe someone else will see whatever I'm currently missing.
First of all,
the given numbers are exactly $$-x^2$$ for these values of $$x$$: 16.2685, 21.5107, 4.5394, 4.6248, 9.5989, 14.6212, 7.6256, 16.6221, 15.5587, 16.6183, 19.3371. If we convert the integer parts to letters via A1Z26 we get PUDDINGPOPS. "Pudding Pops", according to Wikipedia, are "frosty ice pop treats originally made and marketed by Jell-O", first sold in the 1970s but since discontinued. So, these are $$\sqrt{-y}$$ where the $$y$$ are the numbers given, which we might prefer to write as $$i\sqrt{y}$$; or, alternatively, the numbers given are $$(ix)^2$$ where the $$x$$ are the numbers I have listed above. This suggests some sort of phrase involving the word "I" along with "SQUARE" or "ROOT" and, of course, "PUDDING POPS" (and perhaps also something like FLOOR or ROUND or INTEGER??), but (1) I can't think of any such thing that makes any kind of sense and (2) those fractional parts still need to be accounted for somehow. I can't get anything out of them with A1Z26, nor as phone keypad codes; considered as 4-digit numbers they don't seem obviously interesting; the main thing that strikes me is that they're almost all a little over 1/2, the two exceptions being instead a little over 1/4 in one case and 1/3 in the other.
Now,
a bit of web searching turns up this peculiar thing. Title: "The square root of minus Garfield"; main content is a Garfield comic (I think a real one) where in the first panel Jon says "The world is constantly changing", nothing at all happens in the second, and in the third Garfield thinks "They haven't stopped making frozen Pudding Pops, have they?". It looks as if we've located the [knowledge] we're supposed to be using.
What I'm meant to do with this is currently beyond me.
It's tempting to take the fractional parts as page numbers on that site, but there aren't that many -- the latest is 3845 -- so almost all of them just yield the last page. I note with interest that mezzacotta.net has in the past run puzzle competitions, but there doesn't seem to be a current one. In any case, heavy dependence on external resources is generally frowned on around here and I'm guessing that solving this puzzle isn't meant to depend on exhaustive knowledge of everything on mezzacotta.net. "Mezzacotta" means "half-cooked" but the only thing I can think of to do with that fact is to halve those four-digit numbers and anagram them :-) which doesn't seem promising, especially as several of them are odd. Or maybe subtract one half from each of those fractional parts that are just over 1/2 and ... then use them as page numbers on TSROMG? Nope, that doesn't produce anything obviously useful.
• You are on the right track! I've added a third hint to hopefully make the next step easier to figure out. – PiIsNot3 Nov 28 at 20:21
|
|
# Percentage Questions
FACTS AND FORMULAE FOR PERCENTAGE QUESTIONS
I.Concept of Percentage : By a certain percent , we mean that many hundredths. Thus x percent means x hundredths, written as x%.
To express x% as a fraction : We have , x% = x/100.
Thus, 20% = 20/100 = 1/5;
48% = 48/100 = 12/25, etc.
To express a/b as a percent : We have, $\frac{a}{b}=\left(\frac{a}{b}×100\right)%$ .
Thus, $\frac{1}{4}=\left(\frac{1}{4}×100\right)%=25%$
II. If the price of a commodity increases by R%, then the reduction in consumption so as not to increase the expenditure is $\left[\frac{R}{\left(100+R\right)}×100\right]%$
If the price of the commodity decreases by R%,then the increase in consumption so as to decrease the expenditure is $\left[\frac{R}{\left(100-R\right)}×100\right]%$
III. Results on Population : Let the population of the town be P now and suppose it increases at the rate of R% per annum, then :
1. Population after n years = $P{\left(1+\frac{R}{100}\right)}^{n}$
2. Population n years ago = $\frac{P}{{\left(1+\frac{R}{100}\right)}^{n}}$
IV. Results on Depreciation : Let the present value of a machine be P. Suppose it depreciates at the rate R% per annum. Then,
1. Value of the machine after n years = $P{\left(1-\frac{R}{100}\right)}^{n}$
2. Value of the machine n years ago = $\frac{P}{{\left(1-\frac{R}{100}\right)}^{n}}$
V. If A is R% more than B, then B is less than A by
$\left[\frac{R}{\left(100+R\right)}×100\right]%$
If A is R% less than B , then B is more than A by
$\left[\frac{R}{\left(100-R\right)}×100\right]%$
Q:
You went to buy 3 articles worth Rs.500 each. However, as per the discount sale going on in the shop, you paid for two and got one free. What is the discount percentage in the deal?
A) 30% B) 33% C) 33.13% D) 33.33%
Explanation:
3 403
Q:
Population of Patna city is 96,000, the decreasing rate of population is 800 per year. Population of Patliputra is 68,000, the increasing rate of population is 1200 per year. In how many years will the population of both places be equal?
A) 10 B) 14 C) 18 D) 22
Explanation:
1 400
Q:
58% of 350 is
A) 217 B) 203 C) 210 D) 196
Explanation:
0 395
Q:
The price of cooking oil increased by 25%. Find by how much percentage a family must reduce its consumption in order to maintain the same budget?
A) 70% B) 80% C) 30% D) 20%
Explanation:
0 383
Q:
The difference between 66 % of a number and 44% of the same number is 275. What is the 64% of that number?
A) 800 B) 840 C) 760 D) 600
Explanation:
0 373
Q:
Jane won a lottery and gets 1/3 of the winning amount and donates Rs.6000 which is 1/6, find how much the lottery was worth.
A) 6000 B) 18000 C) 54000 D) 108000
Explanation:
1 370
Q:
The marked prices of big and small note books are Rs.10 and RS.15 respectively. A student bought 5 dozen small and 10 dozen big note books at a total discount of 5% Find the total discount amount.
A) Rs.100 B) Rs.110 C) Rs.120 D) Rs.130
Explanation:
0 360
Q:
Inclusive of an 18% peak-time surcharge a taxi fare came to Rs. 531. What would the payable amount have been without the surcharge?
A) ₹ 440 B) ₹ 425 C) ₹ 450 D) ₹ 475
|
|
HAL will be down for maintenance from Friday, June 10 at 4pm through Monday, June 13 at 9am. More information
Abstract : The precise complexity of complementing \buchi\ automata is an intriguing and long standing problem. While optimal complementation techniques for finite automata are simple -- it suffices to determinize them using a simple subset construction and to dualize the acceptance condition of the resulting automaton -- \buchi\ complementation is more involved. Indeed, the construction of an EXPTIME complementation procedure took a quarter of a century from the introduction of \buchi\ automata in the early $60$s, and stepwise narrowing the gap between the upper and lower bound to a simple exponent (of $(6e)^n$ for \buchi\ automata with $n$ states) took four decades. % While the distance between the known upper ($O\big((0.96\,n)^n\big)$) and lower ($\Omega\big((0.76\,n)^n\big)$) bound on the required number of states has meanwhile been significantly reduced, an exponential factor remains between them. Also, the upper bound on the size of the complement automaton is not linear in the bound of its state space. These gaps are unsatisfactory from a theoretical point of view, but also because \buchi\ complementation is a useful tool in formal verification, in particular for the language containment problem. % This paper proposes a \buchi\ complementation algorithm whose complexity meets, modulo a quadratic ($O(n^2)$) factor, the known lower bound for \buchi\ complementation. It thus improves over previous constructions by an exponential factor and concludes the quest for optimal \buchi\ complementation algorithms.
Keywords :
Document type :
Conference papers
Domain :
Cited literature [17 references]
https://hal.inria.fr/inria-00360108
Contributor : Publications Loria Connect in order to contact the contributor
Submitted on : Tuesday, February 10, 2009 - 12:00:27 PM
Last modification on : Monday, January 15, 2018 - 11:47:14 AM
Long-term archiving on: : Tuesday, June 8, 2010 - 10:10:28 PM
### Files
Schewe_new.pdf
Files produced by the author(s)
### Identifiers
• HAL Id : inria-00360108, version 1
• ARXIV : 0902.2152
### Citation
Sven Schewe. Büchi Complementation Made Tight. 26th International Symposium on Theoretical Aspects of Computer Science - STACS 2009, Feb 2009, Freiburg, Germany. pp.661-672. ⟨inria-00360108⟩
Record views
|
|
# Large Feedback Arc Sets, High Minimum Degree Subgraphs, and Long Cycles in Eulerian Digraphs
@article{Huang2013LargeFA,
title={Large Feedback Arc Sets, High Minimum Degree Subgraphs, and Long Cycles in Eulerian Digraphs},
author={Hao-wei Huang and Jie Ma and Asaf Shapira and Benny Sudakov and Raphael Yuster},
journal={Combinatorics, Probability and Computing},
year={2013},
volume={22},
pages={859 - 873}
}
• Published 13 February 2012
• Mathematics
• Combinatorics, Probability and Computing
A minimum feedback arc set of a directed graph G is a smallest set of arcs whose removal makes G acyclic. Its cardinality is denoted by β(G). We show that a simple Eulerian digraph with n vertices and m arcs has β(G) ≥ m2/2n2+m/2n, and this bound is optimal for infinitely many m, n. Using this result we prove that a simple Eulerian digraph contains a cycle of length at most 6n2/m, and has an Eulerian subgraph with minimum degree at least m2/24n3. Both estimates are tight up to a constant factor…
10 Citations
### Long directed paths in Eulerian digraphs
• Mathematics
• 2021
An old conjecture of Bollobás and Scott asserts that every Eulerian directed graph with average degree d contains a directed cycle of length at least Ω(d). The best known lower bound for this problem
### Exact Localisations of Feedback Sets
• M. Hecht
• Mathematics
Theory of Computing Systems
• 2017
The notion of the essential minor and isolated cycles, which yield a priori problem size reductions and in the special case of so called resolvable graphs an exact solution in 𝓞(|V||E|3)$\mathcal {O}(| V||E |^{3})$ is introduced and weighted versions of the FASP and FVSP possess a Bellman decomposition.
### Note on Long Paths in Eulerian Digraphs
• Mathematics
Electron. J. Comb.
• 2021
This short note shows how to use methods from [Knierim, Larcher, Martinsson, Noever, JCTB 148:125--148] to find paths of length d/(\log d+1) in Eulerian digraphs with average degree d, improving the recent result of $\Omega(d^{1/2+1/40})$.
### Minimum edge cuts of distance-regular and strongly regular digraphs
• Mathematics
• 2017
In this paper, we show that the edge connectivity of a distance-regular digraph $\Gamma$ with valency $k$ is $k$ and for $k>2$, any minimum edge cut of $\Gamma$ is the set of all edges going into (or
### Path Decompositions of Random Directed Graphs
• Mathematics
Trends in Mathematics
• 2021
We consider the problem of decomposing the edges of a directed graph into as few paths as possible. There is a natural lower bound for the number of paths needed in an edge decomposition of a
### Feedback Arc Set Problem and NP-Hardness of Minimum Recurrent Configuration Problem of Chip-Firing Game on Directed Graphs
• Mathematics, Computer Science
• 2013
This paper points out a close relationship between MINREC and the minimum feedback arc set (MINFAS) problem on Eulerian directed graphs, and proves that both problems are NP-hard.
### Decomposing tournaments into paths
• Mathematics
Electron. Notes Discret. Math.
• 2017
This work considers a generalisation of Kelly's conjecture, which is due Alspach, Mason, and Pullman from 1976, and proves many of the cases of the conjecture.
### A study on Euler Graph and it’s applications
Main objective of this paper to study Euler graph and it’s various aspects in the authors' real world by using techniques found in a relatively new area of mathematics.
## References
SHOWING 1-10 OF 19 REFERENCES
### Directed Graphs Without Short Cycles
• Mathematics
Combinatorics, Probability and Computing
• 2009
It is proved that every directed graph whose shortest directed cycle has length at least r ≥ 4 satisfies β(G) ≤ cγ(G)/r2, where c is an absolute constant.
### Cycles in dense digraphs
• Mathematics
Comb.
• 2008
It is proved that in general β(G) ≤ γ(G), and that in two special cases: when V (G) is the union of two cliques when the vertices of G can be arranged in a circle such that if distinct u, v, w are in clockwise order and uw is a (directed) edge, then so are both uv, vw.
### A Proof of a Conjecture of Bondy Concerning Paths in Weighted Digraphs
• Mathematics
J. Comb. Theory, Ser. B
• 1996
Our aim in this note is to prove a conjecture of Bondy, extending a classical theorem of Dirac to edge-weighted digraphs: if every vertex has out-weight at least 1 then the digraph contains a path of
### The Caccetta-Haggkvist conjecture and additive number theory
The Caccetta-Haggkvist conjecture states that if G is a finite directed graph with at least n/k edges going out of each vertex, then G contains a directed cycle of length at most k. Hamidoune used
### Ranking Tournaments
• N. Alon
• Mathematics
SIAM J. Discret. Math.
• 2006
It is shown that the feedback arc set problem for tournaments is NP-hard under randomized reductions, which settles a conjecture of Bang-Jensen and Thomassen.
### The Minimum Feedback Arc Set Problem is NP-Hard for Tournaments
• Mathematics
Combinatorics, Probability and Computing
• 2006
It is proved that the minimum feedback arc set problem is NP-hard for tournaments.
### Retiming synchronous circuitry
• Computer Science
Algorithmica
• 2005
This paper describes a circuit transformation calledretiming in which registers are added at some points in a circuit and removed from others in such a way that the functional behavior of the circuit
### A Summary of Results and Problems Related to the Caccetta-Haggkvist Conjecture
This paper is an attempt to survey the current state of our knowledge on the Caccetta-Häggkvist conjecture and related questions. In January 2006 there was a workshop hosted by the American Institute
### The logical design of operating systems
The logical design of operating systems is discussed in more detail in the second part of this monograph on the design of mobile operating systems.
|
|
In the formula, which symbol specifies the fixed columns or rows?
We will be happy to hear your thoughts
|
|
448 views
The present ages of three persons in proportions $4 : 7 : 9$. Eight years ago, the sum of their ages was $56$. Find their present ages (in years).
1. $8, 20, 28$
2. $16, 28, 36$
3. $20, 35, 45$
4. None of the above options
$\textrm{Assume there age ratio is x}$
$\textrm{then age of 3 person is$4x$,$7x$,$9x$}$
$\textrm{According to the question,8 year ago sum of their age was 56}$
$\Rightarrow$ $\textrm{$(4x-8)$+(7x-8)+(9x-8)=56}$
$\Rightarrow$ $20x-24=56$
$\Rightarrow$ $20x=80$
$\Rightarrow$ $x$ = $4$
$\textrm{Now their present age is16,28,36. Option B.}$
4.7k points
1
731 views
1 vote
|
|
# Simulation and Bayes Rule
## Bayesian Statistics Using Sampling Methods¶
This workbook adds more detail on the theoretical underpinnings of Metropolis Hastings MCMC and slightly tweaks and expounds on some examples from Thomas Wiecki's excellent blogpost on this topic. Of course, all errors are mine.
In order to understand our parameters $\theta$, MH MCMC trys to sample from our parameters' distribution, which is unknown. This distribution is the posterior, or $Pr(\theta|y)$. If we can somehow construct this distribution we can extract information about $\theta$.
To see this, consider the following trivial example. Suppose we have one parameter and are somehow able to construct the posterior distribution. For simplicity, suppose the posterior is distributed $N(0,1)$ (the standard normal distribution). If we take draws from this distribution, we can learn alot about its shape:
In [4]:
%matplotlib inline
import numpy as np
from scipy.stats import norm,uniform,lognorm
import matplotlib.pyplot as plt
import seaborn as sbn
import warnings
warnings.filterwarnings('ignore')
sbn.set_style('white')
sbn.set_context('talk')
In [5]:
# define the posterior distribution
def posterior(mean,std,N):
return norm.rvs(mean,std,N)
In [6]:
# take a random sample from the posterior:
sample = posterior(0,1,100000)
# calculate mean:
print("The mean of theta is ",np.mean(sample))
print("The standard deviation of theta is ", np.std(sample))
print("95% CI of theta is ", np.percentile(sample,[2.5,97.5]))
# plot histogram
plt.title('Posterior Distribution of our Mean')
sbn.distplot(sample,bins=100)
plt.show()
The mean of theta is 0.002552528159105638
The standard deviation of theta is 1.0027777454917184
95% CI of theta is [-1.95823558 1.97137704]
Note, our sample values from the standard normal are exactly as one would expect and we hardly needed to sample from this posterior to know about $\theta$. For more interesting problems, we won't be able to construct (or know the properties of) the posterior as easily, so we need to devise a way to sample from the posterior to uncover information about $\theta$. This is exactly what the Metropolis Hastings Algorithm does.
### The Metropolis-Hastings Random Walk Algorithm¶
The Metropolis-Hastings algorithm is simply a random number generator from our posterior distribution.
The Metroplis-Hastings algorithm has been named one of the top 10 algorithms of the 20th century and appears on both the Math and Computational Sciences lists.
Consider constructing a set or series of potential beta vectors that explore the posterior distribution described above. When selecting beta vectors we would like to explore parameters having higher probability for the posterior ($Prob(\theta|\mathbf{X},\mathbf{Y},\alpha)$) . Essentially we want to construct a series of random draws from the posterior pdf of our parameters (called a chain) that is in regions of the parameter space that are high probability for the posterior. Once the chain is constructed, it reflects the underlying distribution of our parameter estimates.
#### Step 1. Generate proposal¶
We need to add another pdf to the mix: the proposal distribution. The proposal distribution is an exogenously assigned distribution from which we draw proposed values of our parameter vector $\theta$. Suppose we choose a symmetric proposal distribution such that $Prob(\theta_{t+1}|\theta_t) = Prob(\theta_t|\theta_{t+1})$. For example, we might specify that $\theta_{t+1} = \theta_t + N(0,\omega)$, where $N(0,\omega)$ is a random normal variate mean 0 standard deviation $\omega$. Both of the differences $(\theta_{t+1} - \theta_t)$ and $(\theta_{t+1} - \theta_t)$ are distributed normal with mean 0 and standard deviation $\omega$, so we say the proposal distribution is symmetric. Given our symmetric proposal distribution, denote the probability of $\theta^P_{t+1}$ conditional on $\theta_{t}$ as $Prob(\theta^P_{t+1}|\theta_t) = q(\theta^P_{t+1}|\theta_{t})$ Using a proposal distribution of this form leads to the random walk Metropolis-Hastings algorithm, the most common variety of MH in use.
Denote, $\theta_t$ as the current value of our Markov Chain and $\theta^P_{t+1}$ as the proposed value. Metropolis-Hastings lets us define an accept/reject criteria, where accept means that the candidate draw ($\theta_{t+1}$) provides information about $\theta$ in high probability regions, and is therefore a suitable draw from the posterior distribution of $\theta$.
The parameter $\omega$, the standard deviation of the random walk error term is often referred to as the sample width or proposal width parameter and we will turn back to this later in this notebook.
#### Step 2. Incorporate Bayes Rule¶
Recall that we can write Bayes Rule for our model as $$Prob(\theta |\mathbf{y}) = \frac{Prob(\mathbf{y}|\theta,\mathbf{x})Prob(\theta|\mathbf{x}))}{Prob(\mathbf{y}|\mathbf{x})}$$ Using Bayes Rule, we can evaluate how likely our proposed value ($\theta^P_{t+1}$) is relative to the current value in our chain ($\theta_{t+1}$):
$$\frac{Prob(\theta^P_{t+1}|\mathbf{y}) / q(\theta^P_{t+1}|\theta_t)}{Prob(\theta_{t}|\mathbf{y}) / q(\theta_t|\theta^P_{t+1})} = \frac{ \left ( \frac{Prob(\mathbf{y}|\theta^P_{t+1},\mathbf{x})Prob(\theta^P_{t+1}|\mathbf{x}))}{Prob(\mathbf{y}|\mathbf{x})} \right ) / q(\theta^P_{t+1}| \theta_t)} { \left ( \frac{Prob(\mathbf{y}|\theta_{t},\mathbf{x})Prob(\theta_{t}|\mathbf{x}))}{Prob(\mathbf{y}|\mathbf{x})} \right ) / q( \theta_t | \theta^P_{t+1})}$$
Noting that $q( \theta_t | \theta^P_{t+1}) = q(\theta^P_{t+1}| \theta_t)$ and $Prob(\mathbf{y}|\mathbf{x})$ is a normalizing constant irrespective of the value of $\theta$ being considered:
This complicated expression can be simplified to $$\frac{ Prob(\mathbf{y}|\theta^P_{t+1},\mathbf{x})Prob(\theta^P_{t+1}|\mathbf{x}))} { Prob(\mathbf{y}|\theta_{t},\mathbf{x})Prob(\theta_{t}|\mathbf{x}))}$$ this is a nice result, because we don't need to worry about calculating the probability of the evidence for the MH sampling method outlined below.
#### Step 3: Develop accept/reject criteria for $\theta^{P}_{t+1}$¶
Recall, that we want to sample from the posterior, so it should be the case that:
1. We won't likely include low probability proposals: $\theta^P_{t+1}$
2. We won't get stuck at very likely points like MAP
Now that we know the relative likelihood of the proposal value compared to the current value of $\theta$, Metropolis-Hastings showed that an acceptance criteria (sometimes called an accept/reject criteria) should explore the posterior distribution of $\theta$ by sampling points in proportion to how likely they are given the data. MH does this using the following rules:
1. If $\frac{ Prob(\mathbf{y}|\theta^P_{t+1},\mathbf{x})Prob(\theta^P_{t+1}|\mathbf{x}))} { Prob(\mathbf{y}|\theta_{t},\mathbf{x})Prob(\theta_{t}|\mathbf{x}))}$ > 1, then $\theta_{t+1} = \theta^P_{t+1}$
2. Else, $\theta_{t+1} = \theta^P_{t+1}$ if $u \le \frac{ Prob(\mathbf{y}|\theta^P_{t+1},\mathbf{x})Prob(\theta^P_{t+1}|\mathbf{x}))} { Prob(\mathbf{y}|\theta_{t},\mathbf{x})Prob(\theta_{t}|\mathbf{x}))}$
1. Else $\theta_{t+1} = \theta_t$
Notice condition (2) ensures that even inferior values of $\theta$ relative to the current value in the chain will sometimes be accepted in the chain. The acceptance rate for these values will be in proportion to their relative likelihood. The chain never converges, it dances around regions of the posterior where more likely values of $\theta$ reside. There is always a non-zero probability that the chain will deviate (and store values of the estimate of $\theta$ that actually cause the posterior to decline relative to the its value previously in the chain)!
Usually, the conditions outlined above are written in terms of the accept/reject criteria, $c$.
$$c(\theta^P_{t+1},\theta_t) = min \left( 1,\frac{ Prob(\mathbf{y}|\theta^P_{t+1},\mathbf{x})Prob(\theta^P_{t+1}|\mathbf{x}))} { Prob(\mathbf{y}|\theta_{t},\mathbf{x})Prob(\theta_{t}|\mathbf{x}))}\right )$$
Having calculated the accept/reject criteria value $c$, we need to decide whether to include $\theta^P_{t+1}$ in our chain. To do that, draw the random uniform value $u\in[0,1]$. If $c(\theta_{t+1},\theta_t)\ge u$ then add $\theta^P_{t+1}$ to our chain as $\theta_{t+1}$ that will help us characterize the distribution of parameters that most satisfy the posterior probability.
The Condition in Logs
For computational reasons, we usually work in logs, making these conditions: $$c(\theta^P_{t+1},\theta_t) = min \left( 0, log(Prob(\mathbf{y}|\theta^P_{t+1},\mathbf{x}))+log(Prob(\theta^P_{t+1}|\mathbf{x})))) - log( Prob(\mathbf{y}|\theta_{t},\mathbf{x})) - log(Prob(\theta_{t}|\mathbf{x}))) \right )$$ we accept $\theta^P_{t+1}$ when $c(\theta_{t+1},\theta_t)\ge log(u)$
##### Intuition of the MH Algorithm¶
Suppose a drunk lives up the hill from the bar. The drunk is unable to read roadsigns and only knows the house is at the very top of the hill. Like any drunk, this person stumbles and the intended direction (uphill) is usually (but not always) followed. By the time the drunk sobers up, the neighborhood uphill around his house has been thouroughly explored and in fact, the drunk has probably spent alot of time stumbling around in his own yard and even bumped into his front door many times without realizing he was home. The posterior is like the neighborhood uphill from the bar, the location of the house is the parameter representing the maximum posterior value, and the MH algorithm explores the posterior proportionally to the posterior probability values. So more time is spent in high likelihood areas.
#### A simple example¶
Suppose that your data is $y=[5,6]$ and you know that $\sigma_y = 1$. Using MH, let's construct a MCMC of the mean of $y$ of length 1 (not counting our starting value, which we will be arbitrarily set at $\mu_1 = 4$. Our priors are $\mu_0=6$ with standard deviation $\sigma_{\mu} = 1$.
In [7]:
# A simple example of the MH algorithm
y = np.array([5,6]) # data
std_y = 1 # known std dev of y
# priors
mu_0 = 6
std_mu_0 = 1
# proposal_width
omega=1.5
# arbitrary starting value of chain:
previous_val = 4
# notice the proposal distribution is normal, so
# we have the mh random walk as outlined above
mu_proposal = previous_val + omega*norm(0,1).rvs()
print("Previous Value=%2.3f and Proposal=%2.3f"%(previous_val,mu_proposal))
like_proposal = norm(mu_proposal,std_y).pdf(y).prod()
prior_proposal = norm(mu_0,std_mu_0).pdf(mu_proposal)
print("\nLikelihood at proposal: ", like_proposal)
print("Prior at proposal:", prior_proposal)
like_previous = norm(previous_val,std_y).pdf(y).prod()
prior_previous = norm(mu_0,std_mu_0).pdf(previous_val)
print("\nLikelihood at previous value: ", like_previous)
print("Prior at previous value: ", prior_previous)
c = np.min((1,like_proposal*prior_proposal/(like_previous*prior_previous)))
print("\nAccept Reject Criteria is: ", c)
u = uniform.rvs()
print("Uniform Random Draw: ", u)
current_value = (c>=u)*mu_proposal + (1-(c>=u))*previous_val
print("\nNext value in the chain is: %2.3f"% current_value)
Previous Value=4.000 and Proposal=3.307
Likelihood at proposal: 0.00101279640882883
Prior at proposal: 0.010633113876013092
Likelihood at previous value: 0.013064233284684923
Prior at previous value: 0.05399096651318806
Accept Reject Criteria is: 0.015267839605242932
Uniform Random Draw: 0.5980362709417673
Next value in the chain is: 4.000
##### A Metropolis-Hastings Sampler¶
Rather than manually storing values, the following implements a loop that calculates a chain of samples from the posterior of desired length. Let's revisit our example from Analytical Bayes:
In [68]:
sigma = 3. # Note this is the std of data assumed known
data = norm(10,sigma).rvs(10)
mu_prior = 8.
sigma_prior = 1. # Note this is our prior on the std of mu
In [69]:
ax = plt.subplot()
sbn.distplot(data, kde=False, ax=ax,bins=5)
_ = ax.set(title='Histogram of observed data', xlabel='x', ylabel='# observations');
Here are the likelihood and posterior analytical pdf's.
In [10]:
def calc_posterior_analytical(data, x, sigma, mu_0, sigma_0):
n = len(data)
# posterior parameter
mu_post = ( mu_0/(sigma_0**2) + data.sum()/(sigma**2) )*(1. / sigma_0**2 + n / sigma**2)**(-1)
sigma_post = np.sqrt((1. / sigma_0**2 + n / sigma**2)**(-1))
# probabilities
posterior = norm(mu_post,sigma_post).pdf(x)
prior = norm(mu_0,sigma_0).pdf(x)
return posterior,prior,mu_post,sigma_post
The function plot_proposal is rather long and cumbersome, but it is nice for generating plots that show how MH works iteration by iteration.
In [217]:
# allows us to plot what is happening inside the sampler iteration by iteration
def plot_proposal(chain_dict):
# extract info from sampled chain
sample_id = chain_dict['sample_id']
samples = max(sample_id) + 1
# chain parameter values, likelihoods, posteriors, and priors
current_ = chain_dict['current']
proposal_ = chain_dict['proposal']
# whether proposal is accepted
accept = chain_dict['accept']
# extract hyperparameters on priors
mu_prior_mu = chain_dict['priors'][0]
mu_prior_sd = chain_dict['priors'][1]
plt.figure(figsize=(16, 16))
x = np.linspace(3, 15, 1000)
trace = [current_[0, 0]]
subplot_counter = 1
for i in sample_id:
current = current_[i, 0]
prior_current = current_[i, 1]
like_current = current_[i, 2]
post_current = current_[i, 3]
proposal = proposal_[i, 0]
prior_proposal = proposal_[i, 1]
like_proposal = proposal_[i, 2]
post_proposal = proposal_[i, 3]
color = 'g' if accept[i] else 'r'
mu_current = current_[i, 0]
mu_proposal = proposal_[i, 0]
# Plot prior for this sample
plt.subplot(samples, 4, subplot_counter)
prior_current = current_[i, 1]
prior_proposal = proposal_[i, 1]
prior = norm(mu_prior_mu, mu_prior_sd).pdf(x)
plt.plot(x, prior, color='k')
plt.plot([mu_current] * 2, [0, prior_current], marker='o', color='b')
plt.plot([mu_proposal] * 2, [0, prior_proposal], marker='o', color=color)
plt.annotate("", xy=(mu_proposal, 0.2), xytext=(mu_current, 0.2),
arrowprops=dict(arrowstyle="->", lw=2.))
plt.ylabel('Sample %s\nProbability Density'%str(i+1), fontsize=12)
plt.title('current: prior(mu=%.2f) = %.2f\nproposal: prior(mu=%.2f) = %.2f' % (mu_current, prior_current, mu_proposal, prior_proposal),
fontsize=11)
# fill in x labels for last row
if i == max(sample_id):
plt.xlabel(r'$\mu$', fontsize=12)
# Plot likelihood for this sample
plt.subplot(samples, 4, subplot_counter + 1)
y = np.array([norm(loc=i,scale=sigma).pdf(data).prod() for i in x])
plt.plot(x, y, color='k')
plt.plot([mu_current,mu_current],[0,like_current], color='b', marker='o', label='mu_current')
plt.plot([mu_proposal,mu_proposal],[0,like_proposal], color=color, marker='o', label='mu_proposal')
plt.annotate("", xy=(mu_proposal, 0.8*1e-7), xytext=(mu_current, 0.8*1e-7),
arrowprops=dict(arrowstyle="->", lw=2.))
plt.title('likelihood(mu=%.2f) = %.2e\nlikelihood(mu=%.2f) = %.2e' % (mu_current, like_current, mu_proposal, like_proposal),
fontsize=11)
# fill in x labels for last row
if i == max(sample_id):
plt.xlabel(r'$\mu$', fontsize=12)
# Posterior for this sample
plt.subplot(samples, 4, subplot_counter + 2)
posterior_analytical, prior, mu_post, sigma_post = calc_posterior_analytical(data, x, sigma, mu_prior_mu, mu_prior_sd)
plt.plot(x, posterior_analytical,color='k')
posterior_current, prior,mu_post,sigma_post = calc_posterior_analytical(data, mu_current, sigma, mu_prior_mu, mu_prior_sd)
posterior_proposal,prior,mu_post,sigma_post = calc_posterior_analytical(data, mu_proposal, sigma, mu_prior_mu, mu_prior_sd)
plt.plot([mu_current] * 2, [0, posterior_current], marker='o', color='b')
plt.plot([mu_proposal] * 2, [0, posterior_proposal], marker='o', color=color)
plt.annotate("", xy=(mu_proposal, 0.2), xytext=(mu_current, 0.2),
arrowprops=dict(arrowstyle="->", lw=2.))
plt.title('posterior(mu=%.2f) = %.5f\nposterior(mu=%.2f) = %.5f' % (mu_current, post_current, mu_proposal, post_proposal),
fontsize=11)
# fill in x labels for last row
if i == max(sample_id):
plt.xlabel(r'$\mu$', fontsize=12)
# Trace with this additional sample
if accept[i]:
trace.extend([mu_proposal])
else:
trace.extend([mu_current])
plt.subplot(samples, 4, subplot_counter + 3)
plt.plot(trace,color='k')
plt.ylabel(r'$\mu$')
plt.title('Trace', fontsize=12)
# fill in x labels for last row
if i == max(sample_id):
plt.xlabel('Sample', fontsize=12)
subplot_counter += 4
sbn.despine(offset=2.)
plt.tight_layout()
plt.show()
The function sampler below is a simple Metropolis-Hastings sampler for 1 unknown parameter. It can spawn the plotter function to plot each iteration of the sampler, so beware if setting plot=True with large samples.
In [221]:
# a fairly basic mh mcmc random walk sampler:
def sampler(data, samples=4, mu_init=8., sigma= 1, proposal_width=3.,
mu_prior_mu=0., mu_prior_sd=1., plot_data=False):
store_vals = np.zeros((samples, 4, 2)) # first page is current, 2nd proposal
store_accept = np.zeros(samples)
mu_current = mu_init
posterior = [mu_current]
for i in range(samples):
# suggest new position
mu_proposal = norm(mu_current, proposal_width).rvs()
# Compute likelihood by multiplying probabilities of each data point
likelihood_current = norm(mu_current, sigma).pdf(data).prod()
likelihood_proposal = norm(mu_proposal, sigma).pdf(data).prod()
# Compute prior probability of current and proposed mu
prior_current = norm(mu_prior_mu, mu_prior_sd).pdf(mu_current)
prior_proposal = norm(mu_prior_mu, mu_prior_sd).pdf(mu_proposal)
p_current = likelihood_current * prior_current
p_proposal = likelihood_proposal * prior_proposal
# Accept proposal?
c = np.min((1,p_proposal / p_current))
accept = np.random.rand() <= c
# store all values for current and acceptance
store_vals[i, :, 0] = np.c_[mu_current, prior_current, likelihood_current, p_current]
store_vals[i, :, 1] = np.c_[mu_proposal, prior_proposal, likelihood_proposal, p_proposal]
store_accept[i] = accept
if accept:
# Update position
mu_current = mu_proposal
posterior.append(mu_current)
if plot_data:
return np.array(posterior), {'current': store_vals[:, :, 0], 'proposal': store_vals[:, :, 1],
'accept': store_accept,'sample_id': [i for i in range(samples)],
'priors': [mu_prior_mu, mu_prior_sd]}
else:
return np.array(posterior)
In [222]:
# the function needs to be called in order to produce something we can work with:
# here, we are going to only take 6 samples and plot what is happening.
# Note: green proposals are accepted, and red proposals are not accepted
chain_for_pic, plot_data = sampler(data,samples=6,mu_init=8., sigma=sigma, proposal_width=1.,
mu_prior_mu=mu_prior, mu_prior_sd=sigma_prior, plot_data=True)
In [223]:
plot_proposal(plot_data)
In [224]:
# take lots of samples and turn harvesting of plotting data off.
chain = sampler(data,samples=5000,mu_init=0,sigma=3,proposal_width=3,plot_data=False,
mu_prior_mu=mu_prior,mu_prior_sd=sigma_prior)
The following plot, called a traceplot, is probably the first thing you should look at after the markov chain has been constructed:
In [225]:
plt.figure(figsize=(15,6))
plt.plot(chain[500:],lw=.8,alpha=.7)
plt.xlim(0,chain.shape[0]-500)
plt.xlabel('Sample')
plt.ylabel('$\\mu$')
plt.title('Traceplot for $\\mu$ (after 500 samples)')
plt.show()
In [226]:
plt.figure(figsize=(15,6))
plt.plot(chain[:500],lw=.9)
plt.xlim(0,500)
plt.xlabel('Sample')
plt.ylabel('$\\mu$')
plt.title('Traceplot for $\\mu$ (first 500 samples)')
plt.show()
In [227]:
plt.figure(figsize=(15,6))
plt.plot(chain[chain.shape[0]-500:-1],lw=.9)
plt.xlabel('Sample')
plt.ylabel('$\\mu$')
plt.title('Traceplot for $\\mu$ (last 500 samples)')
plt.show();
In [228]:
sbn.distplot(chain, kde=False, bins=50)
plt.xlabel('$\\mu$')
plt.show()
In [229]:
ax = plt.subplot()
sbn.distplot(chain[500:], ax=ax, label='simulated posterior')
x = np.linspace(5, 15, 500)
post,prior,mu_post,sigma_post = calc_posterior_analytical(data, x, sigma, mu_prior, sigma_prior)
ax.plot(x, post, 'g', label='analytic posterior')
_ = ax.set(xlabel='$\\mu$', ylabel='belief');
ax.legend();
### Practical issue #1: Proposal width¶
In all the analysis above, we assumed a proposal width of 3. What is the proposal width and why have we set it at 3? Recall that the proposal width $\omega$ is
$$\theta^P_{t+1} = \theta_t + N(0,\omega)$$
So larger values of the proposal width means that we are jumping further in the parameter space in our exploration of the posterior. Smaller values means we jump less.
Question: Will smaller values of the proposal width lead to lower or higher acceptance rates?
In [231]:
posterior_small = sampler(data,samples=5000,mu_init=6,sigma=3,proposal_width=.05,
mu_prior_mu=mu_prior,mu_prior_sd=sigma_prior, plot_data=False)
posterior_large = sampler(data,samples=5000,mu_init=6,sigma=3,proposal_width=9,
mu_prior_mu=mu_prior,mu_prior_sd=sigma_prior, plot_data=False)
posterior_medium = sampler(data,samples=5000,mu_init=6,sigma=3,proposal_width=3,
mu_prior_mu=mu_prior,mu_prior_sd=sigma_prior, plot_data=False)
In [232]:
plt.figure(figsize=(15,6))
plt.plot(posterior_small[0:],label='Proposal Width Small');
plt.plot(posterior_large[0:],label='Proposal Width Large',lw=1,alpha=.7)
plt.plot(posterior_medium[0:], label='Proposal Width Medium', lw=1, alpha=.7)
plt.xlabel('Sample')
plt.ylabel('$\\mu$')
plt.legend()
plt.show()
In [233]:
plt.figure(figsize=(10,18))
plt.subplot(3,1,1)
plt.title('Proposal Widths and the Analytical Posterior')
sbn.distplot(posterior_small[1000:], label='Proposal Width Small',color='g')
plt.plot(x, post, 'k', label='analytic posterior')
plt.legend()
plt.subplot(3,1,2)
sbn.distplot(posterior_large[1000:], label='Proposal Width Large',color='b')
plt.plot(x, post, 'k', label='analytic posterior')
plt.legend()
plt.subplot(3,1,3)
sbn.distplot(posterior_medium[1000:], label='Proposal Width Medium',color='r')
plt.plot(x, post, 'k', label='analytic posterior')
plt.legend()
plt.show()
### Practical Issue #2: Burn-in and Convergence¶
How do we know when our chain has converged and is oscillating around the posterior in a way that is proportional to the posterior probability? We will turn back to this question later in the course, but to understand the problem, consider this plot:
In [234]:
plt.figure(figsize=(12,10))
ax1 = plt.subplot(321)
ax1.set_title('First 200 Samples')
line1, = ax1.plot(posterior_small[:200],label='Proposal Width Small',c='g')
ax1.set_ylabel('$\\mu$')
ax2 = plt.subplot(322,sharey=ax1)
ax2.set_title('Last 200 Samples')
ax2.plot(posterior_small[-200:],label='Proposal Width Small',c='g')
ax2.get_yaxis().set_visible(False)
ax2.legend(loc = 'lower right')
ax3 = plt.subplot(3,2,3)
line2, = ax3.plot(posterior_large[:200],label='Proposal Width Large',lw=1,alpha=.7,c='b')
ax3.set_ylabel('$\\mu$')
ax4 = plt.subplot(324,sharey=ax3)
ax4.plot(posterior_large[-200:],label='Proposal Width Large',lw=1,alpha=.7,c='b')
ax4.get_yaxis().set_visible(False)
ax4.legend(loc = 'lower right')
ax5 = plt.subplot(3,2,5)
line3, = ax5.plot(posterior_medium[:200], label='Proposal Width Medium', lw=1, alpha=.7,c='r')
ax5.set_ylabel('$\\mu$')
ax6 = plt.subplot(326,sharey=ax5)
ax6.plot(posterior_medium[-200:], label='Proposal Width Medium', lw=1, alpha=.7,c='r')
ax6.get_yaxis().set_visible(False)
ax6.legend(loc = 'lower right')
plt.tight_layout()
plt.show()
|
|
stats...............................................
if the variance of a distribution is 16, the mean is 12, and the number of cases is 24, the standard deviation is?
|
|
Sunday, April 12, 2020
More testing: From concept to implementation
by Ajay Shah.
There are concerns about quantity and quality in Indian Covid-19 data. In terms of the quantity of testing, India is at 120 tests per million of population, which is among the lowest in the world. Countries like Germany, Italy, South Korea, etc. are at values of about 10,000 tests per million people. In addition, in India, there are concerns about the imprecision in measurement of infected persons and deaths.
As cities and states grapple with the challenge of Covid-19 in the coming year, improvements in testing are being envisioned. Everyone wants more testing. What does it mean to do more testing? In this article, we show the four elements of testing, from a public health point of view. At present, only one of these is in play in India. Public health leaders at the city and state level need to think about this full range of possibilities, and put all of them into motion.
Before we get to the interesting public health and public policy questions, let’s review the science and engineering of testing. For people who are infected with Covid-19, the overwhelming majority recover in about 14 days. Most people experience minor symptoms, do not seek health care, and would not know that the Covid-19 infection took place.
There are two kinds of tests: The PCR test and the antibody test.
The PCR test looks for the virus in your body. It delivers a positive verdict when the virus is present at concentrations above a certain threshold. The absence of the virus can either mean that you are not infected or that you have recovered. Generally, by day 10, the PCR test returns a negative result.
The antibody test looks for the presence of antibodies made by the human body in fighting the virus. These antibodies are generally in place from day 7 and are likely to persist for months. It is possible that while the antibodies are present, you are still communicating the disease, which roughly corresponds to the second week of the progress of the infection.
After day 14, generally the antibody test would yield a positive result and the PCR test would yield a negative result.
After the recovery, immunity from the disease is likely. This paves the way for reopening the country. Ideally, we would like to run large scale antibody tests, find the people who have bounced back from the infection (and might not even have known that they were sick), so these persons can get back to normalcy in their lives. While this is a field of active research, the conclusions are not yet in. It is likely that once you recover, you are immune to the disease, for about 2 to 3 years. But Covid-19 is a new disease, and these are not settled questions.
In terms of operationalising the two tests, the PCR test requires more expensive equipment, the test takes more time, and scaling up to millions of tests is hard given global supply shortages of many of the inputs (reagents, primers, positive controls, extraction kits). PCR testing is unlikely to be available in large quantities in India or other countries for the next month or two, which will limit its usage to relatively low volume applications. The antibody test, though not a diagnostic test for health care settings, is more readily rolled out at scale, and most countries are exploring this from the viewpoint of mass deployment.
There is a neat idea called a “Group test”. Imagine pooling the blood sampled from 10 people and doing one PCR test on that pooled sample. If this came out positive, we would conclude that one or more of these 10 persons is infected. This is particularly useful given the limitations on the number of PCR tests that can be done. At the present moment, the precise protocols for pooled PCR testing for Covid-19 are not yet in place. This is also the subject of active research and we expect this problem will be solved soon.
This shows a testing landscape with two tests (PCR and Antibody) and the possibility of doing a group test with either.
With both classes of tests, there are many vendors with products of varying quality. It would be efficient for India to free ride on the state capacity of advanced countries, and accept any product which is approved by a regulator in a country that is a member of the OECD.
The role of testing in public health and health care
When we are admonished to increase testing, what testing is to be done at scale? It is useful to go to first principles, and think about the objectives of public health. This guides us in designing testing strategies. There are four pathways to testing, organised by the objective of the testing, by the question that is being asked.
1. Testing in the context of health care: A person shows up in front of a doctor with certain symptoms, and the doctor commissions a test in order to know whether it is a Covid-19 infection. In this case, the question is: Is this person presently infected with Covid-19?
2. The public health objective of understanding the state of a neighbourhood: When thinking about rules of social isolation in a geographical area, an assessment is required about the state of a neighbourhood. As an example, an airport may be in operation, and we might like to get a daily reading of the state of infection in the airport staff. In these cases, the public health crew cares about the question: Is there an active infection in this group of people?
3. The public health objective of understanding the progress of the epidemic: The public health team in a city needs to have a situational awareness about what is going on in the city. In this, they would ask the question: What is the overall number of infected and recovered people in my city? How are these numbers moving over time? Each city would like to know: Is the active level of infection likely to rise beyond the available health care in the city in the next few weeks? This forecasting can be assisted by statistical epidemiological models, which can be estimated once this data is observed. In addition, understanding the epidemic curve will help guide decisions about escalation or de-escalation of social distancing measures in the city, and across cities.
4. Antibody testing for the purpose of restarting the economy: It is likely that there is immunity against infection, for a few years, after a person has recovered from the infection. The entire course of events, from infection to immune system response, can happen without displaying any symptoms. Hence, counting the number of people who sought health care and then recovered is not useful. The antibody test shows whether a person has such immunity. Such persons are likely to be ready to rejoin the economy, and should particularly be brought into front-line roles. From the viewpoint of society and the economy, the key factor to watch for is the fear; Covid-19 is hacking into our minds, much like a terrorist attack. Each of us would like to know: Have I finished with one bout of Covid-19? Once a person tests positive on the antibody test, the terror would subside, one person at a time.
We see that there is not one concept in testing. Testing technology (PCR, Antibody, Pooling) are technical tools that are synthesised to answer four categories of questions. Each of these objectives is distinct, requires a different mechanism for implementation, and supports decision making in different ways. Let us dive into each of them.
Pathway 1: Testing in the context of health care, “Is this person infected with Covid-19?”
The normal protocols of clinical care will be applied by a doctor, who will trigger a test when certain symptoms come together. The test of choice is the PCR test, because this will report on the presence of the virus in the first week. These results are very interesting for the individual and for the doctor in determining health care for the individual.
The statistics that are produced, out of such testing, are highly sensitive to: (a) Who are the individuals who feel symptoms (most don’t) and who are the individuals that access health care facilities; (b) The protocols and skill of the doctor in deciding to prescribe the test and (c) Capacity constraints in doing the PCR (maybe all the machines in the country max out at N tests/day) and the fact that when there is a backlog, delays in the heat might degrade samples and bias the results in favour of a negative outcome.
A lot of epidemiological research is presently being done using data that is produced from this clinical setting. It is important to be cautious about the extent to which this data can be interpreted. The only thing that we are sure of is that such testing helps in the clinical process. The four kinds of censoring described above (some individuals access health care, the protocols used by the doctor, the limitations of testing, the degradation of samples) are central to the data generating process, and are absent in most models of this data. This limits the usefulness of epidemiological models, when estimated using the existing data, for decision making in public health.
One step away from the clinical setting, and closer to the questions of public health, are PCR tests administered every day to samples of high risk groups (SARI, ILI, health care workers). These could be an early indicator of the spread of the disease and could help containment efforts. Such measurement projects are important and interesting for public health. Fusing this surveillance data into an overall dataset of clinical data, however, induces additional difficulties for epidemiological modelling.
Pathway 2: The public health objective of understanding the state of a neighbourhood, “Is there an active infection in this group of people?”
At the level of a city, there is value in containing outbreaks. In an ideal world, we would have a PCR test result for each person for each day, and this would generate perfect information. However, this is not feasible as PCR testing is slow, expensive and does not scale up readily.
Public health staff face questions such as “Has an outbreak begun in X neighbourhood?” “Has the outbreak in X neighbourhood ended?” or “What is the state of health of the airport staff?”. These questions can often be nicely answered by doing a pooled PCR test [link, link]. The results can be used for modifying social distancing and isolation procedures on a day to day basis.
There is value in establishing the institutional infrastructure through which a civil servant is able to ask this question about a neighbourhood, after which a random sample of N persons is taken, and the pooled PCR test is run. Ideally, the turnaround time from decision to result should be about two days.
There is also value in establishing ongoing monitoring of high risk activities, such as health care workers or the airport crew, who should be sampled every day in this fashion, thus inducing the systematic creation of datasets.
Pathway 3: The public health objective of understanding the progress of the epidemic, “What is the overall number of infected and recovered people in my city?”, “How are these numbers moving over time?”
The holy grail of this field is panel (i.e. longitudinal) data measurement of persons on infection and presence of antibodies. It would be particularly valuable to observe comprehensive socio-economic information about each individual, over and above these two facts. As an early step towards this objective, a recent study in one town in Germany measured 500 persons and found that 2% were infected and 14% were immune.
Suppose we are sampling 1000 persons, and suppose the true positive rate is 2%, and 2% of 1000 is 20 persons. To fix intuition, on average we would see about 20 persons testing positive, and a few sample realisations are : 24, 16, 25, 22, 27, 19, etc. In this, a 95% confidence interval of the estimated rate runs from 1.1% to 2.9%. This suggests that at rates of about 2%, a sample size of 1000 individuals is quite useful. (Lower rates call for larger samples).
With this dataset is in hand, it becomes possible to estimate epidemiological models, understand R0, anticipate the future course of the disease, watch how modifications to social distancing impact upon R0, etc. This can be particularly useful in anticipating surges in health care requirements, building and dismantling temporary hospitals, etc. These models can help improve decisions about social distancing measures in the city taken by the citizenry and public health authorities.
Pathway 4: Antibody testing for the purpose of restarting the economy, “Have I finished with one bout of Covid-19?”
India has young demographics, and many will get the infection and bounce back without noticing it. Individuals should be able to test their own antibody status and put their fears at rest. There is a role for testing at regular intervals for each person, until that person gets the first positive reading.
This is important for restarting the economy. The economy today is hampered by lockdowns and fear. Antibody testing on scale holds the key to ending the fear, one person at a time, and restarting the economy. Persons who have a positive antibody test result should be preferred for front line roles.
Covid-19 has imposed large welfare costs upon humans who have been forced to be away from their loved ones. Widespread antibody testing will help the lucky ones to resume desired human interactions.
Elements of implementation
When we think about these four questions at the level of India, implementations appear implausibly difficult. If we think of creating a panel dataset of 1000 people in Bombay, it is much more tractable when compared with trying to create an all-India panel dataset.
The presence of four pathways to measurement, in any one city or district, creates one immediate advantage: it is then more likely that errors in any one element of the measurement strategy will be detected, and feedback loops established for remedying them.
Some ideas for implementation are sketched below.
1. The first stage lies in establishment of a mechanism to take in data from all over the country, on all the pathways, and make it available as a unified repository to the public. A coalition of researchers should establish the data standards for all data coming into one shared public facility with data, where all incoming data is instantly released into the public domain. The governance of this effort should combine experts on public health, information systems, and civil liberties. Trust in this data will be enhanced if there is a lack of government control of this repository.
2. There should be published protocols that determine when a doctor asks for a Covid-19 test as part of Pathway 1. There should be a single definitive source where these protocols, and all changes in these protocols, are made visible. When the doctor writes a prescription, this would generally be filled by a lab. The lab should get paid by the state in exchange for submission of a few (anonymised) facts about the individual, back into a public data facility.
3. For Pathway 2 (pooled PCR testing in a neighbourhood), a problem that is faced is the stigma and fear that goes with Covid-19 in India today. The city/district government will need a contract with a lab to do the sample collection. Civil servants may need to accompany the staff of the lab in order to make the citizenry comfortable with what is being done. In time, there will be much comfort when people realise that no individual is being identified in the group testing. The contracting framework is required through which a civil servant commissions a test, and in a day or two the answer is obtained. Similarly, the contracting framework is required through which (say) a data point is obtained every day about a random sample of the airport staff. All the data generated here should be anonymised and go to the public data facility.
4. For Pathway 3 (panel measurement), survey organisations partnering with testing labs are required to meet households and obtain samples. There is an important barrier in this, owing to the stigma and fear surrounding Covid-19 in India today, and these problems will need to be overcome. All the data generated here should be anonymised and go to the public data facility.
5. For Pathway 4 (antibody testing for an individual), there is a role for a testing voucher through which each person can get tested every x weeks until a test shows positive. This would kick off a decentralised mechanism where individuals would step forward and get tested. Private labs would be required, as part of the voucher arrangement, to electronically submit anonymised data to the public data facility.
At present in India, most of the work in testing is on Pathway 1. In addition, most of the work in this is being done by government labs. Most of the testing capacity in India is in the private sector, so these pathways need to establish incentive-compatible PPP arrangements through which work is done in private firms with public funding, with release of anonymised data into the public domain. Perhaps a nice split is to have the public sector continue to play an important role in Pathway 1, and establish the additional three pathways in private labs.
There is a lot of concern about the practical problems of organising production at private labs. The private sector is best equipped to understand and solve complex problems of supply chains and organisation. The PPP contracts, that a city or a district gives to multiple labs, should establish frameworks for payment for tests, and also embed real options whereby the private firms will be paid for a certain floor level of testing even if the order flow does not materialise. This will create incentives for private laboratories to build their organisational capabilities and solve problems of production. Private firms will solve problems of organising production better than governments will.
All four lines of work will require time and effort in implementation. The Covid-19 epidemic is a problem that will play out in India over a year or two. It is efficient for each city or district to embark on a three month journey to establish these information systems, so as to exert a beneficial impact upon decision making thereafter.
Conclusion
Everyone agrees that more testing is required. But testing is a means to an end. The purpose of testing is to improve situational awareness. At present, we are in a state of high uncertainty; we do not know what is going on, and this induces greater fear and hampers decision making by private persons and by policy makers. The mere intensification of the existing approach to testing (i.e. testing in a health care context) does not address key objectives in public health and in restarting the economy.
To test is human, to create datasets divine. We should shift focus from the words "number of tests" to the intellectual clarity around the four kinds of datasets, each of which has their role in the overall problem.
Fighting an epidemic is inherently a decentralised problem. This is a battle that is played out at the field level. It is more useful to think about the problem of measurement at the city or district level. These four questions will have to be faced by the public health leadership in each city and each district. As an example, the leadership of Pune should ask themselves: How do we organise these four lines of work? This is far more feasible when compared with solving these problems for 3.3 million square kilometres.
LaTeX mathematics works. This means that if you want to say $10 you have to say \$10.
|
|
## Tired..
Not great sleep last night..but I did get things done this morning and was reasonably productive with respect to classes.
Yesterday: achy from 1.6-1.7 on but I finished 2 miles in just under 30 minutes; it was wet. Bleah.
Today:
pull ups went great (10 singles, 3 sets of 10, 2 sets of 5)
Usual hip hikes and rotator cuff
push ups: 4 sets of 25 (23 are shown..bad editing)
bench press: 114: 5, 8..called it off as it didn’t feel right.
Deadlifts: 10 x 134, 10 x 184, 10 x 229 (latter was almost work)
shoulder press: 10 x 70, 75, 75 close grip with multi-grip bar (felt great)
curls: strict; 3 sets of 10 x 50 (went down a bit)
## 1 week out
Ok, I did a lot of grading and all that.
Monday: 2 mile (2.1) in 28:14; 13:59 pace. Some ache right at the end, but mostly tired from deadlifts.
Today: weights: tried to keep it shoulder friendly.
pull ups: 10 singles, 3 sets of 10 (really), then some chins: 4,1, then 5.
push ups: 4 sets of 25; evidently the angle of this push up is similar to decline presses; easier on my shoulder.
bench press (touch the chest) 10 x 44, 10 x 94, 10 x 114, 3 x 134 (didn’t feel right)
rows: 3 sets of 10 x 134
multi-grip shoulder press: 10 x 45, 10 x 65, 10 x 65 close, 10 x 70 close
curls: 3 sets of 10 x 54
This appears to be ok; not much residual pain afterward.
## 5 th day after Moderna 2
I am mostly back to normal; maybe a bit off in terms of extra fatigue.
Deadlifts: I didn’t have the energy I had last Sunday and stopped after my 5 “low handle” worksets…part of it was injury paranoia.
10 x 134, 10 x 184, 3 x 224
Work sets: 3 x 271, 3 x 271, 3 x 271, 3 x 271, 3 x 276 a slight improvement from last week.
No stomach for a heavy single.
Then a 2 mile walk at 13:52; perfect weather. Quit RIGHT at the onset of pain; it was *almost* perfectly pain free.
Very slow off of the ground in the first work set (271)
My final set with 276 was better than my first set.
## Moderna shot 2: working out
Moderna shot 2 on Tuesday.
Wednesday: stretching
Thursday: started to feel better; slow 2 mile walk in the am (15:40 pace)
Friday: MUCH better; 2 mile walk (14:3x pace)
But, I still felt my glute and shoulder firing up (recently dinged.) and some body stiffness and pains; it was as if I had overdone workouts even though I had not lifted since Tuesday.
Today, I decided to see what I could do and to try out my new multi-grip bar.
rotator cuff/hip hikes: the usual.
Pull ups: 5, 5, 10, 6 more sets of 5 (2 chin up style, 1 wide grip)
push ups: 3 sets of 25 (not that bad)
Bench: tried out my new multi-grip bar; was disappointed in that it didn’t help the shoulder discomfort that much (but keep reading)
2 sets of 10 with the bar only (45) touching the chest)
Did some “pin press” reps; 2 sets of 10 with 95, close grips were a no-go with any weight at all, and 1 rep with 135 was painful..
rows: 3 sets of 10 x 134
shoulder press: usual 3 sets of 10 x 94; tried to get arms to parallel; mostly succeeded but didn’t really “feel good.”
curls: one set with the muti-grip bar (good), 2 sets of 54.
shoulder press: multi-grip: 10 x 45 felt GREAT, 10 x 65 didn’t feel that bad; got somewhat better ROM..I think that this is where it will work very well.
I didn’t injure the shoulder on the bench; it was on re-racking the shoulder press a few weeks ago.
First set; didn’t know what to expect.
Pulling motions don’t hurt.
Push ups feel ok
Close grip: no-go.
Shoulder presses: don’t feel great, but ok. Range of motion is limited.
The multi-grip bar works well here; I might stick with this for shoulder presses.
Finishing with curls. My muscles are “impressionistic.”
## Moderna 2: 54 hours later
That’s the arm. It itches a bit. I was tired last night; had the “split sleep” and “second sleep”; walked a bit and then did some work stuff.
Feeling MUCH better right now..perking up.
Earlier today, I had the “I just played football the day before” level of body aches and fatigue; now it is subtle fatigue.
The walk: tingles at about 1.5 and I quit just at the onset of mild pain. I wonder if I can do this every other day, without making things worse.
## Moderna 2 side effects
Workout: LOL, 25 minutes of stretching. I might do a half workout tomorrow if I feel up to it.
Yesterday: kind of allergy eyes, lethargic.
Today: tired, light body aches (mild), firing up of old injuries (slightly); sort of “fell into a well” during the afternoon only to get a second wind. Now I am slowing down again.
Somewhat sore upper arm, very cold hands, lung burn (oh-so-slight) and fatigue. I think shingles 2 and “flu plus shingles” was more tiring. But this is only day 1.
## vaccine 2
Yep, got shot no. 2 today. So far: “allergy eyes”…sort of fatigue.
workout: yesterday’s walk was ugh…
It was painful. Too painful for being that short at that pace.
Today’s pre vaccine workout:
pull ups: 50 reps; sets of 5, singles, etc.
push ups: 3 sets of 20 (some strain)
bench press: 3 sets of 10 x 94 (touch the chest)
rows: 3 sets of 10 x 134
curls: 3 sets of 10 x 54
shoulder presses: working out the proper depth…bar to the eyes seems to be just right. 3 sets of 10 x 94
the press set was set 2; set 3 went better.
Now to examine my piriformis; wonder if shoes is the culprit.
## Marjorie Taylor Greene: D’s best hope for 2022 midterms
Yes, we Democrats are experts at shooting ourselves in the foot.
But, fortunately for us, the Republicans have their version of “super wokeness” which may be just enough to keep our own kooks from ruining our chances.
Please keep her front and center! The more she talks, the less oxygen there is for the liberal kooks, and the better our prospects look in 2022.
## Finally…a long time coming
My “after lifting” walk was ok; had some tingles but quit just before the onset of pain.
I was a bit disgusted that the second mile was as slow as it was but…what the heck.
The deadlifts that came first though: I woke up feeling a bit energetic.
10 x 134, 10 x 184, 3 x 224 …felt easy.
Main work sets: 3 x 270, 3 x 270, 3 x 270, 3 x 270, 3 x 275 none felt that hard.
So…I passed on my high handle work and tried for 300 lb (301 on my weird get up..44 lb bar so: 4 45 plates, 2 25 plates, 2 11 lb plates, 2 2.5 lb. plates.)
Miss at 301…got it ABOVE my knees but lost balance on the handle and had to set it down.
I bailed on the second attempt as I didn’t have my grip in the right position.
3rd attempt was the charm: ground it up oh-so-slowly and locked it out.
The first: didn’t lock out..no lift.
Shown: 3 x 275, miss at 301, make at 301
Bottom line: this is the fist “honest” 300 I got since 1995…26 years, and I was 35 lb heavier then. I did get 410 at this bodyweight but that was 1978-1979. Will I get there again?
Back to today: I did it AFTER my work sets; the 5 sets of 3 at 270+ is also a 60+ age group best for me too.
I still remember the humiliation of being unable to get 225 off of the ground back in August 2019. And yeah, I know how weak I am; I’d get laughed at on Instagram, etc. This is a personal victory even if it is only a “70’th percentile” age group wise, etc.
2021
2021 current milestones week by week:
5 x 255 Feb 21, 2021
4 x 255, 1 x 260
4 x 260, 1 x 265
3 x 260, 2 x 265
3 x 260, 2 x 265
1 x 260, 2 x 265, 2 x 270
3 x 265, 2 x 270
2 x 265, 3 x 270
4 x 270, 1 x 275, single 301 (April 18)
Max (bar or low handle only)
Aug 2019: 200
Nov 4 x 225
Dec 230, 233, 235, 240
Jan 2020: 245, 255, 260
Feb 260, 266
Aug: 274
Dec: 284
April 2021: 301
2021 Progression (since I started the “5 sets of 3”)
Jan 23: High handle: 5 sets of 3 x 274, 1 x 301, Low handle: 10 x 224.
Jan 31: high handle: 5 sets of 3 x 279, 1 set of 1 x 301 low handle 10 x 224
Feb 7: missed 319 badly (high handle); too beat up to do much but 5 sets of 3 x 274 high handle , 10 x 224 low.
Feb 14 regression; found that 260 low was difficult; retreated to 3 sets of 5 x 244 low, 1 set of 5 x 274 high.
Feb 21: 5 sets of 3 x 255, high: 3 x 279, 1 x 301
Feb 28: 3 x 255, 3 x 255, 3 x 255, 3 x 255, 3 x 260, high: 3 x 279, 3 x 279
March 7: 3 x 260, 3 x 260, 3 x 260, 3 x 260, 3 x 265, high: 3 x 285, 3 x 301
March 14: 3 x 260, 3 x 260, 3 x 260, 3 x 265, 3 x 265, high: 3 x 285, 3 x 301
March 21: 3 sets of 3 x 260, 2 sets of 3 x 265, high handle singles: 285, 301, 285
March 28: 3 x 260, 3 x 265, 3 x 270, 3 x 270, 3 x 265 (heaviest set so far)
high handle: 3 x 285 (not that bad), 1 x 301 (struggle…cut it there)
April 4: 3 x 265, 3 x 265, 3 x 265, 3 x 270, 3 x 270 (maybe a bounced rep in the last set) high: 3 x 285, 3 x 285
April 11: 2 sets of 3 x 265, 3 sets of 3 x 270, High: single 290, single 300
April 18: 4 sets of 3 x 270, 1 set of 3 x 275, single with 301 (low handle)
Prior:
August 5, 2019: managed a High Handle hex dead 225 (1 rep)
August: typical sets were 4 x 175, 4 x 195 (handles up)
November: got 4 x 225 hex dead, handles DOWN
December: got 1 x 225 bar dead, then 4 x 225
December 20, 2019: got 1 x 230 bar deadlift
DEcember 25: got 1 x 233 bar deadlift (home)
Dec 27: got 1 x 235 dead
Dec 29: got 1 x 240 dead (bar)
2020
Jan 11: got 245 hex dead (handles down)
Jan 25: got 255, then 260 conventional dead
Feb 1: got 255, 260 Sumo, 6 x 233 conventional
Feb 8: got 266 conventional, 5 x 238 Sumo.
April 5, 5 x 244
April 18: 5 x 244
Gap for sore back: switched to low handle trap deads
May 23: 10 x 206
June 7: 10 x 217
June 20 1 x 250 trap
July 4: 10 x 222
August 1: 4 x 250 (trap, low handle)
Gap for Virtual ultra (24 hour)
August 22: 4 x 254 (trap..low handle)
August 29 1 x 274 (trap, low handle)
September 5: switched to full stop deadlifts: sets with 244
September 12: 10 x 222 (full stop)
September 27 4 x 254 5 x 244 (full stop)
October 4deficit: 10 x 134, 10 x 184, 5 x 224
regular: 2 x 259; didn’t try a 3’rd rep as I felt a tug in my right hamstring (high)high handles: 10 x 224:
October 17: High handle: 5 x 254, 5 x 264 (fairly easy)Low handle, neutral: 5 x 244
October 24: High handle: 5 x 264, 5 x 274, low handle: 10 x 224
October 31: 1 x 254, High Handle 1 x 284, 1 x 274
November 7: High Handle: 5 x 274, Low Handle: 5 x 249
November 14: Low Handle: 1 x 254, High Handle: 1 x 289, 1 x 294, Low Handle: 10 x 224
November 18; High handle 10 x 234 (by accident; added an extra 5 to each side)
November 21: Low Handle deficit: 1 x 254. High handle 1 x 301, 10 x 244. Low Handle: 10 x 224.
November 28 Low handle 5 x 254 (no bounce)
December 5 Low Handle 1 x 284, 10 x 229. (no bounce)
December 12 High handle: 1 x 274, 1 x 304 (easier than the 301 2 weeks ago), 1 x 274, 10 x 244
Low handle: 10 x 224
December 19: 5 x 274 high handle, 5 x 254 low handle. (missed 301 low handle 4 times)
December 26: High handle: 1 x 314, 5 x 279, low handle: 5 x 254
2021
Jan 2: 5 x 279 high, 4 x 259 low5 x 234 deficit (lots of misses with 319 high)
Jan. 9: High handle: 5 x 279, 5 x 279 (effort) Low handle 5 x 255
Jan. 16: High handle: 5 x 285, 1 x 285, 5 x 274, Low handle: 10 x 224.
Jan 23: High handle: 5 sets of 3 x 274, 1 x 301, Low handle: 10 x 224.
Jan 31: high handle: 5 sets of 3 x 279, 1 set of 1 x 301 low handle 10 x 224
Feb 7: missed 319 badly (high handle); too beat up to do much but 5 sets of 3 x 274 high handle , 10 x 224 low.
Feb 14 regression; found that 260 low was difficult; retreated to 3 sets of 5 x 244 low, 1 set of 5 x 274 high.
## I argue with Bill Maher (about COVID issues)
Yes, the video is worth watching, and yes, I like Bill Maher. But I will push back versus *some* of what he says here (I agree with some of it).
1. The wiping down stuff was not “bullshit.” Yes, it turns out the early guidance was overkill with respect to surface transmission. But, as I said at the time, some of what we were doing early would prove to be unnecessary…at that time, we didn’t know what would be necessary because we were still very much in the learning stage.
2. Yes, liberals tended to overestimate the “hospitalization rate”, but in general, people are bad at numbers. Still, 1-5 percent is high for a disease that spreads exponentially. Think of it this way: I get it, spread it to, say, 3 others, who in turn spread it to 3 more: $3^5 = 243$ so just 5 people removed, you’ll have your 2 to 10 in the hospital.
3. Yes, obesity is a risk factor..and we appear to be getting at the reasons. But yes, even the non obese can get very sick. An accomplished ultramarathon runner I know ended up in the hospital for 22 days. A current NBA player recovered and is still playing..but needs an inhaler before games.
4. About morbidity: many who don’t end up in the hospital still have long haul problems..some of the conditions can be chronic. About 10 percent become long haulers. Many others have symptoms that last for months.
5. School: yes, look at the potential for the kids that get it to have long term issues. But …also, what about the teachers and staff? So much depends on community spread, conditions of the schools, etc.
6. Yes, Texas isn’t doing that badly as a state, but reports are that city people are still masking up. But there are some hot-spots in rural areas:
And yes, I know that my area has no room to throw shade at anyone else.
I am embarrassed to say that we are one of the worst metro areas in the country, and the worst one outside of Michigan.
But back to Maher: he did get some things right. Open air stuff is probably ok; I agree with going for that swim, walk, hike, run, etc. And yes, though I don’t like the Florida governor and wonder how many are taking infections from his states to other states (they are mid-pack in COVID performance), I do think that liberals underestimate him. And as far as the NY governor …well…liberals aren’t always right. Rodger that.
Workout notes I gave into it: I did take Naproxen this morning…pain last night was NOT severe but I was a little achy.
First I walked…I made sure to quit when it was “only mild tingles” and I never got to “ache.” It is a fine line between doing enough to heal and doing too much to where one doesn’t heal up.
I did a ton of prep work, including resistance band work to get ready (20 minutes?)
Then weights; I changed things up.
Pull ups: 10 singles, 10 reps (shown), 10 reps, 5, 5, 5, 5 with 2 of the latter 5 sets being chin up grips.
push ups: (shown) I noticed the handles give me a friendlier grip than the bar..range of motion was mostly pain free.
shoulder presses: 3 sets of 10 x 94 “mostly” pain free..decided the “nose” was a good range of motion limit for me. I might use the spotter bars there and do a “touch the bar” as a guide.
curls: 3 sets of 10 x 54.
I didn’t do some other stuff…trying for a nice deadlift session tomorrow.
Pull ups:
Push ups, as a proxy for bench press
Limited ROM shoulder press:
|
|
# zbMATH — the first resource for mathematics
Interpolation and approximation by piecewise quadratic $$C^1$$-functions of two variables. (English) Zbl 0424.41020
Multivariate approximation theory, Proc. Conf. math. Res. Inst., Oberwolfach 1979, ISNM Vol. 51, 146-161 (1979).
##### MSC:
41A30 Approximation by other special function classes 41A05 Interpolation in approximation theory 51N10 Affine analytic geometry
|
|
How to tell if a BJT is PNP or NPN by looking at the circuit? [closed]
Is it possible to tell if a BJT circuit is PNP or NPN simply by looking at the circuit?
For example, for the circuits shown, how can we tell if they're PNP or NPN?
closed as off-topic by Andy aka, Eugene Sh., Oldfart, Edgar Brown, Nick Alexeev♦Feb 28 at 0:27
• This question does not appear to be about electronics design within the scope defined in the help center.
If this question can be reworded to fit the rules in the help center, please edit the question.
• Yes. Look at the location/direction of the arrow. That is really a question for your textbook or a search engine. – Eugene Sh. Feb 27 at 16:11
• I'm voting to close this question as off-topic because there is no evidence of effort to research this answer at all. – Andy aka Feb 27 at 16:19
• A memory aid: emmiter arrow on an NPN is Not Pointing iN – Peter Bennett Feb 27 at 16:42
• Another memory aid that has stuck with me for a while: PNP is Pointing iNward Proudly – Andrey Akhmetov Feb 27 at 17:30
• Still another memory aid: PNP is Pee iN the Pot. This works really well if you rotate the symbol 90 degrees counter-clockwise thus making the collector the lid. – Glen Yates Feb 27 at 23:04
Quite simply, look at the arrow on the leg.
The arrow will always be on the emitter. Look at where the current flows. The arrow will always follow the current flow.
When the arrow is pointing away from the base, it is an NPN. When pointing towards the base, it is a PNP.
If the arrow points towards the base it is a PNP, if it points away from the base it is an NPN.
Further, the arrow will typically be on the more positive leg for PNP and negative for NPN.
• Thank you. So would those all be PNP? – Jimmy Vailer Feb 27 at 16:13
• No. Look carefully. – Eugene Sh. Feb 27 at 16:13
• All but one.... – evildemonic Feb 27 at 16:14
• Oh so c) (the first one) would be NPN because the arrow above the current source is pointing away? – Jimmy Vailer Feb 27 at 16:15
• Oh ok thank you. So d) since the arrow is away from the base. – Jimmy Vailer Feb 27 at 16:17
This diagram should explain everything
• Your PNP diagram has two emitters. Ic should be labelled "collector". – Brian Rogers Feb 27 at 20:50
|
|
# How can I formally justify the description of an electronic excitation that includes only the relevant orbitals?
I am writing a small program to animate electronic transitions.
To do this, I am animating a Rabi cycle for a single electron as it moves from an occupied into a virtual orbital.
The on-resonance excitation process during the interaction of a molecule with a coherent laser is modelled by the Rabi formula: $$\Psi = \cos(\Omega_R)\Psi_g + \sin(\Omega_R) \Psi_e e^{-i\omega t},$$
where $$\Omega_R$$ is the Rabi frequency and $$\omega = (E_e - E_g)/\hbar$$.
As a minimal example, I will consider the $$\ce{H2}$$ molecule. The first electronic excitation is $$\sigma \rightarrow \sigma^*$$.
Formally, the wave function of the ground state is: $$\Psi_g = \det|\sigma_{\alpha}\sigma_{\beta}|$$, and the excited state is: $$\Psi_e = \det|\sigma^*_{\alpha}\sigma_{\beta}|$$.
Intuitively, I think that the description of the excitation process can be understood by considering only the wave function of an isolated electron jumping from a $$\sigma_\alpha$$ into a $$\sigma_{\alpha}^{*}$$ orbital, rather than wave function as a whole. In this case, I express the transition state as follows:
$$\Psi = \cos(\Omega_R) \sigma_{\alpha} + \sin(\Omega_R) \sigma^*_{\alpha} e^{-i\omega t}$$
from which an animation of the changing electron density can be constructed by computing isosurfaces of $$|\Psi(t)^{2}|$$, and assigning the color depending on the phase of the underlying wavefunction at that point:
In a more complicated molecule, the same simplification might correspond to expressing the excitation process as follows:
$$\Psi = \cos(\Omega_R) \psi_{\text{HOMO}(\alpha)} + \sin(\Omega_R) \psi_{\text{LUMO}(\alpha)} e^{-i\omega t}$$
Which, for an s-tetrazine derivative leads to the following animation for the first excited state (side and top view):
My issue is that I am not 100% certain about whether there is a formal procedure that allows me make the claim that these animations provide me with a physically valuable description of what happens during an excitation. It is clear to me that these animations do allow us to understand the symmetry rules and the direction of the transition dipole moment. I am looking for some help in finding the formal way of justifying this approach. Could the "1 particle reduced density matrix" formalism help me here?
• Welcome! May I suggest checking out FAQ: How can I format math/chemistry expressions on Chemistry Stack Exchange?. You obviously know the syntax already (most of it is identical to LaTeX); but there are some things which you will find useful, e.g. using \cos{...} instead of just cos ... to correctly typeset cosine and other operators. Nov 2 at 13:23
• Aren't you running (besides details which I might not understand) in the problem to give physical sense to the orbitals, the most usual way to do it being its interpretation as a probability distribution function? Nov 2 at 13:56
• @Alchimista, I should have been a bit more specific about the final step, I will make an edit to clarify. What I animate is the wave function squared, where the wave function is the time-dependent wave function during the Rabi cycle. So, the physical interpretation would indeed be the changing probability distribution of a single electron during the excitation process. I think that this makes intuitive sense. If only one electron changes orbital, then we only have to consider the changing probabilities for that single electron. But I do not know how to justify this assertion formally! Nov 2 at 14:12
• @MaxParadiz I see now. You are looking for something like a energy bases criterion of separation. I am sure specialist will help you. Otherwise try finding done examples, if the final rendering is your main goal. Nov 2 at 14:56
• I'm not sure you can formally justify this approach. Electrons are indistinguishable and therefore it is formally incorrect to localize one electron to one orbital and the other to the other orbital. The full wave function of the system includes contributions from each electron in both orbitals. But there's still no reason why you can't make an animation of only one orbital of the excited state. Nov 2 at 19:30
The answer here depends on what you mean by "formal procedure" and "physically valuable".
The potential flaw in your approach is the assumption that
$$\Psi=\cos(\Omega_R)\sigma_\alpha + \sin(\Omega_R)\sigma^*_\alpha e^{−i\omega t}$$
is a valid description. Formally, the wave function of a two-electron system cannot be split into two independent one-electron wave functions. This is perhaps more clear if you write out the full determinants in the wave functions, which are $$\Psi_g=\begin{vmatrix}\sigma_\alpha(1)&\sigma_\beta(1)\\\sigma_\alpha(2)&\sigma_\beta(2)\end{vmatrix}$$ and (for a singlet excited state) $$\Psi_e=\begin{vmatrix}\sigma_\alpha(1)&\sigma^*_\beta(1)\\\sigma_\alpha(2)&\sigma^*_\beta(2)\end{vmatrix}$$
where (1) and (2) are the two electrons and $$\alpha$$ and $$\beta$$ are spin states.
That is, each wave function necessarily consists of contributions from each possible electron-orbital pairing rather than just assigning one electron to each orbital. (For a triplet excited state, there are actually two additional wave functions to account for the spin possibilities, but that's not necessary to consider for this discussion.)
That said, almost all computational approaches to dealing with such systems apply this invalid approach in order to make the mathematics tractable. The Hartree-Fock method is a very common starting point. By adding various terms to correct for the indistinguishability and interactions of the electrons, the results of these methods can approach the results we would expect to get from a formally correct description of the system (ie solving the Schrodinger equation for a single multi-electron wave function).
Furthermore, treating the total electron density of a molecule (ie the square of the multi-electron wave-function) as the sum of densities of multiple discrete orbitals is a standard pedagogical approach, and predicting the properties and reactions of molecules based on a subset of these discrete orbitals (valence or frontier orbitals usually) works quite well at a qualitative and semi-quantitative level. Whether this pedagaogical value counts as "physically valuable" depends on what you mean by that.
I'll conclude by invoking George Box's maxim that "All models are wrong. Some are useful." While your animations are not defensible on theoretical physics grounds, that does not mean that they are not reasonable and useful representations for pedagogical purposes.
|
|
3 added 90 characters in body
Well, let me say what I know so far.
For monic quadratic polynomials it's necessary and sufficient that both roots be real and one be positive with absolute value at least the other. This requires no complicated argument: the characteristic polynomial of [a b] [c d] is x^2 - (a + d)x + (ad - bc). Since a, d 0 it's necessary that at one root has positive real part at least as large as the absolute value of the real part of the other, and since ab, d c ≥ 0 it's necessary that (a + d)^2 ≥ 4ad ≥ 4(ad - bc). On the other hand, this This is sufficient because we can set c = 1. The rest is the quadratic formula.
For general polynomials, I believe a theorem of Berstel implies that 1) the radius of convergence of 1/x^n P(1/x) must occur as a positive real pole r, and 2) any other pole s with |s| = r has the property that s/r is a root of unity. On the other hand polynomials such as the polynomial with roots 5, 5, 3 + 4i, 3 - 4i don't have this property even though they satisfy the non-negativity condition.
2 added 94 characters in body
Well, let me say what I know so far.
For monic quadratic polynomials it's necessary and sufficient that both roots be real and one be positive with absolute value at least the other. This requires no complicated argument: the characteristic polynomial of [a b] [c d] is x^2 - (a + d)x + (ad - bc), and since a, d ≥ 0 it's necessary that (a + d)^2 ≥ 4ad ≥ 4(ad - bc). On the other hand, this is sufficient because we can set c = 1. The rest is the quadratic formula.
For general polynomials, I believe a theorem of Berstel implies that 1) the radius of convergence of 1/x^n P(1/x) must occur as a positive real pole r, and 2) any other pole s with |s| = r has the property that s/r is a root of unity. On the other hand polynomials such as the polynomial with roots 5, 5, 3 + 4i, 3 - 4i don't have this property even though they satisfy the non-negativity condition.
1
Well, let me say what I know so far.
For monic quadratic polynomials it's necessary and sufficient that both roots be real. This requires no complicated argument: the characteristic polynomial of [a b] [c d] is x^2 - (a + d)x + (ad - bc), and since a, d ≥ 0 it's necessary that (a + d)^2 ≥ 4ad ≥ 4(ad - bc). On the other hand, this is sufficient because we can set c = 1.
For general polynomials, I believe a theorem of Berstel implies that 1) the radius of convergence of 1/x^n P(1/x) must occur as a positive real pole r, and 2) any other pole s with |s| = r has the property that s/r is a root of unity. On the other hand polynomials such as the polynomial with roots 5, 5, 3 + 4i, 3 - 4i don't have this property even though they satisfy the non-negativity condition.
|
|
# All Questions
0answers
33 views
### How to solve two non-linear equation simultaneously? [closed]
I want to solve two non-linear equation at same time with some condition, the complete description of the problem is attached, any comments? Thanks
0answers
57 views
### Academic citations network [closed]
Do you know if there is curated data that generates a network of academic citations? Something like http://well-formed.eigenfactor.org/radial.html or http://www.eigenfactor.org/map/maps.php I could ...
1answer
30 views
### NDSolve with arrays and Tables of Equations and WhenEvent
I've been toying around with NDSolve for a while, and read through the website. By doing so I discovered that I could use it for vectors and arrays with much pleasure. So I wanted to write a simple ...
2answers
62 views
### Placing some controls above and some below plot
The code below is something I wrote quickly to simulate the problem I'm having with more complex code. I use Row (or Grid) to get the first 2 manipulate variables (a and b) on one line above the plot ...
3answers
115 views
### Generating a list of questions (simultaneous equations)
I am trying to create a list of simultaneous equations showing steps, where I only need to change a, b and the question number, and maybe a couple of minor things about q (where q is the factored ...
3answers
125 views
### Plot parametric solutions using Manipulate [duplicate]
I want to plot the parametric diagram of {y[t],z[t]}, in which I manipulate a parameter, ω, from $[1, 10 \pi]$. Here is what I have attempted, but I can't get a ...
0answers
44 views
### Multiple NIntegral with singularity
I need to compute the following integral FF[0]. When evaluating FF[0] Mathematica simply takes too much time. The problem may ...
0answers
52 views
### Open a Mathematica 9 notebook in Mathematica 8
I have the following question: Can I open a Mathematica 9 notebook in Mathematica 8? Are there compatibility issues?
2answers
308 views
### How to visualize slope fields of differential equations without vectors?
I'm looking to visualize slope fields of differential equations for my differential equations course. Every example I see draws them as vectors, adding unnecessary "arrows" that, to me, are visually ...
2answers
24 views
### Combine two functions in mathematica to get one image with two elements
I am trying combine this two functions into one piece in grid. ...
3answers
135 views
### Colorize a 3D distribution of points according to density
Suppose we have some random 3D distribution of points. I'll use the spherical ball defined below as an example and a starting point for the discussion here : ...
1answer
41 views
### Simplify rearranges equations?
I've been using Simplify/FullSimplify to simplify some nasty equations, and by and large it does well but sometimes it rearranges equations in an unhelpful way. ...
0answers
100 views
### A* algorithm for finding shortest path in a graph
FindShortestPath finds the shortest path between two vertices in an edge-weighted graph, allowing a choice between the Dijkstra and Bellman-Ford algorithms. In my (limited) understanding, both of ...
0answers
71 views
### How to increase processing speed? [closed]
Well, let's first begin with the specs of my computer: Win 8 i7-3537U CPU@2.00GHz 2.50GHz 4GB of RAM 64-Bit And using Mathematica 8 I have to create some programmes to solve statistical problems, and ...
1answer
46 views
### Can I make a function which depends on several arguments which are not independent?
Here's a simple example Suppose have the permanent relation that z is proportional to xy^2, for real numbers s. z = s x y^2 Now, we are interested in ...
1answer
73 views
### Initial and boundary value errors
NDSolve::ivone: Boundary values may only be specified for one independent variable. Initial values may only be specified at one value of the other independent variable. >> I keep getting this, ...
2answers
51 views
### How to divide a notebook into sections? [closed]
I've seen some very nicely formatted Mathematica notebooks with sections, explanatory text, etc., but I can't figure out how to do something similar. The documentation is not especially helpful here ...
1answer
43 views
### Make duplicate string then join two strings together in a table
I have two lists of data, dataA and dataB. ...
0answers
57 views
### How to save the output file and make it run without mathematica? [closed]
I want to save a file, as the picture shows, and make it run without mathematica. I have to move this file to another computer which dose not install mathematica and run this file with all functions, ...
2answers
45 views
Let be given the complex number $z$ satisfying the condition $|z-2+2i|=2\sqrt{2}$. I want to find the complex numbers $z$ so that their modul obtain least and greatest value. I tried. Put z = x + y ... 2answers 121 views ### TrackedSymbols affected by CurrentValue. Another problem with SetDelayed(:=)+OwnVales in Initialization It is MWE, only a Disk[] that should print "Click" when clicked. Also "date... updating" ... 1answer 31 views ### Summation for a function having singularity I need to do summation of this function with Mathematica. For[t=0,t<500,t++;Sum[-Q*Log[Q],{k,-Infinity,Infinity},{n,-Infinity,Infinity}]] Q= Q1+Q2+Q3 where ... 0answers 109 views ### Every third element in a list [closed] EDIT - Just wanted to change wording Okay, so I didn't find an exact answer to this online so here I am. I have a range of integers from 1-200 ... 0answers 26 views ### Problem with slot in If statement [duplicate] I am struggling with slots, In this case I have a function int which gives either true or false. When it gives true, I want ... 1answer 58 views ### How to Solve/NSolve/Reduce this rationalized equation In this problem I don't know why the system is not solvable. I have just one unknown K0. ... 1answer 43 views ### Extracting second value of each sublist [duplicate] How can one get the second value of every array inside the outside array? I have the following data data = {{10, 20}, {30, 40}, {50, 60}}; I want to extract ... 0answers 31 views ### Changing the tick marks on the y-axis [closed] I know that Ticks allows me to scale the x-axis in terms of pi for example. How would I do so for the y-axis? EDIT: I've just realised how to do so: Plot[Sin[x], {x,0,10} , Ticks -> {Automatic, ... 1answer 77 views ### Extracting Imported Information What I want to do is to compare the relationship between the consumption of say beer and wine using scatter plots where each point designates the particular country (preferably named). I have tried to ... 0answers 80 views ### Fourier transformation of HeavisideTheta functions I want to find 2D-Fourier transformation of the function given below f = HeavisideTheta[y1]*HeavisideTheta[y2 - y1] For the purpose, I use built-in function in ... 2answers 102 views ### Permutations with restrictions I have a problem that can be modeled as: there are 8 boxes in total, and 60 different items. I want to put all items into boxes (all of them can be put into a single box). Now I want to find: 1) ... 1answer 44 views ### Choose partition of a set with Manipulate Suppose I have a set represented by an array of strings, like set = { "A", "B", "C", "D", "E", "F" } I would like to create a Manipulate which allows the user to build repartitions of the set. ... 1answer 36 views ### ContourListPlot not evaluating all of interpolated area I am trying to plot a contour for a matrix, but it seems not to be evaluating all the points. Here is the interpolated figure, followed by the MatrixPlot version itself. I used this colormap because ... 1answer 32 views ### Slow evaluation of complicated Table I have a system of recursive equations x[t+1]:=f[x[t],p[t-1],a[t],a[t-1]]; p[t+1]:=g[p[t],x[t-1],a[t],a[t-1]]; a[t+1]:=h[x[t],p[t],a[t],a[t-1]]; I want to ... 2answers 154 views ### extract line from position data I got a picture like this, and what I want to do is to extract lines from the picture. what I have done now is ... 1answer 76 views ### How to handle keyboard events for unselected item in CDF? How can I force the keyboard input to work for this cell without clicking on it after I evaluate or showing a blinking selection cursor? This is for running CDF games in a browser. ... 0answers 74 views ### Using NVIDIA 3D Shutter Glasses with Mathematica Mathematica has great capabilities to display 3D Graphics rendered to the 2D screen. This should make it perfectly suited to use the Graphics3D system together with an NVIDIA Graphics card, a 3D ... 0answers 50 views ### Constructing lists of evaluated functions inside Module [closed] This is my code: ... 0answers 60 views ### Playing around with “spherical k-ellispe” on the sphere surface [closed] It the Spherical K-ellipse I am looking for. What I mean is something like this but just with the two red curve and the sphere, idealy play with 2 parameters(number of foci and angle of the cone ... 0answers 43 views ### Solving for a variable within a definite integral using FindRoot I want to solve for P in this equation: Integrate[1/((P/(M*V)) - r*g - c*A*(V^2)*p/(2 M)), {V, 0, 26.82}] = s , where all constants are given except P. I've ... 1answer 66 views ### RandomSample Bug? I was using RandomSample with very skewed weights. I am asking for a random sample of 6 numbers 1 through 100 without replacement. In the screen shown, the weights for 1 through 6 are 100 000, with ... 0answers 39 views ### Type Vietnamese in Mathematica I type Vietnamese using UTF-8 Encoding but Text Display got error. How to type Vietnamese, Chinese in Mathematica. Please help me. 1answer 45 views ### Optimise CPU-intensive list combining function I have a list of numbers list1 from which I want to create another list (list2) consisting of only those numbers that are equal ... 0answers 21 views ### Set Image Size and Page Orientation When Exporting to PDF I want to export my plots (plot1 and plot2) as a pdf file. Since plot1 contains 4 figures in matrix form, so it will not fit in one page. ... 2answers 58 views ### Integrating a Simple Piecewise Function If I define f[x_]:=If[x<0,-1,1] and then integrate,say Integrate[f[x],{x,-3,7}] I get what I expect, namely 4. But ... 3answers 94 views ### Gather lists according to another list I have a first list of pairs like this one: ... 1answer 63 views ### How to plot a decahedron with Mathematica? I wrote Graphics3D[{Opacity[.9], FaceForm[Yellow], PolyhedronData["Dipyramid", "Faces"]}] and I get this picture: I would like to plot a decahedron without ... 0answers 42 views ### Why has my syntax coloring changed? Local variables in Module are no longer specially colored for me, but other syntax coloring remains, for example iterators in a ... 1answer 35 views ### Find the coefficients of a set of expressions in partial-derivatives [closed] Is there a way to find the coefficients of these (terms within the square brackets) expressions using mathematica? \begin{align} \hat{l}^2=&\sum\limits_{a=1}^{4}\left[\left(\frac{\partial ... 0answers 22 views ### Derivative with respect to certain variable. [closed] How would anyone calculate this derivative using mathmatica?? More specific with respect to dln(alfa) 0answers 30 views ### Generating the set of all three-dimensional binary arrays distinct under rotation and reflection? How can I automatically generate all possible three-dimensional binary arrays of size(x_1 \times x_2 \times x_3\$) where each array is distinct from all others with respect to rotation and reflection ...
15 30 50 per page
|
|
1. Oct 30, 2014
### Bashyboy
Here is a link to a proof which I am trying to understand.
http://groupprops.subwiki.org/wiki/Left_cosets_partition_a_group
The claim I am referring to is number 4, which is
Any two left cosets of a subgroup either do not intersect, or are equal.
Assuming that I am skeptical, then for all I know there are three cases: (1) $aH \cap bH \ne \emptyset$ but $aH \ne bH$; (2) $aH = bH$; or (3) $aH \cap bH = \emptyset$.
The truthfulness of (2) and (3) is made reasonable by simple examples and calculations involving certain groups, such as $D_4$. However, working with these examples, it is not clear that (1) is true or false; consequently, it remains as a possibility.
In the proof given in the link, they start out by supposing that $aH$ and $bH$ are not disjoint, that they could have some elements in common. Continuing on in the proof, we see that by supposing this is true, this undoubtedly leads to the cosets being equal; in doing this, they also show that it is not possible for $aH \cap bH \ne \emptyset$ but $aH \ne bH$.
But they don't treat whether it is possible for $aH \cap bH = \emptyset$ to be true. Why is that?
2. Oct 31, 2014
### jbunniii
It is certainly possible for $aH \cap bH = \emptyset$ to be true. This occurs if and only if $a$ and $b$ are in different cosets of $H$.
3. Nov 1, 2014
### Fredrik
Staff Emeritus
Because it's obvious that if they're disjoint, they're not equal. (This is true for any two non-empty sets, and we have $a\in aH$, $b\in bH$).
The goal is to prove that $aH$ and $bH$ are either disjoint or equal. They are clearly either disjoint or not disjoint, so it's sufficient to prove that if they're not disjoint, they're equal.
4. Nov 1, 2014
### Bashyboy
But how do I know if it is possible that they are disjoint? I understand that a disjunctive statement $p \vee q$ is true when at least one of the simple statements is true.
Is the only way to know that it is possible for two cosets to be disjoint is to work with simple examples, as I have already done?
5. Nov 1, 2014
### Fredrik
Staff Emeritus
The theorem holds for all groups $G,H$ such that $H$ is a subgroup of $G$. The proof doesn't rely on the existence of $a,b\in G$ such that $aH$ and $bH$ are disjoint. So the proof even holds for the case $H=G$.
Suppose that no such $a,b$ exist. Then $aH=H$ for all $a\in G$. (This follows from the theorem and the fact that $eH=H$). Let $a\in G$ be arbitrary. We have $a=ae\in aH=H$. So $G\subseteq H$. Since $H\subseteq G$ by assumption, this implies that $H=G$.
6. Nov 1, 2014
### jbunniii
If $H$ is a proper subgroup of $G$, then take any element $a \in G$ which is not in $H$. Which coset of $H$ contains $a$? Certainly not $H$ itself. Therefore $aH$, the coset containing $a$, is not the same as $H$. By the theorem, this forces $aH \cap H = \emptyset$.
|
|
# Tag Info
19
Let me first list all of the possibilities I considered that I later rejected. This is far from exhaustive, and I'm looking forward to seeing other people's creativity. Bad Ideas Sit on a tire swing with the fan pointing to the side. Point the fan up, measure speed of rotation of the system on the tire swing. Get a laser or collimated flashlight. Point ...
10
There are no "other" examples. The condition that $\vec \omega$ and $$\vec L = I_{\rm tensor} \cdot \vec \omega$$ point to the same direction i.e. $$(\vec L=) I_{\rm tensor} \cdot \vec \omega = k \vec \omega$$ where $k$ is a real number (and no longer a tensor) is a definition of an eigenvector of $I_{\rm tensor}$: both $\vec \omega$ and $\vec L$ are ...
10
This is a note on why angular velocities are vectors, to complement Matt and David's excellent explanations of why rotations are not. When we say something has a certain angular velocity $\vec{\omega_1}$, we mean that each part of the thing has a position-dependent velocity $\vec{v_1}(\vec{r}) = \vec{\omega_1} \times \vec{r}$. We might consider another ...
7
There are actually several different ways to interpret that question, depending on what you mean by "vector" and "rotation". But here's a sense that I've often wondered about myself: in introductory physics, the velocity vector is defined as the time derivative of the position vector (relative to some fixed point). Why is the same not true of angular ...
7
There is indeed a term involving the time derivative of the changing coupling between the masses. First, let's derive the equation for a single mass. $$L = \frac{1}{2} I\, \dot{\theta}^2 - V(\theta)$$ $$\frac{\partial L}{\partial \dot{\theta}} = I\, \dot{\theta}$$ $$\frac{\partial L}{\partial \theta} = -\frac{dV}{d\theta} = \tau$$ $$\tau = \frac{d}{dt} ... 6 Defining properties of vectors are that you can add them and multiply them by constants. These both make sense for angular velocities. On the other hand, adding rotations doesn't make sense. What you can do with two rotations is compose them: first rotate one way, then rotate another. This operation doesn't look like addition of any sort. For one thing, it ... 6 You made a mistake in assuming that the angular acceleration (\alpha) is equal to v^2/r which actually is the centripetal acceleration. In simple words, angular acceleration is the rate of change of angular velocity, which further is the rate of change of the angle \theta. This is very similar to how the linear acceleration is defined. ... 6 Here's a straightforward but somewhat computational way. There are two steps. (1) Show how to define the angular velocity vector in terms of rotation matrices. (2) Write a general rotation in terms of Euler angles. (3) Combine (1) and (2) to get an expression for the angular velocity vector in terms of Euler angles. Step 1. Recall that if \mathbf x(t) is ... 5 I will attempt to answer this question with some basic dynamics and some contact mechanics. There are two special cases here. a) There is sufficient friction to keep the base of the pin A fixed (imparting a reaction impulse J_A when hit by the ball, or b) The floor is smooth and the pin will translate and rotate at the same time with J_A=0. There is ... 5 The proper derivation of the centripetal acceleration—without assuming any kinematic variables are constant—requires a solid understanding of both the stationary Cartesian unit vectors \hat{i} and \hat{j} as well as the rotating polar unit vectors \hat{e}_r and \hat{e}_\theta. The Cartesian unit vectors \hat{i} and \hat{j} are stationary and ... 5 a_c = \frac{v^2}{r} isn't angular acceleration. It's the magnitude of the linear acceleration towards the centre of an object following a circular path at constant angular velocity. Angular acceleration is the derivative of angular velocity, and the analogue of Newton's second law is that angular acceleration equals torque divided by moment of inertia. 5 angular speed is the rate of change of the angle (in radians) with time, and it has units radians/s, while tangential speed is the speed of a point on the surface of the spinning object, which is the angular speed times the distance from the point to the axis of rotation. 5 Yes there will be a drag torque opposite the direction of spin. The name for this seems to be viscous torque. See e.g. this paper. Now according to this paper, the viscous torque on a spinning sphere of radius R in a fluid with viscosity \eta spinning with constant angular velocity \vec\Omega is$$ \vec\tau = -8\pi R^3\eta\vec\Omega $$The paper ... 5 two concentric and counterrotating flywheels preclude all precession forces regardless of which plane the axis is rotated in. this is assuming the connection between the two flywheels is sufficiently strong--it make break from tension/compression due to each flywheel experiencing its own forces. refer to the diagram i just drew up. the black rectangles ... 4 Assuming your rotating object (e.g. the Earth) is rotating at a steady speed the only way to change it's apparent speed of rotation is if you're rotating around it. You give the example of a geostationary satellite. This rotates around the Earth at the same angular velocity as the Earth rotates, so the Earth appears to be stationary (hence the name ... 4 For the person not to slip, there must be a centripetal force of mv^2/r = m r \omega^2 towards the centre. Since v varies with r while \omega is fixed (v=r\omega), it is probably easier to take the second form, in which case this force has to increase as r increases. This forces comes from friction since there are no other forces in the plane ... 4 I have reproduced your calculation. If \theta is the angle from the central vertical axis of the hemisphere to the ball, the tangential downward force is g \sin \theta = g\frac rR The tangential upward force due to rotation is \omega ^2 r \cos \theta=\omega ^2 r \sqrt {1-\frac {r^2}{R^2}} When \omega \lt \sqrt{\frac gR} the downward force is ... 4 I assume you know about rotation matrices, and so for a sequence rotations about Z-X-Z with angles \phi, \theta and \psi repsectively you have$$ \vec{\omega} = \dot{\phi} \hat{z} + T_1 \left( \dot{\theta} \hat{x} + T_2 \left( \dot{\psi} \hat{z} \right) \right) $$The logic here is apply a local spin of \dot{\phi}, \dot{\theta} and \dot{\psi} ... 4 The moment of inertia tensor is not constant in the external reference frame (http://en.wikipedia.org/wiki/Precession#Torque-free ) 4 High angular momentum presents a barrier preventing collapse to a black hole (at least until this angular momentum is radiated away). The parameter on which the formation of black hole depends is the ratio q of angular momentum (J) to the square of mass (M). If q=J/M^2 < 1 (in relativistic units with G=1, c=1), then the black hole ... 3 In the basic discussion of angular momentum where something is rotating around a fixed symmetrical axis \vec{L}=\vec{r}\times\vec{p} reduces to \vec{L}=I*\vec{\omega} Like in this animation where each vector is colored appropriately: However angular velocity and angular momentum can have different directions in two cases: If the axis of ... 3 I'm outlining this and stating the final result so that the OP gets the fun of figuring this out themselves. Future responders, please don't work this out All you have to do is allow \omega(t) to be a function of time. You'll get extra {\dot \omega} = \alpha terms in your equation, and you'll get a final result that says that$${\vec a} = {\vec ...
3
If want to describe the dynamics of the ball, you need to use the SO(3) matrix which describes the ball's orientation. This is a 3 by 3 matrix whose transpose is its inverse. These may be parametrized by Euler angles, and most of the literature on rigid rotating bodies uses this convention, but I think it is best just to use the matrix entries themselves ...
3
There are lots of different examples of oscillatory systems that have essentially the same mathematical form. Let's start by just looking at one type of differential equation: $a = \frac{d^2 x}{dt^2} = -\omega^2 x$ This equation has a general solution (you can check this) $x(t) = A \sin (\omega t + \phi)$ which oscillates with a period of ...
3
The g force is a unit of acceleration. 1 g is equal to 9.80665 m s-2. So the correct formula is $$\text{G force} = \frac{\text{Acceleration in m s}^{-2}}{9.8}.$$ However, when describing uniform circular motion (i.e. $\boldsymbol\omega$ is constant) in free space, the only acceleration felt by the person rotating (in their frame of reference) is the ...
3
The condition for staying in a circular orbit is the requirement for the centripetal force to be equal in magnitude to the gravitational pull. To be precise: $$F_g=F_c,$$ $$mg=\frac{mv^2}{r},$$ where $F_g$ is the absolute value of gravitational force, $F_c$ the absolute value of centripetal force, $g$ the gravitational acceleration, $m$ the mass of the ...
2
An alternative approach: the orbit has two constants of motion: conservation of energy and conservation of angular momentum. The energy is the sum of the potential and kinetic energy \begin{align} E &= -\frac{GMm}{r} + \frac{1}{2}mv^2\\ &= -\frac{GMm}{r} + \frac{1}{2}mv_r^2 + \frac{1}{2}mv_T^2,\\ \end{align} where $v_r$ is the radial velocity ...
2
In this case, fluid molecules near the surface of the ball, to good approximation, are essentially dragged along with the local surface motion. The reason the fluid is dragged along is that the ball surface on a molecular level is not perfectly smooth. You can read about Brownian motion if you want to understand this more in detail. Near the surface, in ...
2
Yes. Consider a system of $N$ particles with masses $m_i$ and positions $\mathbf x_i(t)$. The momentum of each particle is defined is $\mathbf p _i =m \dot{\mathbf x}_i$. The angular momentum of each particle is defined as \begin{align} \mathbf L_i = \mathbf x_i\times\mathbf p_i \end{align} and the total angular momentum of the system is defined as the ...
2
Extending on the answer of SimpleLikeAnEgg; Magnus effect(http:/en.m.wikipedia.org/wiki/Magnus_effect) would work for rotating body and like friction the viscous force (also a type of fricton) would cause the effect of curve ball dragging force. What happens is that the part of the ball having velocity(tange bntial velocity due to rotation) in direction of ...
Only top voted, non community-wiki answers of a minimum length are eligible
|
|
Lung cancer claims more lives than any other cancer in the world and remains difficult to diagnose in the early stages. This article examines the current state of lung cancer detection and screening via low-dose computed tomography (LDCT) in Alaska and considers potential opportunities for occupational therapy practitioners in primary care settings. Medicare requires at least one documented shared decision-making encounter between provider and patient before LDCT lung cancer screening occurs. As a result of time constraints, documentation requirements, and the plethora of preventive health services they provide, primary care physicians often lack the time and training to conduct this essential service. This provides an opportunity for occupational therapy practitioners to perform these services as part of their practice and to play a role in this area as patient educators and prevention specialists in primary care settings.
What This Article Adds: This article explores the national health crisis of lung cancer and describes how occupational therapists can participate in providing care in primary care settings.
Lung cancer claims more lives than any other cancer in the world. In 2018, an estimated 142,670 people in the United States died from lung cancer, with an estimated 228,150 new lung cancer diagnoses (Centers for Medicare & Medicaid Services [CMS], 2020). The incidence of lung cancer is predicted to increase in developed countries, especially among women; lung cancer now claims more lives than breast cancer (Martín-Sánchez et al., 2018).
Late diagnosis of lung cancer is a global concern that transcends disciplines and health care systems. Late diagnoses are attributed to many causes, such as late presentation by the patient to providers, imprecise screening tools, lack of provider knowledge about lung cancer, and gaps in health care systems with respect to referrals and follow-up care (Singh et al., 2010, 2012; Wagland et al., 2017). The current literature highlights a need for research from a variety of perspectives and professions to fully address the issue (Salomaa et al., 2005; Singh et al., 2007). To promote early detection of lung cancer and decrease mortality, global experts recommend more research and policies targeting early detection and diagnostic methods, including the multidimensional factors associated with lung cancer detection (Andreano et al., 2018; Wong, 2018). Late diagnosis is especially disheartening for lung cancer researchers because survival rates for people diagnosed with Stage III or IV lung cancer have increased only modestly in the past 40 yr (Johnson et al., 2014). More research, especially in geographic areas that lack access to health care, is necessary to understand the barriers to lung cancer detection in community settings (Rai et al., 2019).
Many screening options, such as sputum cytology, chest radiography, and low-dose computed tomography (LDCT), have been trialed to diagnose lung cancer and reduce mortality. In 2013, the U.S. Preventive Services Task Force (USPSTF) recommended lung cancer screening by LDCT for high-risk patients (Moyer, 2014). The USPSTF LDCT lung cancer screening guidelines define eligible patients as asymptomatic adults ages 55 to 80 yr who have a 30 pack-year smoking history and currently smoke or have quit within the past 15 yr (Moyer, 2014). This recommendation was based on the National Lung Screening Trial, which took place from August 2002 through April 2004 (National Lung Screening Trial Research Team [NLSTRT], 2011).
This randomized trial of screening examined more than 50,000 participants who were separated into two groups to receive three screenings at 1-yr intervals. The treatment group received screening via LDCT over 3 yr, and the control group received screening via chest radiography (NLSTRT, 2011). Chest radiography was chosen for the control group because this is common practice in community health centers. The trial found a 20% reduction in mortality in the LDCT group compared with the radiography group, with nearly identical participant demographics in both groups. It also reported a higher incidence of Stage I and II cancers detected with LDCT compared with chest radiography. Limitations of this trial include bias from the healthy volunteer effect, lack of participant diversity, use of less advanced scanners in the trial as compared with general use (potentially leading to more false-positive results), and participation by more advanced radiology and detection experts than many communities have access to (NLSTRT, 2011; Rai et al., 2019).
More recently, Usman Ali et al. (2016) conducted a systematic review and meta-analysis of lung cancer screening studies in Europe, the United States, and Canada over the past 50 yr. Results indicated that LDCT lung screening is the preferred method, whereas chest radiography, with or without sputum cytology, is not recommended (Usman Ali et al., 2016). Despite the favorable results recommending LDCT lung cancer screening reported in this meta-analysis, which included the National Lung Screening Trial, Jemal and Fedewa (2017) found minimal change in screening rates following publication of the USPSTF guideline. Rates of LDCT screening, obtained by means of the National Health Interview Survey, remained similarly low in 2015 compared with 2010 (increasing from 3.3% to 3.9% of the percentage of eligible smokers who reported LCDT screening), demonstrating that the available screening is underused despite the approval of governing entities (Jemal & Fedewa, 2017). More than half of those eligible for the screening were either uninsured or received Medicaid, with coverage varying by state (Jemal & Fedewa, 2017).
Implementation of the USPSTF LDCT lung cancer screening guidelines has proven difficult because of barriers to meeting the recommendations, in both access and education. Potential harms of the LDCT screening include false-positive results, radiation exposure, high cost to the patient, and overdiagnosis (Moyer, 2014; Rai et al., 2019; Usman Ali et al., 2016). The high prevalence of false positives or detection of benign nodules, leading to further testing and potentially invasive procedures, remains a large deterrent to the uptake of LDCT screening (Bach et al., 2012; Lewis et al., 2015; Rai et al., 2019). These harms may be even more prevalent in community settings if the screening is not appropriately implemented (Rai et al., 2019). In addition, as a result of the stage of cancer, only one-third of newly diagnosed lung cancers are expected to be curable through treatment, which raises questions regarding the clinical value of screening (Barnes et al., 2016).
The NLSTRT (2011) found that complications from LDCT lung cancer screening were uncommon and rare. Despite this finding, studies have demonstrated providers’ fears regarding this issue and their belief that the risks outweigh the potential benefits, which may limit the adoption of LDCT for lung cancer screening by referring providers (Henderson et al., 2018; Zeliadt et al., 2018). Moreover, research has shown that only half of primary care providers (PCPs) are aware of the USPSTF lung cancer screening recommendations (Li et al., 2018), making use of LDCT for lung cancer screening difficult. Fewer than half of PCPs at an academic center were aware of three or more of the six USPSTF LDCT guideline components for lung cancer screening (Lewis et al., 2015).
Cost, access, and providers’ perceptions of lung cancer screening were identified as primary barriers to recommending LDCT (Henderson et al., 2018; Raz et al., 2016; Zeliadt et al., 2018). Private insurers are required to cover the cost of LDCT without cost sharing; Medicare Part B added annual LDCT lung screening as a preventive benefit at approved facilities, yet Medicaid coverage varies by state (American Lung Association, 2019a; CMS, 2016; Rai et al., 2019). In Alaska, for example, the cost of LDCT for lung cancer screening is $375 for the technical imaging regardless of insurance, plus$175 to \$200 for the radiologist’s interpretation (Providence Health and Services, personal communication, March 28, 2019). Providers may be unaware of the actual coverage details, and the long-term financial costs of implementation remain unknown (Rai et al., 2019). In a survey study of medical directors, only 6.4% of respondents indicated that no barriers exist to providing the recommended screening (Zeliadt et al., 2018). Despite barriers to and lack of awareness of LDCT lung cancer screening, though, Zeliadt et al. (2018) also found that most respondents believed lung cancer screening should be provided by their particular organization for it to be a leader in cancer prevention.
CMS requires shared decision making (SDM) between provider and patient to discuss the benefits and harms of LDCT lung cancer screening (Barnes et al., 2016; Rai et al., 2019), and this counseling visit is billable under Medicare’s guidelines (American College of Radiology, 2019; Rai et al., 2019). Patients who participate in SDM indicate higher confidence and less regret regarding their decision (Søndergaard et al., 2019). Unfortunately, evidence shows that the SDM in LDCT lung cancer screening conversations is poor (Brenner et al., 2018). Limited time with patients to discuss the screening was considered a barrier by both attending and resident primary care physicians (Henderson et al., 2018). Brenner et al. (2018) reported an average visit length of 13:07 min with people who meet the criteria for LDCT lung cancer screening, with 59 s devoted to discussing the implications of the screening, warranting a score of only 6 of 100 on the Observing Patient Involvement in Decision Making scale (Elwyn et al., 2005). These findings imply that when lung cancer screening via LDCT was mentioned in an office visit, minimal discussion occurred regarding its potential, either harmful or beneficial (Brenner et al., 2018). When considering SDM conversations between patients and providers, it is important to consider health literacy. Low health literacy has been associated with low cancer screening uptake (Humphrys et al., 2019); the combination of low health literacy and the complexity of screenings may decrease patients’ engagement in the discussion (Amalraj et al., 2009).
Research conducted in Alaska to assess lung cancer detection throughout the patient–provider–system continuum (Kowalski, 2020) has indicated that occupational therapy practitioners can play a role in facilitating lung cancer detection and in aiding education on LDCT lung cancer screening. This research included a mixed methods survey study of providers, a case study of participants with late-stage lung cancer, and an assessment of opportunities to improve lung cancer detection in Alaska (Kowalski Frank, 2020). This research indicates that policy changes require attention in three areas: systemic change, prevention, and education.
### Systemic Change
Opportunities for systemic transformation to promote the earlier detection of lung cancer include policy changes and adjustments to the health care system. Policy changes could include universal health care to ensure screening coverage. In Alaska, LDCT lung cancer screening is not paid for by Medicaid regardless of a patient’s historical eligibility (American Lung Association, 2019b); therefore, the cost burden falls on patients. Medicaid recipients may choose to forgo the screening or pay for it out of pocket. People who are diagnosed with late-stage lung cancer are typically current or former smokers, belong to a lower socioeconomic class, are unmarried men, and are diagnosed with other comorbidities (Hansen et al., 2008; Smith et al., 2009). Screening may have a ripple effect, requiring time off work, travel from remote areas, and diminished income if paying privately. These are all barriers to promoting policy change, given the lack of voice possessed by people of lower socioeconomic classes, as well as the stigma surrounding smokers.
Large-scale system factors meant to promote LDCT screening include increasing physicians’ time with patients, altering documentation requirements, and abolishing the prescription requirement for LDCT screening. As mentioned earlier, insurance reimbursement for LDCT lung screening requires SDM. This requirement adds to an already overwhelming documentation system; 45% of family medicine physicians’ workday (4.5 hr) is spent on electronic health record (EHR) tasks, which can infringe on their personal lives and lead to burnout (Arndt et al., 2017). Another study found a 2:1 ratio of documentation time to patient care time in ambulatory care disciplines (Sinsky et al., 2016). Current documentation requirements include, but are not limited to, prescription management, order entry, billing and coding, phone calls, and letter generation, all of which law or insurance companies typically require physicians to complete and cannot be delegated (Arndt et al., 2017). Methods to decrease time spent on EHR tasks require study in their own right. Possible solutions are group LDCT lung cancer screening education classes taught by designated clinical specialists (Sakoda et al., 2019) and telephone-based decision counseling (Fagan et al., 2020). Although occupational therapy practitioners cannot currently bill for LDCT lung cancer screening, they have the potential, with proper education in such screening, to become designated clinical specialists to carry out SDM conversations.
Finally, LDCT lung cancer screening requires a provider’s prescription (American College of Radiology, 2019). Typically, PCPs write the prescription, yet some facilities require a pulmonologist’s prescription, thus creating another stop along the diagnostic pathway. The participants in Kowalski’s (2020) survey study advocated for walk-in screening capabilities, similar to mammography, in which patients can request the screening on their own without a prescription (CMS, 2020). That may be available at some point, but it will apply only to people under specific insurance plans; Medicare’s SDM requirements necessitate at least two appointments with a PCP.
### Prevention
Cancer prevention in primary care often includes treatment and education in the following areas: smoking cessation, diet and exercise, sun exposure, screening, and general health (DePue et al., 2008). In addition to the patient’s medical concerns, this education and treatment often fall on the shoulders of PCPs, as illustrated by the response of one provider to Kowalski’s (2020) survey study:
In the amount of time allotted for docs to see a patient (for routine medical follow-up and/or preventive services), there are a myriad of recommended preventive services he/she is encouraged to provide, as well as complex ongoing medical treatment to optimize. Under such limitations, the use of a relatively young screening paradigm may be deprioritized (versus other tasks that deserve to get done) during the visit. (p. 83)
Research has also indicated that time constraints limit physicians’ ability to comply with preventive services recommendations (Yarnall et al., 2003), leaving a gap in care that needs to be filled. An emphasis on prevention also presents an opportunity to prevent or diagnose lung cancer earlier by increasing the capacity of the primary care workforce by including additional health care professionals to create a medical home model of care. Primary care should not be limited solely to medical providers; rather, health care providers should represent the broad spectrum of health care (Berwick et al., 2008; Pershing & Fuchs, 2013). Given this, a need exists for a variety of health care providers to practice in primary care in order to promote collaboration and interprofessionalism and to meet the intention of primary care. We advocate for occupational therapy’s involvement in primary care to address issues of task delegation and prevention practices.
Occupational therapy practitioners have an opportunity to meet the needs of the population via preventive care services in primary care. Several studies have examined occupational therapists’ efficacy as preventive care providers in primary care settings (Chamberlain et al., 2019; Fleischer & Fleischer, 2019; Pape & Muir, 2019). By involving occupational therapy practitioners in primary care, more people may receive the care they need to live healthful and functional lives and decrease the impact of chronic conditions, such as cancer. Allowing providers to maximize their training improves interprofessional collaboration and productivity (Butala et al., 2019); integrating occupational therapy practitioners into primary care may therefore promote preventive and early detection opportunities.
### Education
A lack of awareness exists regarding the symptoms of lung cancer and awareness of screening options. Simmons et al. (2017) reported a lack of LDCT lung cancer screening knowledge among PCPs and high-risk individuals. This decreased awareness may result in fear or resistance, which providers cite as barriers to diagnosing lung cancer.
Increasing awareness of lung cancer symptoms and LDCT lung cancer screening may alleviate fear and resistance once patients have an understanding of the issue. Other researchers have reported similar conclusions, indicating a widespread need for marketing campaigns and patient education (Ersek et al., 2016; Li et al., 2018). Media marketing, mentioned in Kowalski Frank’s (2020) survey results as “mass advertising for screening tests” and “more public service announcements educating people that screening is an option targeted at the right demographic” (p. 84), may not be enough to expand awareness, because patients prefer to learn about the screening from their medical team (Simmons et al., 2017). Expanding occupational therapy practitioners’ roles on the health care team may decrease the burden on PCPs while providing necessary education to patients.
Given the relative newness of LDCT lung cancer screening, providers also require education regarding the USPSTF LDCT guidelines. Kowalski Frank (2020) confirmed the findings of other studies (Ersek et al., 2016; Kanodra et al., 2016; McDonnell et al., 2019) reporting that physicians and nurses need such education. Educational opportunities may target various disciplines that provide primary care, such as alternative care providers, as well as providers who function as PCPs, such as women’s health providers. Kowalski’s survey study found that women’s health providers were those least likely to be familiar with LDCT lung cancer screening guidelines and that they lacked confidence in assessing for lung cancer, with 60% agreement. Write-in responses also indicated a need to receive “buy-in by the alternative therapy community—DC [Doctors of Chiropractic], ND [Doctors of Naturopathy], etc.” (p. 87). In addition, the survey was completed by providers in fields such as ophthalmology and oncology that require awareness of LDCT screening, especially among asymptomatic patients, to at least recommend consultation with the patients’ PCP.
The American Lung Association (2020) reported that rates for newly diagnosed lung cancer cases in Alaska are equivalent with the national average. Kowalski’s (2020) study provides a baseline for future research on the implementation of lung cancer detection measures across the United States, including LDCT lung cancer screening, educational tools, and policy changes. In addition, two new questions emerged from Kowalski’s (2020) research: (1) Can fatigue be objectively measured over time to assess a change, and (2) how does functional decline reflect cancer stage? The creation of objective assessment tools for the nonspecific symptoms of lung cancer may facilitate diagnostic connections between daily life activities and body functions and structures. The Occupational Therapy Practice Framework: Domain and Process (4th ed.; American Occupational Therapy Association, 2020) articulated the interconnections among daily life occupations, contexts, performance patterns, performance skills, and client factors. To answer these questions, future research should focus on simple functional assessments for use in primary care, a risk stratification protocol, and inclusion of other health care disciplines in primary care to provide preventive care. In addition, the results indicate the need for governing bodies to provide assistance with implementation along with screening guidelines to promote uptake of the recommendations.
We have summarized the policy issues regarding lung cancer screening via LDCT. Occupational therapy practitioners may not be aware of LDCT lung cancer screening, but the potential exists for them to participate in such screening. Opportunities in this area include educational initiatives for clients regarding lung cancer symptoms and screening and for providers regarding LDCT lung cancer screening guidelines and methods to ascertain patients’ functional status and non–disease-specific symptoms. Education plays an important role in the outcome of each patient encounter, whether it relates to patients’ health literacy or providers’ expertise, both of which can promote or decrease diagnostic accuracy. This article also suggests the need to increase the capacity of the primary care workforce by including additional health care professionals to meet the growing demand for preventive education and treatment. Finally, this article underscores the need for systemic change to facilitate improved care by increasing time with patients, simplifying and streamlining documentation, and creating direct patient access to LDCT lung cancer screening.
Lesleigh Kowalski wrote this article on the basis of work completed during her doctoral dissertation and acknowledges Jennifer Antick, Nancy Krusen, Aaron Kusano, Andrew Saultz, and Matthew Thompson for their guidance.
Amalraj
,
S.
,
Starkweather
,
C.
,
Nguyen
,
C.
, &
Naeim
,
A.
(
2009
).
Health literacy, communication, and treatment decision-making in older cancer patients
.
Oncology
,
23
,
369
375
.
American College of Radiology
. (
2019
).
Low-dose CT lung cancer screening FAQ
. https://www.acr.org/Clinical-Resources/Lung-Cancer-Screening-Resources/FAQ
American Lung Association
. (
2019a
).
American Lung Association
. (
2019b
).
Lung cancer screening coverage
. https://www.lung.org/getmedia/3229a1fe-b419-40e3-bb93-58acf60496e5/lung-cancer-screening-coverage.pdf
American Lung Association
. (
2020
).
American Occupational Therapy Association
. (
2020
).
Occupational therapy practice framework: Domain and process (4th ed.)
.
American Journal of Occupational Therapy
,
74
(
Suppl. 2
),
74112410010
.
Andreano
,
A.
,
Peake
,
M. D.
,
Janes
,
S. M.
,
Valsecchi
,
M. G.
,
Pritchard-Jones
,
K.
,
Hoag
,
J. R.
, &
Gross
,
C. P.
(
2018
).
The care and outcomes of older persons with lung cancer in England and the United States, 2008–2012
.
Journal of Thoracic Oncology
,
13
,
904
914
.
Arndt
,
B. G.
,
Beasley
,
J. W.
,
Watkinson
,
M. D.
,
Temte
,
J. L.
,
Tuan
,
W.-J.
,
Sinsky
,
C. A.
, &
Gilchrist
,
V. J.
(
2017
).
Tethered to the EHR: Primary care physician workload assessment using EHR event log data and time-motion observations
.
Annals of Family Medicine
,
15
,
419
426
.
Bach
,
P. B.
,
Mirkin
,
J. N.
,
Oliver
,
T. K.
,
Azzoli
,
C. G.
,
Berry
,
D. A.
,
Brawley
,
O. W.
, . . .
Detterbeck
,
F. C.
(
2012
).
Benefits and harms of CT screening for lung cancer: A systematic review
.
JAMA
,
307
,
2418
2429
.
Barnes
,
A. J.
,
Groskaufmanis
,
L.
, &
Thomson
,
N. B.
, 3rd
. (
2016
).
Promising approaches from behavioral economics to improve patient lung cancer screening decisions
.
Journal of the American College of Radiology
,
13
,
1566
1570
.
Berwick
,
D. M.
,
Nolan
,
T. W.
, &
Whittington
,
J.
(
2008
).
The triple aim: Care, health, and cost
.
Health Affairs
,
27
,
759
769
.
Brenner
,
A. T.
,
Malo
,
T. L.
,
Margolis
,
M.
,
Elston Lafata
,
J.
,
James
,
S.
,
Vu
,
M. B.
, &
Reuland
,
D. S.
(
2018
).
Evaluating shared decision making for lung cancer screening
.
JAMA Internal Medicine
,
178
,
1311
1316
.
Butala
,
N. M.
,
Hidrue
,
M. K.
,
Swersey
,
A. J.
,
Singh
,
J. P.
,
Weilburg
,
J. B.
,
Ferris
,
T. G.
, . . .
Wasfy
,
J. H.
(
2019
).
Measuring individual physician clinical productivity in an era of consolidated group practices
.
Health Care
,
7
,
S2213-0764(18)30051-4
.
Centers for Medicare & Medicaid Services
. (
2016
).
Lung cancer screening registries
. https://www.cms.gov/Medicare/Medicare-General-Information/MedicareApprovedFacilitie/Lung-Cancer-Screening-Registries
Chamberlain
,
E.
,
Truman
,
J.
,
Scallan
,
S.
,
Pike
,
A.
, &
Lyon-Maris
,
J.
(
2019
).
Occupational therapy in primary care: Exploring the role of occupational therapy from a primary care perspective
.
British Journal of General Practice
,
69
,
575
576
.
DePue
,
J. D.
,
Goldstein
,
M. G.
,
Redding
,
C. A.
,
Velicer
,
W. F.
,
Sun
,
X.
,
Fava
,
J.
,
Kazura
,
A.
,
Rakowski
,
W.
(
2008
).
Cancer prevention in primary care: Predictors of patient counseling across four risk behaviors over 24 months
.
Preventive Medicine
,
46
,
252
259
.
Elwyn
,
G.
,
Hutchings
,
H.
,
Edwards
,
A.
,
Rapport
,
F.
,
Wensing
,
M.
,
Cheung
,
W. Y.
, &
Grol
,
R.
(
2005
).
The OPTION scale: Measuring the extent that clinicians involve patients in decision-making tasks
.
Health Expectations
,
8
(
1
),
34
42
.
Ersek
,
J. L.
,
Eberth
,
J. M.
,
McDonnell
,
K. K.
,
Strayer
,
S. M.
,
Sercy
,
E.
,
Cartmell
,
K. B.
, &
Friedman
,
D. B.
(
2016
).
Knowledge of, attitudes toward, and use of low-dose computed tomography for lung cancer screening among family physicians
.
Cancer
,
122
,
2324
2331
.
Fagan
,
H. B.
,
Fournakis
,
N. A.
,
Jurkovitz
,
C.
,
Petrich
,
A. M.
,
Zhang
,
Z.
,
Katurakes
,
N.
, &
Myers
,
R. E.
(
2020
).
Telephone-based shared decision-making for lung cancer screening in primary care
.
Journal of Cancer Education
,
35
,
766
773
.
Fleischer
,
A.
, &
Fleischer
,
A.
, Jr.
(
2019
).
Understanding ambulatory care physician OT referral patterns to integrate OT within primary care
.
American Journal of Occupational Therapy
,
73
,
7311510236
.
Hansen
,
R. P.
,
Olesen
,
F.
,
Sørensen
,
H. T.
,
Sokolowski
,
I.
, &
Søndergaard
,
J.
(
2008
).
Socioeconomic patient characteristics predict delay in cancer diagnosis: A Danish cohort study
.
BMC Health Services Research
,
8
,
49
.
Henderson
,
L. M.
,
Jones
,
L. M.
,
Marsh
,
M. W.
,
Brenner
,
A. T.
,
Goldstein
,
A. O.
,
Benefield
,
T. S.
, . . .
Reuland
,
D. S.
(
2018
).
Opinions, practice patterns, and perceived barriers to lung cancer screening among attending and resident primary care physicians
.
Risk Management and Healthcare Policy
,
10
,
189
195
.
Humphrys
,
E.
,
Burt
,
J.
,
Rubin
,
G.
,
Emery
,
J. D.
, &
Walter
,
F. M.
(
2019
).
The influence of health literacy on the timely diagnosis of symptomatic cancer: A systematic review
.
European Journal of Cancer Care
,
28
,
e12920
e12929
.
Jemal
,
A.
, &
Fedewa
,
S. A.
(
2017
).
Lung cancer screening with low-dose computed tomography in the United States—2010 to 2015
.
JAMA Oncology
,
3
,
1278
1281
.
Johnson
,
D. H.
,
Schiller
,
J. H.
, &
Bunn
,
P. A.
, Jr
. (
2014
).
Recent clinical advances in lung cancer management
.
Journal of Clinical Oncology
,
32
,
973
982
.
Kanodra
,
N. M.
,
Pope
,
C.
,
Halbert
,
C. H.
,
Silvestri
,
G. A.
,
Rice
,
L. J.
, &
Tanner
,
N. T.
(
2016
).
Primary care provider and patient perspective on lung cancer screening: A qualitative study
.
Annals of the American Thoracic Society
,
13
,
1977
1982
.
Kowalski
,
L.
(
2020
).
Lung cancer detection through the patient, provider, and system continuum [Unpublished doctoral dissertation]. Pacific University
.
Lewis
,
J. A.
,
Petty
,
W. J.
,
Tooze
,
J. A.
,
Miller
,
D. P.
,
Chiles
,
C.
,
Miller
,
A. A.
, . . .
Weaver
,
K. E.
(
2015
).
Low-dose CT lung cancer screening practices and attitudes among primary care providers at an academic medical center
.
Cancer Epidemiology, Biomarkers and Prevention
,
24
,
664
670
.
Li
,
J.
,
Chung
,
S.
,
Wei
,
E. K.
, &
Luft
,
H. S.
(
2018
).
New recommendation and coverage of low-dose computed tomography for lung cancer screening: Uptake has increased but is still low
.
BMC Health Services Research
,
18
,
525
.
Martín-Sánchez
,
J. C.
,
Lunet
,
N.
,
González-Marrón
,
A.
,
Lidón-Moyano
,
C.
,
Matilla-Santander
,
N.
,
Clèries
,
R.
, . . .
Martínez-Sánchez
,
J. M.
(
2018
).
Projections in breast and lung cancer mortality among women: A Bayesian analysis of 52 countries worldwide
.
Cancer Research
,
78
,
4436
4442
.
McDonnell
,
K. K.
,
Owens
,
O. L.
,
Hilfinger Messias
,
D. K.
,
Heiney
,
S. P.
,
Friedman
,
D. B.
,
Campbell
,
C.
, &
Webb
,
L. A.
(
2019
).
Health behavior changes in African American family members facing lung cancer: Tensions and compromises
.
European Journal of Oncology Nursing
,
38
,
57
64
.
Moyer
,
V. A.
;
U.S. Preventive Services Task Force
. (
2014
).
Screening for lung cancer: U.S. Preventive Services Task Force recommendation statement
.
Annals of Internal Medicine
,
160
,
330
338
.
National Lung Screening Trial Research Team
. (
2011
).
Reduced lung-cancer mortality with low-dose computed tomographic screening
.
New England Journal of Medicine
,
365
,
395
409
.
Pape
,
S. B.
, &
Muir
,
S.
(
2019
).
Primary care occupational therapy: How can we get there? Remaining challenges in patient-centered medical homes
.
American Journal of Occupational Therapy
,
73
, 7305090010.
Pershing
,
S.
, &
Fuchs
,
V. R.
(
2013
).
Restructuring medical education to meet current and future health care needs
.
Academic Medicine
,
88
,
1798
1801
.
Rai
,
A.
,
Doria-Rose
,
V. P.
,
Silvestri
,
G. A.
, &
Yabroff
,
K. R.
(
2019
).
Evaluating lung cancer screening uptake, outcomes, and costs in the United States: Challenges with existing data and recommendations for improvement
.
Journal of the National Cancer Institute
,
111
,
342
349
.
Raz
,
D. J.
,
Wu
,
G. X.
,
Consunji
,
M.
,
Nelson
,
R.
,
Sun
,
C.
,
Erhunmwunsee
,
L.
, . . .
Kim
,
J. Y.
(
2016
).
Perceptions and utilization of lung cancer screening among primary care physicians
.
Journal of Thoracic Oncology
,
11
,
1856
1862
.
Sakoda
,
L. C.
,
Meyer
,
M. A.
,
Chawla
,
N.
,
Sanchez
,
M. A.
,
Blatchins
,
M. A.
,
Nayak
,
S.
, . . .
Minowada
,
G.
(
2019
).
Effectiveness of a patient education class to enhance knowledge about lung cancer screenings: A quality improvement evaluation
.
Journal of Cancer Education
,
35
,
897
904
.
Salomaa
,
E. R.
,
Sällinen
,
S.
,
Hiekkanen
,
H.
, &
Liippo
,
K.
(
2005
).
Delays in the diagnosis and treatment of lung cancer
.
Chest
,
128
,
2282
2288
.
Simmons
,
V. N.
,
Gray
,
J. E.
,
Schabath
,
M. B.
,
Wilson
,
L. E.
, &
Quinn
,
G. P.
(
2017
).
High-risk community and primary care providers knowledge about and barriers to low-dose computed topography lung cancer screening
.
Lung Cancer
,
106
,
42
49
.
Singh
,
H.
,
Giardina
,
T. D.
,
Petersen
,
L. A.
,
Smith
,
M. W.
,
Paul
,
L. W.
,
Dismukes
,
K.
, . . .
Thomas
,
E. J.
(
2012
).
Exploring situational awareness in diagnostic errors in primary care
.
BMJ Quality and Safety
,
21
,
30
38
.
Singh
,
H.
,
Hirani
,
K.
,
Kadiyala
,
H.
,
Rudomiotov
,
O.
,
Davis
,
T.
,
Khan
,
M. M.
, &
Wahls
,
T. L.
(
2010
).
Characteristics and predictors of missed opportunities in lung cancer diagnosis: An electronic health record-based study
.
Journal of Clinical Oncology
,
28
,
3307
3315
.
Singh
,
H.
,
Sethi
,
S.
,
Raber
,
M.
, &
Petersen
,
L. A.
(
2007
).
Errors in cancer diagnosis: Current understanding and future directions
.
Journal of Clinical Oncology
,
25
,
5009
5018
.
Sinsky
,
C.
,
Colligan
,
L.
,
Li
,
L.
,
Prgomet
,
M.
,
Reynolds
,
S.
,
Goeders
,
L.
, . . .
Blike
,
G.
(
2016
).
Allocation of physician time in ambulatory practice: A time and motion study in 4 specialties
.
Annals of Internal Medicine
,
165
,
753
760
.
Smith
,
S. M.
,
Campbell
,
N. C.
,
MacLeod
,
U.
,
Lee
,
A. J.
,
Raja
,
A.
,
Wyke
,
S.
, . . .
Nicolson
,
M. C.
(
2009
).
Factors contributing to the time taken to consult with symptoms of lung cancer: A cross-sectional study
.
Thorax
,
64
,
523
531
.
Søndergaard
,
S. R.
,
Madsen
,
P. H.
,
Hilberg
,
O.
,
Jensen
,
K. M.
,
Olling
,
K.
, &
Steffensen
,
K. D.
(
2019
).
A prospective cohort study of shared decision making in lung cancer diagnostics: Impact of using a patient decision aid
.
Patient Education and Counseling
,
102
,
1961
1968
.
Usman Ali
,
M.
,
Miller
,
J.
,
Peirson
,
L.
,
Fitzpatrick-Lewis
,
D.
,
Kenny
,
M.
,
Sherifali
,
D.
, &
Raina
,
P.
(
2016
).
Screening for lung cancer: A systematic review and meta-analysis
.
Preventive Medicine
,
89
,
301
314
.
Wagland
,
R.
,
Brindle
,
L.
,
James
,
E.
,
Moore
,
M.
,
Esqueda
,
A. I.
, &
Corner
,
J.
(
2017
).
Facilitating early diagnosis of lung cancer amongst primary care patients: The views of GPs
.
European Journal of Cancer Care
,
26
,
e12704
.
Wong
,
M. C. S.
(
2018
).
Health behavioral models to find reasons for low rates of lung cancer screening by low-dose computed tomography
.
JAMA Oncology
,
4
,
425
.
Yarnall
,
K. S. H.
,
Pollak
,
K. I.
,
Østbye
,
T.
,
Krause
,
K. M.
, &
Michener
,
J. L.
(
2003
).
Primary care: Is there enough time for prevention
?
American Journal of Public Health
,
93
,
635
641
.
Zeliadt
,
S. B.
,
Hoffman
,
R. M.
,
Birkby
,
G.
,
Eberth
,
J. M.
,
Brenner
,
A. T.
,
Reuland
,
D. S.
, &
Flocke
,
S. A.
(
2018
).
Challenges implementing lung cancer screening in federally qualified health centers
.
American Journal of Preventive Medicine
,
54
,
568
575
.
|
|
How do you integrate int e^xsinx by integration by parts method?
Oct 17, 2016
$\int {e}^{x} \sin x \mathrm{dx} = \frac{1}{2} {e}^{x} \left(\sin \left(x\right) - \cos \left(x\right)\right) + C$
Explanation:
Integration by parts can be expressed:
$\int u \left(x\right) v ' \left(x\right) \mathrm{dx} = u \left(x\right) v \left(x\right) - \int v \left(x\right) u ' \left(x\right) \mathrm{dx}$
$\textcolor{w h i t e}{}$
Let $u \left(x\right) = {e}^{x}$, $v \left(x\right) = - \cos \left(x\right)$
Then $u ' \left(x\right) = {e}^{x}$, $v ' \left(x\right) = \sin \left(x\right)$
and we find:
$\int {e}^{x} \sin x \mathrm{dx} = - {e}^{x} \cos \left(x\right) + \int {e}^{x} \cos \left(x\right) \mathrm{dx} + {C}_{1}$
$\textcolor{w h i t e}{}$
Let $u \left(x\right) = {e}^{x}$, $v \left(x\right) = \sin \left(x\right)$
Then $u ' \left(x\right) = {e}^{x}$, $v ' \left(x\right) = \cos \left(x\right)$
and we find:
$\int {e}^{x} \cos \left(x\right) \mathrm{dx} = {e}^{x} \sin \left(x\right) - \int {e}^{x} \sin \left(x\right) \mathrm{dx} + {C}_{2}$
$\textcolor{w h i t e}{}$
Combining these two results, we find:
$\int {e}^{x} \sin x \mathrm{dx} = - {e}^{x} \cos \left(x\right) + {e}^{x} \sin \left(x\right) - \int {e}^{x} \sin x \mathrm{dx} + \left({C}_{1} + {C}_{2}\right)$
and hence:
$\int {e}^{x} \sin x \mathrm{dx} = \frac{1}{2} {e}^{x} \left(\sin \left(x\right) - \cos \left(x\right)\right) + C$
$\textcolor{w h i t e}{}$
Footnote
Integration by parts is very useful, but can end up leading you down a rabbit hole if you do not choose the parts appropriately.
In the example above, I would instead tend to find the integral by seeing what happens when you differentiate ${e}^{x} \sin \left(x\right)$ and ${e}^{x} \cos \left(x\right)$ then combine the results:
$\frac{d}{\mathrm{dx}} {e}^{x} \sin \left(x\right) = {e}^{x} \sin \left(x\right) + {e}^{x} \cos \left(x\right)$
$\frac{d}{\mathrm{dx}} {e}^{x} \cos \left(x\right) = {e}^{x} \cos \left(x\right) - {e}^{x} \sin \left(x\right)$
So by subtracting the second from the first of these, we find:
$\frac{d}{\mathrm{dx}} \left({e}^{x} \left(\sin \left(x\right) - \cos \left(x\right)\right)\right) = {e}^{x} \sin \left(x\right) + \textcolor{red}{\cancel{\textcolor{b l a c k}{{e}^{x} \left(\cos \left(x\right)\right)}}} - \textcolor{red}{\cancel{\textcolor{b l a c k}{{e}^{x} \left(\cos \left(x\right)\right)}}} + {e}^{x} \sin \left(x\right)$
$\textcolor{w h i t e}{\frac{d}{\mathrm{dx}} \left({e}^{x} \left(\sin \left(x\right) - \cos \left(x\right)\right)\right)} = 2 {e}^{x} \sin \left(x\right)$
Hence:
$\int {e}^{x} \sin \left(x\right) \mathrm{dx} = \frac{1}{2} {e}^{x} \left(\sin \left(x\right) - \cos \left(x\right)\right) + C$
|
|
# Constrained Optimization of a function of two variables.
I was given the following tutorial problem, and I'm having a bit of trouble seeing how it works.
I've been asked to find the four critical points of this system, with two of these being degenerate points, one being a maximum, and one being a minimum;
$$f(x_1, x_2) = x_1^3 + x_2^3 + 3x_1^2 - 3x_2^2 - 8$$ subject to $g(x_1, x_2) = x_1^2 + x_2^2 - 16 = 0$.
First, I constructed a Lagrangian;
$$L = x_1^3 + x_2^3 + 3x_1^2 - 3x_2^2 - 8 + \lambda(x_1^2 + x_2^2 - 16)$$
Then, taking the gradient of $L$, we get two equations;
$$x_1 (3x_1 + 2(\lambda + 3)) = 0$$ $$x_2 (3x_2 + 2(\lambda - 3)) = 0$$
For the first equation to be satisfied, we have either $x_1 = 0$ or $3x_1 + 2(\lambda + 3) = 0$. In the case that $x_1 = 0$, we have that $x_2 = \pm 4$, due to our initial constraint. If $x_2 = 4$, we get that $\lambda = -3$, and if $x_2 = -4$, we get that $\lambda = 9$.
In a similar fashion, if $x_2 = 0$, $x_1 = \pm 4$. If $x_1 = 4$, $\lambda = -9$, and if $x_1 = -4$, $\lambda = 3$.
Thus, we have four critical points; $$(0,4) , \lambda = -3$$ $$(0,-4) , \lambda = 9$$ $$(4,0) , \lambda = -9$$ $$(-4,0) , \lambda = 3$$
Now, I then computed my Hessian matrix; $$H =\left( \begin{array}{ccc} 6x_1 + 6 + 2\lambda & 0 \\ 0 & 6x_2 - 6 + 2\lambda \\ \end{array} \right)$$
Then, I just plug in all my critical points (with their respective $\lambda$ values), and look at both the determinant and the principal minor of the Hessian.
Clearly, both $(0,4)$ and $(-4,0)$ are degenerate points, so we've satisfied the first criteria.
Looking at the Hessians for $(0,-4)$ and $(4, 0)$, I get that the principal minors for each are both positive, but each of the determinants are negative. For the point $(4,0)$, the principal minor is positive, while the determinant is negative, which implies that the point is a maximum. Similarly, $(0,-4)$ also appears to be a maximum.
Now, having verified my working on Wolframalpha, I've correctly identified $(4,0)$ as a local maximum, but it's also telling me that $(0,-4)$ is a local minimum.
From all the working I've done, I can't really see how $(0,-4)$ could be a minimum?? Could someone have a look at my working, and see where I've made my mistake??
|
|
# Time series forecast by Principal Component Analysis
Suppose that I have a series of $M$ time-observations of $N$ "quantities" $z_1(t_1),...,z_1(t_M)$, ..., $z_N(t_1),...,z_N(t_M)$. I want to estimate the values of $z_1(t_{M+1}),...,z_N(t_{M+1})$. This a problem of interest, for example, in stock asset prediction. I want to use Principal Component Analysis (PCA) performed by a Singular Value Decomposition (SVD).
My questions:
1. What is the physical meaning of the first singular vector (namely, that one corresponding to the largest singular value)? What is the physical meaning of the remaining singular vectors? I understand that the singular vectors provide uncorrelated linear combinations of the above random variables.
2. What is the physical meaning of the singular values? As long as I know, they are related to the variances associated to the singular vectors. But then, does it mean that the most probable value of the quantities at time $t_{M+1}$ is due to the least singular values (least variances)?
3. How do I use the singular values and vectors of PCA to predict the value of the quantitites at time $t_{M+1}$? Could you let me understand the idea behind?
Thank you very much in advance.
I think your intuition is in the right direction, but what you're missing is that it is customary to subtract the mean smaple vector before performing PCA. The PCA vectors then represent deviations from the mean. The importance of the components in PCA analysis is always from the one with highest value/score to the one with lowest value/score. Depending on the particular application, the vectors with lowest associated values may even be omitted. The intuition is that to understand the variations in the data between samples it is often easier to think in terms of "independently occurring" (more precisely, uncorrelated) variations rather than variations of each specific element, if the elements actually vary together (and in many problems areas, financial assets included, they tend to vary together).
Principal component analysis performs high-dimensional rotation of the source data to maximize its variance. The singular vectors represent the basis of the rotated coordinate system and the principal components are the source data projected onto this basis. The singular values represent the variances of the principal components.
I'd say that PCA-based prediction should look like extrapolating the principal components and projecting them back to the original coordinate system. I don't know PCA-based prediction methods exactly, but I can point you to two related (via SVD) prediction methods:
1. Kumaresan-Tufts linear prediction (ref.)
2. SSA forecasting (N. Golyandina et al.)
• @Nick, thanks for your amendments (shame on me (facepalm)). – werediver Apr 8 '14 at 10:32
• Ha! No shame in this; they were minor edits compared to many others I've made here. Welcome to CV BTW! – Nick Stauner Apr 8 '14 at 10:41
You mention stock returns. In that case the first few eigenvalues/singular values of the PCA turn out to have an explicit interpretation:
1. The first singular value is closely related to the market portfolio: the eigenvector is a linear combination of the $z_i(t)$ with positive coefficients
2. The next couple (say 5) of eigenvalues are typically related to style or sector portfolios, such as defensive vs. cyclical stocks. These stocks often go in and out of favor as a group, which shows up in a principal component decomposition.
Beyond that there is noise, so the last singular value (your question #2) will not tell you anything.
As for your question #3, you can for example try out a predictive regression, as discussed in this paper by Stambaugh
|
|
In the biography of S. Ramanujan “The Man Who Knew Infinity: A Life of the Genius Ramanujan” (excellent, by the way), the author Robert Kanigel describes how Ramanujan’s first published work was a little problem he posed for other mathematicians. In it, Ramanujan asks for the value this series takes:
$\sqrt{1 + 2\sqrt{1 + 3\sqrt{1 + \dots}}}$
Nobody found a solution for six months, so he supplied a more general solution himself (see here and here) and the answer is 3.
Ramanujan filled his notebooks with numerical computations and some of his statements he did not prove, but were intuitions gained from his computational experiments. So, if we didn’t know the solution, could we simulate this series to get an idea of what it’s up to?
I think this naturally lend itself to recursion, so a function that calls itself. In Matlab (repo):
And call it with:
This converges fast:
For every square root, we call iter_sqrt once. So the runtime is linear in $$N$$, $$\mathcal{O}(N)$$.
After writing this I saw that John D. Cook had written a similar post. And I found a flaw in my thinking. Because the cost for computing every element of the series is linear, but computing every consecutive element is more costly. The first element takes one calculation step, the second two and so on, so $$1 + 2 + \dots N = \frac{N(N+1)}{2}$$. Which means the full computation takes $$\mathcal{O}(N^2)$$. Cook writes:
I don’t see how to write a different algorithm that would let you compute f(n+1) taking advantage of having computed f(n). Do you? Can you think of a way to evaluate f(1), f(2), f(3), … f(N) in less than O(N$$^2$$) time?
I can think of two approaches to make this faster, but both fail. First, we cannot use f(1) to calculate f(2), f(3), …, because the true value of f(1) depends on what comes after. Instead f(1) is a crude approximation to f($$\infty$$).
Second, I thought, maybe we can flip this around. If we cannot find an algorithm which uses f(n) to calculate f(n+1), then maybe we can find an algorithm that just finds f(n) brute force and then infers what the previous values were. It is indeed possible to get the value of all the nested radicals from calculating f(n). But calculating f(1), f(2), …, f(n-1) from this still takes the same number of steps as before, so nothing is gained.
|
|
## Physics & Astronomy Faculty Publications
#### Title
Contribution of Dark Matter Annihilation to Gamma-Ray Burst Afterglows near Massive Galaxy Centers
Article
11-18-2020
#### Publication Title
Astrophysical Journal
904
1
1
6
#### Abstract
Gamma-ray bursts (GRBs) are believed to be powered by ultrarelativistic jets. If these jets encounter and accelerate excess electrons and positrons produced by particle dark matter (DM) annihilation, the observed electromagnetic radiation would be enhanced. In this paper, we study GRB afterglow emission with the presence of abundant DM under the weakly interacting massive particle annihilation conditions. We calculate the light curves and spectra of the GRB afterglows with different parameters, i.e., DM density, particle DM mass, annihilation channel, and electron density of the interstellar medium. We find that the effect of DM may become noticeable in the afterglow spectra if the circumburst has a low electron number density ($n\lesssim 0.1\,{\mathrm{cm}}^{-3}$) and if the DM has a high number density (${\rho }_{\chi }\gtrsim {10}^{3}\,\mathrm{GeV}\,{\mathrm{cm}}^{-3}$). According to the standard galaxy DM density profile, GRB afterglows with DM contribution might occur at distances of several to tens of parsecs from the centers of massive galaxies... (See full abstract in article).
#### Disciplines
Stars, Interstellar Medium and the Galaxy
English
COinS
|
|
PROGRAMS
export
Date Published: January 3, 2014 Last Modified: January 3, 2014
The export command is used to make available variables and associated values for other programs called after the export command. When this happens the variable is called an environment variable.
Create New
For example, to make available to following programs the variable myVar, with a value of 5, you would do the following:
# One-liner
$export MY_VAR=5 ## Or another way MY_VAR=5 export MY_VAR To verify the variable exists, you can then type echo$MY_VAR. Note: You cannot put a space before or after the equals sign!
Append To
You can append to environment variables. This is useful, for example, when adding more directories to the $PATH variable. You could append a directory to the $PATH variable using the export command in the following manner:
$export PATH=$PATH:/my/new/dir
Permanent Environment Variables
To make variables permanently available you can put export commands in the ~/.bash_profile file.
See Current Variables
To see a list of all currently exported variables and functions, use the command:
$export -p Some standard environment variables include: $PATH
$USER$USERNAME
Like this page? Upvote with shurikens!
Tags:
comments powered by Disqus
|
|
# Construct a DAG from given multiple topological orderings
I need to construct a DAG, from its given topological orderings (i.e. the graph $$G$$ created must have all the orderings given as its topological orderings). For simplicity, the vertices are labeled as first $$n$$ natural numbers. (Note that once created, the graph $$G$$ may have more topological orderings apart from the ones given.)
The following constraints should be met:
1. The maximum outdegree of every node should be 1
2. The number of nodes having indegree 0 should be minimum.
There can be multiple solutions, any of them should work.
What I have tried. (Approach 1 which is wrong as it fails one example given in the answer. Please scroll to the second approach, which still I'm not able to find a mistake.)
Approach 1
From all the given orderings, first, create a directed graph by creating a directed edge from every node to its next node. This means if the orderings are:
$$1, 3, 2, 5, 4, 6$$ $$3, 1, 5, 2, 4, 6$$ $$1, 5, 3, 2, 4, 6$$
I will create a directed graph with a directed edge from $$1$$ to $$3, 3$$ to $$2, 2$$ to $$5$$ and so forth.
This ensures that the number of nodes with indegree 0 remains minimum. Now, I'll remove all the cycles and make sure all the orderings are valid and finally, eliminate all the extra edges any node has. WHile doing so, I'll make sure that if there are two nodes whose edge is coming from the same node, I'll remove the edge from the node having a higher indegree, so that condition 2 is met. The graph constructed then, should look like:
This DAG created, follows both the constraints, and IMO has the minimum number of nodes of indegree as 0, although, it's not proven.
I have coded the approach and it is giving expected results for the use cases I supply, but I know that it is wrong. What am I missing here? Can anyone provide an alternative use case, which fails the above approach?
Approach 2
I create a directed graph $$G$$ by creating an edge from $$a_i$$ to $$a_j$$ for all $$j > i$$ in all the orderings given. So, for the orderings:
$$1, 2, 3, 4, 5$$ $$2, 4, 1, 5, 3$$
I will create the following graph:
The first step after this is to validate all the orderings. Removing cycles separately is not required as they will be removed by this step itself.
For any ordering $$a_1, a_2, a_3, a_4, ... a_n$$ I will check if there exists any edge from $$a_j$$ to $$a_i$$ where $$j > i$$, I'll remove that edge.
Doing so, will give the following graph:
The last step is to remove extra edges from all nodes as maximum outdegree of any node can be $$1$$ max. I'll remove the outgoing edges such that the number of nodes having indegree $$0$$ is minimum. First I calculate the indegree of each node. Then for each node which has more than 1 outgoing edges, I'll remove all the edges except the one with the minimum indegree.
The final graph $$G$$ will look like:
This graph satisfies both the constraints. But I know this approach is wrong! Can anyone help to find why is this wrong?
• Your conditions seem to be completely unrelated to the topological orderings. A path on $n$ nodes meets both conditions and minimizes the number of nodes with indegree 0. – Steven Mar 11 '20 at 22:08
• @KunalGupta But every node in a cycle can be $a_1$. In addition, what is the order of the cycles you deal with? – xskxzr Mar 12 '20 at 15:33
• What do you mean by "The DAG constructed should have all the listed orderings"? It is clearly impossible to get a graph $G$ such that all input orderings are valid topological orderings, in general. As an example consider your first two orderings, in the first one $1$ must precede $3$, in the second one $3$ must precede $1$. Can you give a formal definition of the graph you're looking for? – Steven Mar 12 '20 at 17:06
• @KunalGupta. You are right. Your problem always admits at least one solution, i.e., the empty graph on $n$ nodes. I think you should add as a condition that all topological orderings need to be valid orderings for the sought graph $G$. This is not required right now, as stated. You just say that you want to compute $G$ "using" the topological orderings. – Steven Mar 12 '20 at 18:46
• IIUC, another way of stating your problem is: Given a family of $k$ total orders on $n$ objects, construct an $n$-vertex forest of rooted trees (with edges directed towards the root in each tree) in which "$u$ is an ancestor of $v$" implies that $v < u$ in every order, and such that the number of leaves is minimised. If so, you can find all feasible edges (an infeasible edge cannot be in any valid solution) in $O(kn^2)$ time: $(v_i, v_j)$ is feasible iff the statement $v_i < v_j$ holds in all $k$ orders. – j_random_hacker Mar 12 '20 at 22:20
# Approach 1
For example, consider the two orders: 1, 2, 3, 4, 5 and 2, 4, 1, 5, 3.
According to your approach, we will get a cycle 1->2->3->4->1. We then remove 3->4 and 4->1, and obtain a graph:
______
/ \
1->2->4->5->3
\______/
Now 5->3 and 1->2 respectively violate the first and the second orders, so we remove them, and get
______
/ \
1 2->4->5 3
\______/
Now node 2 has 2 outgoing edges. Removing either one makes a final graph where 3 nodes (1, 2, 3 or 1, 2, 4) have in-degree 0.
However, there exists a graph
1->3 2->4->5
where both orders are satisfied but only 2 nodes have in-degree 0.
So Approach 1 is incorrect.
# Approach 2
Consider an optimal solution. Every edge in this optimal solution must be the form $$(a_i,a_j)$$ where $$i in each order. This means all edges in the optimal solution are contained in the intermediate graph. So if you then remove the outgoing edges such that the number of nodes having in-degree 0 is minimum, you will get a correct optimal solution.
However, your approach that trying to make the number of nodes having in-degree 0 minimum is incorrect.
For example, consider the three orders: \begin{align} 1, 2, 3, 4, 5, 6, 7, 8 \\ 5, 1, 6, 3, 8, 2, 4, 7 \\ 2, 7, 3, 8, 1, 4, 5, 6 \end{align}
First we can obtain the following intermediate graph:
_____
/ ____\__
/ / ____\_\__
/ / / \ \ \
1 2 3->4 5->6 7 8
\ \__/
\__/
Applying your approach, we will remove 1->4, 2->4 and 3->4 (or 3->8), and there are 5 nodes with in-degree 0: 1, 2, 3, 4 (or 8), 5. However, the optimal solution would be
_______
/ ____ _\__
/ / \ \
1 2 3 4 5->6 7 8
\___/
where only 4 nodes have in-degree 0: 1, 2, 3, 5.
• Yes, but I added after that I ensure that after removing cycles, all the logical orderings still hold as valid topological orderings for the graph. So, if there's still any violation, I'll remove the edges. I think, the approach is correct, maybe I am making some error in coding the algorithm. I may be wrong about this, there may be some flaw in the algorithm as well. – Kunal Gupta Mar 13 '20 at 22:06
• @KunalGupta Please see my edit. – xskxzr Mar 14 '20 at 1:37
• I understood the flaw. I have edited the question to add one more approach, which is simpler, and gives the correct answer for the example you gave. Still, it is wrong .Can you help to find the flaw in that? Thanks! – Kunal Gupta Mar 14 '20 at 11:43
• @KunalGupta Please don't edit the question to change its meaning, otherwise I have to edit my answer again and again, or even worse, if I don't know the answer to your new question, I have to delete my answer. Instead, please post a new question. – xskxzr Mar 14 '20 at 11:47
• I have added another approach in addition to the one already given and acknowledged the answer. You don't need to change it. – Kunal Gupta Mar 14 '20 at 12:14
|
|
# seaneberhard/latex2wp
Tool for converting LaTeX-prepared documents to Wordpress-ready HTML
Switch branches/tags
Nothing to show
Fetching latest commit…
Cannot retrieve the latest commit at this time.
### How to install
You need python 3.6+. You can install latex2wp using pip:
pip install git+https://github.com/seaneberhard/latex2wp.
### How to use
Use template/post-template.tex as a starting point, writing your text between the \begin{document} and \end{document}. In the directory of your tex file, do
latex2wp yourlatexfile.tex.
This will produce a file called yourlatexfile.html, which is ready to be cut and pasted into WordPress. Pure HTML can be produced by adding the --html option.
latex2wp --help.
### What works and what doesn't
See the file latex2wp/test/resources/example.tex to see how to import figures, have text appear in different colors, have links to URLs, and enter the "more" command of WordPress.
Anything between \iftex...\fi is compiled in LaTeX but ignored in the conversion to WordPress; anything between \ifblog...\fi is converted to WordPress but ignored by LaTeX. Anything between \iffalse...\fi is ignored by both.
There is very limited support for macros. All macros must be defined with \def, they must have no arguments, they must be independent, and they must appear outside the main \begin{document}...\end{document} block.
Several theorem-like numbered environments are supported, such as theorem, lemma, proposition, conjecture, remark, corollary, example, and exercise. In addition, there is the proof environment.
You can use the inline math environment $...$ and the displayed math environments $$...$$, $...$, $$...$$. WordPress has some limitations to the kind of latex equations it can display. As a consequence, align and eqnarray are not supported. You can, however, use eqnarray*, and you can use array inside a math environment.
The tabular environment works.
The reference commands \label{}, \eqref{}, and \ref{} work in the standard way.
There is no support for \medskip, \bigskip and other such formatting commands. The return command \\ is recognized.
\section, \section*, \subsection and \subsection* are supported, but not \subsubsection and so on.
There is no support for bibliographic references or footnotes.
### How to customize
The file latex2wp/style.py can be easily modified to add to add new theorem-like environments or change their numbering conventions, or to change the typesetting design of theorem-like environments and other details. To do this you will need to clone the git repo, and pip install your local version by doing pip install . in your checkout.
For example, suppose you want to change the formatting of theorem-like environments. The string beginthm specifies what to do at the beginning of a theorem-like environment. In the string, _ThmType_ will be replaced by the type of theorem (e.g. Theorem, or Lemma, or Corollary, etc.) and _ThmNumb_ will be replaced by the theorem number. So the standard setting
beginthm= "\n<blockquote><b>_ThmType_ _ThmNumb_</b> <em>"
will start a blockquote environment, write in boldface something like "Theorem 3", and then start an emphasized environment. beginnamedthm specifies what to do at the beginning of a theorem-like environment declared by something like
\begin{theorem}[Fundamental Theorem of Calculus]...
the string _ThmName_ holds the content of the text in square brackets in the original LaTeX. endthm specifies what to do at the end of a theorem-like environment.
If you do make changes that you think would be useful to other people, please contribute! Github makes contributing easy. If you need any help just get in touch.
### Changelog
Version 1.0, 2018-09-30
• Migrate project to github
• Migrate to Python 3
• Enable installation by pip
• Package up and add commandline entrypoint latex2wp
Version 0.6.2, 2009-05-06
• Additional support for accented characters
• Convert '>' and '<' to HTML codes
• Changed to handling of & and % in math mode to reflect different WordPress treatment of them
Version 0.6.1, 2009-02-23
• Simplified format of latex2wpstyle.py (by Radu Grigore)
• Allow nesting of font styles such as \bf and \em (by Radu Grigore)
• Allow escaped symbols such as \$ in math mode
• LaTeX macros are correctly "tokenized"
• Support eqnarray* environment
Version 0.6, 2009-02-21 - First release
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.